Homework Assignment 7 ECE661
Muhammad Ahmad Ali
PUID:
November 23,
1 Problem Solution

1.1 PCA

For face recognition through PCA we proceed as follows. First, all the images are vectorized so that the i-th image is represented as a vector X_i. Then a matrix X is formed as

    X = [X_1 - m, X_2 - m, ..., X_N - m],   where   m = (1/N) sum_{i=1}^{N} X_i

Then we find the eigenvector matrix U of X^T X. As explained in class, the eigenvectors W of X X^T are then obtained as

    W = X U

We retain the p eigenvectors of W with the largest eigenvalues to get the PCA basis W_p. The projection of each training image onto W_p is

    y_i = W_p^T (X_i - m)

Similarly, each test image is projected onto the PCA basis,

    y_j = W_p^T (X_j - m)

For each j-th test image and i-th training image the distance d_ij is calculated as the squared Euclidean distance between their projections,

    d_ij = || y_i - y_j ||^2

The i for which d_ij is minimum for the j-th test image gives the index of the best-matching training image. If the class label of that training image matches the class label of the test image, a correct classification is registered. This is essentially nearest-neighbor classification in the reduced-dimensional space defined by the PCA basis. The classification accuracy is

    accuracy(p) = (No. of images correctly classified) / (Total no. of images)

where p is the number of eigenvectors used.
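A condensed MATLAB sketch of the steps above is given here for reference. The complete implementation appears in Section 3; X, p, and x_test below are illustrative names, not the variables used there.

% Minimal sketch of the PCA procedure, assuming X is M x N with one vectorized
% image per column, p is the number of retained eigenvectors, and x_test is one
% vectorized test image (illustrative names only).
m  = mean(X, 2);                        % global mean image
Xc = X - repmat(m, 1, size(X, 2));      % [X_1 - m, ..., X_N - m]
[U, D] = eig(Xc' * Xc);                 % eigenvectors of the small N x N matrix X'X
[~, idx] = sort(diag(D), 'descend');    % largest eigenvalues first
W  = Xc * U(:, idx);                    % eigenvectors of XX' via W = XU
Wp = W(:, 1:p);                         % PCA basis: p largest-eigenvalue directions
y  = Wp' * (x_test - m);                % projection of a test image onto the basis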
1.2 LDA

For LDA we need to find directions w that maximize

    J(w) = (w^T S_B w) / (w^T S_W w)

where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix, given by

    S_B = (1/N_c) sum_{i=1}^{N_c} (m_i - m)(m_i - m)^T

    S_W = (1/N_c) sum_{i=1}^{N_c} (1/N_i) sum_{j=1}^{N_i} (X_ij - m_i)(X_ij - m_i)^T

To solve this problem the Yu and Yang algorithm is used. First we need to find the eigenvectors of S_B. These are found using the same trick as in PCA. A matrix M is formed as

    M = [m_1 - m, m_2 - m, ..., m_{N_c} - m]

where m_i is the mean of the i-th class. Then, ignoring the scale factor 1/N_c, S_B = M M^T. So first the eigenvector matrix V of M^T M is obtained, and the eigenvectors V_B of M M^T follow as

    V_B = M V

The eigenvectors obtained above are sorted in descending order of their eigenvalues, and those with eigenvalues close to zero are discarded. I obtained 29 eigenvectors by this method. Let Y denote the eigenvector matrix left after discarding. Then a matrix Z is obtained,

    Z = Y D_B^{-1/2},   where   D_B = Y^T S_B Y

Finally we obtain the eigenvector matrix U of Z^T S_W Z and sort its columns in ascending order of their eigenvalues. I do not discard the eigenvectors corresponding to the largest eigenvalues at this point. The LDA basis is then

    W = Z U

Once the LDA basis is obtained, the rest of the procedure is the same as for PCA. We retain the first p eigenvectors of W as a matrix W_p, project both the training and test images onto this basis, and perform nearest-neighbor classification as described in the previous section. The results of applying both algorithms to the data are shown in the next section, along with the answer to the question about the dimensionality p.
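The same steps can be condensed into a short MATLAB sketch. This is illustrative only; Mc and Phi_w are assumed to already hold the centered class means and the within-class centered samples, and the threshold is an arbitrary example value. The full implementation is in Section 3.

% Minimal sketch of the Yu-Yang procedure above, assuming Mc = [m_1 - m, ..., m_Nc - m]
% and Phi_w = [..., X_ij - m_i, ...] have already been built (illustrative names only).
[V, Db]  = eig(Mc' * Mc);                          % small-matrix trick for S_B = Mc*Mc'
Vb       = Mc * V;                                 % eigenvectors of S_B (V_B = M V)
keep     = diag(Db) > 1e-6;                        % discard near-zero eigenvalues
Y        = Vb(:, keep);
Z        = Y * diag(diag(Db(keep, keep)).^(-0.5)); % Z = Y * D_B^(-1/2)
ZtSwZ    = (Phi_w' * Z)' * (Phi_w' * Z);           % Z' S_W Z without forming S_W
[U, Dw]  = eig(ZtSwZ);
[~, idx] = sort(diag(Dw), 'ascend');               % smallest within-class scatter first
W        = Z * U(:, idx);                          % LDA basis W = Z U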
2 Results

2.1 PCA

Using the training data set we first compute the PCA basis as described in Section 1.1. This gives us a matrix whose columns contain the desired eigenvectors. Analyzing the eigenvalues, we see that the bulk of the energy is concentrated in the initial directions; specifically, the first 75 eigenvectors span 95% of the energy. From this we conclude that 75 eigenvectors are sufficient for good results with the PCA method. The plot of the cumulative energy as a function of the number of eigenvectors is given below.

[Figure 1: Cumulative energy in eigenvectors (energy % vs. number of PCA eigenvectors)]

The plot of the PCA classification accuracy as a function of the number of eigenvectors is given below. We achieved 100% accuracy using 17 eigenvectors.

[Figure 2: PCA accuracy as a function of the number of eigenvectors]
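The cumulative-energy curve of Figure 1 can be reproduced from the sorted eigenvalues in two lines; a minimal sketch, where eigvals is a hypothetical vector of eigenvalues already sorted in descending order:

% Cumulative energy and the smallest p spanning 95% of it (eigvals is a
% hypothetical descending-sorted eigenvalue vector, not a variable from Section 3).
energy = cumsum(eigvals) / sum(eigvals);   % cumulative fraction of total energy
p95 = find(energy >= 0.95, 1);             % smallest number of eigenvectors reaching 95%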
2.2 LDA

The LDA basis was obtained by following the procedure described in Section 1.2. This gives a matrix W containing 29 basis vectors. As explained in class, the number of basis vectors in LDA cannot exceed (No. of classes - 1); in our case the number of classes is 30, so at most 29 basis vectors can be obtained. The plot of the LDA classification accuracy as a function of the number of eigenvectors is given below. We achieved 100% accuracy using 10 eigenvectors.

[Figure 3: LDA accuracy as a function of the number of eigenvectors]
2.3 Comparison of PCA and LDA

The comparison of the two methods is shown in the figure below. PCA achieves 100% accuracy with 17 eigenvectors, while LDA achieves 100% accuracy with just 10 eigenvectors.

[Figure 4: Comparison of PCA and LDA accuracy as a function of the number of eigenvectors]
3 Matlab Code

The code is given below.

% readdata.m
function [X, labelVector] = readdata
srcdir = 'D:\CourseWork\ECE661\HW7\pca-lda\ECE661_hw7_images\hw7_images\train';
d = dir([srcdir, '/*.png']);
% N = # of examples
N = length(d);
im = imread([srcdir, '/', d(1).name]);
[h, w, k] = size(im);
% M = feature dimensionality
M = h*w;
X = zeros(M, N);
for i = 1 : N
    dispstr = ['>>>>>> Reading file : ', num2str(i), ' of ', num2str(N)];
    disp(dispstr);
    filename = d(i).name;
    im = imread([srcdir, '/', d(i).name]);
    [h, w, k] = size(im);
    if k == 3
        im = rgb2gray(im);                     % convert color images to grayscale
    end
    X(:, i) = double(im(:));                   % vectorized image goes into column i
    labelVector(i) = getsubjectid(filename);   % class label parsed from the file name
end
end

% Extract the subject id (class label) from a file name of the form <id>_xxx.png.
function sub_id = getsubjectid(filename)
usindices = find(filename == '_');
sub_id = str2num(filename(1 : usindices(1)-1));
end
% pca.m
function [W, mn] = pca(X)
[M, N] = size(X);
% subtract off the mean for each dimension
mn = mean(X, 2);
X = X - repmat(mn, 1, N);
G = X' * X;                                % small N x N matrix (trick for large images)
% find the eigenvectors and eigenvalues
[EV, V] = eig(G);
[V, idx] = sort(diag(V));                  % eigenvalues in ascending order
V = V(end : -1 : 1);                       % reverse to descending order
idx = idx(end : -1 : 1);
EV = EV(:, idx);
energy = cumsum(V) ./ sum(V);              % cumulative energy of the eigenvalues
save('Eigen-Value-Energy.mat', 'energy');
cutoff = find(energy >= 1);                % computed for the energy analysis; unused below
cutoff = cutoff(1);
W = X * EV;                                % eigenvectors of X*X' from those of X'*X
W = normalizevectors(W);
end

% runpcaclassification.m
function accuracy = runpcaclassification
load Train-Data;                % provides X and labelVector for the training set
X_train = X;
labels_train = labelVector;
load Test-Data;                 % provides X and labelVector for the test set
X_test = X;
labels_test = labelVector;

V_pca = pca(X_train);
nbasis = size(V_pca, 2);
for i = 1 : nbasis
    dispstr = ['>>>>> Testing using the first ', num2str(i), ' eigen vectors'];
    disp(dispstr);
    V_subset = V_pca(:, 1 : i);
    X_train_sub = V_subset' * X_train;     % project onto the first i eigenvectors
    X_test_sub  = V_subset' * X_test;
    accuracy(i) = runnearestneighborclassification(X_train_sub, labels_train, X_test_sub, labels_test);
end
end

% lda.m
function W = lda(X, labelVector)
[featurevectorsize, datasetsize] = size(X);
uniquelabels = unique(labelVector);
nclasses = length(uniquelabels);
globalmean = mean(X, 2);
phi_b = zeros(featurevectorsize, nclasses);      % columns: class mean minus global mean
phi_w = zeros(featurevectorsize, datasetsize);   % columns: sample minus its class mean
for i = 1 : nclasses
    thisclassindices = find(labelVector == uniquelabels(i));
    thisclassdata = X(:, thisclassindices);
    thisclassmean = mean(thisclassdata, 2);
    M = thisclassmean - globalmean;
    phi_b(:, i) = M;
    thisclasscount = length(thisclassindices);
    Y = thisclassdata - repmat(thisclassmean, 1, thisclasscount);
    phi_w(:, thisclassindices) = Y;
end

Sb_trick = phi_b' * phi_b;                       % small-matrix trick for S_B = phi_b*phi_b'
[V, D] = eigs(Sb_trick, nclasses);
V = phi_b * V;                                   % eigenvectors of S_B
retained = length(find(diag(D) > 0.05));         % discard eigenvalues close to zero
Y = V(:, 1:retained);
Db = D(1:retained, 1:retained);
Z = Y * diag((diag(Db)).^(-0.5));                % Z = Y * Db^(-1/2)
phi_w_z = phi_w' * Z;
Z_Sw_Z = phi_w_z' * phi_w_z;                     % Z' * S_W * Z without forming S_W
[U, Dw] = eigs(Z_Sw_Z, size(Z_Sw_Z, 1));
U = U(:, end : -1 : 1);                          % ascending order of eigenvalues
W = Z * U;
W = normalizevectors(W);
end

% runldaclassification.m
function accuracy = runldaclassification
load Train-Data;                % provides X and labelVector for the training set
X_train = X;
labels_train = labelVector;
load Test-Data;                 % provides X and labelVector for the test set
X_test = X;
labels_test = labelVector;

V_lda = lda(X_train, labels_train);
nbasis = size(V_lda, 2);
for i = 1 : nbasis
    dispstr = ['>>>>> Testing using the first ', num2str(i), ' of ', num2str(nbasis), ' eigen vectors'];
    disp(dispstr);
    V_subset = V_lda(:, 1 : i);
    X_train_sub = V_subset' * X_train;
    X_test_sub  = V_subset' * X_test;
    accuracy(i) = runnearestneighborclassification(X_train_sub, labels_train, X_test_sub, labels_test);
end
end

% runnearestneighborclassification.m
function p = runnearestneighborclassification(X_train, labels_train, X_test, labels_test)
testcount = size(X_test, 2);
traincount = size(X_train, 2);
correct = 0;
for i = 1 : testcount
    thisexample = X_test(:, i);
    for j = 1 : traincount
        % squared Euclidean distance to the j-th training example
        ssd(j) = sum((thisexample(:) - X_train(:, j)).^2);
    end
    [mindiff, minidx] = min(ssd);
    predicted_label(i) = labels_train(minidx);
    if predicted_label(i) == labels_test(i)
        correct = correct + 1;
    end
end
p = correct / testcount;
end

% normalizevectors.m
function K = normalizevectors(V)
sizeV = size(V);
K = V;
numberofvectors = sizeV(2);
V2 = V.^2;
for i = 1 : numberofvectors
    denom = sum(V2(:, i));
    denom = sqrt(denom);
    K(:, i) = V(:, i) / denom;   % scale column i to unit length
end
end
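For completeness, a plausible way to drive these scripts end to end is sketched below. The Train-Data and Test-Data .mat files loaded by the classification scripts are assumed to be produced by running readdata once on the train folder and once on the test folder (the original code does not show this step), so the snippet is only an illustration of how the pieces fit together.

% Hypothetical driver, assuming srcdir inside readdata.m is first pointed at the
% train folder and then at the test folder (this step is not in the original code).
[X, labelVector] = readdata;
save('Train-Data', 'X', 'labelVector');
% ... edit srcdir in readdata.m to the test folder, then:
[X, labelVector] = readdata;
save('Test-Data', 'X', 'labelVector');

acc_pca = runpcaclassification;    % accuracy vs. number of PCA eigenvectors
acc_lda = runldaclassification;    % accuracy vs. number of LDA eigenvectors
plot(1:length(acc_pca), acc_pca, 1:length(acc_lda), acc_lda);
legend('PCA', 'LDA'); xlabel('No. of Eigenvectors'); ylabel('Accuracy');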