Nonnegativity Constraints in Numerical Analysis


1 Nonnegativity Constraints in Numerical Analysis Bob Plemmons, Wake Forest University. Collaborator: D. Chen, WFU. The Birth of Numerical Analysis, Leuven, Belgium, Oct. 2007. Related Papers at:

2 Outline
- Some Personal Thoughts on my Early Days in NA
- Nonnegativity: Perron-Frobenius Theory, M-matrices, Regular Splittings, Classical Iterative Methods, R. Varga, D. Young
- Historical Developments in Nonnegative Least Squares (NNLS) Computations, from Lawson and Hanson (1974)
- Natural Extensions to (Approximate) Nonnegative Matrix and Tensor Factorization (NMF and NTF)
- Algorithms for NMF and NTF from the 1990s to the present
- Survey of Applications: NNLS, NMF, NTF
- Some of our Work: Computations using Data from the AF Maui Space Center & NASA

3 Looking Ahead: The creation and observation of a reflectance spectrum. [Diagram: sunlight reflected from a satellite passes through the atmosphere to a ground telescope; labels include Sun, satellite, atmosphere, day/night, elevation (EL), and zenith (Z). Credit: Maui Research and Technology Center, 590 Lipoa Parkway, Ste. 264, Kihei, HI.]

4 Some Personal Thoughts on the Birth of NA
Alston Householder, my mentor at ORNL (ret. 1969) and UTK. He introduced me to: numerical linear algebra, Fortran coding, accuracy and stability of numerical algorithms, the QR factorization - HT (Householder transformations), SIAM J. Appl. Math., and the Gatlinburg meetings.

5 Preliminaries Nonnegativity

6 Classical Perron-Frobenius Theorem: Perron (1907), Frobenius (1917). Formed the theoretical basis for (classical) iterative methods in Numerical Analysis.

7 Classical Iterative Methods for Linear Systems Ax = b
Time frame: 1950s, 1960s, 1970s.
People: Garrett Birkhoff, who in 1948 started work with his student David Young. Some other principal players: Sir Richard Southwell, L. F. Richardson, Richard Varga, Gene Golub.
Early locations: Harvard University, Aberdeen Proving Ground, NBS, Argonne National Lab., Cal Tech JPL, Stanford.

8 Jacobi, Gauss-Seidel, SOR, Etc.
Convergence based on regular splittings: A = M - N is a regular splitting if M is nonsingular with M^{-1} and N nonnegative.
M-matrices (Minkowski 1900): A = (a_ij), with a_ii > 0 and a_ij nonpositive for i ≠ j, and A^{-1} nonnegative. Arise, e.g., in finite difference methods for elliptic PDEs. Many generalizations in Numerical Analysis, 1950s to present. Nonnegativity plays a fundamental role.
Books: Varga (1963), Young (1971), Berman & Plemmons (1979), etc.
Preconditioners for CG, Henk van der Vorst (1977), incomplete LU - based on M-matrix theory. (Will move on to LS)
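
To make the splitting concrete, here is a minimal sketch (illustrative, not from the talk; the function name jacobi is ours): Jacobi iteration is the regular splitting with M = diag(A), and it converges for M-matrices such as the one below.

    # Minimal sketch: Jacobi iteration as the regular splitting
    # A = M - N with M = diag(A), N = M - A.
    import numpy as np

    def jacobi(A, b, tol=1e-10, maxiter=500):
        """Solve Ax = b by the iteration x_{k+1} = M^{-1}(N x_k + b)."""
        d = np.diag(A)                 # M = diag(A)
        R = A - np.diagflat(d)         # off-diagonal part of A, so N = -R
        x = np.zeros_like(b, dtype=float)
        for _ in range(maxiter):
            x_new = (b - R @ x) / d    # one Jacobi sweep
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new
            x = x_new
        return x

    # Example: a small M-matrix (positive diagonal, nonpositive off-diagonals)
    A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
    b = np.array([1., 2., 3.])
    print(jacobi(A, b))                # agrees with np.linalg.solve(A, b)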

9 Linear Least Squares

10 Nonnegative Least Squares (NNLS)

11 Will Survey the Following Topics

12 Lawson and Hanson's Algorithm
C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, Prentice Hall, 1974.
J. Longley, Least Squares Problems Using Orthogonal Methods, Dekker.
L. Shure, Brief History of Nonnegative Least Squares in MATLAB, blog available at: lsqnonneg in Matlab.

13 lsqnonneg
In their landmark 1974 text, Lawson and Hanson gave the standard algorithm for NNLS, an active-set method. MathWorks modified the algorithm NNLS, which ultimately was renamed lsqnonneg.

14 Active Set Methods
Active-set methods are based on the observation that usually only a small subset of the constraints is active (i.e., satisfied exactly) at the solution. There are n inequality constraints in the NNLS problem. The i-th constraint is said to be active if the i-th regression coefficient is negative; otherwise the constraint is passive. lsqnonneg uses the fact that, if the true active set is known, the solution of the NNLS problem is the unconstrained least squares solution of the problem restricted to the variables of the passive set, with the regression coefficients of the active set set to zero.
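
For reference, a minimal usage sketch (assuming SciPy is available; scipy.optimize.nnls is an active-set NNLS solver in the Lawson-Hanson tradition, playing the role of MATLAB's lsqnonneg):

    # Solve min ||Ax - b||_2 subject to x >= 0 with an active-set solver.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    A = rng.random((20, 5))
    b = rng.random(20)

    x, rnorm = nnls(A, b)            # x >= 0 componentwise; rnorm = ||Ax - b||_2
    passive = np.nonzero(x > 0)[0]   # passive set: coefficients free at the solution
    print(x, rnorm, passive)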

15

16 lsqnonneg, cont.

17 Comments on L&H's lsqnonneg
They show that the NNLS algorithm is finite: given sufficient time, it reaches a point where the Kuhn-Tucker conditions are satisfied, and it terminates. In that sense it is a direct algorithm. There is no good way of telling in advance how many iterations it will require in practice. lsqnonneg can be quite time consuming if n is large.

18 Bro and de Jong's Fast NNLS (fnnls)
Rasmus Bro and S. de Jong, A fast non-negativity-constrained least squares algorithm, Journal of Chemometrics, 1997. Fast NNLS (fnnls) often speeds up the basic algorithm, especially in the presence of multiple right-hand sides, by avoiding unnecessary recomputations. The application is nonnegative tensor factorization for 3-D data analysis (more later). Designed for use in multiway decomposition methods for tensor arrays such as PARAFAC and N-mode PCA.

19 Bro and de Jong's fnnls, cont.
In an extension to their NNLS algorithm, characterized as being for "advanced users", they retain information about the previous iteration's solution and extract further improvements in ALS applications that employ NNLS. These innovations can lead to a substantial performance improvement when analyzing large multivariate, multiway array data sets. Extension: M. H. van Benthem and M. R. Keenan, Fast algorithm for the solution of large-scale non-negativity-constrained least squares problems, Journal of Chemometrics.

20 Algorithms Based on Iterative Methods
Active-set methods typically deal with only one active constraint at each iteration. The main advantage of this class of algorithms is that, by using information from a projected gradient step along with a good guess of the active set, one can handle multiple active constraints per iteration. Some of the iterative methods are based on the solution of a corresponding linear complementarity problem (LCP). In contrast to an active-set approach, iterative methods such as gradient projection enable the incorporation of multiple active constraints at each iteration.
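
A bare-bones sketch of the gradient-projection idea (illustrative only, not any particular published method; pg_nnls is our name, and a fixed 1/L step size is assumed for simplicity):

    import numpy as np

    def pg_nnls(A, b, iters=2000):
        """Projected gradient for min (1/2)||Ax - b||^2 subject to x >= 0."""
        L = np.linalg.norm(A.T @ A, 2)      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = A.T @ (A @ x - b)           # gradient of the objective
            x = np.maximum(0.0, x - g / L)  # step, then project onto x >= 0
        return x                            # many constraints can change per sweep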

21 Recent Iterative Algorithms
Franc et al., Sequential Coordinate-wise Algorithm for NNLS, Czech Tech. U.
Bellavia et al., Interior Point Newton-like Algorithm for NNLS, NLA.
Dhillon et al., UTA, Projective Quasi-Newton Method for NNLS.
Dhillon et al., UTA, Fast Newton-type Algorithm for NNLS, SIAM Data Mining Conf.
These methods have made solving large NNLS problems feasible.

22 Nonnegative Matrix Factorization (NMF: A Natural Extension of Large-Scale NNLS)
The objective is convex in W or H separately, but not in both.
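
Spelled out (a standard formulation consistent with the slides, not copied from them): given a nonnegative m x n data matrix X and a target rank k << min(m, n), NMF seeks

    min_{W, H} (1/2) ||X - W H||_F^2   subject to   W >= 0 (m x k),  H >= 0 (k x n).

With H fixed this is a linear least squares problem in W, and vice versa, hence convex in each factor separately; jointly it is nonconvex, which is why algorithms alternate between the two blocks.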

23 Necessary Conditions for W & H to Solve the NMF Problem (Lee & Seung, Nature, 1999)

24 NMF: an SVD-Type Concept
X = [X_1, X_2, ..., X_n], column vectors (data). Approximately factor X ≈ W H = Σ_{j=1}^{k} w_j ∘ h_j, where ∘ denotes the outer product, w_j is the j-th column of W, and h_j is the j-th column of H^T.

25 Approximate NMF in Data Analysis
Utilize the constraint that the values in a data matrix X are nonnegative. Apply nonnegativity-constrained low-rank approximation for blind source separation, dimension reduction, and/or unsupervised unmixing. Low-rank approximation to the data matrix X: X ≈ WH, W >= 0, H >= 0. Columns of W are initial basis vectors for the database; one may want smoothness and statistical independence in W. Columns of H represent mixing coefficients or weights; one often desires statistical sparsity in H to force essential uniqueness of W.

26 Brief Overview: Principal Component Analysis (PCA) and Independent Component Analysis (ICA)

27 PCA
Based on the eigen-decomposition of the covariance matrix XX^T for a training set of column vectors X = [X_1, X_2, ..., X_m], scaled and centered (or on the SVD of X itself), giving a low-rank approximation. In the PCA context each column of W represents an eigenvector (hidden component), and H represents the eigenprojections. Principal components correspond to the largest eigenvalues. (The components are called eigenfaces in face recognition applications.) Used for: orthogonal representation, dimension reduction, and clustering into principal components, all computed by simple linear algebra.
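
A compact sketch of that computation (illustrative; pca_basis is our name, and the columns of X are assumed already centered):

    import numpy as np

    def pca_basis(X, r):
        """Rank-r PCA of a centered data matrix X (columns = observations).
        Returns W (principal directions) and H (projections), so X ~ W @ H."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        W = U[:, :r]                    # eigenvectors of X X^T (the "eigenfaces")
        H = np.diag(s[:r]) @ Vt[:r, :]  # eigenprojections / coefficients
        return W, H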

28 Characteristics of PCA
May not enforce nonnegativity in W and H, where X ≈ WH. May not enforce statistical independence of the basis vectors in W. May not enforce statistical sparsity in H (separation by parts). This led to work from the late 1990s to the present on ICA and on NMF.

29 ICA
Based on neural computation and unsupervised learning. Often identified with blind source separation (BSS), feature extraction, and finding hidden components. Most research is based on the equality X = WH, though this is not necessary. Statistical independence of the components in W is a guiding principle, but it seldom holds in practical situations. The data in X are assumed to have a non-Gaussian PDF; one finds hidden components that are as independent as possible, i.e., the mutual information content of different components c_i, c_j is (near) zero, or p(c_i, c_j) ≈ p(c_i) p(c_j).

30 Some Related References
Lee and Seung, "Learning the Parts of Objects by Non-Negative Matrix Factorization", Nature, 1999.
Hoyer, "Non-Negative Sparse Coding", Neural Networks for Signal Processing.
Hyvärinen and Hoyer, "Emergence of Phase and Shift Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces", Neural Computation.
Donoho and Stodden, "When Does Nonnegative Matrix Factorization Give a Correct Decomposition into Parts?", preprint, Dept. of Statistics, Stanford.
Berman and Plemmons, Non-Negative Matrices in the Mathematical Sciences, SIAM Press.
Sajda, Du, and Parra, "Recovery of Constituent Spectra Using Non-negative Matrix Factorization", Tech. Rept., Columbia U. & Sarnoff Corp.
Cooper and Foote, "Summarizing Video Using Non-Negative Similarity Matrix Factorization", Tech. Rept., FX Palo Alto Lab.
Szu and Kopriva, "Deterministic Blind Source Separation for Space Variant Imaging", 4th Inter. Conf. on Independent Component Analysis, Nara, Japan.
Umeyama, "Blind Deconvolution of Images Using Gabor Filters and Independent Component Analysis", 4th Inter. Conf. on Independent Component Analysis, Nara, Japan.
Our papers on web page:

31 Nonnegative Matrix Factorization (NMF). Will extend the concept to 3-D arrays: tensors.

32 ALS-NNLS for Approximate Solution to NMF
Initialize W using recursive application of Perron-Frobenius theory to the SVD.

33 ALS-NNLS for Approximate Solution to NMF
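
A minimal sketch of the ALS-NNLS scheme these two slides name (illustrative, not the authors' code; als_nmf is our name, each subproblem is solved column by column with a Lawson-Hanson NNLS solver, and the Perron-Frobenius/SVD initialization above is replaced by a simple nonnegative random start):

    import numpy as np
    from scipy.optimize import nnls

    def als_nmf(X, k, iters=50, seed=0):
        """Alternating NNLS for X ~ W H with W, H >= 0 (X is m x n)."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, k))
        H = np.zeros((k, n))
        for _ in range(iters):
            # Fix W: one NNLS problem per column of X gives H
            for j in range(n):
                H[:, j], _ = nnls(W, X[:, j])
            # Fix H: one NNLS problem per row of X gives W (via transpose)
            for i in range(m):
                W[i, :], _ = nnls(H.T, X[i, :])
        return W, H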

34 Tensors: Multi-way Arrays in Multilinear Algebra. 3-D viewpoint: a tensor as a box.

35 Extend NMF: Nonnegative Tensor Factorization (NTF). Our interest: 3-D data, with 2-D images stacked into a 3-D array.

36 Multilinear NMF = Nonnegative Tensor Factorization (NTF)

37 NTF (for 3-D Arrays)

38 Datasets of Images Modeled as Tensors
Goal: extract features from a tensor dataset (naively, a dataset subscripted by multiple indices), an m x n x p tensor A. Tensor (outer product) rank can be defined as the minimum number of rank-1 factors.
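
Written out for the 3-D case (a standard CP/PARAFAC-style statement consistent with the slide, not a quote from it): a rank-k nonnegative decomposition seeks vectors u_j, v_j, w_j >= 0 with

    A ≈ Σ_{j=1}^{k} u_j ∘ v_j ∘ w_j,

where each rank-1 term has entries (u_j ∘ v_j ∘ w_j)_{abc} = (u_j)_a (v_j)_b (w_j)_c, and the tensor rank of A is the smallest k for which equality can hold.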

39 Properties of Matrix Rank
The rank of A in R^{m x n} is easy to determine (Gaussian elimination). An optimal rank-r approximation to A always exists and is easy to find (SVD). Pick A in R^{m x n} at random; then A has full rank with probability 1, i.e., rank(A) = min{m, n}. The rank of A is the same whether we consider A as an element of R^{m x n} or of C^{m x n}. Every statement above is false for order-k tensors, k > 2.

40 NMF/NTF Applications to Data Analysis

41 NMF
Allows only additive, not subtractive, combinations of the original data, in contrast to orthogonal decomposition methods such as PCA. NMF was used by Lee and Seung (MIT) in Nature, 1999, in biometrics, preceded and followed by numerous papers related to applications. Earlier work by Paatero et al. Matlab toolbox: NMFLAB. Historical perspective: Problem 73-14, Rank Factorization of Nonnegative Matrices, by Berman and Plemmons, SIAM Review 15 (1973), p. 655 (also in the Berman/Plemmons book).

42 Historical Perspectives, cont.
L. De Lathauwer, B. De Moor, and J. Vandewalle, "A Multilinear Singular Value Decomposition", SIAM J. Matrix Anal. Appl.
De Lathauwer, De Moor, and Vandewalle, "Independent Component Analysis and (Simultaneous) Third-Order Tensor Diagonalization", IEEE Transactions on Signal Processing.
De Lathauwer and Vandewalle, "Dimensionality Reduction in Higher-Order Signal Processing and Rank-(R_1, R_2, ..., R_N) Reduction in Multilinear Algebra", Linear Algebra and its Applications, Special Issue on Linear Algebra in Signal and Image Processing.

43 Some General Applications of NMF and NTF
- Source separation in acoustics, speech, video, image processing, and computer vision.
- EEG in medicine (electric potentials).
- Spectroscopy in chemistry.
- Molecular pattern discovery in genomics.
- Surveillance.
- Document clustering in text data mining.
- Atmospheric pollution source identification.
- Hyperspectral sensor data compression.
- Spectroscopy for space applications: spectral data mining, identifying object surface materials and substances.

44 Outline of SOI Application
- Space Object Identification (SOI) from spectral data
- Nonnegative Matrix Factorization (NMF) methods for spectral unmixing
- Nonnegative Tensor Factorization (NTF) for SOI
- 3-D tensor factorization, with applications to SOI using hyperspectral data: compression, material identification, spectral abundances
- Another application (medical)

45 Blind Source Separation for Finding Hidden Components (Endmembers)
Mixing of sources: basic physics often leads to linear mixing. X = [X_1, X_2, ..., X_n], column vectors (1-D spectral scans). Approximately factor X ≈ W H = Σ_{j=1}^{k} w_j ∘ h_j, where ∘ denotes the outer product, w_j is the j-th column of W, and h_j is the j-th column of H^T. Here X holds the sensor readings (mixed components, the observed data), W the separated components (feature basis matrix, unknown, low rank), and H the mixing coefficients.

46 Simple Analog Illustration: hidden components in light separated by a prism. Our purpose: finding hidden components by data analysis.

47 Space Object Identification and Characterization from Spectral Reflectance Data
More than 20,000 known objects in orbit: various types of military and commercial satellites, rocket bodies, residual parts, and debris. Hence the need for space object database mining, object identification, clustering, classification, etc.

48 Maui Space Surveillance Site

49 The creation and observation of a reflectance spectrum. [Same diagram as slide 3: sunlight reflected from a satellite passes through the atmosphere to a ground telescope.]

50 Typical Scan

51 Some Laboratory Electromagnetic Spectral Signatures. [Four panels: reflectance vs. wavelength (µm) for Mylar, white paint, aluminum, and a solar cell.]

52 Hyperspectral Imaging of Space Objects
Current operational capability for spectral imaging of space objects: panchromatic images and non-imaging spectra. [Plot (unclassified, credit Jorgensen): reflectance vs. wavelength in microns for aluminum alloys al7075, al6061, al2219, al1100, and other materials.]

53 Recall NTF (for 3-D Arrays)
An optimal solution exists (Lim 2006, Stanford MMDS workshop). Issues: initialization, efficient optimization algorithms, avoiding local minima.

54 Hyperspectral Imaging
Hyperspectral remote sensing technology allows one to capture images of objects (multiple pixels) using spectra from the ultraviolet and visible to the infrared. Multiple images of a scene or object are created using light from different parts of the spectrum. Hyperspectral images can be used to: identify surface minerals, objects, buildings, etc. from space; enable space object identification from the ground.

55 Hyperspectral Data Cube for Hubble Telescope

56

57 NTF Algorithm using Alternating NNLS
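
A sketch of what one alternating-NNLS sweep can look like for a 3-D array (an illustrative implementation via mode unfoldings and Khatri-Rao products under standard CP conventions, not the authors' code; khatri_rao and ntf_als are our names):

    import numpy as np
    from scipy.optimize import nnls

    def khatri_rao(B, C):
        """Columnwise Kronecker product of B (J x k) and C (K x k) -> (J*K x k)."""
        k = B.shape[1]
        return np.vstack([np.kron(B[:, j], C[:, j]) for j in range(k)]).T

    def ntf_als(T, k, iters=30, seed=0):
        """Nonnegative CP factors A, B, C with T[i,j,l] ~ sum_r A[i,r] B[j,r] C[l,r]."""
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A, B, C = (rng.random((d, k)) for d in (I, J, K))
        for _ in range(iters):
            # Each mode update is a large NNLS problem, solved row by row here
            X1 = T.reshape(I, J * K)                     # mode-1 unfolding
            M = khatri_rao(B, C)                         # (J*K) x k coefficients
            for i in range(I):
                A[i], _ = nnls(M, X1[i])
            X2 = T.transpose(1, 0, 2).reshape(J, I * K)  # mode-2 unfolding
            M = khatri_rao(A, C)
            for j in range(J):
                B[j], _ = nnls(M, X2[j])
            X3 = T.transpose(2, 0, 1).reshape(K, I * J)  # mode-3 unfolding
            M = khatri_rao(A, B)
            for l in range(K):
                C[l], _ = nnls(M, X3[l])
        return A, B, C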

58 Simulated Data
A 177 x 193 x 100 data cube built from a 3-D model of the Hubble satellite. Each pixel is assigned a spectral signature from lab data supplied by NASA; 8 materials are used. Spectral bands range from 0.4 μm to 2.5 μm, with 100 evenly distributed spectral values. Spatial blurring, followed by Gaussian and Poisson noise, is applied over the spectral bands.

59 Materials Assigned to Pixels

60 Reconstruction of Blurred & Noisy Data at wavelengths 0.42, 0.68, 1.55, 2.29 μm. Top row: original images. Bottom row: reconstructions. Compression factor:

61 Some Spectral Scan Matching Graphs
Red is the original; blue is the endmember scan matched from the Z-factors. Black rubber edge (8%) is not as well matched. Blurred and noisy data cube.

62 Telescope images taken at the Air Force Maui Space Center of the shuttle Columbia on its final orbit before disintegration on re-entry, Feb. 2003. Below are 3 of 64 video images.

63 Separation by Parts using NTF

64 Biomedical Application using Spectral Imaging & Nonnegative Tensor Factorization
Burn and tissue wound assessment (joint with the WFU Hospital Burn Center). [Images: 1st-, 2nd-, and 3rd-degree burns.]

65 Summary
- Some Personal Thoughts on the Birth of NA; Nonnegativity and Classical Iterative Methods
- Historical Developments in Nonnegative Least Squares (NNLS) Computations, from Lawson and Hanson (1974) to recent work
- Natural Extensions to (Approximate) Nonnegative Matrix and Tensor Factorization (NMF and NTF)
- Algorithms for NMF and NTF from the 1990s to the present
- Survey of Applications: NNLS, NMF, NTF
- Some of our Work: Computations using Data from the AF Maui Space Center & NASA, and a Biomedical Application
