Independent Component Analysis

1 Independent Component Analysis. Seungjin Choi, Department of Computer Science, Pohang University of Science and Technology, Korea.

2 Outline. 1. Theory and Preliminaries for ICA: Model, Theory. 2. Algorithms for ICA: Criteria, Unsupervised Learning Algorithms, Algebraic Algorithms. 3. Beyond ICA. 4. Applications of ICA.

3 Outline (Part 1: Theory and Preliminaries for ICA; Model, Theory).

4 Books on ICA. T.-W. Lee, Independent Component Analysis. S. Haykin, Unsupervised Adaptive Filtering, volumes 1 and 2. A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis. A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing.

5 ICALAB Toolbox. ICALAB is a Matlab toolbox containing various ICA algorithms.

6 What is ICA? ICA is a statistical method whose goal is to decompose multivariate data x ∈ R^n into a linear sum of statistically independent components, i.e., x = s_1 a_1 + s_2 a_2 + ... + s_n a_n = As, where {s_i} are coefficients (sources, latent variables, encoding variables) and {a_i} are basis vectors. Constraint: the coefficients {s_i} are statistically independent. Goal: learn the basis vectors A from data samples {x(1), ..., x(N)} only.
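A minimal sketch of this generative model, assuming NumPy; the Laplacian sources, the dimensions, and the random mixing matrix are illustrative choices, not taken from the slides.

import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 5000                      # number of sources and number of samples

# Statistically independent, non-Gaussian sources (Laplacian, as an example)
S = rng.laplace(size=(n, N))

# Unknown mixing matrix A; its columns are the basis vectors a_i
A = rng.normal(size=(n, n))

# Observed data: each column is x(t) = A s(t)
X = A @ S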

7 ICA vs. PCA. Both are linear transforms used for compression (dimensionality reduction) and classification (feature extraction). PCA: second-order statistics (Gaussian), linear orthogonal transform, optimal coding in the mean-square sense. ICA: higher-order statistics (non-Gaussian), linear non-orthogonal transform, related to projection pursuit (non-Gaussian is interesting), and possibly better features for classification.

8 An Example of PCA vs ICA. (Figure: (a) PCA, (b) ICA.)

9 Two Aspects of ICA. (1) Blind source separation: acoustic source separation (cocktail-party speech recognition), biomedical data analysis (EEG, ECG, MEG, fMRI, PET), digital communications (multiuser detection, blind equalization, MIMO channels). (2) Information representation (e.g., feature extraction): natural sound/image statistics, computer vision (e.g., face recognition/detection), empirical data analysis (stock market returns, gene expression data, etc.), data visualization (lower-dimensional embedding).

10 Blind Source Separation. s → mixing A → x → demixing W → y, with A and s unknown. Mixing: x = As. Demixing: y = Wx.

11 An Example of EEG. (Figure: (c) raw EEG, (d) after ICA.)

12 Transparent Transformation. Given a set of observed data X = [x(1), ..., x(N)] generated from unknown sources s through an unknown linear transform A, i.e., x = As, the task of blind source separation is to restore the sources s by estimating the mixing matrix A. To this end, we construct a demixing matrix W such that the elements of y = Wx are statistically independent. Imposing independence on {y_i} leads to y = WAs = PΛs, where P is a permutation matrix and Λ is a scaling matrix. The transformation PΛ is referred to as a transparent transformation. For example, y_1 = λ_3 s_3, y_2 = λ_1 s_1, y_3 = λ_2 s_2: each output is a scaled copy of one distinct source.
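To make the permutation-and-scaling indeterminacy concrete, a small numeric check continuing the NumPy sketch above; the particular P and Λ are arbitrary choices.

# Continue the sketch above: suppose separation succeeded up to WA = P Lambda
P = np.eye(n)[[2, 0, 1]]            # an arbitrary permutation matrix
Lam = np.diag([0.5, -2.0, 3.0])     # an arbitrary nonsingular scaling
W = P @ Lam @ np.linalg.inv(A)      # one "correct" demixing matrix

Y = W @ X
print(np.allclose(Y, P @ Lam @ S))  # True: sources recovered up to P Lambda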

13 Darmois Theorem. Theorem: suppose that random variables s_1, ..., s_n are mutually independent, and consider two linear combinations y_1 = α_1 s_1 + ... + α_n s_n and y_2 = β_1 s_1 + ... + β_n s_n. If y_1 and y_2 are statistically independent, then α_i β_i ≠ 0 only when s_i is Gaussian. Remark: in other words, assume that at most one of the {s_i} is Gaussian and that the mixing matrix has full column rank; then pairwise independence among the {y_i} implies that WA is a transparent transformation.

14 Outline (Part 2: Algorithms for ICA; Criteria, Unsupervised Learning Algorithms, Algebraic Algorithms).

15 Mutual Information Minimization. Mutual information is the relative entropy between the joint distribution and the product of the marginal distributions, I(y_1, ..., y_n) = ∫ p(y) log [ p(y) / ∏_i p_i(y_i) ] dy = KL[ p(y) || ∏_i p_i(y_i) ], which is always nonnegative and attains its minimum only when the y_i are independent. Note that p(y) = p(x) / |det W|. This leads to the objective function J = −log |det W| − Σ_{i=1}^n log p_i(y_i).
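A sketch of evaluating this objective on data, assuming NumPy; the Laplacian hypothesized marginal and the sample average in place of the expectation are assumptions, not from the slides.

import numpy as np

def ica_objective(W, X):
    """Empirical J(W) = -log|det W| - (1/N) sum_t sum_i log p_i(y_i(t)),
    using a Laplacian hypothesized marginal p_i(y) = exp(-|y|)/2."""
    Y = W @ X
    N = X.shape[1]
    log_p = -np.abs(Y) - np.log(2.0)       # elementwise log p_i(y_i)
    return -np.log(np.abs(np.linalg.det(W))) - log_p.sum() / N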

16 Maximum Likelihood Estimation. Consider a single factor of the log-likelihood, L = log p(x | A, r) = log ∫ p(x | s, A) r(s) ds = −log |det A| + Σ_{i=1}^n log r_i(s_i). Replacing r_i(·) by p_i(·), s_i by y_i, and A by W^{-1}, the negative log-likelihood becomes −L = −log |det W| − Σ_{i=1}^n log p_i(y_i). Hence maximum likelihood estimation coincides with mutual information minimization in the context of ICA.

17 An Information Geometrical View of ICA.

18 More Criteria. Information maximization (Infomax): seeks a linear transform such that the output entropy of f(y) = [f_1(y_1), ..., f_n(y_n)] is maximized; in the case where f_i(·) is the cumulative distribution function of y_i, Infomax = MLE = MMI. Non-Gaussianity maximization: (a) negentropy maximization, where negentropy is defined by J(y) = H(y_G) − H(y) with y_G a Gaussian random vector whose mean and covariance matrix are the same as those of y; (b) kurtosis extremization: maximize kurtosis for super-Gaussian sources and minimize it for sub-Gaussian sources.

19 Learning Algorithms. Gradient descent/ascent; natural gradient (or relative gradient) descent/ascent; conjugate gradient; Newton and quasi-Newton; fixed-point iteration; relative trust-region optimization.

20 Relative Gradient. The conventional gradient uses the first-order approximation J(W + E) ≈ J(W) + tr{ (∇J)^T E } and searches for a direction that minimizes J(W + E) under a norm constraint ||E|| = const. The relative gradient uses the first-order approximation J(W + EW) ≈ J(W) + tr{ (∇J)^T (EW) } = J(W) + tr{ (∇_r J)^T E }, which leads to ∇_r J = ∇J W^T.

21 Natural Gradient. Let S_w = {w ∈ R^n} be a parameter space on which an objective function J(w) is defined. If the coordinate system is non-orthogonal, then ||dw||^2 = Σ_{i,j} g_ij(w) dw_i dw_j, where g_ij(w) is a Riemannian metric. Theorem: the steepest descent direction of J(w) in a Riemannian space is given by ∇_ng J(w) = G^{-1}(w) ∇J(w).

22 Natural Gradient ICA. It turns out that the natural gradient in the context of ICA has the form ∇_ng J(W) = ∇J(W) W^T W. The natural gradient ICA algorithm is of the form W(t+1) = W(t) + η { I − ϕ(y(t)) y^T(t) } W(t), where ϕ(y) = [ϕ_1(y_1), ..., ϕ_n(y_n)]^T and ϕ_i(y_i) = −d log p_i(y_i) / dy_i. It enjoys relatively fast convergence (compared to the conventional gradient) and the equivariance property (uniform performance, regardless of the conditioning of A).
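A rough batch sketch of this update, assuming NumPy; the tanh nonlinearity, step size, iteration count, and batch averaging of the expectation are assumptions rather than the slides' online rule.

import numpy as np

def natural_gradient_ica(X, eta=0.01, n_iter=200, seed=0):
    """Batch natural-gradient ICA: W <- W + eta * (I - phi(Y) Y^T / N) W,
    with phi(y) = tanh(y) as a super-Gaussian hypothesized score function."""
    n, N = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(n) + 0.1 * rng.normal(size=(n, n))
    for _ in range(n_iter):
        Y = W @ X
        G = np.eye(n) - np.tanh(Y) @ Y.T / N   # empirical E[I - phi(y) y^T]
        W = W + eta * G @ W
    return W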

23 Hypothesized Distributions. The ICA algorithm requires p_i(·); hence a hypothesized distribution is used. Super-Gaussian: ϕ_i(y_i) = sign(y_i) or tanh(y_i). Sub-Gaussian: ϕ_i(y_i) = y_i^3. Switching nonlinearity: y_i ± tanh(αy_i). Flexible ICA: generalized Gaussian distribution.
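The choices above written out as plain functions, as a sketch assuming NumPy; the per-source sign selection for the switching nonlinearity is left to the caller.

import numpy as np

# Hypothesized score functions phi_i(y) plugged into the natural-gradient update
def phi_super(y):                  # super-Gaussian sources: sign(y) or tanh(y)
    return np.tanh(y)

def phi_sub(y):                    # sub-Gaussian sources: cubic nonlinearity
    return y ** 3

def phi_switching(y, alpha=1.0, sign=+1):
    # switching nonlinearity y +/- tanh(alpha*y); the sign is chosen per source
    return y + sign * np.tanh(alpha * y)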

24 Generalized Gaussian Distribution. p(y; α) = α / (2 λ Γ(1/α)) exp( −|y/λ|^α ). Note that if α = 1, the distribution becomes the Laplacian distribution; if α = 2, it is the Gaussian distribution. (Figure: p(y) for several values of α.)
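A direct transcription of this density as a function, assuming NumPy and SciPy; the default scale λ = 1 is an arbitrary choice.

import numpy as np
from scipy.special import gamma

def generalized_gaussian_pdf(y, alpha, lam=1.0):
    """p(y; alpha) = alpha / (2 * lam * Gamma(1/alpha)) * exp(-|y/lam|**alpha);
    alpha = 1 gives a Laplacian, alpha = 2 a Gaussian."""
    coef = alpha / (2.0 * lam * gamma(1.0 / alpha))
    return coef * np.exp(-np.abs(y / lam) ** alpha)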

25 Simultaneous Diagonalization. A symmetric matrix R ∈ R^{n×n} is diagonalized as R = UΣU^T. The whitening transformation seeks a linear transformation such that the correlation matrix of z = Vx is the identity matrix, i.e., E[zz^T] = V E[xx^T] V^T = I. The whitening transformation is given by V = Σ^{-1/2} U^T, where E[xx^T] = UΣU^T. Simultaneous diagonalization aims at diagonalizing two symmetric matrices R_1 and R_2 with a single linear transformation: whitening transformation + unitary transformation, or equivalently the generalized eigenvalue problem R_2 U = R_1 U Σ.
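A small whitening sketch along these lines, assuming NumPy and zero-mean samples stored in the columns of X.

import numpy as np

def whiten(X):
    """Return z = V x with V = Sigma^{-1/2} U^T, where E[x x^T] = U Sigma U^T."""
    R0 = X @ X.T / X.shape[1]            # equal-time correlation matrix
    Sigma, U = np.linalg.eigh(R0)        # R0 = U diag(Sigma) U^T
    V = np.diag(1.0 / np.sqrt(Sigma)) @ U.T
    return V @ X, V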

26 Symmetric-Definite Pencils. Definition: the set of all matrices of the form R_2 − λR_1 with λ ∈ R is said to be a pencil. Definition: pencils (R_1, R_2), where R_2 is symmetric and R_1 is symmetric and positive definite, are referred to as symmetric-definite pencils. Theorem: if R_2 − λR_1 is symmetric-definite, then there exists a nonsingular matrix U = [u_1, ..., u_n] such that U^T R_1 U = diag{γ_1(τ_1), ..., γ_n(τ_1)} and U^T R_2 U = diag{γ_1(τ_2), ..., γ_n(τ_2)}. Moreover, R_2 u_i = λ_i R_1 u_i for i = 1, ..., n, with λ_i = γ_i(τ_2) / γ_i(τ_1).

27 Fundamental Theorem. Theorem: let Λ_1, D_1 ∈ R^{n×n} be diagonal matrices with positive diagonal entries and Λ_2, D_2 ∈ R^{n×n} be diagonal matrices with nonzero diagonal entries. Suppose that G ∈ R^{n×n} satisfies the decompositions D_1 = GΛ_1 G^T and D_2 = GΛ_2 G^T. Then G is a generalized permutation matrix, i.e., G = PΛ, provided that D_1^{-1} D_2 and Λ_1^{-1} Λ_2 have distinct diagonal entries.

28 AMUSE. A second-order method that exploits a time-delayed correlation matrix (temporal structure) as well as an equal-time correlation matrix (whitening transformation). Whitening transformation: compute the equal-time correlation matrix R_x(0) = (1/N) Σ_{t=1}^N x(t) x^T(t) and symmetrize it, M_x(0) = (1/2){ R_x(0) + R_x^T(0) }; do the spectral decomposition R_x(0) = UΣU^T; the whitening transformation is then z = Σ^{-1/2} U^T x. Unitary transformation: find a unitary transformation y = V^T z that diagonalizes E[z(t) z^T(t − τ)] = VΛV^T. The demixing matrix is given by W = V^T Σ^{-1/2} U^T.
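A compact AMUSE sketch following these two steps, assuming NumPy; the default delay τ = 1 and the mean removal are assumptions.

import numpy as np

def amuse(X, tau=1):
    """AMUSE: whiten with the equal-time correlation, then diagonalize a
    symmetrized time-delayed correlation of the whitened data."""
    Xc = X - X.mean(axis=1, keepdims=True)
    N = Xc.shape[1]
    R0 = Xc @ Xc.T / N
    Sigma, U = np.linalg.eigh(R0)
    V = np.diag(1.0 / np.sqrt(Sigma)) @ U.T      # whitening: z = V x
    Z = V @ Xc
    Rtau = Z[:, :-tau] @ Z[:, tau:].T / (N - tau)
    M = 0.5 * (Rtau + Rtau.T)                    # symmetrized delayed correlation
    _, Q = np.linalg.eigh(M)                     # unitary transform y = Q^T z
    return Q.T @ V                               # demixing matrix W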

29 Source Separation via Matrix Pencil. Note that simultaneous diagonalization is solved by the generalized eigenvalue problem. Compute the two symmetric correlation matrices M_x(0) and M_x(τ); note that (M_x(0), M_x(τ)) is a symmetric-definite pencil. Find the generalized eigenvector matrix V of the pencil M_x(τ) − λM_x(0), which satisfies M_x(τ)V = M_x(0)VΛ. The demixing matrix is given by W = V^T.
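The same idea as a one-call sketch via a generalized symmetric eigensolver, assuming NumPy/SciPy; the default delay and the symmetrization of M_x(τ) are assumptions.

import numpy as np
from scipy.linalg import eigh

def pencil_separation(X, tau=1):
    """Separation via the symmetric-definite pencil (M_x(0), M_x(tau)):
    solve M_x(tau) v = lambda M_x(0) v and take W = V^T."""
    Xc = X - X.mean(axis=1, keepdims=True)
    N = Xc.shape[1]
    M0 = Xc @ Xc.T / N                                   # M_x(0), positive definite
    Rtau = Xc[:, :-tau] @ Xc[:, tau:].T / (N - tau)
    Mtau = 0.5 * (Rtau + Rtau.T)                         # symmetrized M_x(tau)
    _, V = eigh(Mtau, M0)                                # generalized eigenvectors
    return V.T                                           # demixing matrix W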

30 SOBI. A second-order method that exploits multiple time-delayed correlation matrices. It seeks a linear transformation W that jointly diagonalizes multiple time-delayed correlation matrices (joint approximate diagonalization). Whitening transformation: z = Σ^{-1/2} U^T x, where R_x(0) = UΣU^T. Unitary transformation: find a unitary joint diagonalizer V of {M_z(τ_j)} satisfying V^T M_z(τ_j) V = Λ_j, where {Λ_j} is a set of diagonal matrices. The demixing matrix is given by W = V^T Σ^{-1/2} U^T.

31 More Algebraic Algorithms. FOBI: 4th-order moment matrix. JADE: slices of 4th-order cumulant matrices. SEONS: a generalization of SOBI that exploits quasi-nonstationarity. And more.

32 Outline (Part 3: Beyond ICA).

33 Variations and Extensions of ICA: spatial, temporal, and spatiotemporal ICA; independent subspace analysis (ISA); topographic ICA (TICA); nonnegative matrix factorization (NMF).

34 Independent Subspace Analysis (ISA). Multidimensional ICA + invariant feature subspaces. L(W, X) = (1/N) Σ_{t=1}^N Σ_{j=1}^J log p( Σ_{i∈F_j} (w_i^T x(t))^2 ) + log |det W|.

35 Pooling in Complex Cells.

36 Topographic ICA. s_i = σ_i z_i = φ( Σ_j h(i, j) u_j ) z_i. A further extension of ISA, incorporating a topographic representation; dependencies between nearby components are modelled by higher-order correlations.

37 Nonnegative Matrix Factorization (NMF): parts-based representation.

38 Algorithm for NMF. Find a factorization X ≈ AS subject to A_ij ≥ 0 and S_ij ≥ 0. For example, we seek a factorization that minimizes the I-divergence, defined by E = Σ_{i,j} [ X_ij log( X_ij / (AS)_ij ) − X_ij + (AS)_ij ]. A multiplicative algorithm is given by S_ij ← S_ij Σ_k [ A_ki X_kj / (AS)_kj ] / Σ_l A_li and A_ij ← A_ij Σ_k [ S_jk X_ik / (AS)_ik ] / Σ_l S_jl.
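A sketch of these multiplicative updates in matrix form, assuming NumPy; the rank, iteration count, random initialization, and the small eps guard against division by zero are assumptions.

import numpy as np

def nmf_idivergence(X, r, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates minimizing the I-divergence between X and AS,
    in the style of the rule quoted on the slide, for nonnegative X."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    A = rng.random((m, r)) + eps
    S = rng.random((r, n)) + eps
    for _ in range(n_iter):
        AS = A @ S + eps
        S *= (A.T @ (X / AS)) / A.sum(axis=0, keepdims=True).T
        AS = A @ S + eps
        A *= ((X / AS) @ S.T) / S.sum(axis=1, keepdims=True).T
    return A, S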

39 Outline (Part 4: Applications of ICA).

40 Applications. Computational/computer vision: natural image statistics, face recognition. Speech/language: audio source separation, semantic structure of language. Bioinformatics: gene expression data analysis. Dynamic PET image analysis.

41 Compact Coding

42 Sparse Coding

43 Learn Statistical Structure of Natural Scenes

44 Examples of Natural Images

45 Learned Basis Images: PCA

46 Learned Basis Images: ICA

47 Learned Basis Images: ISA

48 Learned Basis Images: Topographic ICA

49 Eigenfaces

50 Factorial Faces

51 AR Face Database

52 Eigenfaces vs Factorial Faces

53 Performance Comparison

54 Multiple View Images

55 Learned Face Basis Images: ISA

56 Learned Face Basis Images: Topographic ICA

57 Audio Source Separation

58 ST-NB95 Database

59 Word Error Rate

60 Emergence of Linguistic Features and ICA. A collection of messages sent to the connectionists mailing list. One hundred common words were manually selected, and the contextual information was calculated using the 2000 most common types. A context matrix C was formed, where C_ij represents the number of occurrences of the jth word in the immediate context of the ith word.

61 Example 1: ICA Features of Contextual Data. Figure: ICA features for neuroscience and psychology.

62 Example 2: ICA Features of Contextual Data. Figure: ICA features for models and problems.

63 Example 3: ICA Features of Contextual Data. Figure: ICA features for my, his, our, and their.

64 Example 4: ICA Features of Contextual Data. Figure: ICA features for will, can, may, and must.

65 Modern Biologists. We need a different kind of tool!

66 Gene Expression Data

67 Data Preparation

68 Why Linear Models for Gene Expression Data Analysis?

69 Eigengenes and Eigenarrays

70 Spatial ICA

71 Temporal ICA

72 Performance Comparison: Gene Clustering. (Figure: log10 p-values for clusters obtained with tICA, stICA, sICA, and PCA.)

73 Temporal Modes

74 Positron Emission Tomography

75 What can PET do for the heart? Quantify the extent of heart disease; calculate myocardial blood flow or metabolism quantitatively.

76 Dynamic PET Image Acquisition

77 Independent Components of Heart PET Images

78 ICs vs Arterial Sampling
