CIFAR Lectures: Non-Gaussian statistics and natural images
1 CIFAR Lectures: Non-Gaussian statistics and natural images. Aapo Hyvärinen, Dept of Computer Science, University of Helsinki, Finland
2 Outline. Part I: Theory of ICA: definition and difference to PCA; importance of non-gaussianity. Part II: Natural images and ICA: application of ICA and sparse coding to natural images; extensions of ICA with dependent components. Part III: Estimation of unnormalized models: motivation by extensions of ICA; score matching; noise-contrastive estimation. Part IV: Recent extensions of ICA and natural image statistics: a three-layer model, towards deep learning.
3 Part I: Theory of ICA. Definition of ICA as a non-gaussian generative model; importance of non-gaussianity; fundamental difference to PCA; estimation by maximization of non-gaussianity; measures of non-gaussianity.
4 Problem of blind source separation. There are a number of source signals. Due to some external circumstances, only linear mixtures of the source signals are observed. Task: estimate (separate) the original signals!
5 A solution is possible. PCA does not recover the original signals.
6 A solution is possible. PCA does not recover the original signals. Use information on statistical independence to recover them.
7 Independent Component Analysis (Hérault and Jutten). Observed random variables x_i are modelled as linear sums of hidden variables: x_i = Σ_{j=1}^m a_ij s_j, i = 1,...,n (1). This is a mathematical formulation of the blind source separation problem, not unlike factor analysis. The matrix of the a_ij is the parameter matrix, called the mixing matrix. The s_j are hidden random variables called independent components, or source signals. Problem: estimate both the a_ij and the s_j, observing only the x_i.
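As a concrete illustration of eq. (1), here is a minimal numpy sketch (not from the lecture): the two sources, the mixing matrix A, and all the numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000

# Two nongaussian, unit-variance sources: uniform and Laplacian.
s = np.vstack([rng.uniform(-np.sqrt(3), np.sqrt(3), T),
               rng.laplace(0.0, 1.0 / np.sqrt(2), T)])

# A made-up mixing matrix; in a real BSS problem A is unknown.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])

x = A @ s   # observed mixtures: x_i = sum_j a_ij s_j, as in eq. (1)
```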
8 When can the ICA model be estimated? We must assume: the s_i are mutually statistically independent; the s_i are nongaussian (non-normal); (optional) the number of independent components equals the number of observed variables. Then the mixing matrix and the components can be identified (Comon, 1994). A very surprising result!
9 Reminder: Principal component analysis. Basic idea: find directions Σ_i w_i x_i of maximum variance. We must constrain the norm of w, Σ_i w_i^2 = 1, otherwise the solution is that the w_i are infinite. For more than one component, find the direction of maximum variance orthogonal to the components previously found. Classic factor analysis has essentially the same idea as PCA: explain maximal variance with a limited number of components.
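A minimal numpy sketch of this procedure, assuming the standard result that the constrained maxima are the eigenvectors of the covariance matrix; the function name is illustrative.

```python
import numpy as np

def pca(x):
    """Principal directions of x (variables x samples): unit-norm vectors w
    maximizing the variance of w^T x, each orthogonal to the previous ones."""
    xc = x - x.mean(axis=1, keepdims=True)     # center each variable
    cov = xc @ xc.T / xc.shape[1]              # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]           # sort by explained variance
    return eigvec[:, order], eigval[order]
```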
10 Comparison of ICA, factor analysis and principal component analysis. ICA is nongaussian FA with no noise or specific factors, and with so many components that all variance is explained by them. No factor rotation is left unknown, because of the identifiability result. In contrast to FA and PCA, the components really give the original source signals or underlying hidden variables. Catch: this only works when the components are nongaussian. Many psychological hidden variables (e.g. intelligence) may be practically gaussian because they are sums of many independent variables (central limit theorem), but signals measured by sensors are usually quite nongaussian.
11 Some examples of nongaussianity.
12 Why classic methods cannot find the original components or sources. In PCA and FA: find components y_i which are uncorrelated, cov(y_i, y_j) = E{y_i y_j} - E{y_i}E{y_j} = 0 (2), and maximize the explained variance (or the variance of the components). Such methods need only the covariances cov(x_i, x_j). However, there are many different component sets that are uncorrelated, because the number of covariances is only about n^2/2 due to symmetry. So we cannot solve for the n^2 factor loadings: not enough information! ("More variables than equations.")
13 Nongaussianity, with independence, gives more information. For independent variables we have E{h_1(y_1) h_2(y_2)} - E{h_1(y_1)}E{h_2(y_2)} = 0 for any functions h_1, h_2 (3). For nongaussian variables, such nonlinear covariances give more information than the covariances alone. This is not true for the multivariate gaussian distribution: it is completely determined by the covariances; uncorrelated gaussian variables are independent; and the standardized distribution is the same in all directions (see below). Hence the ICA model cannot be estimated for gaussian data.
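A small numerical check of eq. (3): the two variables below are uncorrelated, so ordinary covariances cannot distinguish them from independent ones, but a nonlinear covariance (here with h_1 = h_2 = square) immediately reveals the dependence. The construction is invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.laplace(size=100_000)
eps = rng.choice([-1.0, 1.0], size=100_000)   # independent random sign

y1, y2 = u, eps * u            # uncorrelated, but clearly dependent
print(np.mean(y1 * y2) - np.mean(y1) * np.mean(y2))   # ~0: covariance is blind

h1, h2 = y1**2, y2**2          # a nonlinear covariance as in eq. (3)
print(np.mean(h1 * h2) - np.mean(h1) * np.mean(h2))   # clearly non-zero
```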
14 Illustration. Two components with uniform distributions: original components, observed mixtures, PCA, ICA. PCA does not find the original coordinates, but ICA does!
15 Illustration of the problem with gaussian distributions: original components, observed mixtures, PCA. The distribution after PCA is the same as the distribution before mixing! This is the factor rotation problem of classic FA.
16 Basic intuitive principle of ICA estimation. Inspired by the central limit theorem: an average of many independent random variables has a distribution that is close(r) to gaussian; in the limit of an infinite number of random variables, the distribution tends to a gaussian. Consider a linear combination Σ_i w_i x_i; substituting the model, it equals Σ_i q_i s_i for some coefficients q_i. Because of the theorem, Σ_i q_i s_i should be more gaussian than the individual s_i. So by maximizing the nongaussianity of Σ_i w_i x_i, we can find the s_i. Also known as projection pursuit. Cf. principal component analysis, which maximizes the variance of Σ_i w_i x_i.
17 Illustration of changes in nongaussianity. Histogram and scatterplot of the original uniform distributions; histogram and scatterplot of the mixtures given by PCA.
18 Sparsity is the dominant form of non-gaussianity. In natural signals, the fundamental non-gaussianity is sparsity. Sparsity = the probability density has heavy tails and a peak at zero (figure: gaussian vs. sparse pdf). (Another form of non-gaussianity is skewness, or asymmetry.)
19 Kurtosis as a nongaussianity measure. Problem: how to measure nongaussianity (sparsity)? Definition: kurt(x) = E{x^4} - 3(E{x^2})^2 (4). If the variance is constrained to unity, this is essentially the 4th moment. Simple algebraic properties because it is a cumulant: kurt(s_1 + s_2) = kurt(s_1) + kurt(s_2) for independent s_1, s_2 (5), and kurt(αs_1) = α^4 kurt(s_1) (6). Zero for a gaussian RV, non-zero for most nongaussian RVs. Positive vs. negative kurtosis correspond to typical forms of the pdf. The variance must be constrained to measure non-gaussianity.
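A short numpy check of these properties (the helper kurt and the example distributions are mine): kurtosis is positive for a sparse source, negative for a uniform one, near zero for a gaussian, and smaller in absolute value for a mixture, illustrating the central-limit-theorem argument of slide 16.

```python
import numpy as np

def kurt(x):
    """Kurtosis of eq. (4), E{x^4} - 3 (E{x^2})^2, on centered data."""
    x = x - x.mean()
    return np.mean(x**4) - 3 * np.mean(x**2) ** 2

rng = np.random.default_rng(0)
lap = rng.laplace(size=100_000)          # supergaussian: kurt > 0 (about 3)
uni = rng.uniform(-1, 1, size=100_000)   # subgaussian: kurt < 0 (about -1.2)
gau = rng.normal(size=100_000)           # gaussian: kurt roughly 0

# Central limit theorem at work: a mixture of the two sources is closer
# to gaussian (smaller |kurt|) than either source alone.
mix = (lap / lap.std() + uni / uni.std()) / np.sqrt(2)
print(kurt(lap), kurt(uni), kurt(gau), kurt(mix))
```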
20 Illustration of positive and negative kurtosis. Left: Laplacian pdf, positive kurtosis ("supergaussian"). Right: uniform pdf, negative kurtosis ("subgaussian").
21 Why kurtosis is not optimal. It is sensitive to outliers: consider a sample of 1000 values with unit variance, and one value equal to 10. The kurtosis then equals at least 10^4/1000 - 3 = 7. For supergaussian variables, statistical performance is not optimal even without outliers. Other measures of nongaussianity should be considered.
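The outlier example can be reproduced in a few lines (a sketch with an invented sample):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
x[0] = 10.0                        # a single outlier among 1000 values
x = (x - x.mean()) / x.std()       # re-standardize to unit variance
print(np.mean(x**4) - 3)           # roughly 10**4 / 1000 - 3 = 7
```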
22 Differential entropy as a nongaussianity measure. A generalization of the ordinary discrete Shannon entropy: H(x) = -E{log p(x)} (7). For fixed variance, it is maximized by the gaussian distribution. Often normalized to give the negentropy J(x) = H(x_gauss) - H(x) (8). Good statistical properties, but computationally difficult.
23 Approximation of negentropy. Approximations of negentropy (Hyvärinen, 1998): J_G(x) = (E{G(x)} - E{G(x_gauss)})^2 (9), where G is a nonquadratic function. A generalization of (the square of) kurtosis, which corresponds to G(x) = x^4. A good compromise? The statistical properties are not bad (for a suitable choice of G), and it is computationally simple. Further possibility: skewness (for nonsymmetric ICs).
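A sketch of eq. (9) with the common choice G(x) = log cosh(x); the gaussian expectation E{G(x_gauss)} is simply estimated by sampling here, and the data are assumed standardized.

```python
import numpy as np

rng = np.random.default_rng(0)

# E{G(x_gauss)} for G(x) = log cosh(x), estimated once by sampling
# (the exact constant is about 0.37 for a standard gaussian).
EG_GAUSS = np.mean(np.log(np.cosh(rng.normal(size=1_000_000))))

def negentropy_approx(x):
    """J_G(x) of eq. (9) with G(x) = log cosh(x); x standardized."""
    return (np.mean(np.log(np.cosh(x))) - EG_GAUSS) ** 2

z = rng.laplace(size=100_000)
z = (z - z.mean()) / z.std()
print(negentropy_approx(z))                         # clearly > 0: nongaussian
print(negentropy_approx(rng.normal(size=100_000)))  # near 0: gaussian
```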
24 Basic ICA estimation procedure. 1. Whiten the data to give z. 2. Set the iteration count i = 1. 3. Take a random vector w_i. 4. Maximize the nongaussianity of w_i^T z, under the constraints ||w_i|| = 1 and w_i^T w_j = 0 for j < i. 5. Increment i by 1 and go back to step 3. Alternatively: maximize over all the w_i in parallel, keeping them orthogonal.
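A minimal numpy sketch of this deflation procedure. For step 4 it uses the FastICA fixed-point update with g = tanh (Hyvärinen, 1999) as the nongaussianity-maximization step; the function names and parameter values are illustrative.

```python
import numpy as np

def whiten(x):
    """Whiten data (variables x samples): zero mean, identity covariance."""
    xc = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(xc))
    return (E @ np.diag(d ** -0.5) @ E.T) @ xc

def ica_deflation(x, n_components, n_iter=200, seed=0):
    """Steps 1-5 above: whiten, then extract components one at a time,
    maximizing nongaussianity under unit-norm and orthogonality constraints.
    Step 4 uses the FastICA fixed-point update with g = tanh."""
    z = whiten(x)
    rng = np.random.default_rng(seed)
    W = np.zeros((n_components, z.shape[0]))
    for i in range(n_components):
        w = rng.normal(size=z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            y = np.tanh(w @ z)
            w_new = (z * y).mean(axis=1) - (1 - y**2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)   # keep orthogonal to earlier w_j
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-9
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ z, W   # estimated sources; unmixing matrix in whitened space
```

Applied to the mixtures x of the sketch after slide 7, ica_deflation(x, 2) recovers the two sources up to order, sign and scale.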
25 Development of ICA algorithms. Nongaussianity measure, the essential ingredient: kurtosis (global consistency, but nonrobust); differential entropy (statistically justified, but difficult to compute; essentially the same as likelihood (Pham et al., 1992/97) or infomax (Bell and Sejnowski, 1995)); rough approximations of entropy (a compromise). Optimization methods: gradient methods (e.g. natural gradient; Amari et al., 1996); fast fixed-point algorithm, FastICA (Hyvärinen, 1999).
26 Conclusion: Theory of ICA. ICA is a non-gaussian factor analysis. Basic principle: maximize the non-gaussianity of the components (really very different from PCA, which maximizes the variance of the components). Sparsity is a form of non-gaussianity prevalent in natural signals. Measures of non-gaussianity are crucial: kurtosis vs. differential entropy.
27 Part II: Natural images and ICA. Natural images have statistical regularities, and statistical models show optimal processing. The basic model is independent component analysis. The components are not really independent: a need and an opportunity to model their dependencies. Instead of nongaussianity we could use temporal correlations. A unifying framework: bubbles.
28 Linear statistical models of images: image = s_1 · (feature 1) + s_2 · (feature 2) + ... + s_k · (feature k). Each image (patch) is a linear sum of basis vectors (features). What are the best basis vectors for natural images?
29 The visual cortex of the brain: the visual pathway runs from the retina through the LGN to the primary visual cortex V1. Figure: receptive field of a simple cell in V1.
30 Sparse coding. Sparse coding means: for a random vector x, find a linear representation x = As (10) such that the components s_i are as sparse (= supergaussian) as possible. Important property: a given data point is represented using only a limited number of active (clearly non-zero) components s_i. In contrast to PCA, the active components change from one image patch to another. Cf. the vocabulary of a language, which can describe many different things by combining a small number of active words. Sparse coding maximizes non-gaussianity, and is therefore like ICA!
31 ICA / sparse coding of natural images (Olshausen and Field, 1996; Bell and Sejnowski, 1997). The features are similar to wavelets, Gabor functions, simple cells.
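One way to reproduce this classic experiment with off-the-shelf tools (a sketch, not the original authors' code): patches are sampled from scipy's bundled "face" photo for convenience, and the patch size and number of components are arbitrary choices.

```python
import numpy as np
from scipy import datasets
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d

# Sample 16x16 patches from a grayscale photo (scipy >= 1.10; older
# versions expose the same image as scipy.misc.face).
img = datasets.face(gray=True).astype(float)
patches = extract_patches_2d(img, (16, 16), max_patches=20_000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)   # remove the mean (DC) of each patch

ica = FastICA(n_components=64, whiten="unit-variance", max_iter=1000,
              random_state=0)
ica.fit(X)
filters = ica.components_   # each row reshapes to a 16x16 Gabor-like feature
```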
32 Dependence of independent components. Components estimated from natural images are not really independent. Next, we model some of the dependencies: independent subspaces and topographic ICA.
33 Correlation of squares. What kind of dependence remains between the components? Answer: the squares s_i^2 and s_j^2 are correlated (inside a subspace): dependence through variances. Similar to the models by Simoncelli et al. on wavelet coefficients, and by Valpola et al. on variance sources. Figure: two signals that are uncorrelated but whose squares are correlated.
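A small simulation of this kind of dependence (the construction, a common variance variable multiplying two gaussian signals, is invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.abs(rng.normal(size=100_000)) + 0.1    # common "variance" signal
s1 = v * rng.normal(size=100_000)             # two signals modulated by v
s2 = v * rng.normal(size=100_000)

print(np.corrcoef(s1, s2)[0, 1])              # ~0: the signals are uncorrelated
print(np.corrcoef(s1**2, s2**2)[0, 1])        # clearly positive: squares correlate
```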
34 Grouping components (Cardoso, 1998; Hyvärinen and Hoyer, 2000). Assumption: the s_i can be divided into groups (subspaces), such that the s_i in the same group are dependent on each other, while dependencies between different groups are not allowed. We also need to specify the distributions inside the groups: spherically symmetric inside each subspace. Invariant features are given by the norms of the projections on the subspaces.
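A sketch of the invariant features, assuming an estimated filter matrix W and equal-sized groups (both the function name and the group size are illustrative):

```python
import numpy as np

def subspace_features(W, x, group_size=4):
    """Invariant features of independent subspace analysis: the norm of the
    projection on each subspace, sqrt( sum over the group of (w_i^T x)^2 ).
    W: (n, d) estimated filters; x: (d, T) data."""
    y = W @ x                                          # all linear filter outputs
    n_groups = y.shape[0] // group_size
    y = y[: n_groups * group_size].reshape(n_groups, group_size, -1)
    return np.sqrt((y**2).sum(axis=1))                 # shape (n_groups, T)
```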
35 Independent subspaces of natural images. Emergence of phase-invariance, as in complex cells in V1.
36 Topographic ICA (Hyvärinen, Hoyer and Inki, 2001). Components are arranged on a two-dimensional lattice, and the statistical dependency follows the topography: the squares s_i^2 are correlated for near-by components. Each local region is like a subspace.
37 Topographic ICA on natural images. The topography is similar to what is found in the cortex.
38 Temporally coherent components (Hurri and Hyvärinen, 2003). In image sequences (video) we can look at the temporal correlations: an alternative to nongaussianity. Linear correlations give only Fourier-like receptive fields, so we proposed temporal correlations of squares. Similar to source separation using nonstationary variance (Matsuoka et al., 1995).
39 Temporally coherent features on natural image sequences. The features are similar to those obtained by ICA.
40 Bubbles: a unifying framework. Correlation of squares both over time and over components: spatiotemporal modulating variance variables. A simple approximation of the likelihood can be obtained as in topographic ICA: Σ_{t=1}^T Σ_{j=1}^n G( Σ_{i=1}^n Σ_τ h(i,j,τ) (w_i^T x(t-τ))^2 ) + T log |det W| (11), where h(i,j,τ) is a neighbourhood function and G a nonlinear function.
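A direct, unoptimized transcription of objective (11) into numpy (a sketch under this reading of the formula; the sqrt-type choice of G is only one possibility, and h is assumed given as an array over component pairs and time lags):

```python
import numpy as np

def bubbles_objective(W, X, h, G=lambda u: -np.sqrt(u + 1e-8)):
    """Objective (11). W: (n, n) filters; X: (n, T) data; h: (n, n, L)
    neighbourhood weights over components i, j and time lags tau;
    G: nonlinearity (a sqrt-type G is one sparsity-inducing choice)."""
    n, T = X.shape
    L = h.shape[2]
    Y2 = (W @ X) ** 2                    # squared filter outputs (w_i^T x(t))^2
    total = 0.0
    for t in range(L, T):
        lagged = Y2[:, t - np.arange(L)]              # (n, L): lags 0..L-1
        e = np.einsum("ijl,il->j", h, lagged)         # pool over i and tau
        total += G(e).sum()
    return total + T * np.log(np.abs(np.linalg.det(W)))
```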
41 Illustration of four types of representation: sparse, sparse topographic, sparse temporally coherent, and bubbles. (Each panel plots position of filter against time.)
42 Conclusion: Natural images and ICA. ICA is a non-gaussian factor analysis; the basic principle is to maximize the non-gaussianity of the components, and the measures of non-gaussianity are crucial (kurtosis vs. differential entropy). ICA and related models show optimal features for natural images. ICA models basic linear features; independent subspaces and topographic ICA model basic dependencies or nonlinearities. Temporal coherence is an alternative approach, leading to bubbles.
More information