Independent Component Analysis and Unsupervised Learning
Jen-Tzung Chien
National Cheng Kung University
TABLE OF CONTENTS
1. Independent Component Analysis
2. Case Study I: Speech Recognition
   Independent voices
   Nonparametric likelihood ratio ICA
3. Case Study II: Blind Source Separation
   Convex divergence ICA
   Nonstationary Bayesian ICA
   Online Gaussian process ICA
4. Summary
Introduction
Independent component analysis (ICA) is essential for blind source separation. ICA is applied to separate mixed signals and find the independent components. The demixed components can be grouped into clusters in which the intra-cluster elements are dependent and the inter-cluster elements are independent. ICA provides an unsupervised learning approach to acoustic modeling, signal separation and many other tasks.
Blind Source Separation
The cocktail-party problem: the mixing matrix A and the sources s are unknown. The goal is to reconstruct the source signals via a demixing matrix W. The mixing matrix A is assumed to be fixed.
Independent Component Analysis
Three assumptions:
   the sources are statistically independent
   the independent components have nongaussian distributions
   the mixing matrix is a square matrix
\[
\begin{bmatrix} x_{11} & \cdots & x_{1t} \\ \vdots & \ddots & \vdots \\ x_{m1} & \cdots & x_{mt} \end{bmatrix}
=
\begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mm} \end{bmatrix}
\begin{bmatrix} s_{11} & \cdots & s_{1t} \\ \vdots & \ddots & \vdots \\ s_{m1} & \cdots & s_{mt} \end{bmatrix},
\qquad \mathbf{x} = \mathbf{A}\mathbf{s}
\]
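As a concrete illustration of the instantaneous mixing model x = As, the following minimal sketch generates two independent nongaussian sources and mixes them with a square mixing matrix. The specific distributions and matrix values are arbitrary choices for illustration, not the lecture's data.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10000

# Two statistically independent, nongaussian sources (assumptions 1 and 2).
s1 = rng.uniform(-1.0, 1.0, T)        # sub-Gaussian source
s2 = rng.laplace(0.0, 1.0, T)         # super-Gaussian source
S = np.vstack([s1, s2])               # (m, T) source matrix

# Square mixing matrix (assumption 3); values are arbitrary for illustration.
A = np.array([[0.8, 0.3],
              [0.4, 0.9]])

X = A @ S                             # observed mixtures: x = A s
```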
ICA Objective Function
Common objective functions for ICA: maximum likelihood, mutual information, kurtosis, and negentropy.
ICA Learning Rule
The ICA demixing matrix can be estimated by optimizing an objective function via the gradient descent algorithm or the natural gradient algorithm.
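As a sketch of how such a learning rule looks in practice, here is a natural-gradient update with a tanh score function, a common choice suited to super-Gaussian sources. The slide does not specify a particular nonlinearity, learning rate, or initialization, so those details are illustrative assumptions.

```python
import numpy as np

def natural_gradient_ica(X, lr=0.01, n_iter=200, seed=0):
    """Estimate a demixing matrix W by natural-gradient updates.

    X: (m, T) array of zero-mean mixed signals.
    Uses the tanh score function, suited to super-Gaussian sources.
    """
    m, T = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(m) + 0.1 * rng.standard_normal((m, m))
    for _ in range(n_iter):
        Y = W @ X                        # current source estimates
        phi = np.tanh(Y)                 # score function (nonlinearity)
        # Natural-gradient update: dW = (I - E[phi(y) y^T]) W
        dW = (np.eye(m) - phi @ Y.T / T) @ W
        W += lr * dW
    return W
```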
Case Study I: Speech Recognition
ICA for Speech Recognition
Mismatch between training and test data always exists, so adaptation of HMM parameters is important.
Eigenvoice (PCA) versus independent voice (ICA):
   PCA performs a linear de-correlation process (uncorrelation): E[e_1 e_2 ... e_M] = E[e_1] E[e_2] ... E[e_M]
   ICA extracts the higher-order statistics (higher-order correlations are zero): E[s_1^r s_2^r ... s_M^r] = E[s_1^r] E[s_2^r] ... E[s_M^r]
Sparseness & Information Redundancy
The degree of sparseness in the distribution of the transformed signals is proportional to the amount of information conveyed by the transformation.
Sparseness is measured by the fourth-order statistic (kurtosis), a measure of nongaussianity:
\[
\mathrm{kurt}(s) = E[s^4] - 3\,E^2[s^2]
\]
Information redundancy reduction using ICA is higher than that using PCA.
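A small sketch of the kurtosis measure defined above, computed from sample moments. The uniform and Laplacian draws are illustrative examples of sub- and super-Gaussian signals, not the lecture's data.

```python
import numpy as np

def kurtosis(s):
    """kurt(s) = E[s^4] - 3 (E[s^2])^2; approximately zero for a Gaussian signal."""
    s = np.asarray(s, dtype=float)
    return np.mean(s ** 4) - 3.0 * np.mean(s ** 2) ** 2

rng = np.random.default_rng(1)
print(kurtosis(rng.uniform(-1, 1, 100000)))   # negative: sub-Gaussian
print(kurtosis(rng.laplace(0, 1, 100000)))    # positive: super-Gaussian
```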
Eigenvoices versus Independent Voices
[Diagram: reference models 1-3 are combined with adaptation data to produce an eigenvoice-adapted model and an independent-voice-adapted model]
Evaluation of Kurtosis
[Plot of kurtosis versus voice index, comparing independent voices with eigenvoices]
Word Error Rates on Aurora2
[Bar chart of word error rates (%) for no adaptation, eigenvoice, and independent voice adaptation with L = 5, 10, 15 adaptation sentences and K = 10, 15 components; K: number of components, L: number of adaptation sentences]
Nonparametric Likelihood Ratio ICA
Test of Independence
Given the demixed signals, the null hypothesis (the components are independent) and the alternative hypothesis (the components are dependent) are defined. If y is Gaussian distributed, this amounts to testing whether the correlation between two demixed components is equal to zero.
Likelihood Ratio
The likelihood ratio (LR) serves as the test statistic which measures the confidence of the null hypothesis against the alternative. The LR is a measure of independence and can act as an objective function for finding the ICA demixing matrix. However, Gaussianity cannot be assumed in the ICA problem.
Nonparametric Approach
Each sample is transformed by the demixing matrix. Instead of assuming Gaussianity, we apply kernel density estimation using a Gaussian kernel, with the kernel centroids given by the transformed samples.
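A minimal univariate sketch of a Gaussian-kernel (Parzen window) density estimate of this kind. The bandwidth h and the use of the transformed samples themselves as kernel centers are illustrative assumptions; the lecture's multivariate formulation is not reproduced here.

```python
import numpy as np

def parzen_gaussian(query, centers, h):
    """Parzen window density estimate with a Gaussian kernel of bandwidth h.

    query: points at which to evaluate p(y); centers: observed samples y_t.
    """
    q = np.atleast_1d(query)[:, None]            # shape (Q, 1)
    c = np.asarray(centers)[None, :]             # shape (1, T)
    k = np.exp(-0.5 * ((q - c) / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)
    return k.mean(axis=1)                        # average kernel response over centers
```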
Nonparametric Likelihood Ratio
The NLR objective function is formed from these kernel density estimates with a multivariate Gaussian kernel.
ICA Learning Procedure
Parameter initialization, centering and whitening, NLR-ICA learning (the log-likelihood ratio for the null and alternative hypotheses is maximized with respect to the demixing matrix), a stopping criterion, and output of W.
System overview: training data is Viterbi-aligned, segment-based supervectors X are collected for each subword unit, NLR-ICA transforms X into Y, and K-means clustering yields clusters 1 to M; a hidden Markov model is trained per cluster (HMM 1 to HMM M with parameters θ1 to θM). At recognition time, test data is decoded by the speech recognizer using the lexicon and the cluster HMMs to produce the recognition result.
Segment-Based Supervector
Each aligned utterance contributes segment vectors x_{s1}, x_{s2}, ..., x_{sN}; collecting the segments over all aligned utterances forms the segment-based supervector matrix X = [x_1 x_2 ... x_{T-1} x_T].
Syllable Error Rates
Continuous Mandarin speech recognition: 7080 training utterances (40 males and 40 females), 1000 test utterances (10 males and 10 females), context-dependent subsyllable HMM modeling, and at most four clusters per subword unit.
Syllable error rates (SER):
   Without clustering: 38.9%
   Clustering with no ICA: 36.5%
   Clustering with MMI-ICA: 33.6%
   Clustering with NLR-ICA: 31.4%
Case Study II: Blind Source Separation
ICA Objective Function
Objective functions for ICA include maximum likelihood, divergence measures, kurtosis, and negentropy. The divergence measures include the α-divergence, Kullback-Leibler (KL) divergence, Euclidean divergence, Cauchy-Schwarz divergence, and convex divergence.
Mutual Information & KL Divergence
Mutual information between two variables is defined using the Shannon entropy. It can be formulated as the KL divergence, or relative entropy, between the joint distribution and the product of the marginal distributions.
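For reference, the standard way this relation is written for demixed components y_1, ..., y_m is:
\[
I(\mathbf{y}) \;=\; D_{\mathrm{KL}}\!\left(p(\mathbf{y}) \,\Big\|\, \prod_{i=1}^{m} p(y_i)\right)
\;=\; \int p(\mathbf{y}) \log \frac{p(\mathbf{y})}{\prod_{i=1}^{m} p(y_i)}\, d\mathbf{y},
\]
which is nonnegative and equals zero exactly when the components are mutually independent, making it a natural ICA contrast function.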
Divergence Measures
Euclidean divergence, Cauchy-Schwarz divergence, and α-divergence.
Divergence Measures
f-divergence and Jensen-Shannon divergence. Entropy is a concave function.
Convex Function
A convex function f(·) must satisfy Jensen's inequality. A general convex function is then used to define the divergence.
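Written out, Jensen's inequality for a convex function f with weights λ_i ≥ 0 and Σ_i λ_i = 1 reads:
\[
f\!\left(\sum_i \lambda_i x_i\right) \;\le\; \sum_i \lambda_i f(x_i).
\]
Divergence measures of this family are built from the gap between the two sides of this inequality, which vanishes only in the degenerate case.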
Convex Divergence
By assuming equal weights in Jensen's inequality, a divergence between the joint distribution and the product of marginals is obtained. For a particular choice of the control parameter, C-DIV is derived as a special case with a specific convex function.
Different Divergence Measures
[Two slides comparing the divergence measures]
Convex Divergence ICA
The C-ICA learning algorithm. A nonparametric C-ICA is established by using the Parzen window density function.
Simulated Experiments
A parametric demixing matrix and two sources, one sub-Gaussian and one super-Gaussian:
\[
\mathbf{W} = \begin{bmatrix} \cos\theta_1 & \sin\theta_1 \\ \cos\theta_2 & \sin\theta_2 \end{bmatrix},
\qquad
p(s_1) = \begin{cases} \dfrac{1}{2\tau_1}, & s_1 \in [-\tau_1, \tau_1] \\ 0, & \text{otherwise} \end{cases}
\qquad
p(s_2) = \frac{1}{2\tau_2}\exp\!\left(-\frac{|s_2|}{\tau_2}\right)
\]
Kurtosis: source 1: -1.13, source 2: 2.23.
[Panels comparing KL-DIV, C-DIV with α = 1, and C-DIV with α = -1]
Learning Curves
Experiments on Blind Source Separation
One music signal and two speech signals from two male speakers were sampled from the ICA '99 BSS Test Sets at http://sound.media.mit.edu/ica-bench/
Mixing matrix:
\[
\mathbf{A} = \begin{bmatrix} 0.8 & 0.3 & 0.3 \\ 0.2 & 0.8 & 0.7 \\ 0.3 & 0.2 & 0.3 \end{bmatrix}
\]
Evaluation metric: signal-to-interference ratio (SIR)
\[
\mathrm{SIR\,(dB)} = 10 \log_{10} \frac{\sum_{t=1}^{T} s_t^2}{\sum_{t=1}^{T} (y_t - s_t)^2}
\]
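A minimal sketch of the SIR computation above, assuming the demixed output y has already been permutation- and scale-aligned with the true source s (a standard preprocessing step that the slide does not spell out).

```python
import numpy as np

def sir_db(source, estimate):
    """Signal-to-interference ratio in dB between a true source and its estimate."""
    source = np.asarray(source, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    interference = estimate - source                 # residual treated as interference
    return 10.0 * np.log10(np.sum(source ** 2) / np.sum(interference ** 2))
```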
Comparison of Different Methods
[SIR results comparing PC-ICA and NC-ICA]
Nonstationary Bayesian ICA
Why Nonstationary Source Separation?
Real-world blind source separation: the number of sources is unknown, BSS is a dynamic time-varying system, and the mixing process is nonstationary.
How these issues are handled: a Bayesian method using ARD can determine the changing number of sources, recursive Bayesian learning enables online tracking of nonstationary conditions, and a Gaussian process provides a nonparametric solution to represent the temporal structure of the time-varying mixing system.
Nonstationary Mixing Systems
The mixing matrix is time-varying: coefficients such as a_{23} take different values at learning epochs t, t+1, and t+2. Source signals may abruptly appear or disappear.
[Diagram of moving sources S1, S2, S3 and time-varying mixing coefficients]
Nonstationary Bayesian (NB) Learning
Maximum a posteriori estimation of the NB-ICA parameters θ and the compensation parameters η, with the priors updated from one learning epoch to the next (θ^(t-1), η^(t-1) → θ^(t), η^(t) → θ^(t+1), η^(t+1)).
Model Construction
The noisy ICA model, the likelihood function of an observation, and the distributions of the model parameters: source, mixing matrix, and noise.
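A standard way to write a noisy ICA model and its observation likelihood is shown below; the exact priors and parameterization used in the cited work may differ, so this is only a reference form:
\[
\mathbf{x}_t = \mathbf{A}\,\mathbf{s}_t + \boldsymbol{\varepsilon}_t,
\qquad
p(\mathbf{x}_t \mid \mathbf{A}, \mathbf{s}_t, \mathbf{B}) = \mathcal{N}\!\left(\mathbf{x}_t \,\middle|\, \mathbf{A}\mathbf{s}_t,\; \mathbf{B}^{-1}\right),
\]
where ε_t is Gaussian noise with precision matrix B, the sources s_t follow a nongaussian prior, and the mixing matrix A is given its own prior distribution.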
Prior & Marginal Distributions
Prior distributions are placed on the precision of the noise and the precision of the mixing matrix; integrating them out gives the marginal likelihood of the NB-ICA model.
Automatic Relevance Determination
Detection of active source signals: the number of sources can be determined automatically.
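ARD is commonly realized by giving each column a_m of the mixing matrix a zero-mean Gaussian prior with its own precision α_m; this is a standard construction, and the exact form used in the cited work may differ:
\[
p(\mathbf{a}_m \mid \alpha_m) = \mathcal{N}\!\left(\mathbf{a}_m \,\middle|\, \mathbf{0},\; \alpha_m^{-1}\mathbf{I}\right).
\]
When the data provide no evidence for source m, the inferred α_m grows large, the corresponding column shrinks toward zero, and the source is effectively switched off; tracking the α_m over time therefore reveals how many sources are active.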
Compensation for Nonstationary ICA
The prior density of the compensation parameter is a conjugate prior (Wishart distribution).
Graphical Model for NB-ICA ( l 1) ρ ( l 1) V ( l 1) u α ( l ) ( l) H (l) A NM (l) Π (l) R ( l 1) Φ π ( l 1) Φ r ( l 1) ω (l) B N (l) ε t (l) x t (l) s t L (l) M M ( l 1) Φ m APSIPA DL: Independent Component Analysis and Unsupervised Learning 47
Experiments
Nonstationary blind source separation with the ICA '99 benchmark data: http://sound.media.mit.edu/ica-bench/
Scenarios: the state of the source signals is active or inactive; source signals or sensors are moving, giving a nonstationary mixing matrix.
Source Signals and ARD Curves
[Plot of the ARD parameter α over 3.5 to 8 sec; blue: first source signal, red: second source signal]
Online Gaussian Process ICA
Online Gaussian Process (OLGP)
Basic ideas: incrementally detect the status of the source signals and estimate the corresponding distributions from online observation data; the temporal structure of the time-varying mixing coefficients is characterized by a Gaussian process. A Gaussian process is a nonparametric model which defines a prior distribution over functions for Bayesian inference.
Model Construction
The noisy ICA model, the likelihood function, and the distributions of the model parameters: source and noise.
Gaussian Process
The mixing matrix is generated by a latent function, and a GP is adopted to describe the distribution of this function through the hyperparameters of its kernel function.
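In general GP terms, each time-varying mixing coefficient a_{nm}(t) can be given a Gaussian process prior; the squared-exponential kernel below is an illustrative choice, since the slide does not state which kernel is used:
\[
a_{nm}(t) \sim \mathcal{GP}\!\left(0,\; k(t, t')\right),
\qquad
k(t, t') = \sigma_f^2 \exp\!\left(-\frac{(t - t')^2}{2\ell^2}\right),
\]
where σ_f and ℓ are kernel hyperparameters controlling the amplitude and time scale of the variation in the mixing coefficients.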
Graphical Model for OLGP-ICA ( l 1) Λ a ( l 1) Ξ a ( l 1) M a ( l 1) R a NM ( l 1) λ s ( l 1) u (l) A t ( l 1) R s ( l 1) ρ s ( l 1) ω (l) (l) B ε t N (l) x t (l) s t L ( l 1) M s M APSIPA DL: Independent Component Analysis and Unsupervised Learning 54
Experimental Setup
Nonstationary source separation using source signals from http://www.kecl.ntt.co.jp/icl/signal/
Nonstationary scenarios: the status of the source signals is active or inactive; source signals or sensors are moving, giving a nonstationary mixing matrix.
[Waveforms of the three source signals: male speech, music, and female speech]
Comparison of Different Methods
Signal-to-interference ratios (SIRs) in dB:
                    VB-ICA   BICA-HMM   Switching-ICA   Online VB-ICA   OLGP-ICA
Demixed signal 1     7.97      9.04        12.06           11.26          17.24
Demixed signal 2    -3.23     -1.50        -4.82            4.47           9.96
Summary
We presented a speaker adaptation method based on independent voices from an ICA perspective. A nonparametric likelihood ratio ICA was proposed according to hypothesis testing theory. A convex divergence was developed as an optimization metric for the ICA algorithm. A nonstationary Bayesian ICA was proposed to deal with nonstationary mixing systems. An online Gaussian process ICA was presented for nonstationary and temporally correlated source separation. ICA methods could be extended to solve nonnegative matrix factorization and single-channel separation.
References
J.-T. Chien and B.-C. Chen, "A new independent component analysis for speech recognition and separation," IEEE Transactions on Audio, Speech and Language Processing, vol. 14, no. 4, pp. 1245-1254, 2006.
J.-T. Chien, H.-L. Hsieh and S. Furui, "A new mutual information measure for independent component analysis," in Proc. ICASSP, pp. 1817-1820, 2008.
H.-L. Hsieh, J.-T. Chien, K. Shinoda and S. Furui, "Independent component analysis for noisy speech recognition," in Proc. ICASSP, pp. 4369-4372, 2009.
H.-L. Hsieh and J.-T. Chien, "Online Bayesian learning for dynamic source separation," in Proc. ICASSP, pp. 1950-1953, 2010.
H.-L. Hsieh and J.-T. Chien, "Online Gaussian process for nonstationary speech separation," in Proc. INTERSPEECH, pp. 394-397, 2010.
H.-L. Hsieh and J.-T. Chien, "Nonstationary and temporally-correlated source separation using Gaussian process," in Proc. ICASSP, pp. 2120-2123, 2011.
J.-T. Chien and H.-L. Hsieh, "Convex divergence ICA for blind source separation," IEEE Transactions on Audio, Speech and Language Processing, vol. 20, no. 1, pp. 290-301, 2012.
Thanks to H.-L. Hsieh, K. Shinoda and S. Furui.
http://chien.csie.ncku.edu.tw
jtchien@mail.ncku.edu.tw