Using Kernel Density Estimator in Nonlinear Mixture


ECTI TRANSACTIONS ON ELECTRICAL ENG., ELECTRONICS, AND COMMUNICATIONS, VOL. 3, NO. 2, AUGUST 2005

W. Y. Leong and J. Homer, Non-members

ABSTRACT

Generally, blind separation of sources from their nonlinear mixtures is rather difficult. This nonlinear mapping, constituted by unsupervised linear mixing followed by an unknown and invertible nonlinear distortion, is found in many signal processing applications. We propose using a kernel density estimator, incorporated within an equivariant gradient algorithm, to separate the nonlinearly mixed sources. The proposed algorithm is a generalization of the natural gradient algorithm and the Gram-Charlier series, and is able to adapt to the actual statistical distributions of the sources by estimating the kernel density distribution of the output signals. Experiments are presented to illustrate the performance of our proposed nonlinear blind source separation method.

Keywords: Nonlinear Signal Processing, Blind Source Separation

Manuscript received on December 13, 2004; revised on May 13, 2005. This work is supported by The University of Queensland Graduate School Award. Parts of this paper appeared at ISCIT 2004, Japan. The authors are with the School of Information Technology and Electrical Engineering, The University of Queensland, Australia. E-mail: leong@itee.uq.edu.au, homerj@itee.uq.edu.au.

1 INTRODUCTION

Independent Component Analysis (ICA) [1-4] and the closely related Blind Source Separation (BSS) [5, 6] methodology have been well studied in the case of linear mixtures. This analysis has received attention in signal processing, telecommunications [7] and speech enhancement systems. The idea of the ICA methodology is to find the independent components from mixtures of the independent sources based on a generative approach such as negentropy maximization [8], maximum likelihood estimation [9] or cumulant-based [10] methods, where the goal is to explain how the observations are generated from the independent component sources. It is assumed that there exist certain source signals which have generated the observed data through an unknown mapping. The goal of generative learning is to identify both the source signals and the unknown generative mapping via a recursive blind algorithm.

In general, most of the proposed blind source separation algorithms based on the theory of independent component analysis (ICA) assume the mixture mapping to be linear [6]. Little work has been dedicated to the problem of blind source separation in nonlinear mixtures. However, when the mapping is nonlinear, we naturally need a nonlinear adaptive algorithm. It is evident that the nonlinear mixing case is far more difficult than the linear one, and the nonlinear mapping problem has attracted some attention recently [11-13]. In the nonlinear mapping model, linear ICA theory does not have the pruning capability to learn and detect the nonlinear observations, as will be shown later; blind separation algorithms designed for the linear mixture model generally fail to extract the independent sources from nonlinear mixtures. To tackle the nonlinear mapping problem, nonlinear adaptive algorithms involve the use of one or more layers of hidden computational units in addition to the output layer [14]. Among the contributions to the nonlinear mapping problem, Valpola, Giannakopoulos, Honkela and Karhunen [12] proposed using Bayesian ensemble learning in nonlinear ICA. In ensemble learning, a simpler approximative distribution is fitted to the true posterior distribution by minimizing their
Kullback-Leibler divergence. Here, ensemble learning provides the necessary regularisation for nonlinear ICA by choosing the model and sources that have most probably generated the observed data. Also, Taleb and Jutten [11] proved that it is not possible to separate general nonlinearly mixed sources: the source independence assumption is not strong enough in the general nonlinear case. They proposed a direct estimation of the score functions by minimizing the mean square error of the parameter vector in post-nonlinear mixtures. Their results show that the least mean square estimation of the score functions performs well with hard nonlinearities. To justify the theoretical analysis, Taleb [15] investigated a structured nonlinear model framework and proposed a stochastic algorithm to deal with parametric nonlinear mixtures.

The main objective of this paper is to investigate a class of nonlinear transformations for which an iterative and equivariant algorithm is presented. The proposed algorithm is related to the Equivariant Adaptive Source Separation via ICA (EASI) [16] algorithm, which is based on the idea of serial multiplicative updating. It was proposed by Cardoso and Laheld [16] and extended by Karhunen et al. in [17]. The serial multiplicative update algorithm consists

of an n × n matrix-valued nonlinear function, which is used for updating the linear unmixing matrix W. Here n is the number of source signals and the number of observed mixtures. The EASI algorithm, however, is typically unsuitable for nonlinear mixtures.

The paper is organized as follows. In Section II, the nonlinear model is presented; a set of learning rules is derived in Section III based on an equivariant gradient criterion; the learning rules are verified via simulation in Section IV; and concluding remarks are given in Section V.

2 NONLINEAR MIXTURES

The problem of nonlinear mapping from the unknown sources s(t) = [s_1(t), s_2(t), ..., s_n(t)]^T to the observations e(t) is modelled with the multi-layer perceptron (MLP) network illustrated in Fig. 1. Here A represents an n × n linear mixing matrix and R = {R_i}_{i=1}^n represents an (n × 1) vector of unknown one-to-one nonlinear mappings. The unmixing system involves application of an (n × 1) vector of nonlinear transfer functions {g_i}_{i=1}^n followed by a linear n × n unmixing matrix W. This model is appropriate for multiple-input multiple-output (MIMO) channels, where an n-sensor array provides the observed nonlinear mixture signals e(t) = [e_1(t), e_2(t), ..., e_n(t)]^T.

Fig. 1: Nonlinear mixing and unmixing model.

We assume the received observation e(t) is a vector of nonlinear memoryless mixtures of n unknown statistically independent sources s(t):

e(t) = R(As(t))    (1)

where A ∈ R^{n×n} is an unknown non-singular mixing matrix, which represents the hidden layer of the network. To adjust and pre-process the received signals, normalization is applied by centering the observable variables e. This normalization is chosen because it has given good results in initializing the model vectors of the mixtures; good initial values are important, since they allow the network to effectively prune away unused components. The normalization operation is

e_nom = e − E{e}    (2)

where E denotes the expectation operator.

Our goal is to recover the sources without any particular knowledge of their distributions or of the nonlinear mixing mechanism. The principle is to derive a separation structure (g, W) which involves learning rules for the estimation of the matrix W and the nonlinear transfer parameters of g. The unmixing output y is given by

y(t) = W g(e(t)) = W g(R(As(t)))    (3)

Our mission is to make the unmixed output components of y = (y_1, y_2, ..., y_n) as statistically independent as possible; this can be achieved by minimizing the mutual information of the output y. We say that signal components are independent if their joint probability distribution function factorises as the product of the marginal densities [3, 6]:

p_y(y) = p_y(y_1, y_2, ..., y_n) = ∏_{i=1}^{n} p_{y_i}(y_i)    (4)

where p_{y_i}(y_i) is the probability density function of the i-th output component y_i. Maximizing the statistical independence corresponds to minimizing the mutual information of y.
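Eqs. (1)-(3) are straightforward to simulate. The following minimal NumPy sketch generates a post-nonlinear mixture and applies the centering of eq. (2); the tanh distortion, the matrix entries, the sample count and the arctanh-style transfer function are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 2, 1000                       # two sources, 1000 samples (illustrative)
s = rng.uniform(-1.0, 1.0, (n, T))   # statistically independent sources s(t)

A = np.array([[1.0, 0.6],
              [0.7, 1.0]])           # assumed non-singular mixing matrix A

# Post-nonlinear observation e(t) = R(A s(t)), eq. (1); R_i chosen as tanh here.
e = np.tanh(A @ s)

# Centering normalization of eq. (2): e_nom = e - E{e}.
e_nom = e - e.mean(axis=1, keepdims=True)

# Unmixing structure of eq. (3): y(t) = W g(e(t)), with g and W to be learned.
W = np.eye(n)                        # initial guess, later adapted
g = lambda x: np.arctanh(np.clip(x, -0.999, 0.999))   # inverse-style transfer
y = W @ g(e_nom)
```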
The mutual information of the output vector y can be measured by the Kullback-Leibler (KL) divergence between the multivariate density and the product of the marginal (univariate) densities [3]:

I(y) = ∫ p_y(y) log [ p_y(y) / ∏_{i=1}^{n} p_{y_i}(y_i) ] dy    (5)

Note that independence is achieved if and only if the KL divergence is equal to zero. The KL divergence of eq. (5) is equal to Shannon's mutual information I(y) between the components of the output vector y [2]. This can be rewritten as

I(y) = ∑_{i=1}^{n} H(y_i) − H(y)    (6)

where H denotes the differential entropy: H(y_i) = −E{log p_{y_i}(y_i)}, with E the expectation operator, and H(y) = H(y_1, ..., y_n). Thus estimation of the separation structure corresponds to the minimization of I(y); this requires the computation of its gradient with respect to the separation structure parameters W and g. Rewriting eq. (6) gives

I(y_1, ..., y_n) = H(y_1) + ... + H(y_n) − H(y_1, ..., y_n)
    = ∫ p_y(y_1, y_2, ..., y_n) log { p_y(y_1, y_2, ..., y_n) / ∏_{i=1}^{n} p_{y_i}(y_i) } dy_1 dy_2 ... dy_n
    = ∫ p_y(y) log [ p_y(y) / ∏_{i=1}^{n} p_{y_i}(y_i) ] dy    (7)
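To illustrate eq. (6), the following sketch plugs simple histogram entropy estimates into I(y) = H(y_1) + H(y_2) − H(y_1, y_2) for a two-channel output; the bin count and test data are arbitrary demonstration choices, not part of the paper's method.

```python
import numpy as np

def entropy_hist(samples: np.ndarray, bins: int = 30) -> float:
    """Differential entropy estimate H = -sum(p * width * log p) from a histogram."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    mask = p > 0
    return float(-np.sum(p[mask] * widths[mask] * np.log(p[mask])))

def mutual_info_2d(y1, y2, bins=30):
    """I(y) = H(y1) + H(y2) - H(y1, y2), eq. (6), via 1-D and 2-D histograms."""
    p, ex, ey = np.histogram2d(y1, y2, bins=bins, density=True)
    area = np.outer(np.diff(ex), np.diff(ey))
    mask = p > 0
    h_joint = float(-np.sum(p[mask] * area[mask] * np.log(p[mask])))
    return entropy_hist(y1, bins) + entropy_hist(y2, bins) - h_joint

rng = np.random.default_rng(1)
a = rng.standard_normal(5000)
b = rng.standard_normal(5000)
print(mutual_info_2d(a, b))            # near 0 for independent components
print(mutual_info_2d(a, a + 0.1 * b))  # clearly positive for dependent ones
```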

Fig. 2: The flow of nonlinear separation.

3 EQUIVARIANT NONLINEAR SEPARATION

To stabilize the nonlinearly mapped signals and obtain equivariant behaviour, we propose an equivariant gradient algorithm consisting of the kernel density estimate F(y) within the nonlinear system (W, g(e(t))); see Fig. 2. Blind source separation of the nonlinear mixtures e(t) first requires estimation of the nonlinear compensation block g. This is done by applying an iterative procedure to each component of the initial observation vector e(t) = e_0(t):

e_{k+1}(t) = g(e_k(t)) = e_k(t) − η F(e_k(t))    (3.1)

where η is a positive adaptation step size, F(·) is the probability density function vector of the initial nonlinear observation vector e(t) = e_0(t), and k denotes the iteration index. The second stage involves iterative estimation of the linear unmixing matrix W via the following equivariant gradient algorithm:

W_{k+1} = W_k + η [ I − diag(f(y_k)) y_k^T + diag(f(e(t))) ] W_k    (3.2)

where η is a positive adaptation step size, I is the identity matrix, f(·) = −[log F(·)]′ is the kernel density score function of the pdf vector F(e(t)), and diag(·) denotes a diagonal matrix whose diagonal elements correspond to the elements of the n-vector argument (a code sketch of this two-stage update is given at the end of Section 3.1).

The proposed equivariant gradient algorithm involves the probability density function F(·) and the corresponding kernel score f(·). We employ the Gram-Charlier and Edgeworth series [18, 19] to approximate the probability distribution function F, as summarized below.

3.1 Kernel Density Estimation

Through the mixing operation, the observation vector e has a pdf F described approximately by a normal pdf. Accordingly, it is quite natural to approximate F with a normal pdf. This is the basis of the Gram-Charlier and Edgeworth series method [18, 19], which determines the probability density function (pdf) from the data moments. In particular, the pdf F(e) can be represented via a truncated expansion in Hermite polynomials H_n(e):

F(e) = (1/(σ√(2π))) exp[ −(e − m)² / (2σ²) ] ∑_{n=0}^{k} C_n H_n(e) = α_m^σ(e) ∑_{n=0}^{k} C_n H_n(e)    (3.3)

where the C_n are coefficients to be determined and α_m^σ(·) is the Gaussian density vector with mean vector m and standard deviation vector σ. The values of m and σ are calculated directly from the observations. Note that each of the vector multiplication and division operations above is carried out on an element-by-element basis.
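The code sketch referred to above combines eqs. (3.1) and (3.2). For brevity it lets a Gaussian pdf and its score stand in for the Gram-Charlier estimates F and f developed in this subsection; the function names and the batch averaging of the update terms are our assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_pdf(x, m, s):
    """Per-component Gaussian, used here as a stand-in for the pdf vector F."""
    return np.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

def gaussian_score(x, m, s):
    """Score f = -(log F)'; for a Gaussian pdf this is (x - m) / s**2."""
    return (x - m) / s ** 2

def nonlinear_separate(e, eta=0.01, n_iter=100):
    """Two-stage update: eq. (3.1) on the observations, eq. (3.2) on W."""
    n, T = e.shape
    m = e.mean(axis=1, keepdims=True)
    s = e.std(axis=1, keepdims=True)

    # Stage 1, eq. (3.1): e_{k+1}(t) = e_k(t) - eta * F(e_k(t)), componentwise.
    ek = e.copy()
    for _ in range(n_iter):
        ek = ek - eta * gaussian_pdf(ek, m, s)

    # Stage 2, eq. (3.2): W_{k+1} = W_k + eta [I - diag(f(y_k)) y_k^T
    # + diag(f(e))] W_k, with both data terms averaged over the T samples.
    W = np.eye(n)
    f_e_bar = np.diag(gaussian_score(ek, m, s).mean(axis=1))
    for _ in range(n_iter):
        y = W @ ek
        f_y = gaussian_score(y, y.mean(axis=1, keepdims=True),
                             y.std(axis=1, keepdims=True) + 1e-12)
        G = np.eye(n) - (f_y @ y.T) / T + f_e_bar
        W = W + eta * G @ W
    return W @ ek, W
```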

3.2 Hermite's Polynomials Determination

Referring to eq. (3.3), the Hermite polynomials are evaluated by the following iteration rule:

H_n(e) = [(−1)^n / α_m^σ(e)] D_e^n α_m^σ(e)    (3.4)

The Gaussian function α_m^σ is defined as

α_m^σ(e) = (1/(σ√(2π))) exp{ −[e − m]² / (2σ²) }    (3.5)

The operator D_e^n is defined as

D_e^n = d^n / d(e)^n    (3.6)

The first six Hermite polynomial terms are obtained as follows:

H_0(e) = 1,
H_1(e) = (e − m)/σ²,
H_2(e) = (e − m)²/σ⁴ − 1/σ²,
H_3(e) = (e − m)³/σ⁶ − 3(e − m)/σ⁴,
H_4(e) = (e − m)⁴/σ⁸ − 6(e − m)²/σ⁶ + 3/σ⁴,
H_5(e) = (e − m)⁵/σ¹⁰ − 10(e − m)³/σ⁸ + 15(e − m)/σ⁶.    (3.7)

3.3 Coefficient C_n Determination

The coefficients C_n can be evaluated by the orthogonalization process [1]:

C_n = ∫ F(e) H_n(e) de / ∫ H_n(e)² α_m^σ(e) de = (σ^{2n}/n!) ∫ F(e) H_n(e) de    (3.8)

It can also be shown that

C_{n+δ_d} = ∑_{α=0}^{⌊(n+δ_d)/2⌋} [(−1)^α σ^{2α} / (2^α α!)] ∑_{β=0}^{n+δ_d−2α} { (−1)^{n+δ_d−2α−β} / [(n + δ_d − 2α − β)! β!] } m^{n+δ_d−2α−β} u_β    (3.9)

where n = 0, 1, 2, 3, ..., and δ_d is defined as

δ_d = { 1 if n is odd; 0 if n is even }    (3.10)

and the terms u_k are the k-th-order input moments, defined as

u_k = ∫_{−∞}^{+∞} e^k F(e) de,  with m = u_1 and σ² = u_2 − (u_1)².    (3.11)

The first six coefficients are

C_0 = u_0,
C_1 = u_1 − m u_0,
C_2 = (1/2)(u_2 − 2u_1 m + u_0 m² − σ² u_0),
C_3 = (1/3!)(u_3 − 3u_2 m + 3u_1 m² − u_0 m³) − u_1 σ²/2 + u_0 σ² m/2,
C_4 = (1/4!)[u_4 − 4u_3 m + 6u_2(m² − σ²) − 4u_1(m³ − 3σ² m) + u_0(m⁴ + 3σ⁴ − 6σ² m²)],
C_5 = (1/5!)[u_5 − 5u_4 m + 10u_3(m² − σ²) − 10u_2(m³ − 3σ² m) + 5u_1(m⁴ − 6σ² m² + 3σ⁴) − u_0(m⁵ − 10σ² m³ + 15σ⁴ m)].    (3.12)
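The moment-based recipe of eqs. (3.7), (3.11) and (3.12) translates directly into code: empirical moments u_k give the coefficients C_n, which weight the Hermite polynomials to form the density estimate of eq. (3.3). This is a transcription of the truncated formulas above, not the authors' implementation.

```python
import numpy as np

def gram_charlier_pdf(samples: np.ndarray):
    """Return a callable F(e) built from eqs. (3.3), (3.7), (3.11) and (3.12)."""
    u = [np.mean(samples ** k) for k in range(6)]        # moments u_0..u_5
    m = u[1]
    var = u[2] - u[1] ** 2                               # sigma^2, eq. (3.11)
    C = [                                                # eq. (3.12)
        u[0],
        u[1] - m * u[0],
        (u[2] - 2*u[1]*m + u[0]*m**2 - var*u[0]) / 2,
        (u[3] - 3*u[2]*m + 3*u[1]*m**2 - u[0]*m**3) / 6 - u[1]*var/2 + u[0]*var*m/2,
        (u[4] - 4*u[3]*m + 6*u[2]*(m**2 - var) - 4*u[1]*(m**3 - 3*var*m)
         + u[0]*(m**4 + 3*var**2 - 6*var*m**2)) / 24,
        (u[5] - 5*u[4]*m + 10*u[3]*(m**2 - var) - 10*u[2]*(m**3 - 3*var*m)
         + 5*u[1]*(m**4 - 6*var*m**2 + 3*var**2)
         - u[0]*(m**5 - 10*var*m**3 + 15*var**2*m)) / 120,
    ]

    def F(e):
        d = e - m
        H = [np.ones_like(e), d/var, d**2/var**2 - 1/var,      # eq. (3.7),
             d**3/var**3 - 3*d/var**2,                         # with var = sigma^2
             d**4/var**4 - 6*d**2/var**3 + 3/var**2,
             d**5/var**5 - 10*d**3/var**4 + 15*d/var**3]
        alpha = np.exp(-d**2 / (2*var)) / np.sqrt(2*np.pi*var)
        return alpha * sum(c * h for c, h in zip(C, H))        # eq. (3.3)
    return F
```

For centered, unit-mass data the even low-order coefficients vanish (C_1 = C_2 = 0), so the estimate reduces to the Gaussian α plus higher-order corrections, which is the intended behaviour of the truncated series.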
4 SIMULATION RESULTS

In this section, we describe several experiments conducted to evaluate the validity and performance of the proposed nonlinear algorithm, in comparison with the conventional ICA-based EASI [16] and INFOMAX [2, 8] algorithms. These experiments assess the proposed nonlinear algorithm's ability to perform blind source separation of several nonlinear mixtures. The complete learning scheme is the same for all experiments and corresponds to that shown in Fig. 2. First, normalization is used to prepare the observed mixtures, enabling more robust and fast adaptation; the normalization stage is also used to find sensible initial values for the posterior means of the mixtures. The initial weight matrix W of the iterative algorithm is set to a matrix of random values. The mixtures are then adjusted via the nonlinear functions g_i(·) and the weight matrix W. In each simulation, a specified number of iterations is adopted, where one iteration means going through the full time sequence of the observations e(t). As shown by the results, the proposed nonlinear algorithm is able to recover the sources from nonlinear mixtures involving relatively smooth nonlinearities. Experimental results of the above algorithms are presented for 2, 6 and 10 mixtures of sources. This section investigates the performance through four illustrative examples.

4.1 Performance Analysis, Noise Absent

We assume n = 2, a non-singular 2 × 2 linear mixing matrix A, and the nonlinear distortion operations

e_1(t) = A_1 s(t) + tanh(5 A_1 s(t)),
e_2(t) = tanh(A_2 s(t))    (4.1)

where A_i denotes the i-th row of A. These nonlinear mixtures are separated via the procedure of Fig. 2. The learning step size η is set to a small positive value and 35 iterations were employed. The behaviour of the proposed algorithm is presented in Figs. 3-4. The joint distributions of the original sources, the nonlinear mixtures and the unmixed outputs are each displayed in the third row of Fig. 3. The original sources are independent, and their joint distribution is contained within a simple parallelogram. Applying the linear mixing matrix A and the nonlinear functions of eq. (4.1) produces nonlinear mixtures whose joint distribution is no longer contained in a simple parallelogram. At the final stage, the proposed nonlinear unmixing method has successfully compensated the effects of the nonlinear mixing operation. Figure 4 shows the estimated nonlinear separating functions g_1, g_2 and the corresponding compensation functions g_i ∘ R_i.
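The test mixtures of eq. (4.1) can be reproduced as follows; the source distribution, data length and matrix entries are assumptions (the paper states only that the sources are independent and that A is non-singular; its printed entries are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000                                   # assumed data length
s = rng.uniform(-1.0, 1.0, (2, T))         # independent sources s(t)

A = np.array([[1.0, 0.6],                  # illustrative non-singular A
              [0.7, 1.0]])

e1 = A[0] @ s + np.tanh(5 * A[0] @ s)      # first mixture of eq. (4.1)
e2 = np.tanh(A[1] @ s)                     # second mixture of eq. (4.1)
e = np.vstack([e1, e2])
```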

Fig. 3: The joint distributions of original sources, nonlinear mixtures and unmixed output signals.

Fig. 4: The nonlinear transfer function has compensated the effect of the nonlinear distortion.

4.2 Performance Analysis, Noise Present

Figure 5 displays the performance analysis (Bit Error Rate) with various levels of Additive White Gaussian Noise (AWGN) present in the mixed signals. The SNR in all cases is 10 dB, the learning rate η is 0.1, the linear mixing matrix is

A = [ 1    0.6
      0.7  1   ]

and the post-nonlinear functions are the same as in eq. (4.1). As displayed in the previous experimental results, the proposed equivariant-based algorithm is able to compensate the nonlinear mapping in noiseless conditions. This is further demonstrated by the results of Fig. 5. Notice that the noiseless system achieves the lowest BER once the iterations have converged. The results with noise present are achieved without any noise filtering; better results are achievable by employing conventional noise filtering.

Fig. 5: BER of the output signal as a function of the iterations over various noise power levels (noiseless, −5 dBW, −10 dBW, −20 dBW).

4.3 Performance Analysis, when n > 2

Consider now 2, 4 and 6 sets of independent sources. The components of s(t) are independent random variables, identically and uniformly distributed in [−3, 3]. The proposed algorithm is applied to these mixtures with η = 0.1. The data length is t = 500 and 100 iterations are employed.

Fig. 6: BER over 100 iterations for 2, 4 and 6 different sources.

4.4 Comparison with the EASI and INFOMAX Algorithms

We present in Fig. 7 the behaviour of the proposed algorithm in comparison with the EASI [16] and INFOMAX [2, 8] algorithms. We consider n = 6 and 10 nonlinear mixtures. The outputs of the proposed algorithm, EASI and INFOMAX are compared in

terms of mean-squared-error, mutual information and performance index, respectively.

Fig. 7: Performance analysis of the proposed nonlinear algorithm, EASI and INFOMAX for n = 6 and 10 nonlinear mixtures. Top row: mean-squared-error; middle row: mutual information; bottom row: performance index versus 35 iterations.

The performance index PI is defined as

PI = (1/n) [ ∑_{i=1}^{n} ( ∑_{j=1}^{n} |p_ij| / max_k |p_ik| − 1 ) + ∑_{j=1}^{n} ( ∑_{i=1}^{n} |p_ij| / max_k |p_kj| − 1 ) ]    (4.2)

where p_ij is the (i, j) element of p = W g R A. Note that p is close to a permutation of the scaled identity matrix when the sources are separated; this corresponds to PI = 0. It can be seen that the proposed nonlinear algorithm recovers the sources from the nonlinearly distorted mixtures better than the EASI and INFOMAX algorithms. Note that the EASI algorithm, because of its equivariant characteristic, provides relatively good signal unmixing, but it is still noticeably inferior to the proposed algorithm. In our observation, INFOMAX, which is a linear ICA method, is not generally suitable for the separation of nonlinear mixtures: its output signals are distorted, unstable and fail to be unmixed properly.

A measure of separation performance is given by the similarity in kurtosis of the sources and the corresponding unmixed signals (a sketch of this check follows the performance-index example below).
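Eq. (4.2) is easy to implement directly; in this sketch the absolute values |p_ij| are our assumption (standard for this index), since the printed signs did not survive transcription.

```python
import numpy as np

def performance_index(p: np.ndarray) -> float:
    """Eq. (4.2): PI = 0 when p = W g R A is a scaled permutation matrix."""
    q = np.abs(p)
    n = q.shape[0]
    rows = (q.sum(axis=1) / q.max(axis=1) - 1.0).sum()
    cols = (q.sum(axis=0) / q.max(axis=0) - 1.0).sum()
    return float((rows + cols) / n)

# A permutation with arbitrary scaling gives PI = 0:
print(performance_index(np.array([[0.0, 2.0], [0.5, 0.0]])))  # -> 0.0
```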

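The kurtosis-similarity check mentioned above can be sketched as follows, using the standard excess-kurtosis definition E{(x − m)⁴}/σ⁴ − 3 per channel (zero for a Gaussian, negative for the sub-Gaussian sources of Table 1); the test signals are illustrative.

```python
import numpy as np

def kurtosis(x: np.ndarray) -> np.ndarray:
    """Excess kurtosis per channel: E{(x - m)^4} / sigma^4 - 3."""
    xc = x - x.mean(axis=1, keepdims=True)
    s2 = (xc ** 2).mean(axis=1)
    return (xc ** 4).mean(axis=1) / s2 ** 2 - 3.0

rng = np.random.default_rng(7)
s = rng.uniform(-1, 1, (4, 10000))   # sub-Gaussian sources (excess kurtosis -1.2)
y = s[::-1] * 2.0                    # a perfectly separated, permuted, scaled copy
print(kurtosis(s))                   # close to -1.2 for each source
print(kurtosis(y))                   # matching values indicate good separation
```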
Table 1 shows the kurtosis results for the 4- and 6-source mixings.

Table 1: Kurtosis of the zero-mean white sub-Gaussian sources and of the signals recovered by the proposed algorithm, EASI and INFOMAX, for 4 and 6 nonlinear mixings.

It can be seen that the proposed algorithm provides, in general, better kurtosis matching of the source and output signals. Note that the source signals are sub-Gaussian and hence have negative kurtosis.

5 CONCLUSION

In this paper, we considered the problem of blind separation of nonlinearly mixed source signals. We proposed a multilayer perceptron method which incorporates Gram-Charlier kernel density estimation and equivariant gradient learning to compensate the nonlinear mixing. Simulation results demonstrate the convergence and robustness of the proposed nonlinear algorithm. We also compared the proposed nonlinear algorithm with the EASI and INFOMAX algorithms; experimental results show that the proposed algorithm outperformed EASI and INFOMAX, with lower BER and better independence of the outputs.

References

[1] P. Comon, "Independent Component Analysis, A New Concept?," Signal Processing, Special Issue on Higher-Order Statistics, vol. 36, pp. 287-314, April 1994.
[2] A. Hyvarinen and E. Oja, "Independent Component Analysis: A Tutorial," Helsinki University of Technology, Finland, April 1999.
[3] T. W. Lee, Independent Component Analysis: Theory and Applications. Netherlands: Kluwer Academic Publishers, 1998.
[4] C. Jutten and J. Herault, "Independent Component Analysis versus Principal Components Analysis," presented at Proc. European Signal Processing Conf. (EUSIPCO), Grenoble, France, 1988.
[5] A. Cichocki and S. I. Amari, Adaptive Blind Signal and Image Processing. England: John Wiley & Sons, Ltd, 2002.
[6] A. Hyvarinen, J. Karhunen, and E. Oja, Independent Component Analysis. Canada: John Wiley & Sons, Inc, 2001.
[7] T. Ristaniemi and J. Joutsensalo, "Advanced ICA-based Receivers for Block Fading DS-CDMA Channels," Signal Processing, vol. 82, pp. 417-431, 2002.
[8] A. Bell and T. Sejnowski, "An Information Maximization Approach to Blind Separation and Blind Deconvolution," Neural Computation, vol. 7, pp. 1129-1159, 1995.
[9] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive Natural Gradient Learning Algorithms for Various Stochastic Models," Neural Networks, vol. 13, pp. 755-764, 2000.
[10] T. Blaschke and L. Wiskott, "CuBICA: Independent Component Analysis by Simultaneous Third- and Fourth-Order Cumulant Diagonalization," Computer Science Preprint Server (CSPS), April 9, 2003.
[11] A. Taleb and C. Jutten, "Source Separation in Post-Nonlinear Mixtures," IEEE Transactions on Signal Processing, vol. 47, pp. 2807-2820, 1999.
[12] H. Valpola, X. Giannakopoulos, A. Honkela, and J. Karhunen, "Nonlinear Independent Component Analysis Using Ensemble Learning: Experiments and Discussion," Helsinki University of Technology, Neural Networks Research Centre, Finland.
[13] A. Taleb, C. Jutten, and S. Olympieff, "Source Separation in Post Nonlinear Mixtures: An Entropy-Based Algorithm," presented at the International Conference on Acoustics, Speech, and Signal Processing, Washington, USA, May 1998.
[14] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 1999.
[15] A. Taleb, "A Generic Framework for Blind Source Separation in Structured Nonlinear Models," IEEE Transactions on Signal Processing, vol. 50, pp. 1819-1830, 2002.
[16] J. F. Cardoso and B. H. Laheld, "Equivariant Adaptive Source Separation," IEEE Transactions on Signal Processing, vol. 44, pp. 3017-3030, Dec. 1996.
[17] J. Karhunen and P. Pajunen, "Hierarchic Nonlinear PCA Algorithms for Neural Blind Source Separation," presented at NORSIG'96,
Proceedings of the IEEE Nordic Signal Processing Symposium, Espoo, Finland, Sept. 1996.
[18] A. Pelliccioni, "The Gram-Charlier Method to Evaluate the Probability Density Function in the

Monodimensional Case," Il Nuovo Cimento, 1997.
[19] M. O'Brien, "Using the Gram-Charlier Expansion to Produce Vibronic Band Shapes in Strong Coupling," Journal of Physics: Condensed Matter, vol. 4, 1992.

W. Y. Leong received the B.E. degree in Electrical Engineering from The University of Queensland, Australia. She is currently working towards the Ph.D. degree in the School of Information Technology and Electrical Engineering at the same university. Her research interests include signal processing and telecommunications.

John Homer received the B.Sc. degree in physics from the University of Newcastle, Australia, in 1985 and the Ph.D. degree in systems engineering from the Australian National University, Canberra, Australia, in 1995. Between his B.Sc. and Ph.D. studies he held a position of Research Engineer at the Comalco Research Centre in Melbourne, Australia. Following his Ph.D. studies he has held research positions with the University of Queensland, Veritas DGC Pty Ltd and Katholieke Universiteit Leuven, Belgium. He is currently a Senior Lecturer at the University of Queensland within the School of Information Technology and Electrical Engineering. His research interests include signal and image processing, particularly in the application areas of telecommunications, audio and radar. He is currently an Associate Editor of the Journal of Applied Signal Processing.
