Performance Analysis of GEVD-Based Source Separation With Second-Order Statistics


IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 59, NO. 10, OCTOBER 2011

Arie Yeredor

Abstract: One of the simplest (and earliest) approaches to blind source separation is to estimate the mixing matrix from the generalized eigenvalue decomposition (GEVD), or Exact Joint Diagonalization, of two target-matrices. In a second-order statistics (SOS) framework, these target-matrices are two different correlation matrices (e.g., at different lags, taken over different time-intervals, etc.), attempting to capture the diversity of the sources (e.g., diverse spectra, different nonstationarity profiles, etc.). More generally, such matrix pairs can be constructed as generalized correlation matrices, whose structure is prescribed by two selected association-matrices. In this paper, we provide a small-errors performance analysis of GEVD-based separation in such SOS frameworks. We derive explicit expressions for the resulting interference-to-source ratio (ISR) matrix in terms of the association-matrices and of the sources' temporal covariance matrices. The validity of our analysis is illustrated in simulation.

Index Terms: Blind source separation, exact joint diagonalization, generalized eigenvalue decomposition, independent component analysis, matrix pencil, perturbation analysis.

Manuscript received February 2011; revised June 07, 2011; accepted June 09, 2011; date of current version September 2011. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Jerome Idier. A preliminary, partial version of this work was presented at the International Conference on Independent Component Analysis and Source Separation (ICA), Paraty, Brazil, March 15-18, 2009. The author is with the School of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel (e-mail: arie@eng.tau.ac.il).

I. INTRODUCTION AND PROBLEM FORMULATION

The blind source separation (BSS) problem consists of separating unobserved source signals from their observed mixtures. The basic BSS paradigm involves a static, linear, square and noiseless, real-valued mixture model X = AS, in which S = [s_1 s_2 ... s_K]^T is a K x N matrix containing the K unobserved source signals (each of length N) as its rows; A is the unknown K x K mixing matrix (assumed to be nonsingular); and X = [x_1 x_2 ... x_K]^T is the K x N matrix of K observed mixtures. The key to separation is usually some general statistical or structural information on the sources, the most common of which is the assumption of their statistical independence. Several popular extensions of this basic model involve additive noise and/or nonsquare mixing matrices.

Perhaps one of the most conceptually appealing and computationally simple approaches (proposed in different contexts, e.g., by Tong et al. in AMUSE [2], [3], by Cardoso in FOBI [4], by Yeredor in CHESS [5], by Tomé in [6], by Parra and Sajda in [7] and, more recently, by Ollila et al. in DOGMA [8]) is to base the estimation of A on exact joint diagonalization (EJD) of two target-matrices, constructed from the observed mixtures X. Such a framework is sometimes also called a matrix-pencil approach [6] or a generalized eigenvalue decomposition (GEVD) approach [7]. The two K x K target-matrices, denoted \hat{R}_1 and \hat{R}_2, are usually empirical estimates of some true (unknown) matrices R_1 and R_2 satisfying

    R_1 = A D_1 A^T   and   R_2 = A D_2 A^T                                  (1)
where D_1 and D_2 are diagonal by virtue of the sources' statistical independence. R_1 is usually [7] (but not necessarily) the observations' zero-lag correlation matrix. R_2 is selected according to the sources' statistical model: For example, for separation based only on non-Gaussianity of the marginal distributions of the sources (ignoring their temporal structure, if any), R_2 can be taken as a linear combination of certain cumulant matrices of the observations (e.g., [7], [4], and [9]) or as differently-weighted covariance matrices, such as in DOGMA [8] or in [5] (where R_2 is taken to be the Hessian of the observations' joint log-characteristic function away from the origin). In many other cases of interest, when the diversity of the sources is exhibited through different temporal-correlation structures, both R_1 and R_2 can be based on second-order statistics (SOS).

To define a general framework in this context, let P_1 and P_2 denote some arbitrary N x N matrices, which we term "association-matrices," and let the empirical generalized correlation matrices \hat{R}_1 and \hat{R}_2 be given by

    \hat{R}_1 = X P_1 X^T,   \hat{R}_2 = X P_2 X^T                           (2)

[we shall show in the next section that these \hat{R}_1 and \hat{R}_2 can be regarded as estimates of some R_1 and R_2 (respectively) of the form (1)]. By proper choice of the association-matrices P_1 and P_2, several particular classical choices of \hat{R}_1 and \hat{R}_2 may be identified. For example, when P_1 = I, \hat{R}_1 obviously becomes the observations' empirical correlation matrix. Then: if P_2 is taken as an all-zeros matrix with 1/(2(N-|ℓ|)) along its ±ℓ-th diagonals, \hat{R}_2 becomes the unbiased, symmetrized estimate of the observations' lagged correlation matrix at lag ℓ, as used for stationary sources, e.g., in the Algorithm for Multiple Unknown Signals Extraction (AMUSE) [2], [3], and in similar algorithms (e.g., [10]-[12]); likewise, if P_2 is a general Toeplitz matrix, \hat{R}_2 can be regarded as a linear combination of estimated correlation matrices at different lags or, alternatively, as the correlation matrix between linearly filtered versions of the observations, as used, e.g., in [6]; if P_2 is taken as an all-zeros matrix with a sequence of M nonzero values of 1/M somewhere along its main diagonal, \hat{R}_2 becomes the empirical zero-lag correlation estimated over the respective time-segment, as used for nonstationary sources, e.g., in [7], [11], and [13]; spectral matrices at certain frequencies, time-frequency matrices at certain time-frequency points [14], or cyclic correlation matrices [15] can also be obtained by setting P_2 to the appropriate (possibly complex-valued) transformation matrices.

Performance analysis of such GEVD-based separation has not been addressed before in the context of the above-mentioned particular approaches, but would be instrumental in predicting the expected performance and possibly in choosing the association-matrices. Our goal in this work is to provide closed-form expressions for the resulting interference-to-source ratio (ISR) matrix for the general case, namely for GEVD-based separation relying on any chosen pair of generalized correlation matrices, i.e., on any chosen pair of association-matrices.

The paper is structured as follows. In the following section, we provide the basics of the GEVD solution. Our error-analysis is presented in Section III, where we obtain explicit expressions for the resulting mean ISR in terms of the association-matrices and the sources' temporal covariance matrices.
In Section IV, we present some simulation results illustrating the validity of our analytic results and providing empirical testing of their noise-sensitivity. Concluding remarks are provided in Section V.
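As a concrete illustration of the framework of (2) (this example is ours, not taken from the paper), the following NumPy sketch builds the two empirical generalized correlation matrices from an observation matrix X, using P_1 = I and a symmetric lag-1 association-matrix; the function names, the AR(1) test sources and all parameter values are assumptions made purely for the illustration.

```python
import numpy as np

def lag_association_matrix(N, lag):
    """Symmetric N x N association-matrix with ones on the +lag and -lag diagonals."""
    P = np.zeros((N, N))
    idx = np.arange(N - lag)
    P[idx, idx + lag] = 1.0
    P[idx + lag, idx] = 1.0
    return P

def generalized_correlations(X, P1, P2):
    """Empirical generalized correlation matrices of Eq. (2): R_i = X P_i X^T."""
    return X @ P1 @ X.T, X @ P2 @ X.T

# Illustrative usage: three AR(1) sources with distinct spectra, mixed by a random matrix.
rng = np.random.default_rng(0)
K, N = 3, 2000
S = np.zeros((K, N))
for k, a in enumerate([0.9, 0.5, -0.7]):          # distinct temporal structures -> SOS diversity
    w = rng.standard_normal(N)
    for n in range(1, N):
        S[k, n] = a * S[k, n - 1] + w[n]
A = rng.standard_normal((K, K))                   # unknown mixing matrix
X = A @ S                                         # observed mixtures
R1_hat, R2_hat = generalized_correlations(X, np.eye(N), lag_association_matrix(N, lag=1))
```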

II. THE MATRIX-PAIR GEVD SOLUTION

Consider the basic mixture model X = AS with zero-mean, statistically-independent sources, each having an N x N temporal covariance matrix C_k = E[s_k s_k^T] (k = 1, ..., K). Let P_1 and P_2 denote some association-matrices. Define

    D_1 = E[S P_1 S^T],   D_2 = E[S P_2 S^T]                                 (3)

which are diagonal due to the sources' independence (and zero mean), regardless of the specific choice of P_1 and P_2. Likewise, define

    R_1 = E[X P_1 X^T] = A D_1 A^T,   R_2 = E[X P_2 X^T] = A D_2 A^T.        (4)

The generalized eigenvalues and eigenvectors matrices of a matrix-pair (pencil) (Q_1, Q_2) are (respectively) a diagonal matrix Λ and a matrix W satisfying Q_1 W = Q_2 W Λ. Assuming invertibility of Q_2, Λ and W are also seen to be the (standard) eigenvalues and eigenvectors matrices of Q_2^{-1} Q_1. Assuming invertibility of both R_1 and R_2, it is evident from (4) that A is (up to arbitrary permutation and scaling of its columns) the generalized eigenvectors matrix of the matrix pencil (R_2^{-1}, R_1^{-1}) (with Λ = D_1 D_2^{-1}), and therefore also the eigenvectors matrix of R_1 R_2^{-1}.

Now let the matrices \hat{R}_1 = X P_1 X^T and \hat{R}_2 = X P_2 X^T denote estimates of R_1 and R_2 (respectively). Note that while the true R_1 and R_2 of (4) are symmetric with any choice of P_1 and P_2 (since D_1 and D_2 are diagonal with any P_1 and P_2), their estimates \hat{R}_1 and \hat{R}_2 may become nonsymmetric if P_1 and P_2 are nonsymmetric. We shall therefore restrict the discussion from now on to symmetric association-matrices P_1 and P_2, which would guarantee that \hat{R}_1 and \hat{R}_2 would be symmetric as well, just like R_1 and R_2.

Under some commonly met conditions (see, e.g., [16]), the closed-form EJD of \hat{R}_1 and \hat{R}_2 can be readily obtained from the GEVD of the pencil (\hat{R}_2^{-1}, \hat{R}_1^{-1}), or, equivalently, from the eigen-decomposition of \hat{Q} = \hat{R}_1 \hat{R}_2^{-1}: The eigenvectors matrix \hat{A} which satisfies \hat{Q} \hat{A} = \hat{A} \hat{D} (where \hat{D} is some diagonal matrix) also satisfies \hat{R}_1 = \hat{A} \hat{D}_1 \hat{A}^T and \hat{R}_2 = \hat{A} \hat{D}_2 \hat{A}^T, with some diagonal matrices \hat{D}_1 and \hat{D}_2, such that \hat{D} = \hat{D}_1 \hat{D}_2^{-1}. When the estimates \hat{R}_1 and \hat{R}_2 are exact and the model is identifiable, the resulting \hat{A} coincides with A (up to the inevitable scale and permutation ambiguities). Naturally, however, departure of \hat{R}_1 and \hat{R}_2 from their true values inflicts errors on \hat{A}.

III. ERROR ANALYSIS OF THE GEVD SOLUTION

A common measure of the estimation error (useful in the BSS context) is to consider the resulting overall mixing-unmixing matrix T = \hat{A}^{-1} A. Assuming, just for simplicity of notation, that the scaling and permutation ambiguities have been resolved, T would ideally be the identity matrix. However, due to estimation errors in \hat{A}, its off-diagonal elements T[k,ℓ] (k ≠ ℓ) would not vanish, and would reflect a residual mixing per realization. The second moments of these elements, E[T^2[k,ℓ]], are usually called the ℓ-to-k ISR (denoted ISR_{k,ℓ}), under the assumption that all sources have equal energy (if the sources have different energies, then each ISR_{k,ℓ} should be normalized by the ratio between the energies of the ℓ-th and k-th sources, so as to reflect the mean relative residual energy of the ℓ-th source in the reconstruction of the k-th source). If the scaling and/or permutation ambiguities are not resolved, then the resulting T matrix would equal the nominal (ambiguity-free) T up to permutation and scaling of its rows. Still, ISR_{k,ℓ} of the nominal T would reflect the relative residual energy of the ℓ-th source in the reconstruction of the k-th source, the only difference being that the k-th source might be obtained with a different index in the reconstruction, according to the remaining permutation error.
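The following minimal sketch (ours; the paper provides no code) implements the GEVD/EJD step described above: the estimate \hat{A} is taken as the eigenvectors matrix of \hat{Q} = \hat{R}_1 \hat{R}_2^{-1}, and the overall matrix T = \hat{A}^{-1} A is brought to a canonical form by a simple greedy resolution of the permutation and scaling ambiguities. The helper names and the greedy ambiguity fix are our own choices, not prescribed by the paper.

```python
import numpy as np

def gevd_mixing_estimate(R1_hat, R2_hat):
    """GEVD-based estimate of the mixing matrix: eigenvectors of Q_hat = R1_hat R2_hat^{-1}."""
    Q_hat = R1_hat @ np.linalg.inv(R2_hat)
    _, A_hat = np.linalg.eig(Q_hat)
    # under the model Q_hat is similar to a real diagonal matrix, so any small imaginary
    # parts that appear due to estimation errors are discarded here
    return np.real(A_hat)

def canonical_T(A_hat, A):
    """T = A_hat^{-1} A, with rows reordered and rescaled so that each row's dominant entry
    becomes a unit diagonal element (a greedy fix of the permutation/scale ambiguities)."""
    T = np.linalg.inv(A_hat) @ A
    order = np.argsort(np.argmax(np.abs(T), axis=1))
    T = T[order, :]
    return T / np.diag(T)[:, None]

# Squared off-diagonal entries of canonical_T(A_hat, A) are per-realization samples of the
# interference-to-source ratios (ISRs) analyzed in Section III.
```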
Before turning to the error analysis, we observe a very appealing and rather simplifying invariance property of T in this context:

Lemma 1: Given a specific realization S of the sources, the same value of T would be obtained (in our framework) with any (nonsingular) A.

Proof: Let us assume first that A = I, and denote the estimated \hat{R}_1 and \hat{R}_2 in this case as

    \hat{R}_1^{(I)} = X P_1 X^T = S P_1 S^T,   \hat{R}_2^{(I)} = X P_2 X^T = S P_2 S^T.        (5)

Likewise, denote \hat{Q}^{(I)} = \hat{R}_1^{(I)} (\hat{R}_2^{(I)})^{-1}, with eigenvectors and eigenvalues matrices \hat{A}^{(I)} and \hat{D}^{(I)} (respectively):

    \hat{Q}^{(I)} \hat{A}^{(I)} = \hat{A}^{(I)} \hat{D}^{(I)}.                                  (6)

Evidently, the matrix T in this case is given by T^{(I)} = (\hat{A}^{(I)})^{-1} I = (\hat{A}^{(I)})^{-1}. Now consider a general mixing matrix A. We then have

    \hat{R}_i = X P_i X^T = A S P_i S^T A^T = A \hat{R}_i^{(I)} A^T,   i = 1, 2                 (7)

and, consequently, \hat{Q} = A \hat{Q}^{(I)} A^{-1}. It is readily observed that the matrix A \hat{A}^{(I)} is the eigenvectors matrix of \hat{Q} (with eigenvalues matrix \hat{D}^{(I)}), since [using (6)]

    \hat{Q} A \hat{A}^{(I)} = A \hat{Q}^{(I)} A^{-1} A \hat{A}^{(I)} = A \hat{Q}^{(I)} \hat{A}^{(I)} = A \hat{A}^{(I)} \hat{D}^{(I)},        (8)

so \hat{A} = A \hat{A}^{(I)}, and therefore T = \hat{A}^{-1} A = (\hat{A}^{(I)})^{-1} A^{-1} A = (\hat{A}^{(I)})^{-1} = T^{(I)}, which establishes the invariance of T in A.

Note that this property is in accordance with the well-known equivariance property (e.g., [17]), shared by several (but certainly not all) BSS algorithms, as well as by the GEVD-based algorithm (at least in our SOS framework). For more general conditions for equivariance of GEVD-based separation, see [8]. We note further that this property only holds in the noiseless case, but falls apart in the presence of additive noise. Our error analysis would therefore only be valid in the noiseless case, but, as we shall demonstrate empirically in simulation, would still serve as a reasonable approximation under high signal-to-noise ratio (SNR) conditions.

Thanks to this invariance property, we may analyze the perturbation in T under the conveniently simple nonmixing condition A = I, knowing that the same result would hold true with any other invertible mixing matrix A. In the following Section III-A we quantify (under the nonmixing condition) the effect of estimation errors in \hat{R}_1 and \hat{R}_2 on all T[k,ℓ]. Our analysis relates T[k,ℓ] to the estimation errors in \hat{R}_1 and \hat{R}_2 (per realization). Then, in Section III-B we exploit statistical properties of the sources to characterize (still under the nonmixing condition) the relevant statistical properties of the estimation errors in \hat{R}_1 and \hat{R}_2, which would in turn lead us (using the results of Section III-A) to the second moments of all T[k,ℓ], namely to all ISR_{k,ℓ}.
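A quick numerical illustration of Lemma 1 (our sketch, reusing lag_association_matrix, gevd_mixing_estimate and canonical_T from the sketches above): for a fixed source realization S and fixed association-matrices, the canonical T obtained from the GEVD solution is the same, up to numerical precision, for any nonsingular mixing matrix A in the noiseless case. The source model and all parameter values below are our assumptions.

```python
import numpy as np
# requires lag_association_matrix, gevd_mixing_estimate and canonical_T from the sketches above

rng = np.random.default_rng(1)
K, N = 3, 1000
S = np.zeros((K, N))
for k, a in enumerate([0.9, 0.3, -0.6]):          # AR(1) sources with distinct spectra
    w = rng.standard_normal(N)
    for n in range(1, N):
        S[k, n] = a * S[k, n - 1] + w[n]
P1, P2 = np.eye(N), lag_association_matrix(N, lag=1)

Ts = []
for _ in range(3):                                # three different random (nonsingular) mixings
    A = rng.standard_normal((K, K))
    X = A @ S
    A_hat = gevd_mixing_estimate(X @ P1 @ X.T, X @ P2 @ X.T)
    Ts.append(canonical_T(A_hat, A))

# All three T's coincide (noiseless case), illustrating the invariance stated in Lemma 1:
print(max(np.max(np.abs(Ts[0] - Ts[i])) for i in (1, 2)))   # expected to be near machine precision
```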

A. A Small-Errors Perturbation Analysis for the GEVD

Assuming the nonmixing condition A = I, we have R_1 = D_1 and R_2 = D_2, so we may denote (assuming that D_2 is nonsingular)

    \hat{R}_1 = D_1 + E_1,   \hat{R}_2 = D_2 + E_2 = D_2 (I + D_2^{-1} E_2)                    (9)

where E_1 ≪ D_1 and E_2 ≪ D_2 are respective zero-mean estimation errors, assumed small in our framework of small-errors analysis (by denoting E ≪ D we imply that all eigenvalues of E are much smaller than the smallest eigenvalue of D; we shall also use the Taylor series expansion (I + E)^{-1} = I − E + o(E) for E ≪ I). Defining D ≜ D_1 D_2^{-1} and considering the small-errors analysis, which neglects second (and higher) order terms in E_1, E_2, we get

    \hat{Q} = \hat{R}_1 \hat{R}_2^{-1} = (D_1 + E_1)(I + D_2^{-1} E_2)^{-1} D_2^{-1}
            ≈ (D_1 + E_1)(I − D_2^{-1} E_2) D_2^{-1} ≈ D + E_1 D_2^{-1} − D E_2 D_2^{-1}.      (10)

In the error-free case the eigenvectors matrix \hat{A} of \hat{Q} would be the true mixing-matrix, \hat{A} = A = I, and the eigenvalues matrix \hat{D} would equal D. Let us denote by E and Δ the resulting respective errors in these matrices, namely \hat{A} = I + E and \hat{D} = D + Δ, such that \hat{Q} \hat{A} = \hat{A} \hat{D} implies \hat{Q}(I + E) = (I + E)(D + Δ). Substituting (10), we have (again, using the small-errors assumption)

    \hat{Q}(I + E) ≈ (D + E_1 D_2^{-1} − D E_2 D_2^{-1})(I + E) ≈ D + E_1 D_2^{-1} − D E_2 D_2^{-1} + D E        (11)

and

    (I + E)(D + Δ) ≈ D + E D + Δ.                                                               (12)

Equating these terms, we get

    D E − E D = D E_2 D_2^{-1} − E_1 D_2^{-1} + Δ.                                              (13)

Eventually, we would have T = \hat{A}^{-1} A = \hat{A}^{-1} = (I + E)^{-1} ≈ I − E, and since we are only interested in the off-diagonal terms of T, we may ignore the unknown Δ, which only interacts with the diagonal elements of E in (13). Denoting by d_i[k] the [k,k]-th element of D_i (i = 1, 2), we have, for the off-diagonal terms in (13) (recalling that D = D_1 D_2^{-1}),

    E[k,ℓ] ( d_1[k]/d_2[k] − d_1[ℓ]/d_2[ℓ] ) = ( d_1[k] E_2[k,ℓ] − d_2[k] E_1[k,ℓ] ) / ( d_2[k] d_2[ℓ] ),   1 ≤ k ≠ ℓ ≤ K.        (14)

Applying some straightforward algebraic manipulations, we end up with the following expression for T[k,ℓ] = −E[k,ℓ]:

    T[k,ℓ] = ( d_1[k] E_2[k,ℓ] − d_2[k] E_1[k,ℓ] ) / ( d_1[ℓ] d_2[k] − d_1[k] d_2[ℓ] ),   1 ≤ k ≠ ℓ ≤ K        (15)

which establishes the explicit dependence of T (under the small-errors assumption) on the (small) estimation errors E_1 and E_2 of the generalized correlations under the nonmixing condition A = I.

B. Explicit Expressions for the ISR

We now turn to calculate the second moments of the off-diagonal elements of T (namely, the ISRs), based on (15). We begin with expressions for the coefficients d_1[k] and d_2[k] (for k = 1, 2, ..., K):

    d_i[k] = D_i[k,k] = E[s_k^T P_i s_k] = Tr{P_i C_k},   i = 1, 2.                            (16)

Next, recall the zero-mean error-matrices E_1 = \hat{R}_1 − D_1, E_2 = \hat{R}_2 − D_2 (defined under the nonmixing condition X = S). For computing the ISR from (15) we need to know the variances and covariance of E_1[k,ℓ] and E_2[k,ℓ] (for all k ≠ ℓ). From the nonmixing condition and the diagonality of D_1, D_2, we note that (for k ≠ ℓ) E_i[k,ℓ] = s_k^T P_i s_ℓ for i = 1, 2. Thus, the covariance of E_i[k,ℓ] and E_j[k,ℓ] for i, j ∈ {1, 2} (and k ≠ ℓ) is given by

    Cov(E_i[k,ℓ], E_j[k,ℓ]) = E[ s_k^T P_i s_ℓ · s_k^T P_j s_ℓ ]
      = Σ_{p,q,m,n=1}^{N} E[ s_k[p] P_i[p,q] s_ℓ[q] s_k[m] P_j[m,n] s_ℓ[n] ]
      = Σ_{p,q,m,n} P_i[p,q] P_j[m,n] E[s_k[p] s_k[m]] E[s_ℓ[q] s_ℓ[n]]
      = Σ_{p,q,m,n} P_i[p,q] P_j[m,n] C_k[p,m] C_ℓ[q,n]
      = Σ_{p,q,m,n} P_i[p,q] C_ℓ[q,n] P_j[n,m] C_k[m,p]
      = Tr{ P_i C_ℓ P_j C_k }                                                                   (17)

(where we have used the statistical independence of the sources, as well as the symmetry of P_j and of C_k). Therefore, defining the K x K matrices Q_{1,1}, Q_{2,2} and Q_{1,2} with elements

    Q_{i,j}[k,ℓ] = Cov(E_i[k,ℓ], E_j[k,ℓ]) = Tr{ P_i C_ℓ P_j C_k },   i, j = 1, 2              (18)

and using (15), we obtain, for all 1 ≤ k ≠ ℓ ≤ K (assuming equal-energy sources; if the sources have different energies, then each ISR_{k,ℓ} has to be further multiplied by the energy-ratio of the ℓ-th and k-th sources, namely by Tr{C_ℓ}/Tr{C_k}),

    ISR_{k,ℓ} = E[T^2[k,ℓ]]
              = ( d_2^2[k] Q_{1,1}[k,ℓ] + d_1^2[k] Q_{2,2}[k,ℓ] − 2 d_1[k] d_2[k] Q_{1,2}[k,ℓ] ) / ( d_1[ℓ] d_2[k] − d_1[k] d_2[ℓ] )^2.        (19)
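The closed-form prediction of (16), (18) and (19) translates directly into a few lines of NumPy. The sketch below (ours) takes a list C of the K true N x N source covariance matrices and the two association-matrices and returns the predicted ISR matrix; the optional energy-ratio correction of the remark above is included as a flag. The function name and interface are our assumptions.

```python
import numpy as np

def predicted_isr(C, P1, P2, equal_energy=True):
    """Analytically-predicted ISR matrix, Eq. (19), built from d_i[k] = Tr{P_i C_k} (Eq. (16))
    and Q_{i,j}[k,l] = Tr{P_i C_l P_j C_k} (Eq. (18))."""
    K = len(C)
    P = (P1, P2)
    d = np.array([[np.trace(P[i] @ C[k]) for k in range(K)] for i in range(2)])   # Eq. (16)
    isr = np.full((K, K), np.nan)
    for k in range(K):
        for l in range(K):
            if k == l:
                continue
            Q = np.array([[np.trace(P[i] @ C[l] @ P[j] @ C[k]) for j in range(2)]
                          for i in range(2)])                                      # Eq. (18)
            num = (d[1, k] ** 2) * Q[0, 0] + (d[0, k] ** 2) * Q[1, 1] \
                  - 2.0 * d[0, k] * d[1, k] * Q[0, 1]
            den = (d[0, l] * d[1, k] - d[0, k] * d[1, l]) ** 2
            isr[k, l] = num / den                                                  # Eq. (19)
            if not equal_energy:      # unequal-energy correction: multiply by Tr{C_l}/Tr{C_k}
                isr[k, l] *= np.trace(C[l]) / np.trace(C[k])
    return isr
```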
Considerable simplification occurs in the case of stationary sources, whenever Toeplitz matrices are used as the association-matrices. The simplification is based on the observation that circulant Toeplitz matrices are diagonalized by the Fourier transform matrix. Assuming that all sources have correlation sequences with some finite maximal effective width L, their noncirculant Toeplitz covariance matrices can be approximated as circulant Toeplitz matrices for N ≫ L (say N > 10L). We may therefore approximate the respective terms as follows:

    d_i[k] = Tr{P_i C_k} ≈ Tr{ (F^H P_i F)(F^H C_k F) } = Tr{ \tilde{P}_i \tilde{C}_k } = Σ_{n=1}^{N} \tilde{p}_i[n] \tilde{c}_k[n]        (20)

where F is the (unitary) Fourier transform matrix, such that F[m,n] = (1/√N) exp{ −j 2π (m−1)(n−1) / N }, and \tilde{P}_i and \tilde{C}_k are diagonal matrices with the sequences \tilde{p}_i[n] and \tilde{c}_k[n] (respectively) along their diagonals. These sequences are the (real-valued) Discrete Fourier Transforms (DFTs) of the (symmetric) sequences p_i[m] and c_k[m], which are in turn the (symmetric) generating sequences of the Toeplitz matrices P_i and C_k, such that P_i[m,n] = p_i[m − n] and C_k[m,n] = c_k[m − n] for all m, n ∈ [1, N] (the latter is simply the autocorrelation sequence of the k-th source).

Namely, for n = 1, ..., N,

    \tilde{p}_i[n] = Σ_{m=−N/2}^{N/2−1} p_i[m] e^{−j 2π m (n−1) / N},   \tilde{c}_k[n] = Σ_{m=−N/2}^{N/2−1} c_k[m] e^{−j 2π m (n−1) / N}        (21)

(assuming that N is even). Further approximation (still assuming N ≫ L) allows conversion of the discrete sum over the (sufficiently smooth) product of DFTs into a frequency-domain integral over the product of discrete-time Fourier transforms (DTFTs), with

    Σ_{n=1}^{N} \tilde{p}_i[n] \tilde{c}_k[n] ≈ (N / 2π) ∫_0^{2π} Π_i(e^{jω}) S_k(e^{jω}) dω,        (22)

    Π_i(e^{jω}) = Σ_m p_i[m] e^{−jωm},                                                               (23)

    S_k(e^{jω}) = Σ_m c_k[m] e^{−jωm}.                                                               (24)

The latter is obviously the power-spectrum of the k-th source. For the matrix terms Q_{i,j}[k,ℓ] we similarly obtain

    Q_{i,j}[k,ℓ] ≈ Tr{ \tilde{P}_i \tilde{C}_ℓ \tilde{P}_j \tilde{C}_k } ≈ (N / 2π) ∫_0^{2π} Π_i(e^{jω}) Π_j(e^{jω}) S_k(e^{jω}) S_ℓ(e^{jω}) dω.        (25)

Thus, these approximations alleviate the need to compute the trace of a product of two (respectively, four) N x N matrices for each d_i[k] (respectively, Q_{i,j}[k,ℓ]), an O(N^2) (respectively, O(N^3)) operation: In the stationary case these computations can be substituted with sufficiently fine linear integration (independent of N). For large N the difference in computational load can be quite significant.

We note in passing that in the nonsquare case of more mixtures than sources, a GEVD solution can still exist, since the rows of the unmixing matrix can still be estimated from the GEVD of \hat{R}_1 and \hat{R}_2 (attained using standard GEVD tools), followed by elimination of rows orthogonal to all columns of \hat{R}_1 and \hat{R}_2 (as such rows would essentially reconstruct nonexisting, all-zeros sources). As far as our performance analysis is concerned, in such cases the missing sources can simply be regarded as sources with zero covariance matrices. Since each ISR_{k,ℓ} depends only on C_k and on C_ℓ (and not on the other sources' covariances), these fictitious sources will have no effect on the relevant ISR expressions for the true sources.
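For the stationary/Toeplitz case, the frequency-domain approximations (20)-(25) replace the N x N traces by one-dimensional numerical integrals. The sketch below (ours) evaluates the DTFTs Π_i(e^{jω}) and S_k(e^{jω}) of the symmetric generating sequences on a fine grid and approximates d_i[k] and Q_{i,j}[k,ℓ]; these quantities can then be plugged into (19) exactly as in the matrix-form computation. The grid size and function names are our choices.

```python
import numpy as np

def dtft_symmetric(seq, omega):
    """DTFT of a real symmetric sequence given by its values at lags m = 0, 1, ..., L
    (the value at lag -m equals the value at lag +m)."""
    m = np.arange(1, len(seq))
    return seq[0] + 2.0 * (seq[1:] @ np.cos(np.outer(m, omega)))

def freq_domain_terms(p_seqs, c_seqs, N, n_grid=4096):
    """Approximate d_i[k] (Eqs. (20)/(22)) and Q_{i,j}[k,l] (Eq. (25)) for stationary sources
    with Toeplitz association-matrices. p_seqs: the two generating sequences (lags 0..L_p);
    c_seqs: the K source autocorrelation sequences (lags 0..L_c)."""
    omega = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    Pi = np.array([dtft_symmetric(p, omega) for p in p_seqs])      # 2 x n_grid
    Sk = np.array([dtft_symmetric(c, omega) for c in c_seqs])      # K x n_grid
    scale = N / n_grid                                             # (N / 2*pi) * d_omega
    d = scale * (Pi @ Sk.T)                                        # d[i, k] ~ Tr{P_i C_k}
    Q = scale * np.einsum('if,jf,kf,lf->ijkl', Pi, Pi, Sk, Sk)     # Q[i,j,k,l] ~ Tr{P_i C_l P_j C_k}
    return d, Q
```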
IV. SIMULATION RESULTS

We present simulation results for three experiments. The first two experiments demonstrate the good match between the analytically-predicted and the empirically-obtained performance, with both nonstationary and stationary sources. The third experiment tests the sensitivity of the performance prediction to additive noise.

In the first experiment, we consider a mixture of K = 5 sources. The first source s_1[n] is a stationary zero-mean, unit-variance Gaussian white noise process, and the other four are generated (for k = 2, 3, 4, 5) as s_k[n] = Σ_{ℓ=0}^{3} h_k[ℓ] \tilde{w}_k[n − ℓ], where {h_k[ℓ]}_{ℓ=0}^{3} are fixed filter coefficients, and \tilde{w}_k[n] are nonstationary uncorrelated sequences, each generated as \tilde{w}_k[n] = (1 + 0.5 cos(2πn/M_k + φ_k)) w_k[n], in which w_k[n] is a zero-mean, unit-variance Gaussian white noise sequence, independent of the other sequences. The specific values of the filter coefficients {h_k[ℓ]}_{ℓ=0}^{3}, the modulation periods M_k and the phases φ_k are specified in Table I below. We used an observation length of N = 50.

The two association-matrices were chosen as P_1 = I, and P_2 a block-diagonal matrix with two blocks, each a symmetric (N/2) x (N/2) Toeplitz matrix. The first block had the value 2 along its main diagonal and 3 along its two first sub-diagonals (and zeros elsewhere); the second block had the value 4 along its first sub-diagonals and 10 along its second (and zeros elsewhere). So, in effect (up to irrelevant scaling), \hat{R}_1 is the observations' sample zero-lag correlation matrix, whereas \hat{R}_2 is the sum of several sample-correlation matrices taken over the two halves of the observation interval, as follows: two times the zero-lag correlation over the first half; six times the lag-one correlations over the first half; eight times the lag-one correlations over the second half; and twenty times the lag-two correlations over the second half.

TABLE I: EMPIRICAL INVERSE-ISR VALUES ([dB]) AND THEIR ANALYTICALLY-PREDICTED VALUES (IN PARENTHESES)

Table I summarizes the resulting empirical ISR_{k,ℓ} values for each of the signal-pair combinations, in terms of the inverse averaged ISR in [dB], averaged over 1000 independent trials. The mixing-matrix elements were redrawn independently from a standard Gaussian distribution in each trial. The numbers in parentheses represent our analytically-predicted values, calculated via (19) [using the matrix-form expressions (16) and (18) and normalizing by the different energy-ratios of the sources]. The good match (usually to within 1 dB) is evident.
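For reference, empirical ISR entries of this kind can be estimated by a Monte-Carlo loop of the following form (a sketch under our own assumptions, reusing gevd_mixing_estimate and canonical_T from the Section II sketch; generate_sources stands for whatever source generator is under study, and the paper's exact trial protocol is not implied).

```python
import numpy as np
# requires gevd_mixing_estimate and canonical_T from the Section II sketch

def empirical_isr(generate_sources, P1, P2, K, n_trials=1000, seed=0):
    """Monte-Carlo estimate of the ISR matrix: average of T[k,l]^2 over independent trials,
    with the mixing matrix redrawn from a standard Gaussian in every trial."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((K, K))
    for _ in range(n_trials):
        S = generate_sources(rng)                    # a fresh K x N realization of the sources
        A = rng.standard_normal((K, K))              # mixing matrix redrawn per trial
        X = A @ S
        A_hat = gevd_mixing_estimate(X @ P1 @ X.T, X @ P2 @ X.T)
        acc += canonical_T(A_hat, A) ** 2
    isr = acc / n_trials
    np.fill_diagonal(isr, np.nan)
    # inverse ISR in dB, as in Table I: -10*log10(isr); for unequal-energy sources,
    # normalize by the energy ratios before comparing with Eq. (19)
    return isr
```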

Next, we turn to stationary sources, so as to enable exploitation of the frequency-domain expressions for long observation intervals. In the second experiment we mixed and separated K = 3 stationary sources. Each source is a Gaussian autoregressive moving-average (ARMA) process of orders (2,2); namely, each source was generated by passing a zero-mean, unit-variance white Gaussian process through a linear, time-invariant system with two poles and two zeros, as specified in Table II below, followed by power-normalization. We present results obtained with three different sets of Toeplitz-structured association-matrices, with generating sequences p_1[m] and p_2[m] as specified in Table III for |m| ≤ 3 (for |m| > 3 all p_i[m] were set to zero).

TABLE II: POLES AND ZEROS
TABLE III: TOEPLITZ-GENERATING SEQUENCES FOR THE THREE CASES
Fig. 1: Empirical and analytically-predicted ISRs versus observation length N.
Fig. 2: Empirical ISRs versus the mixing-matrix parameter φ in (26), together with the analytically-predicted ISRs for the noiseless case.

The results for the three cases are shown (versus the observation length N) in Fig. 1 in terms of the empirically obtained ISR_{k,ℓ} values for all signal-pair combinations, superimposed on the analytically-predicted plots, obtained by (19) [using the frequency-domain expressions (20) and (25)]. The results reflect averaging over 1000 independent trials (with randomized mixing matrices). Once again, the good match is evident for all three cases. It is important to observe that none of the cases dominates (or is dominated by) the others for all ISRs: Each of the three cases attains the best (or worst) performance among the three for at least one ISR_{k,ℓ}.

In the last experiment, we test (empirically) the sensitivity of the performance to additive, spatially and spectrally white Gaussian noise at the mixtures. We used the same three signals from the second experiment, with case (c) association-matrices (the only case which ignores the zero-lag correlations, and might therefore be the least sensitive to temporally-white noise), using the longest observation length considered in the second experiment. Since the equivariance property does not hold in the presence of additive noise, the performance depends on the mixing matrix. To illustrate this dependence, we used parameterized mixing matrices A(φ), a 3 x 3 family whose elements are fixed products of sines and cosines of a single parameter φ (26), followed by normalization of the rows, so as to obtain mixtures with equal power, enabling a convenient definition of the SNR as the ratio between the (equal) power of each mixture and the variance of the additive noise. These mixing matrices are all parameterized by the single parameter φ, such that for φ = 0 we have A = I, and for different values of φ we get different (nonorthogonal) matrices, with different condition-numbers; the expression is obviously 360°-periodic. We present (in Fig. 2) the empirically obtained ISR values versus the parameter φ ∈ [0°, 360°] for three different SNR values: high (30 dB), medium (20 dB) and (relatively) low (10 dB). Note that these values only reflect the separation quality, computed from \hat{A}^{-1} A, and not the estimation quality of the sources, which also involves the inevitable noise-term (multiplied by \hat{A}^{-1}). We also present the analytically-predicted performance, which is obviously independent of φ. At the high SNR the separation performance is clearly seen to still coincide uniformly with the predicted values (to within less than 1 dB). At the medium SNR, notable (φ-dependent) deviation is observed, but may still be considered tolerable. However, as could be expected, at the lower SNR very significant deviations (of more than 10 dB for some values of φ) occur, reflecting the strong dependence of the separation performance on the mixing matrix in the presence of significant noise.

V. CONCLUSION

We considered the GEVD-based separation of independent sources with different temporal covariance structures, when the two target-matrices are based on SOS. We provided analytic expressions for the expected separation performance under a small-errors assumption (without any further assumptions on the sources' distributions). Such an assumption can be justified in practice whenever all the predicted ISRs are sufficiently low (say, below −20 dB) and the SNR (if noise is present) is sufficiently high (say, above 20 dB), as demonstrated in our simulations.
In the general case the ISR expression involves manipulation of potentially prohibitively large (N x N) matrices. However, we also derived approximate frequency-domain expressions for the stationary case (with Toeplitz association-matrices), which eliminate the need to manipulate such matrices. The good agreement between the theoretically-predicted and the empirically-obtained separation performance was demonstrated in simulation.

A remaining key question is, given the sources' covariance matrices and our analytic expressions for the performance prediction, whether (and if so, how) the association-matrices can be chosen so as to optimize the performance (in some sense). This question will be addressed in future work.

REFERENCES

[1] A. Yeredor, "On optimal selection of correlation matrices for matrix-pencil-based separation," in Proc. 8th Int. Conf. Independent Component Anal. Source Separation (ICA), 2009.
[2] L. Tong, V. C. Soon, Y.-F. Huang, and R. Liu, "AMUSE: A new blind identification algorithm," in Proc. IEEE ISCAS, 1990.
[3] L. Tong and R. Liu, "Blind estimation of correlated source signals," in Proc. Asilomar Conf. Signals, Syst., Comput., 1990.
[4] J.-F. Cardoso, "Source separation using higher-order moments," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), 1989, pp. 2109-2112.
[5] A. Yeredor, "Blind source separation via the second characteristic function," Signal Process., vol. 80, no. 5, pp. 897-902, 2000.
[6] A. M. Tomé, "The generalized eigendecomposition approach to the blind source separation problem," Digit. Signal Process., vol. 16, no. 3, 2006.
[7] L. Parra and P. Sajda, "Blind source separation via generalized eigenvalue decomposition," J. Mach. Learn. Res., vol. 4, pp. 1261-1269, 2003.
[8] E. Ollila, H. Oja, and V. Koivunen, "Complex-valued ICA based on a pair of generalized covariance matrices," Comput. Statist. Data Anal., vol. 52, 2008.
[9] J.-F. Cardoso and A. Souloumiac, "Blind beamforming for non-Gaussian signals," Proc. Inst. Electr. Eng. F, vol. 140, no. 6, pp. 362-370, 1993.
[10] L. Fêty and J.-P. Van Uffelen, "New methods for signal separation," in Proc. 4th Conf. HF Radio Syst. Tech., 1988.
[11] L. Parra and C. Spence, "Convolutive blind source separation of nonstationary sources," IEEE Trans. Speech Audio Process., vol. 8, no. 3, pp. 320-327, 2000.
[12] A. Belouchrani, K. Abed-Meraim, J.-F. Cardoso, and E. Moulines, "A blind source separation technique using second-order statistics," IEEE Trans. Signal Process., vol. 45, no. 2, pp. 434-444, 1997.
[13] D.-T. Pham and J.-F. Cardoso, "Blind separation of instantaneous mixtures of nonstationary sources," IEEE Trans. Signal Process., vol. 49, no. 9, pp. 1837-1848, 2001.
[14] A. Belouchrani and M. G. Amin, "Blind source separation based on time-frequency signal representations," IEEE Trans. Signal Process., vol. 46, no. 11, pp. 2888-2897, 1998.
[15] K. Abed-Meraim, Y. Xiang, J. H. Manton, and Y. Hua, "Blind source separation using second-order cyclostationary statistics," IEEE Trans. Signal Process., vol. 49, no. 4, pp. 694-701, 2001.
[16] A. Yeredor, "On using exact joint diagonalization for non-iterative approximate joint diagonalization," IEEE Signal Process. Lett., vol. 12, no. 9, pp. 645-648, 2005.
[17] J.-F. Cardoso and B. Laheld, "Equivariant adaptive source separation," IEEE Trans. Signal Process., vol. 44, no. 12, pp. 3017-3030, 1996.


More information

Time Series 2. Robert Almgren. Sept. 21, 2009

Time Series 2. Robert Almgren. Sept. 21, 2009 Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models

More information

On Computation of Approximate Joint Block-Diagonalization Using Ordinary AJD

On Computation of Approximate Joint Block-Diagonalization Using Ordinary AJD On Computation of Approximate Joint Block-Diagonalization Using Ordinary AJD Petr Tichavský 1,3, Arie Yeredor 2,andZbyněkKoldovský 1,3 1 Institute of Information Theory and Automation, Pod vodárenskou

More information

ACCORDING to Shannon s sampling theorem, an analog

ACCORDING to Shannon s sampling theorem, an analog 554 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 2, FEBRUARY 2011 Segmented Compressed Sampling for Analog-to-Information Conversion: Method and Performance Analysis Omid Taheri, Student Member,

More information

Random matrix pencils and level crossings

Random matrix pencils and level crossings Albeverio Fest October 1, 2018 Topics to discuss Basic level crossing problem 1 Basic level crossing problem 2 3 Main references Basic level crossing problem (i) B. Shapiro, M. Tater, On spectral asymptotics

More information

Properties of Zero-Free Spectral Matrices Brian D. O. Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE

Properties of Zero-Free Spectral Matrices Brian D. O. Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 54, NO 10, OCTOBER 2009 2365 Properties of Zero-Free Spectral Matrices Brian D O Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE Abstract In

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fourth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada Front ice Hall PRENTICE HALL Upper Saddle River, New Jersey 07458 Preface

More information

Probability density of nonlinear phase noise

Probability density of nonlinear phase noise Keang-Po Ho Vol. 0, o. 9/September 003/J. Opt. Soc. Am. B 875 Probability density of nonlinear phase noise Keang-Po Ho StrataLight Communications, Campbell, California 95008, and Graduate Institute of

More information

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL.

Adaptive Filtering. Squares. Alexander D. Poularikas. Fundamentals of. Least Mean. with MATLABR. University of Alabama, Huntsville, AL. Adaptive Filtering Fundamentals of Least Mean Squares with MATLABR Alexander D. Poularikas University of Alabama, Huntsville, AL CRC Press Taylor & Francis Croup Boca Raton London New York CRC Press is

More information

conventions and notation

conventions and notation Ph95a lecture notes, //0 The Bloch Equations A quick review of spin- conventions and notation The quantum state of a spin- particle is represented by a vector in a two-dimensional complex Hilbert space

More information

Design of FIR Nyquist Filters with Low Group Delay

Design of FIR Nyquist Filters with Low Group Delay 454 IEEE TRASACTIOS O SIGAL PROCESSIG, VOL. 47, O. 5, MAY 999 Design of FIR yquist Filters with Low Group Delay Xi Zhang and Toshinori Yoshikawa Abstract A new method is proposed for designing FIR yquist

More information

Performance Analysis for Strong Interference Remove of Fast Moving Target in Linear Array Antenna

Performance Analysis for Strong Interference Remove of Fast Moving Target in Linear Array Antenna Performance Analysis for Strong Interference Remove of Fast Moving Target in Linear Array Antenna Kwan Hyeong Lee Dept. Electriacal Electronic & Communicaton, Daejin University, 1007 Ho Guk ro, Pochen,Gyeonggi,

More information

Fourier PCA. Navin Goyal (MSR India), Santosh Vempala (Georgia Tech) and Ying Xiao (Georgia Tech)

Fourier PCA. Navin Goyal (MSR India), Santosh Vempala (Georgia Tech) and Ying Xiao (Georgia Tech) Fourier PCA Navin Goyal (MSR India), Santosh Vempala (Georgia Tech) and Ying Xiao (Georgia Tech) Introduction 1. Describe a learning problem. 2. Develop an efficient tensor decomposition. Independent component

More information

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of Index* The Statistical Analysis of Time Series by T. W. Anderson Copyright 1971 John Wiley & Sons, Inc. Aliasing, 387-388 Autoregressive {continued) Amplitude, 4, 94 case of first-order, 174 Associated

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

An Iterative Blind Source Separation Method for Convolutive Mixtures of Images

An Iterative Blind Source Separation Method for Convolutive Mixtures of Images An Iterative Blind Source Separation Method for Convolutive Mixtures of Images Marc Castella and Jean-Christophe Pesquet Université de Marne-la-Vallée / UMR-CNRS 8049 5 bd Descartes, Champs-sur-Marne 77454

More information

Singular Value Decomposition and Principal Component Analysis (PCA) I

Singular Value Decomposition and Principal Component Analysis (PCA) I Singular Value Decomposition and Principal Component Analysis (PCA) I Prof Ned Wingreen MOL 40/50 Microarray review Data per array: 0000 genes, I (green) i,i (red) i 000 000+ data points! The expression

More information

Advanced Digital Signal Processing -Introduction

Advanced Digital Signal Processing -Introduction Advanced Digital Signal Processing -Introduction LECTURE-2 1 AP9211- ADVANCED DIGITAL SIGNAL PROCESSING UNIT I DISCRETE RANDOM SIGNAL PROCESSING Discrete Random Processes- Ensemble Averages, Stationary

More information

Improved PARAFAC based Blind MIMO System Estimation

Improved PARAFAC based Blind MIMO System Estimation Improved PARAFAC based Blind MIMO System Estimation Yuanning Yu, Athina P. Petropulu Department of Electrical and Computer Engineering Drexel University, Philadelphia, PA, 19104, USA This work has been

More information

Acoustic Source Separation with Microphone Arrays CCNY

Acoustic Source Separation with Microphone Arrays CCNY Acoustic Source Separation with Microphone Arrays Lucas C. Parra Biomedical Engineering Department City College of New York CCNY Craig Fancourt Clay Spence Chris Alvino Montreal Workshop, Nov 6, 2004 Blind

More information

Quadratic Optimization for Simultaneous Matrix Diagonalization

Quadratic Optimization for Simultaneous Matrix Diagonalization 1 Quadratic Optimization for Simultaneous Matrix Diagonalization Roland Vollgraf and Klaus Obermayer Roland Vollgraf Bernstein Center for Computational Neuroscience and Neural Information Processing Group

More information

HST.582J/6.555J/16.456J

HST.582J/6.555J/16.456J Blind Source Separation: PCA & ICA HST.582J/6.555J/16.456J Gari D. Clifford gari [at] mit. edu http://www.mit.edu/~gari G. D. Clifford 2005-2009 What is BSS? Assume an observation (signal) is a linear

More information

Dimensionality Reduction Using the Sparse Linear Model: Supplementary Material

Dimensionality Reduction Using the Sparse Linear Model: Supplementary Material Dimensionality Reduction Using the Sparse Linear Model: Supplementary Material Ioannis Gkioulekas arvard SEAS Cambridge, MA 038 igkiou@seas.harvard.edu Todd Zickler arvard SEAS Cambridge, MA 038 zickler@seas.harvard.edu

More information

1 Introduction Independent component analysis (ICA) [10] is a statistical technique whose main applications are blind source separation, blind deconvo

1 Introduction Independent component analysis (ICA) [10] is a statistical technique whose main applications are blind source separation, blind deconvo The Fixed-Point Algorithm and Maximum Likelihood Estimation for Independent Component Analysis Aapo Hyvarinen Helsinki University of Technology Laboratory of Computer and Information Science P.O.Box 5400,

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

1254 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 4, APRIL On the Virtual Array Concept for Higher Order Array Processing

1254 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 4, APRIL On the Virtual Array Concept for Higher Order Array Processing 1254 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 53, NO. 4, APRIL 2005 On the Virtual Array Concept for Higher Order Array Processing Pascal Chevalier, Laurent Albera, Anne Ferréol, and Pierre Comon,

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fifth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada International Edition contributions by Telagarapu Prabhakar Department

More information

REAL-TIME TIME-FREQUENCY BASED BLIND SOURCE SEPARATION. Scott Rickard, Radu Balan, Justinian Rosca. Siemens Corporate Research Princeton, NJ 08540

REAL-TIME TIME-FREQUENCY BASED BLIND SOURCE SEPARATION. Scott Rickard, Radu Balan, Justinian Rosca. Siemens Corporate Research Princeton, NJ 08540 REAL-TIME TIME-FREQUENCY BASED BLIND SOURCE SEPARATION Scott Rickard, Radu Balan, Justinian Rosca Siemens Corporate Research Princeton, NJ 84 fscott.rickard,radu.balan,justinian.roscag@scr.siemens.com

More information

Signal Modeling Techniques in Speech Recognition. Hassan A. Kingravi

Signal Modeling Techniques in Speech Recognition. Hassan A. Kingravi Signal Modeling Techniques in Speech Recognition Hassan A. Kingravi Outline Introduction Spectral Shaping Spectral Analysis Parameter Transforms Statistical Modeling Discussion Conclusions 1: Introduction

More information