A comparative study of approximate joint diagonalization algorithms for blind source separation in presence of additive noise


Serge Dégerine*, Elimane Kane

S. Dégerine and E. Kane are with LMC-IMAG, UMR CNRS 5523, BP 53, F-38041 Grenoble cedex 9, France (e-mail: Serge.Degerine@imag.fr, Elimane.Kane@imag.fr).

Abstract: A comparative study of approximate joint diagonalization algorithms of a set of matrices is presented. Using a weighted least-squares criterion, without the orthogonality constraint, an algorithm is compared with an analogous one for blind source separation (BSS). The criterion of the present algorithm bears on the separating matrix, while the other bears on the mixing matrix. The convergence of the algorithm is proved under some mild assumptions. The performances of the two algorithms are compared with the usual standard algorithms using BSS simulation results. We show that the improvement in estimating the separating matrix, resulting from the relaxation of the orthogonality restriction, can be achieved in presence of additive noise when the length of the observed sequences is sufficiently large and when the mixing matrix is not close to an orthogonal matrix.

Index Terms: Blind source separation, instantaneous mixture, joint diagonalization, least-squares criterion.

I. INTRODUCTION

The approximate joint diagonalization of a set of real $m$-square symmetric matrices, $\mathcal{M} = \{M_0, M_1, \ldots, M_J\}$, where $M_0$ is positive definite, is an essential tool in blind source separation (BSS) algorithms. The goal is to find a diagonalizer $B$, called separating matrix in BSS, so that the matrices $BM_jB^T$, $j = 0, \ldots, J$, are as diagonal as possible. The JADE [1] and SOBI [2] algorithms use a preprocessing whitening phase: starting with any square root $M_0^{1/2}$ of $M_0$, where, in BSS, $M_0$ is the correlation matrix of the observations at zero lag, the set $\mathcal{M}$ is replaced by $\mathcal{M}' = \{M_j' = WM_jW^T,\ j = 0, \ldots, J\}$, where $W = M_0^{-1/2}$ is called the whitening matrix. We have $M_0' = I$ and, for any orthogonal matrix $U$, the matrix $B = UW$ provides an exact diagonalizer of $M_0$. The problem is now to find an orthogonal diagonalizer $U$ of the set $\mathcal{M}'$. Let $\mathrm{Off}(M) = \sum_{k \neq l} m_{l,k}^2$, where $m_{l,k}$ is the $(l,k)$-th entry of the matrix $M$. Then the solution is obtained by minimizing with respect to (w.r.t.) $U$ the criterion

$$C_{lsu}(U; \mathcal{M}') = \sum_{j=1}^{J} \mathrm{Off}(UM_j'U^T). \quad (1)$$

Let $\|M\|_F^2 = \mathrm{tr}(MM^T) = \sum_{l,k} m_{l,k}^2$ denote the squared Frobenius norm of $M$. The minimization of (1) is equivalent to minimizing the criterion

$$C_{lsu}(U, \mathcal{D}'; \mathcal{M}') = \sum_{j=0}^{J} \|M_j' - U^T D_j U\|_F^2, \quad (2)$$

w.r.t. $U$ and the set of diagonal matrices $\mathcal{D}' = \{D_0 = I, D_1, \ldots, D_J\}$ [3]. This result is obvious since, the Frobenius norm being invariant under the orthogonal transformation $U$,

$$C_{lsu}(U, \mathcal{D}'; \mathcal{M}') = \sum_{j=0}^{J} \|UM_j'U^T - D_j\|_F^2, \quad (3)$$

and the minimum w.r.t. $\mathcal{D}'$ is given by $\hat{D}_j = \mathrm{Diag}\{\mathrm{diag}(UM_j'U^T)\}$, $j = 0, \ldots, J$, where $\mathrm{diag}(M)$ is the vector defined by the diagonal elements of the matrix $M$ and $\mathrm{Diag}(v)$ is the diagonal matrix whose diagonal elements are given by the vector $v$.

The orthogonality constraint on $U$, i.e. the whitening phase, is relaxed in [4] (see also [5] for a first presentation of this approach). Introducing some positive weights $w_0, w_1, \ldots, w_J$ in (2), this new criterion consists in minimizing

$$C_{lsa}(A, \mathcal{D}; \mathcal{M}) = \sum_{j=0}^{J} w_j \|M_j - AD_jA^T\|_F^2, \quad (4)$$

w.r.t. the mixing matrix $A$ and the set $\mathcal{D} = \{D_0, \ldots, D_J\}$ of diagonal matrices. We propose here to relax the orthogonality constraint on $U$ in (3) instead of (2). Then the criterion consists in minimizing

$$C_{lsb}(B, \mathcal{D}; \mathcal{M}) = \sum_{j=0}^{J} w_j \|BM_jB^T - D_j\|_F^2, \quad (5)$$

w.r.t. the separating matrix $B$ and $\mathcal{D}$.
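To fix ideas, here is a minimal Python sketch (our illustration, not code from the paper) of the $\mathrm{Off}(\cdot)$ measure and of criterion (1); `Ms_w` is assumed to hold the whitened matrices $M_j'$ as NumPy arrays:

```python
import numpy as np

def off(M):
    """Off(M): sum of the squared off-diagonal entries of M."""
    return np.sum(M ** 2) - np.sum(np.diag(M) ** 2)

def c_lsu(U, Ms_w):
    """Criterion (1): sum of Off(U M'_j U^T) over the whitened set M'."""
    return sum(off(U @ M @ U.T) for M in Ms_w)
```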
This approach was first presented in [6] (in French). These weighted least-squares (WLS) criteria are not clearly justified by statistical models. In addition, the maximum likelihood criterion in [7], as well as the mutual information criterion in [8], shows that the separating matrix B is a more relevant parameter than the mixing matrix A. The two criteria (4) and (5) lead to similar performance in BSS, but it is easier to minimize (5) than (4), and the corresponding algorithm is faster.

All these optimization problems can be solved in an iterative way using alternating-direction type algorithms. The minimization of (1) is realized by successive Givens rotations [9]. This is also the case in [10] for another criterion, without the orthogonality constraint, on a set of symmetric positive definite matrices $M_j$. The present algorithm, like the one in [4], alternates between the two following minimization stages. In the first stage, (4) or (5) is minimized w.r.t. the diagonal set $\mathcal{D}$, while keeping $A$ or $B$ fixed. The solution is then explicit, but simpler for (5) than for (4). In the other stage, these criteria are minimized w.r.t. $A$ or $B$, while keeping $\mathcal{D}$ fixed. In this stage, the minimization is realized w.r.t. each column $a_k$ of $A$, or each row $b_l$ of $B$, successively, while keeping the others fixed. The solution is then given by the eigenvector associated with the largest positive eigenvalue of some quadratic form $P_k$, or with the smallest eigenvalue of another quadratic form $Q_l$. The difference comes from the fact that $Q_l$ is always positive semidefinite, while $P_k$ can be negative definite. Other approaches, using gradient methods, have been proposed for minimizing $C_{lsa}(\cdot)$ [11] or $C_{lsb}(\cdot)$ [12], [13].

During the review process, we were informed by one referee that the present approach, with the same algorithm, appears in a paper that has been accepted for publication in these Transactions [14]. This algorithm, called QDIAG, places no restrictions on the separating matrix $B$, which can be rectangular, nor on the set of matrices $\mathcal{M}$, regarding their symmetry, except for $M_0$ which is positive definite. Two implementations, with complexity $O(Jm^3)$ or $O(m^5)$, are proposed and compared with ACDC [4] and FFDIAG [13]. From a theoretical point of view, only the convergence analysis of the algorithm appears to be new here, together with the comparison of the three criteria $C_{lsu}(\cdot)$, $C_{lsa}(\cdot)$ and $C_{lsb}(\cdot)$ given in this introduction. Hence the main additional contribution of the present paper lies in the simulation results, where we focus on the conditions required in order to improve on the classical BSS methods by relaxing the orthogonality restriction.

The paper is organized as follows. In Section II, we derive the algorithm for the minimization of $C_{lsb}(\cdot)$. Some properties of this criterion and the convergence of the algorithm are studied in Section III. Finally, the three criteria $C_{lsu}(\cdot)$, $C_{lsa}(\cdot)$ and $C_{lsb}(\cdot)$ are compared, using BSS simulation results, in Section IV.

II. PRESENTATION OF THE ALGORITHM

Some ambiguities are well known in BSS problems. If $\Pi$, $\Lambda$ and $E$ designate respectively a permutation matrix, a diagonal matrix with positive entries and a sign matrix (a diagonal matrix whose entries equal $\pm 1$), the relation

$$AD_jA^T = (A\Lambda\Pi E)(E\Pi^T\Lambda^{-1}D_j\Lambda^{-1}\Pi E)(E\Pi^T\Lambda A^T)$$

shows clearly that $A$ (and consequently $B = A^{-1}$) and $\mathcal{D}$ are not uniquely defined. Concerning the scale factor, the problem is solved by putting $D_0 = I$. This corresponds to source signals with unit variance in BSS. For the permutation and sign factors, we can proceed as follows (if necessary). The sign factor $E$ is chosen in such a way that the first nonzero component in each column of $A$ (or row of $B$) is positive. Then, for the permutation factor $\Pi$, the columns of $A$ (or rows of $B$) are arranged in decreasing order according to their first component (or to the next ones in case of equality).
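The sign convention above is easy to implement; the following sketch (ours, with a hypothetical helper name) flips the row signs of a candidate separating matrix accordingly:

```python
import numpy as np

def fix_signs(B):
    """Choose the sign factor E: make the first nonzero entry of each row of B positive."""
    B = B.copy()
    for l in range(B.shape[0]):
        nz = np.flatnonzero(B[l])
        if nz.size > 0 and B[l, nz[0]] < 0:
            B[l] = -B[l]          # multiply the whole row by -1
    return B
```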
First, we briefly recall the algorithm of [4] for the WLS criterion on the mixing matrix $A$, in order to ease the comparison with the algorithm based on the separating matrix $B$.

A. Mixing matrix

The ACDC algorithm of [4] alternates between the two following minimization schemes.

1) Minimization with respect to $\mathcal{D}$, or DC phase: The minimization of (4) w.r.t. $\mathcal{D}$, when $A$ is fixed, is given for $j = 0, \ldots, J$ by

$$\hat{D}_j = \mathrm{Diag}\{[(A^TA) \circ (A^TA)]^{-1}\,\mathrm{diag}(A^TM_jA)\},$$

where $\circ$ denotes the Hadamard (element-wise) product.

2) Minimization with respect to $A$, or AC phase: The minimization of (4) w.r.t. $A$, when $\mathcal{D}$ is fixed, is realized by successive minimizations on the columns $a_1, \ldots, a_m$ of $A$. The minimum of (4) w.r.t. a selected column $a_k$ of $A$, while keeping its other columns fixed as well as $\mathcal{D}$, comes from the following quadratic form:

$$P_k = \sum_{j=0}^{J} \omega_j D_j(k)\Bigl[M_j - \sum_{i \neq k} D_j(i)\,a_ia_i^T\Bigr], \quad (6)$$

where $D_j(i)$, $i = 1, \ldots, m$, are the diagonal elements of $D_j$. If $v$ is the unit-norm eigenvector, with a positive first nonzero component, associated with the largest eigenvalue $\lambda$ of $P_k$, then

$$\hat{a}_k = \sqrt{\frac{\lambda}{\sum_{j=0}^{J} \omega_j D_j^2(k)}}\; v \ \text{ if } \lambda > 0, \quad \text{otherwise } \hat{a}_k = 0.$$

B. Separating matrix

Without constraint on $B$ or $\mathcal{D}$ in the expression (5) of $C_{lsb}(B, \mathcal{D}; \mathcal{M})$, the criterion is equal to zero for $B = 0$ and $D_j = 0$, $j = 0, \ldots, J$. So, as explained above, we put $D_0 = I$.

1) Minimization with respect to $\mathcal{D}$: The minimization of (5) w.r.t. $\mathcal{D}$, when $B$ is fixed, is straightforward:

$$\hat{D}_j = \mathrm{Diag}\{\mathrm{diag}(BM_jB^T)\}, \quad j = 0, \ldots, J.$$

So the nuisance parameter $\mathcal{D}$ in (5) can be eliminated, and the minimization of the criterion is now equivalent to minimizing

$$C_{lsb}(B; \mathcal{M}) = \sum_{j=0}^{J} \omega_j\,\mathrm{Off}(BM_jB^T), \quad (7)$$

under the constraint $\mathrm{Diag}\{\mathrm{diag}(BM_0B^T)\} = I$. For simplicity, we use the arguments of $C_{lsb}(\cdot)$ to distinguish $C_{lsb}(B, \mathcal{D}; \mathcal{M})$, associated with (5), from $C_{lsb}(B; \mathcal{M})$, associated with (7).
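For concreteness, the DC phase above can be sketched as follows (our own rendering of the closed-form solution; `Ms` is assumed to hold the matrices $M_j$ as NumPy arrays):

```python
import numpy as np

def dc_phase(A, Ms):
    """DC phase of ACDC: per-j optimal diagonal D_j for fixed A.
    The weights w_j do not affect these separate minimizations; with D_0 = I
    fixed, this update is applied for j >= 1."""
    G = (A.T @ A) ** 2                       # (A^T A) o (A^T A), Hadamard square
    return [np.diag(np.linalg.solve(G, np.diag(A.T @ M @ A))) for M in Ms]
```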

2) Minimization with respect to $B$: This minimization is again realized by successive minimizations on the vectors $b_1, \ldots, b_m$ defined by the rows of $B$. Let us rewrite (7) as follows:

$$C_{lsb}(B; \mathcal{M}) = \sum_{j=0}^{J} \omega_j \sum_{l=1}^{m} \sum_{i \neq l} \bigl(b_l^TM_jb_i\bigr)^2.$$

Keeping only the terms that depend on $b_l$, we have to minimize

$$C_{lsb}(b_l; \mathcal{M}) = \sum_{j=0}^{J} \omega_j \sum_{i \neq l} \bigl(b_l^TM_jb_i\bigr)^2 = b_l^T\Bigl[\sum_{j=0}^{J} \omega_j M_j \Bigl(\sum_{i \neq l} b_ib_i^T\Bigr) M_j\Bigr] b_l, \quad (8)$$

under the constraint $b_l^TM_0b_l = 1$. Let $B_{(l)}$ be the $(m-1) \times m$ matrix obtained by removing the $l$-th row of $B$. We have

$$\sum_{i \neq l} b_ib_i^T = B_{(l)}^TB_{(l)} = B^TB - b_lb_l^T.$$

So the minimum of $C_{lsb}(b_l; \mathcal{M})$ w.r.t. $b_l$ is realized by the vector $\hat{b}_l$ that minimizes the quadratic form

$$Q_l = \sum_{j=0}^{J} \omega_j M_jB_{(l)}^TB_{(l)}M_j,$$

under the constraint $b_l^TM_0b_l = 1$. Using the whitening phase as a change of variables and putting $B = B'W$ (so that $BM_jB^T = B'M_j'B'^T$ and the constraint becomes $b_l'^Tb_l' = 1$, since $M_0' = I$), the new solution $\hat{b}_l'$ is the unit-norm eigenvector, with a positive first nonzero component, associated with the smallest eigenvalue of the new quadratic form

$$Q_l' = \sum_{j=0}^{J} \omega_j M_j'B_{(l)}'^TB_{(l)}'M_j'. \quad (9)$$

It is clear that $Q_l'$ is positive semidefinite, i.e. $b^TQ_l'b \geq 0$ for every $b \in \mathbb{R}^m$. Furthermore, looking at (8), $\hat{b}_l'^TQ_l'\hat{b}_l' = 0$ if and only if the solution matrix $B'$ obtained at this stage is such that, for every $j = 0, \ldots, J$ and $k \neq l$ ($l$ being fixed), the $(l,k)$-th and $(k,l)$-th entries of $B'M_j'B'^T$ are equal to zero, which is the goal (for every $l$) of the exact diagonalization problem.

3) LSB algorithm: Starting with $B' = I$, or with any other initialization matrix $B'$, the LSB algorithm provides a sequence of matrices $\{B_n', n \geq 0\}$ by changing successively one selected row of the current matrix with the above process, i.e. $B_{n+1}'$ differs from $B_n'$ only through one row $b_l'$ (or is equal to $B_n'$). Notice that, at each iteration, the constraint $\mathrm{Diag}\{\mathrm{diag}(B_n'B_n'^T)\} = I$ is automatically fulfilled. We will see in Section III-B that the convergence of the algorithm, i.e. the existence of a limit $\hat{B}'$ for the sequence $\{B_n', n \geq 0\}$, is guaranteed under very mild assumptions. So the stopping rule of the algorithm can operate on the change in the criterion $C_{lsb}(\cdot)$ or in the parameter $B'$. Let us recall that, when $\hat{B}'$ is given by the algorithm, the solution of the problem is $\hat{B} = \hat{B}'W$. Here, the whitening phase is only used to simplify the computations in the algorithm. But this shows that this approach is equivalent to replacing the orthogonality constraint $UU^T = I$, in (1) or (3), by $\mathrm{Diag}\{\mathrm{diag}(UU^T)\} = I$, which is less restrictive.
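One iteration of this row-by-row process can be sketched as follows (ours, not the authors' implementation), with `Ms_w` the whitened set $\mathcal{M}'$ and `w` the weights $\omega_j$:

```python
import numpy as np

def lsb_row_update(Bw, Ms_w, w, l):
    """Replace row l of B' by the unit-norm eigenvector of Q'_l (eq. (9))
    associated with its smallest eigenvalue."""
    B_l = np.delete(Bw, l, axis=0)                  # B'_(l): B' without its l-th row
    Q = sum(wj * (M @ B_l.T @ B_l @ M) for wj, M in zip(w, Ms_w))
    _, V = np.linalg.eigh(Q)                        # eigenvalues in increasing order
    b = V[:, 0]                                     # smallest-eigenvalue eigenvector
    nz = np.flatnonzero(b)
    if nz.size > 0 and b[nz[0]] < 0:                # sign convention of Section II
        b = -b
    Bw = Bw.copy()
    Bw[l] = b
    return Bw
```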
On the other hand, this property does not hold for $C_{lsa}(\cdot)$.

3) Uniqueness: As discussed at the beginning of Section II, the non-uniqueness coming from the ambiguity factors $\Pi$, $\Lambda$ and $E$ can easily be eliminated by selecting one set of parameters. Nevertheless, it is not difficult to select a set $\mathcal{M}$ of matrices which are exactly diagonalizable by a continuous set of matrices $B$. For instance, if $M_j = AD_jA^T$, $j = 0, \ldots, J$, and, for $k \neq l$, the diagonal entries of $D_j$ satisfy $D_j(k) = D_j(l)$ for all $j$, then we can replace the columns $a_k$ and $a_l$ of $A$ by $[\cos\theta\,a_k + \sin\theta\,a_l]$ and $[\cos\theta\,a_l - \sin\theta\,a_k]$, $\theta \in [0, \pi]$, without modifying the matrices $M_j$. This situation occurs, for example, when two sources have proportional spectra in the BSS problem in which $M_j$ is the cross-correlation matrix of lag $j$. In practice, we can suppose that the global minimum is unique, especially when the elements of $\mathcal{M}$ come from observed data, as in BSS problems. But the difficulty in the algorithms comes from the existence of local minima. It is not difficult to observe such a phenomenon in simulations by using different starting values in the algorithms.

B. Convergence of the algorithm

Convergence of these algorithms is difficult to prove, even if the global minimum exists. Nevertheless, alternating-direction type methods are preferable to gradient methods: since the criteria always decrease and are bounded below by zero, the criterion values are convergent. However, in general, this does not imply convergence of the parameters. This point can be detailed as follows. Let $\mathcal{B}^\star$ be the set of accumulation points of a sequence $\mathcal{B} = \{B_n', n \geq 0\}$ given by the algorithm.

If $\mathcal{B}$ is a bounded sequence and if, in addition, $\|B_{n+1}' - B_n'\|_F^2$ goes to zero, then, by Theorem 26.1, p. 173, of [15], this sequence is convergent provided $\mathcal{B}^\star$ is not a continuum (a continuum is a closed set which cannot be decomposed into two disjoint closed sets). $\mathcal{B}$ is a bounded sequence, since $\mathrm{Diag}\{\mathrm{diag}(B_n'B_n'^T)\} = I$ implies $\|B_n'\|_F^2 = m$. Now, $B_{n+1}'$ is obtained by replacing a row $b_l'$ of $B_n'$ by the eigenvector $\hat{b}_l'$, with a positive first nonzero component, associated with the smallest eigenvalue $\lambda_1$ of $Q_l'$. Then we have

$$\|B_{n+1}' - B_n'\|_F^2 = \|\hat{b}_l' - b_l'\|_F^2$$

and

$$C_{lsb}(B_n'; \mathcal{M}') - C_{lsb}(B_{n+1}'; \mathcal{M}') = b_l'^TQ_l'b_l' - \hat{b}_l'^TQ_l'\hat{b}_l'.$$

Let $Q_l' = V\Lambda V^T$ be the eigendecomposition of $Q_l'$, with $\Lambda = \mathrm{Diag}\{(\lambda_1, \ldots, \lambda_m)^T\}$ and $V = (v_1, \ldots, v_m)$, where the eigenvalues are arranged in increasing order, $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_m$. We have $\hat{b}_l' = v_1$ and $\hat{b}_l'^TQ_l'\hat{b}_l' = \lambda_1$. Putting $b_l' = \sum_{i=1}^{m} \alpha_iv_i$, with $\sum_i \alpha_i^2 = 1$, it can be seen that

$$\|B_{n+1}' - B_n'\|_F^2 = 2(1 - \alpha_1)$$

and, since $b_l'^TQ_l'b_l' = \sum_i \alpha_i^2\lambda_i \geq \alpha_1^2\lambda_1 + (1 - \alpha_1^2)\lambda_2$,

$$C_{lsb}(B_n'; \mathcal{M}') - C_{lsb}(B_{n+1}'; \mathcal{M}') \geq (1 - \alpha_1^2)(\lambda_2 - \lambda_1).$$

So the convergence of the criterion $C_{lsb}(B_n'; \mathcal{M}')$ implies that $\alpha_1$ converges to $\pm 1$ if $(\lambda_2 - \lambda_1)$ remains bounded below by a strictly positive number (separating condition). The convention used in the definition of $\hat{b}_l'$ shows that $\alpha_1$ tends to $+1$, and then $\|B_{n+1}' - B_n'\|_F^2$ goes to zero. Notice that a matrix $B'$ is an accumulation point of a sequence $\mathcal{B}$ if and only if each of its rows $b_l'$, $l = 1, \ldots, m$, is an eigenvector associated with the smallest eigenvalue of the corresponding quadratic form $Q_l'$ defined by (9). Finally, the algorithm is convergent if this set of matrices is not a continuum and if the quadratic forms $Q_l'$ used in the algorithm satisfy the separating condition on their lowest eigenvalues.

IV. BSS SIMULATION RESULTS

In this section, we compare the LSB algorithm with the ACDC algorithm of [4] (denoted LSA for convenience here) on BSS problems. More precisely, the goal is to detail the situations in which these algorithms can improve the performances of the standard LSU algorithms (JADE [1] and SOBI [2]). The LSB algorithm implemented here corresponds to the QDIAG algorithm with complexity $O(Jm^3)$ of [14]. Notice that the simulation results reported in [14] are essentially devoted to the comparison of QDIAG with ACDC and FFDIAG [13] in terms of rate of convergence and computational efficiency.

A. Experimental setting

We consider an instantaneous mixture of $m = 3$ sources with additive noise,

$$X(t) = A_0S(t) + \delta\Delta w(t) = Y(t) + n(t), \quad t = 1, \ldots, T,$$

with, as true mixing matrix, the matrix $A_0$ used in [4]. The components $S_k(\cdot)$ of $S(\cdot)$ are real-valued zero-mean processes with unit variances, independent of each other, so that $\mathrm{Var}\{S(t)\} = I$. The components $w_k(\cdot)$ of $w(\cdot)$ are real-valued white Gaussian noises with unit variances, and $w(\cdot)$ is independent of $S(\cdot)$. The matrix $\Delta$ is diagonal with $\Delta^2 = \mathrm{Diag}\{\mathrm{diag}(A_0A_0^T)\}$. So $\delta$ is a scalar used to select the noise level, since we have $\mathrm{Var}\{n_k(t)\} = \delta^2\,\mathrm{Var}\{Y_k(t)\}$ for each $k$, while $nl = 10\log_{10}\delta^2$ is the noise level in decibels. The covariance matrix of $X(\cdot)$ satisfies

$$R_X(0) = E\{X(t)X(t)^T\} = A_0A_0^T + \delta^2\Delta^2.$$

Notice that the separating matrix $B_0 = A_0^{-1}$ is an exact diagonalizer of $R_X(0)$ in the noiseless case $\delta = 0$, but also when $A_0$ is an orthogonal matrix. We will see below that, in these situations, the LSU algorithms cannot be improved. Let us recall that these algorithms estimate the separating matrix in the form $\hat{B}_{lsu} = \hat{U}_{lsu}\hat{W}$, using criterion (1) for the orthogonal matrix $\hat{U}_{lsu}$. So the whitening matrix $\hat{W}$ should be an estimate of $W$ satisfying $WA_0A_0^TW^T = I$.
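The mixture model above can be reproduced with a small sketch (ours; the helper name and arguments are hypothetical), given a mixing matrix `A0` and a source array `S` of shape (m, T):

```python
import numpy as np

def noisy_mixture(A0, S, nl_db, rng):
    """X(t) = A0 S(t) + delta * Delta * w(t), with nl = 10 log10(delta^2) in dB."""
    m, T = S.shape
    delta = 10.0 ** (nl_db / 20.0)                  # delta = 10^(nl/20)
    Delta = np.sqrt(np.diag(np.diag(A0 @ A0.T)))    # Delta^2 = Diag{diag(A0 A0^T)}
    return A0 @ S + delta * (Delta @ rng.standard_normal((m, T)))

# Example usage: X = noisy_mixture(A0, S, nl_db=-5, rng=np.random.default_rng(0))
```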
1) Whitening phase: The covariance matrix $R_X(0)$ is estimated by its empirical counterpart,

$$\hat{R}_X(0) = \frac{1}{T}\sum_{t=1}^{T} X(t)X(t)^T,$$

which is symmetric positive definite and gives the matrix $M_0$ of $\mathcal{M}$. The whitening matrix $\hat{W} = \hat{R}_X(0)^{-1/2}$ is defined by $\hat{W} = D^{-1}V^T$, from the eigendecomposition $\hat{R}_X(0) = VD^2V^T$, and the whitened signal $\hat{X}_w(\cdot)$ is given by $\hat{X}_w(t) = \hat{W}X(t)$, $t = 1, \ldots, T$. Now, the other matrices $M_1', \ldots, M_J'$ of $\mathcal{M}'$ are selected in two distinct ways, in order to illustrate the comparison with JADE or with SOBI. But these matrices are always defined from statistics computed on the whitened signal $\hat{X}_w(\cdot)$.

2) Cumulant matrices: JADE exploits the statistical independence of the sources using the fourth-order cumulants. So, for each $k$, $S_k(t)$, $t = 1, \ldots, T$, are independent random variables, uniformly distributed between $-\sqrt{3}$ and $\sqrt{3}$ (thus having zero mean and unit variance), as in [4]. The matrices $M_j'$, $j = 1, \ldots, J$, of $\mathcal{M}'$, with $J = m(m+1)/2 = 6$, are defined by the following empirical cumulant matrices of $\hat{X}_w(\cdot)$. For $1 \leq l \leq k \leq m$ and $j = k(k-1)/2 + l$, $M_j' = \widehat{\mathrm{Cum}}_{k,l}(\hat{X}_w)$, with

$$\widehat{\mathrm{Cum}}_{k,l}(\hat{X}_w) = \bigl[\widehat{\mathrm{Cum}}\{\hat{X}_{w,k}, \hat{X}_{w,l}, \hat{X}_{w,i}, \hat{X}_{w,j}\}\bigr]_{i,j=1,\ldots,m}. \quad (10)$$

Since the empirical covariance matrix of $\hat{X}_w$ is equal to $I$, we have

$$\widehat{\mathrm{Cum}}\{\hat{X}_{w,k}, \hat{X}_{w,l}, \hat{X}_{w,i}, \hat{X}_{w,j}\} = \frac{1}{T}\sum_{t=1}^{T} \hat{X}_{w,k}(t)\hat{X}_{w,l}(t)\hat{X}_{w,i}(t)\hat{X}_{w,j}(t) - \delta_{k,l}\delta_{i,j} - e_k(i)e_l(j) - e_l(i)e_k(j),$$

where $\delta_{k,l}$ is the Kronecker symbol ($\delta_{k,l} = 1$ if $k = l$, and $0$ otherwise) and $[e_1, e_2, \ldots, e_m] = I$.
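A minimal sketch (ours) of the whitening step and of these empirical cumulant matrices, under the stated assumption that the empirical covariance of the whitened signal is the identity:

```python
import numpy as np

def whiten(X):
    """Whitening: W = D^{-1} V^T from R_X(0) = V D^2 V^T; returns W and X_w = W X."""
    T = X.shape[1]
    R0 = (X @ X.T) / T
    d2, V = np.linalg.eigh(R0)                 # R0 symmetric positive definite
    W = np.diag(1.0 / np.sqrt(d2)) @ V.T
    return W, W @ X

def cumulant_matrix_whitened(Xw, k, l):
    """Empirical cumulant matrix Cum_{k,l}(X_w) of eq. (10), for a whitened
    signal whose empirical covariance is I."""
    m, T = Xw.shape
    I = np.eye(m)
    C = (Xw * Xw[k] * Xw[l]) @ Xw.T / T        # (1/T) sum_t x_k x_l x_i x_j
    if k == l:
        C -= I                                  # delta_{k,l} delta_{i,j}
    C -= np.outer(I[k], I[l]) + np.outer(I[l], I[k])   # e_k(i)e_l(j) + e_l(i)e_k(j)
    return C
```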

Notice that, although $\hat{X}_w(t) = \hat{W}X(t)$, the cumulant matrix $\widehat{\mathrm{Cum}}_{k,l}(\hat{X}_w)$ is not equal to $\hat{W}\widehat{\mathrm{Cum}}_{k,l}(X)\hat{W}^T$.

3) Covariance matrices: SOBI exploits the second-order statistics of colored sources having different spectra. So, for each $k$, $S_k(t)$, $t = 1, \ldots, T$, is a sequence of a zero-mean stationary Gaussian AR($p_k$) process with unit variance. We use the same models as in [7], characterized by the orders $p_1 = 2$, $p_2 = 3$ and $p_3 = 4$ and the following partial autocorrelation coefficients: $\{.4, .25;\ .6, .8, .5;\ .5, .7, .5, .4\}$. The matrices $M_j'$, $j = 1, \ldots, J = 6$, of $\mathcal{M}'$ are defined by the following empirical covariance matrices of lag $j$ of $\hat{X}_w(\cdot)$:

$$M_j' = \frac{1}{2}\bigl[\hat{R}_j^w + \hat{R}_j^{wT}\bigr], \qquad \hat{R}_j^w = \frac{1}{T}\sum_{t=j+1}^{T} \hat{X}_w(t)\hat{X}_w(t-j)^T. \quad (11)$$

Notice that here we have $\hat{R}_j^w = \hat{W}\hat{R}_j\hat{W}^T$, where $\hat{R}_j$ is the covariance matrix of lag $j$ of $X(\cdot)$.
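The lagged covariance matrices (11) can be sketched as follows (our illustration):

```python
import numpy as np

def lagged_cov(Xw, j):
    """Symmetrized empirical covariance of lag j (eq. (11)):
    M'_j = (R^w_j + (R^w_j)^T) / 2, with R^w_j = (1/T) sum_{t=j+1}^T x(t) x(t-j)^T."""
    T = Xw.shape[1]
    R = (Xw[:, j:] @ Xw[:, :T - j].T) / T
    return 0.5 * (R + R.T)
```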
4) Stopping rule: We use the same stopping rule in the three algorithms, LSU, LSA and LSB, given by the relative variation of criteria (2), (4) and (5) after each sweep, with the same set $\mathcal{M}'$. For instance, in LSB we take

$$\frac{C_{lsb}(B', \mathcal{D}'; \mathcal{M}') - C_{lsb}(\bar{B}', \bar{\mathcal{D}}'; \mathcal{M}')}{C_{lsb}(B', \mathcal{D}'; \mathcal{M}')} < 10^{-4},$$

where $\bar{B}'$ comes from the modification of the $m$ rows of $B'$. In LSA, one sweep corresponds to one DC phase, for $\mathcal{D}$, and $m$ AC phases for the modification of the $m$ columns of $A$. In LSU, one sweep corresponds to $m(m-1)/2$ Givens rotations. The number of sweeps is also bounded by a fixed maximum.

5) Performance index: In all the experiments reported below, the algorithms are applied to the set $\mathcal{M}'$, where $M_0' = I$ and $M_j'$, $j = 1, \ldots, J$, are given by (10) or (11). Then the estimated separating matrix $\hat{B}$ is $\hat{U}\hat{W}$, $\hat{B}'\hat{W}$ and $\hat{A}^{-1}\hat{W}$ for LSU, LSB and LSA respectively. We have $\hat{U}\hat{W}\hat{R}_X(0)[\hat{U}\hat{W}]^T = I$ and $\mathrm{Diag}\{\mathrm{diag}(\hat{B}'\hat{W}\hat{R}_X(0)[\hat{B}'\hat{W}]^T)\} = I$. So the empirical covariance matrix $\hat{R}_{\hat{S}}(0)$ of $\hat{S}(t) = \hat{B}X(t)$ satisfies $\hat{R}_{\hat{S}}(0) = I$ in LSU and $\mathrm{Diag}\{\mathrm{diag}(\hat{R}_{\hat{S}}(0))\} = I$ in LSB. This normalization is coherent with $\mathrm{Var}\{S(t)\} = I$, and the matrix $\hat{C} = \hat{B}A_0$ should be nearly a matrix of the form $\Pi E$ in the noiseless case. This property is no longer true in presence of additive noise. Furthermore, there is no normalization on the columns of $\hat{A}$ in LSA. In order to eliminate the three ambiguities $\Pi$, $E$ and $\Lambda$, the performance index used in the sequel is the interference-to-signal ratio (ISR), measured as follows. In each experiment (and for each algorithm), the permutation matrix $\Pi$ and the sign matrix $E$ are chosen in such a way that the diagonal elements of $E\Pi\hat{C}$ are positive with maximal sum. Then each row of $E\Pi\hat{C}$ is divided by its diagonal component. The resulting matrix is denoted $\hat{C}'$. Using $R$ independent trials, the ISR is estimated by

$$\mathrm{ISR} = \frac{1}{R}\sum_{r=1}^{R}\sum_{l \neq k}\hat{C}_r'(l,k)^2.$$

The number $R$ of trials depends on the sequence length $T$, the product $RT$ being kept fixed; the selected values of $T$ are 2500, 5000, $10^4$, $5\cdot10^4$, $10^5$, $5\cdot10^5$ and $10^6$.

B. Experimental results

Numerous experiments have been conducted. Those reported below have been selected in order to illustrate the most relevant facts. The weights are identical in LSA and LSB, with $\omega_j = 1$, $j = 1, \ldots, J$, and three distinct values for $\omega_0$. By using a heavy weight $\omega_0$, we force the separating matrix to be an exact diagonalizer of $\hat{R}_X(0)$, as in LSU. In order to relax this constraint, a very small weight should be chosen. This choice can be adapted to the sequence length $T$. From our experiments, we have selected, in an approximate way, $\omega_0 = 1/T$ for the comparison with JADE (cumulant matrices) and $\omega_0 = 1/(10T)$ for the comparison with SOBI (covariance matrices). The third value, $\omega_0 = 1$, illustrates the neutral situation. Unless otherwise indicated, the output of LSU is used as starting value for LSA and LSB.

1) Comparison with JADE: Recall that the separating matrix given by JADE [9] is $\hat{B}_{lsu} = \hat{U}\hat{W}$, where $\hat{W}$ is the whitening matrix defined in Section IV-A.1 and $\hat{U}$ is the orthogonal matrix minimizing criterion (1) applied to the set $\mathcal{M}'$ of cumulant matrices given by (10). Then the separating matrices given by LSB and LSA are respectively $\hat{B}_{lsb} = \hat{B}'\hat{W}$ and $\hat{B}_{lsa} = \hat{A}^{-1}\hat{W}$, where $\hat{B}'$ and $\hat{A}$ are the solutions of criteria (5) and (4) using the same set $\mathcal{M}'$, with $\hat{U}$ and $\hat{U}^T$ as starting values. It is not surprising here that the performance of JADE cannot be improved when the true separating matrix $B_0$ is an exact diagonalizer of the true covariance matrix $R_X(0)$. So three conditions are necessary in order to observe better performances with LSA or LSB: the noise level nl must be significant, the mixing matrix $A_0$ must be sufficiently different from an orthogonal matrix, and the sequence length $T$ must be very large in order to estimate the cumulants with good accuracy. Notice that the performance of JADE can be improved, in the noiseless case, by using asymptotically optimal generalized weights ($\omega_j$ is then a weighting matrix instead of a scalar) in LSA applied to the output of JADE [16].

Let us begin with general comments which are common to most of the simulations reported hereafter. The performances of LSA and LSB are generally identical, so further comments will be given only for LSB. The performances of JADE and of LSB with the heavy weight $\omega_0$ are identical. Looking at the relative performances of LSB with respect to the weight $\omega_0$, we can see that the search for an optimal weight is a difficult task, because this weight depends on the sequence length $T$, the noise level nl and the mixing matrix $A_0$. Nevertheless, since the goal is to improve on the performance of JADE, this can be done only with $\omega_0 = 1/T$. Fig. 1 shows that, for $T = 10^6$, LSB is better than JADE when the noise level nl is between -20 dB and 5 dB, with a maximum gain of 22.7 dB for nl = -5 dB.

But, as can be seen in Fig. 2, even for nl = -5 dB, this advantage requires $T \geq 5\cdot10^5$; Fig. 2 reports the corresponding ISR values of JADE, LSB and LSA with $\omega_0 = 1/T$. Furthermore, JADE is preferable when the mixing matrix is orthogonal. This is illustrated in Fig. 5, where the mixing matrix is the orthogonal matrix $V$ of the eigendecomposition $A_0 = VDV^T$.

Fig. 1. ISR versus noise level with mixing matrix $A_0$ and sequence length $T = 10^6$.

Fig. 2. ISR versus sequence length with mixing matrix $A_0$ and noise level nl = -5 dB.

2) Comparison with SOBI: The cumulant matrices in $\mathcal{M}'$ are now replaced by the covariance matrices given by (11). The conclusions are very similar to those of the comparison with JADE. For $T = 10^6$, Fig. 3 shows that LSB and LSA are better than SOBI when the noise level nl is between -20 dB and 10 dB, with a maximum gain of 3.2 dB for nl = 0 dB. In Fig. 4, for nl = 0 dB, we observe that this improvement requires $T \geq 2500$; at $T = 2500$, the ISR of SOBI is -2.82 dB, and Fig. 4 reports the corresponding values of LSB and LSA with $\omega_0 = 1/(10T)$. In general, the gain is more important here, because the covariance matrices of the different lags are estimated with the same accuracy as $R_X(0)$, contrary to the cumulant matrices.

Fig. 3. ISR versus noise level with mixing matrix $A_0$ and sequence length $T = 10^6$.

Fig. 4. ISR versus sequence length with mixing matrix $A_0$ and noise level nl = 0 dB.

Notice that Fig. 4 and Fig. 2 illustrate the fact that the whitening phase in LSU algorithms imposes a limit on the reachable performance [17].

3) Further comments: We now illustrate other relevant facts regarding the algorithms LSA and LSB. The first one is the influence of the starting value, the second is the choice of the set of cumulant matrices, and the third is the difficulty of such comparisons regarding the choice of the mixing matrix and of the performance index.

Instead of the output of JADE, the identity matrix $I$ is now used as starting value in LSA and LSB. Fig. 6 shows results analogous to those reported in Fig. 2, but with nl = 0 dB. We observe that the starting value does not affect LSB, when $\omega_0 = 1/T$ or $\omega_0 = 1$, nor LSA when $\omega_0 = 1$. On the other hand, LSA with $\omega_0 = 1/T$ gives very bad results (see the ISR values for $T = 2500$ in Fig. 6). In fact these results, except for $T = 10^6$, are very different from one set of trials to another. With the heavy weight $\omega_0$, LSA and LSB have the same behaviour: the results, which are now very bad and far away from those of JADE, are independent of the sequence length $T$. Moreover, in this case, the algorithms are not convergent (the number of sweeps reaches the maximum) for about 6% of the trials up to 9% as $T$ goes from 2500 to $10^6$. Please notice that this dependence on the starting value proves that criteria (4) and (5) can have local minima.

Fig. 5. ISR versus noise level with the orthogonal mixing matrix $V$ and sequence length $T = 10^6$.

Until now, we have used the set $\mathcal{M}'$ of cumulant matrices associated with the whitened signal $\hat{X}_w(\cdot)$ because, as will be seen below, this set leads to the best performances. This point is illustrated in Fig. 7, in the noiseless case and using $\omega_0 = 1/T$ for LSA and LSB. In this experiment, the present approach, denoted CWS for Cumulants of the Whitened Signal, is compared with those based on the set $\mathcal{M}$ of empirical cumulant matrices of the signal $X(\cdot)$. More precisely, this set is defined by $M_0 = \hat{R}_X(0)$ and, for $1 \leq l \leq k \leq m$ and $j = k(k-1)/2 + l$, $M_j = \widehat{\mathrm{Cum}}_{k,l}(X)$, with

$$\widehat{\mathrm{Cum}}_{k,l}(X) = \bigl[\widehat{\mathrm{Cum}}\{X_k, X_l, X_i, X_j\}\bigr]_{i,j=1,\ldots,m},$$

where, using obvious notations for the elements of the matrix $\hat{R}_X = \hat{R}_X(0)$,

$$\widehat{\mathrm{Cum}}\{X_k, X_l, X_i, X_j\} = \frac{1}{T}\sum_{t=1}^{T} X_k(t)X_l(t)X_i(t)X_j(t) - \hat{R}_X(k,l)\hat{R}_X(i,j) - \hat{R}_X(k,i)\hat{R}_X(l,j) - \hat{R}_X(l,i)\hat{R}_X(k,j).$$

The methods denoted WCS, for Whitened Cumulants of the Signal, come from replacing $\widehat{\mathrm{Cum}}_{k,l}(\hat{X}_w)$ in CWS by $\hat{W}\widehat{\mathrm{Cum}}_{k,l}(X)\hat{W}^T$. The methods denoted CS, for Cumulants of the Signal, are applied directly to the set $\mathcal{M}$, without the whitening phase. But, using the invariance property discussed in Section III-A.2, such versions for JADE and LSB are not different from the WCS approach above. This is not true for LSA.

Fig. 6. ISR versus sequence length with mixing matrix $A_0$ and noise level nl = 0 dB. Here the identity matrix is used as starting value for LSA and LSB.

It can be seen in Fig. 7 that the recommended approach, CWS, leads to the best performances for the three criteria. Notice also that JADE is preferable to LSA or LSB in the noiseless case (because $B_0$ is an exact diagonalizer of $R_X(0)$). It is clear that WCS is not equivalent to CWS. The comparison between ACDC and JADE reported in [4] corresponds to LSA CS and JADE WCS. This is the reason why our conclusion here differs from that of [4], where ACDC is said to be better than JADE in the noiseless case. Notice that LSA with CWS, WCS and CS can be regarded as different sub-optimal solutions of the approach in [16] using generalized weights.

One important fact, in such comparisons, is that some conclusions depend on the mixing matrix and on the performance index. This point is illustrated in Table I, corresponding to the experiment used for Fig. 2, but with a random mixing matrix: for each trial, the elements of $A_0$ are independent random variables with the standard normal distribution.
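For reference, the empirical cumulant matrices of the raw signal defined above (used by the CS and WCS variants) can be sketched as follows (ours):

```python
import numpy as np

def cumulant_matrix_raw(X, k, l):
    """Empirical 4th-order cumulant matrix Cum_{k,l}(X) of the raw signal,
    using the general formula with the empirical covariance R = R_X(0)."""
    m, T = X.shape
    R = (X @ X.T) / T
    C = (X * X[k] * X[l]) @ X.T / T                    # (1/T) sum_t X_k X_l X_i X_j
    C -= R[k, l] * R                                    # R(k,l) R(i,j)
    C -= np.outer(R[k], R[l]) + np.outer(R[l], R[k])    # R(k,i)R(l,j) + R(l,i)R(k,j)
    return C
```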
We also consider the performance index used in the presentation of FFDIAG in [13]:

$$\mathrm{score} = \frac{1}{2R}\sum_{r=1}^{R}\sum_{l}\Bigl(\sum_{k}\frac{\hat{C}_r(l,k)^2}{\max_i \hat{C}_r(l,i)^2} - 1\Bigr) + \frac{1}{2R}\sum_{r=1}^{R}\sum_{k}\Bigl(\sum_{l}\frac{\hat{C}_r(l,k)^2}{\max_i \hat{C}_r(i,k)^2} - 1\Bigr).$$

Please notice that the score is equal to the ISR if, for each matrix $\hat{C}_r$, we have

$$\max_i \hat{C}_r(l,i)^2 = \hat{C}_r(l,i_l)^2 = \max_i \hat{C}_r(i,i_l)^2, \quad l = 1, \ldots, m.$$

This will be the case if $\hat{C}_r$ is close to a matrix of the form $\Lambda\Pi E$, as expected.
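A sketch (ours) of both indices for a single trial; `C_norm` assumes the permutation, sign and row normalizations of Section IV-A.5 have already been applied, and the averaging over the $R$ trials is left out:

```python
import numpy as np

def isr_one_trial(C_norm):
    """ISR contribution of one trial: sum of the squared off-diagonal
    entries of the normalized gain matrix C'_r."""
    return np.sum(C_norm ** 2) - np.sum(np.diag(C_norm) ** 2)

def score_one_trial(C):
    """FFDIAG-style index of one trial, as reconstructed above: off-diagonal
    energy of C normalized by the row-wise and column-wise maxima."""
    P = C ** 2
    row = np.sum(P / P.max(axis=1, keepdims=True), axis=1) - 1.0
    col = np.sum(P / P.max(axis=0, keepdims=True), axis=0) - 1.0
    return 0.5 * (row.sum() + col.sum())
```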

Here, the ISR index is not relevant, because its values are very different from one set of trials to another. On the other hand, the score values (also in dB) are stable, and slightly better for LSA than for LSB. The mean CPU time (in milliseconds) per trial shows that LSB is clearly faster than LSA. Notice that the CPU time of JADE must be added to those of LSA and LSB, since these algorithms use the output of JADE as starting value. Table I also gives the results of FFDIAG. This algorithm minimizes $C_{lsb}(\cdot)$ using a Newton-like method. Here, it appears to be preferable to LSA and LSB when $T \geq 5\cdot10^5$.

TABLE I. Rate of convergence and performance of JADE, LSB, LSA and FFDIAG with $\omega_0 = 1/T$, random mixing matrix and noise level nl = -5 dB.

We have seen that the performances of LSA and LSB are similar as far as improving on JADE or SOBI is concerned. In fact, the difference between the two methods comes from the implementation of the two algorithms. In LSB, the estimation of $\mathcal{D}$ is eliminated, and the computation of $\hat{b}_l$ is easier than that of $\hat{a}_k$. So LSB is faster than LSA. However, the algorithm LS AJD in [18], using exact joint diagonalization for minimizing $C_{lsa}(\cdot)$, is faster, with better score results, in the case of two sources when $T$ is sufficiently large (see Table II).

TABLE II. Rate of convergence and performance of JADE, LSB, LSA and LS AJD with $\omega_0 = 1/T$, random mixing matrix and noise level nl = -5 dB, in the case of 2 sources.

TABLE III. Rate of convergence and performance of JADE, LSB, LSA and LS AJD with $\omega_0 = 1/T$, random mixing matrix and noise level nl = -5 dB, in the case of 3 sources.

The author of [18] proposes to use the output of this algorithm as an initial guess when $m > 2$. This point is illustrated in Tables III and IV, where the subscript jade or ajd in LSB and LSA specifies the output used as starting value. We observe that LS AJD alone is as fast as JADE and can provide better score results when $T \geq 5\cdot10^5$. As a starting value, the output of LS AJD is preferable to that of JADE for LSA, but not for LSB. The large variation of the score from one table to the other shows that this measure strongly depends on the mixing matrix. Additional work would be necessary to clarify this point.

TABLE IV. Rate of convergence and performance of JADE, LSB, LSA and LS AJD with $\omega_0 = 1/T$, random mixing matrix and noise level nl = -5 dB, in the case of 3 sources.

V. CONCLUSION

In this paper, a comparative study of approximate joint diagonalization algorithms for blind source separation in presence of additive noise has been presented, with a special emphasis on a non-orthogonal approximate joint diagonalization algorithm. It is an iterative algorithm, using an alternating-direction type technique, based on a weighted least-squares criterion applied to the separating matrix. Convergence to a stationary point of the criterion is proved under mild assumptions. Simulation results show that this algorithm leads to performances similar to those of the ACDC algorithm, which uses the same criterion applied to the mixing matrix, but is faster. The conditions under which these algorithms can improve the performances of the usual JADE and SOBI algorithms are clarified. In our experiments, we have observed that this can be done only in presence of noise, when the length of the observed sequences is sufficiently large, and with a mixing matrix not close to an orthogonal matrix.

ACKNOWLEDGMENT

The authors would like to thank the reviewers for their constructive comments and suggestions, and for pointing out some useful references.

REFERENCES

[1] J.-F. Cardoso and A. Souloumiac, "Blind beamforming for non-Gaussian signals," IEE Proceedings-F, vol. 140, no. 6, pp. 362-370, Dec. 1993.
[2] A. Belouchrani, K. Abed-Meraim, J.-F. Cardoso and E. Moulines, "A blind source separation technique using second-order statistics," IEEE Trans. on Signal Processing, vol. 45, no. 2, pp. 434-444, Feb. 1997.
[3] M. Wax and J. Sheinvald, "A least-squares approach to joint diagonalization," IEEE Signal Processing Letters, vol. 4, no. 2, Feb. 1997.
[4] A. Yeredor, "Non-orthogonal joint diagonalization in the least-squares sense with application in blind source separation," IEEE Trans. on Signal Processing, vol. 50, no. 7, pp. 1545-1553, July 2002.
[5] A. Yeredor, "Approximate joint diagonalization using non-orthogonal matrices," in Proc. ICA, Helsinki, Finland, June 2000.
[6] S. Dégerine, "Sur la diagonalisation conjointe approchée par un critère des moindres carrés" (On approximate joint diagonalization by a least-squares criterion), in Proc. GRETSI, Toulouse, France, September 2001.
[7] S. Dégerine and A. Zaïdi, "Separation of an instantaneous mixture of Gaussian autoregressive sources by the exact maximum likelihood approach," IEEE Trans. on Signal Processing, vol. 52, no. 6, pp. 1499-1512, June 2004.
[8] D.T. Pham, "Blind separation of instantaneous mixture of sources via the Gaussian mutual information criterion," Signal Processing, vol. 81, no. 9, 2001.
[9] J.-F. Cardoso and A. Souloumiac, "Jacobi angles for simultaneous diagonalization," SIAM J. Matrix Anal. Appl., vol. 17, no. 1, pp. 161-164, Jan. 1996.
[10] D.T. Pham, "Joint approximate diagonalization of positive definite Hermitian matrices," SIAM J. Matrix Anal. Appl., vol. 22, no. 4, pp. 1136-1152, 2001.
[11] A.-J. van der Veen, "Joint diagonalization via subspace fitting techniques," in Proc. ICASSP, vol. 5, Salt Lake City, UT, May 2001.
[12] M. Joho and H. Mathis, "Joint diagonalization of correlation matrices by using gradient methods with application to blind signal separation," in Proc. SAM, Rosslyn, VA, Aug. 2002.
[13] A. Ziehe, P. Laskov, G. Nolte and K.-R. Müller, "A fast algorithm for joint diagonalization with non-orthogonal transformations and its application to blind source separation," Journal of Machine Learning Research, vol. 5, pp. 777-800, 2004.
[14] R. Vollgraf and K. Obermayer, "Quadratic optimization for simultaneous matrix diagonalization," IEEE Trans. on Signal Processing, vol. 54, no. 9, Sept. 2006.
[15] A.M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, 3rd ed., Academic Press, New York, 1973.
[16] A. Smekhov and A. Yeredor, "Optimization of JADE using a novel optimally weighted joint diagonalization approach," in Proc. EUSIPCO, 2004.
[17] J.-F. Cardoso, "On the performance of orthogonal source separation algorithms," in Proc. EUSIPCO, 1994.
[18] A. Yeredor, "On using exact joint diagonalization for noniterative approximate joint diagonalization," IEEE Signal Processing Letters, vol. 12, no. 9, pp. 645-648, Sept. 2005.


More information

COMPLEX CONSTRAINED CRB AND ITS APPLICATION TO SEMI-BLIND MIMO AND OFDM CHANNEL ESTIMATION. Aditya K. Jagannatham and Bhaskar D.

COMPLEX CONSTRAINED CRB AND ITS APPLICATION TO SEMI-BLIND MIMO AND OFDM CHANNEL ESTIMATION. Aditya K. Jagannatham and Bhaskar D. COMPLEX CONSTRAINED CRB AND ITS APPLICATION TO SEMI-BLIND MIMO AND OFDM CHANNEL ESTIMATION Aditya K Jagannatham and Bhaskar D Rao University of California, SanDiego 9500 Gilman Drive, La Jolla, CA 92093-0407

More information

Lecture 7 MIMO Communica2ons

Lecture 7 MIMO Communica2ons Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10

More information

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases

A Generalized Uncertainty Principle and Sparse Representation in Pairs of Bases 2558 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 9, SEPTEMBER 2002 A Generalized Uncertainty Principle Sparse Representation in Pairs of Bases Michael Elad Alfred M Bruckstein Abstract An elementary

More information

Probabilistic Latent Semantic Analysis

Probabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr

More information

Chapter 2. Ma 322 Fall Ma 322. Sept 23-27

Chapter 2. Ma 322 Fall Ma 322. Sept 23-27 Chapter 2 Ma 322 Fall 2013 Ma 322 Sept 23-27 Summary ˆ Matrices and their Operations. ˆ Special matrices: Zero, Square, Identity. ˆ Elementary Matrices, Permutation Matrices. ˆ Voodoo Principle. What is

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

On the Behavior of Information Theoretic Criteria for Model Order Selection

On the Behavior of Information Theoretic Criteria for Model Order Selection IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 49, NO. 8, AUGUST 2001 1689 On the Behavior of Information Theoretic Criteria for Model Order Selection Athanasios P. Liavas, Member, IEEE, and Phillip A. Regalia,

More information

Simultaneous Diagonalization in the Frequency Domain (SDIF) for Source Separation

Simultaneous Diagonalization in the Frequency Domain (SDIF) for Source Separation Simultaneous Diagonalization in the Frequency Domain (SDIF) for Source Separation Hsiao-Chun Wu and Jose C. Principe Computational Neuro-Engineering Laboratory Department of Electrical and Computer Engineering

More information

Blind separation of sources that have spatiotemporal variance dependencies

Blind separation of sources that have spatiotemporal variance dependencies Blind separation of sources that have spatiotemporal variance dependencies Aapo Hyvärinen a b Jarmo Hurri a a Neural Networks Research Centre, Helsinki University of Technology, Finland b Helsinki Institute

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Balanced Truncation 1

Balanced Truncation 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI

More information

X. Zhang, G. Feng, and D. Xu Department of Electronic Engineering Nanjing University of Aeronautics & Astronautics Nanjing , China

X. Zhang, G. Feng, and D. Xu Department of Electronic Engineering Nanjing University of Aeronautics & Astronautics Nanjing , China Progress In Electromagnetics Research Letters, Vol. 13, 11 20, 2010 BLIND DIRECTION OF ANGLE AND TIME DELAY ESTIMATION ALGORITHM FOR UNIFORM LINEAR ARRAY EMPLOYING MULTI-INVARIANCE MUSIC X. Zhang, G. Feng,

More information

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS Russell H. Lambert RF and Advanced Mixed Signal Unit Broadcom Pasadena, CA USA russ@broadcom.com Marcel

More information

EXTENDED GLRT DETECTORS OF CORRELATION AND SPHERICITY: THE UNDERSAMPLED REGIME. Xavier Mestre 1, Pascal Vallet 2

EXTENDED GLRT DETECTORS OF CORRELATION AND SPHERICITY: THE UNDERSAMPLED REGIME. Xavier Mestre 1, Pascal Vallet 2 EXTENDED GLRT DETECTORS OF CORRELATION AND SPHERICITY: THE UNDERSAMPLED REGIME Xavier Mestre, Pascal Vallet 2 Centre Tecnològic de Telecomunicacions de Catalunya, Castelldefels, Barcelona (Spain) 2 Institut

More information

Markovian Source Separation

Markovian Source Separation IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 51, NO. 12, DECEMBER 2003 3009 Markovian Source Separation Shahram Hosseini, Christian Jutten, Associate Member, IEEE, and Dinh Tuan Pham, Member, IEEE Abstract

More information

Diagonalizing Matrices

Diagonalizing Matrices Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B

More information

A Generalization of Blind Source Separation Algorithms for Convolutive Mixtures Based on Second-Order Statistics

A Generalization of Blind Source Separation Algorithms for Convolutive Mixtures Based on Second-Order Statistics 120 IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL 13, NO 1, JANUARY 2005 A Generalization of Blind Source Separation Algorithms for Convolutive Mixtures Based on Second-Order Statistics Herbert

More information

Machine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece.

Machine Learning. A Bayesian and Optimization Perspective. Academic Press, Sergios Theodoridis 1. of Athens, Athens, Greece. Machine Learning A Bayesian and Optimization Perspective Academic Press, 2015 Sergios Theodoridis 1 1 Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens,

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Laurenz Wiskott Institute for Theoretical Biology Humboldt-University Berlin Invalidenstraße 43 D-10115 Berlin, Germany 11 March 2004 1 Intuition Problem Statement Experimental

More information

Fitting Linear Statistical Models to Data by Least Squares: Introduction

Fitting Linear Statistical Models to Data by Least Squares: Introduction Fitting Linear Statistical Models to Data by Least Squares: Introduction Radu Balan, Brian R. Hunt and C. David Levermore University of Maryland, College Park University of Maryland, College Park, MD Math

More information

Deep Learning Book Notes Chapter 2: Linear Algebra

Deep Learning Book Notes Chapter 2: Linear Algebra Deep Learning Book Notes Chapter 2: Linear Algebra Compiled By: Abhinaba Bala, Dakshit Agrawal, Mohit Jain Section 2.1: Scalars, Vectors, Matrices and Tensors Scalar Single Number Lowercase names in italic

More information

1 Introduction Independent component analysis (ICA) [10] is a statistical technique whose main applications are blind source separation, blind deconvo

1 Introduction Independent component analysis (ICA) [10] is a statistical technique whose main applications are blind source separation, blind deconvo The Fixed-Point Algorithm and Maximum Likelihood Estimation for Independent Component Analysis Aapo Hyvarinen Helsinki University of Technology Laboratory of Computer and Information Science P.O.Box 5400,

More information

BLIND source separation is a set of techniques that is used

BLIND source separation is a set of techniques that is used GROUP 743, 7 TH SEMESTER SIGNAL PROCESSING, AUTUMN 2004, AALBORG UNIVERSITY, FREDRIK BAJERS VEJ 7A, 9220 AALBORG ØST, DENMARK A Novel Approach to Blind Separation of Delayed Sources in Linear Mixtures

More information

Zeros and zero dynamics

Zeros and zero dynamics CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)

More information

EUSIPCO

EUSIPCO EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,

More information

An improved convergence theorem for the Newton method under relaxed continuity assumptions

An improved convergence theorem for the Newton method under relaxed continuity assumptions An improved convergence theorem for the Newton method under relaxed continuity assumptions Andrei Dubin ITEP, 117218, BCheremushinsaya 25, Moscow, Russia Abstract In the framewor of the majorization technique,

More information

Design of Spectrally Shaped Binary Sequences via Randomized Convex Relaxations

Design of Spectrally Shaped Binary Sequences via Randomized Convex Relaxations Design of Spectrally Shaped Binary Sequences via Randomized Convex Relaxations Dian Mo Department of Electrical and Computer Engineering University of Massachusetts Amherst, MA 3 mo@umass.edu Marco F.

More information

Chapter 8 Gradient Methods

Chapter 8 Gradient Methods Chapter 8 Gradient Methods An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Introduction Recall that a level set of a function is the set of points satisfying for some constant. Thus, a point

More information

Self-Tuning Spectral Clustering

Self-Tuning Spectral Clustering Self-Tuning Spectral Clustering Lihi Zelnik-Manor Pietro Perona Department of Electrical Engineering Department of Electrical Engineering California Institute of Technology California Institute of Technology

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

SOURCE SEPARATION OF BASEBAND SIGNALS IN POST-NONLINEAR MIXTURES

SOURCE SEPARATION OF BASEBAND SIGNALS IN POST-NONLINEAR MIXTURES SOURCE SEPARATION OF BASEBAND SIGNALS IN POST-NONLINEAR MIXTURES L. T. Duarte, C. Jutten, B. Rivet GIPSA-lab UMR CNRS 526 Institut Polytechnique de Grenoble BP 46-3842 Saint Martin d Hères, France R. Suyama,

More information

the tensor rank is equal tor. The vectorsf (r)

the tensor rank is equal tor. The vectorsf (r) EXTENSION OF THE SEMI-ALGEBRAIC FRAMEWORK FOR APPROXIMATE CP DECOMPOSITIONS VIA NON-SYMMETRIC SIMULTANEOUS MATRIX DIAGONALIZATION Kristina Naskovska, Martin Haardt Ilmenau University of Technology Communications

More information

Lecture # 11 The Power Method for Eigenvalues Part II. The power method find the largest (in magnitude) eigenvalue of. A R n n.

Lecture # 11 The Power Method for Eigenvalues Part II. The power method find the largest (in magnitude) eigenvalue of. A R n n. Lecture # 11 The Power Method for Eigenvalues Part II The power method find the largest (in magnitude) eigenvalue of It makes two assumptions. 1. A is diagonalizable. That is, A R n n. A = XΛX 1 for some

More information

One-unit Learning Rules for Independent Component Analysis

One-unit Learning Rules for Independent Component Analysis One-unit Learning Rules for Independent Component Analysis Aapo Hyvarinen and Erkki Oja Helsinki University of Technology Laboratory of Computer and Information Science Rakentajanaukio 2 C, FIN-02150 Espoo,

More information

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent. Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u

More information

FAST AND ACCURATE METHODS OF INDEPENDENT COMPONENT ANALYSIS: A SURVEY

FAST AND ACCURATE METHODS OF INDEPENDENT COMPONENT ANALYSIS: A SURVEY K Y B E R N E T I K A V O L U M E 4 7 ( 2 0 1 1 ), N U M B E R 3, P A G E S 4 2 6 4 3 8 FAST AND ACCURATE METHODS OF INDEPENDENT COMPONENT ANALYSIS: A SURVEY Petr Tichavský and Zbyněk Koldovský This paper

More information

ON CONSISTENCY AND ASYMPTOTIC UNIQUENESS IN QUASI-MAXIMUM LIKELIHOOD BLIND SEPARATION OF TEMPORALLY-DIVERSE SOURCES

ON CONSISTENCY AND ASYMPTOTIC UNIQUENESS IN QUASI-MAXIMUM LIKELIHOOD BLIND SEPARATION OF TEMPORALLY-DIVERSE SOURCES ON CONSISENCY AND ASYMPOIC UNIQUENESS IN QUASI-MAXIMUM LIKELIHOOD BLIND SEPARAION OF EMPORALLY-DIVERSE SOURCES Amir Weiss, Arie Yeredor, Sher Ali Cheema and Martin Haardt School of Electrical Engineering

More information

Tutorial on Principal Component Analysis

Tutorial on Principal Component Analysis Tutorial on Principal Component Analysis Copyright c 1997, 2003 Javier R. Movellan. This is an open source document. Permission is granted to copy, distribute and/or modify this document under the terms

More information

Structure in Data. A major objective in data analysis is to identify interesting features or structure in the data.

Structure in Data. A major objective in data analysis is to identify interesting features or structure in the data. Structure in Data A major objective in data analysis is to identify interesting features or structure in the data. The graphical methods are very useful in discovering structure. There are basically two

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

Independent Component Analysis and Its Application on Accelerator Physics

Independent Component Analysis and Its Application on Accelerator Physics Independent Component Analysis and Its Application on Accelerator Physics Xiaoying Pang LA-UR-12-20069 ICA and PCA Similarities: Blind source separation method (BSS) no model Observed signals are linear

More information