BLIND SEPARATION OF SPATIALLY-BLOCK-SPARSE SOURCES FROM ORTHOGONAL MIXTURES
Ofir Lindenbaum, Arie Yeredor, Ran Vitek, Moshe Mishali

2013 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING, SEPT. 2013, SOUTHAMPTON, UK

BLIND SEPARATION OF SPATIALLY-BLOCK-SPARSE SOURCES FROM ORTHOGONAL MIXTURES

Ofir Lindenbaum, Arie Yeredor (School of Electrical Engineering, Tel-Aviv University, Tel-Aviv, Israel); Ran Vitek (Altair, Hod Hasharon, Israel); Moshe Mishali (EZchip, Yokneam, Israel)

ABSTRACT

We address the classical problem of blind separation of a static linear mixture, where separation is based not on statistical assumptions (such as independence) regarding the sources, but rather on their spatial (block-) sparsity, with an additional constraint of an orthogonal mixing matrix. An algorithm for this problem was recently proposed by Mishali and Eldar, and consists of two steps: one for recovering the support of the sources, and a subsequent one for recovering their values. That algorithm has two shortcomings: one is the assumption that the spatial sparsity level of the sources at each time-instant is constant and known; the second is the algorithm's sensitivity to the possible presence of temporal blocks of the signals with identical support. In this work we propose two pre-processing stages for improving the applicability and the performance of the algorithm. The first stage is aimed at identifying blocks of similar support, and pruning the data accordingly for the support-recovery stage. The second stage is aimed at recovering the sparsity level at each time-instant by exploiting observed structural inter-relations between the signals at different time-instants. We demonstrate the improvement over the original algorithm using both synthetic data and mixed text images. We also show that the algorithm achieves a higher recovery rate than alternative source separation methods in such contexts, including K-SVD, a leading method for dictionary learning.

Index Terms: Blind Source Separation, Dictionary Learning, Block-Sparsity, Orthogonal Mixtures

1. INTRODUCTION

We consider the classical case of a static, square, noiseless mixture

$$x[t] = \Psi s[t], \quad t = 1, 2, \ldots, T \tag{1}$$

where $s[t] = [s_1[t] \cdots s_N[t]]^T \in \mathbb{R}^N$ are $N$ source signals, $\Psi \in \mathbb{R}^{N \times N}$ is an unknown mixing matrix, and $x[t] \in \mathbb{R}^N$ are the observed signals. The goal is to blindly recover the source signals (up to possible permutation and sign). However, rather than use statistical assumptions on the source signals, we employ a structural assumption: the sources (which are not necessarily independent) are jointly sparse, in the sense that at each time instant $t$ only $K_t$ signals are active (namely, take non-zero values) simultaneously, where typically $K_t \ll N$ for most values of $t$ within the observation period. We shall initially assume that all $K_t$ values are known in advance, but will later relax this assumption and consider the estimation of these values from the data. An additional, somewhat restrictive assumption employed in this work is that the unknown mixing matrix $\Psi$ is orthogonal, $\Psi^T \Psi = I$ (the $N \times N$ identity matrix). While this might not be a realistic assumption in Blind Source Separation (BSS) applications, it can be relevant, e.g., in the context of Dictionary Learning (DL), seeking sparse representations of the observations, when the dictionary is taken to be an orthogonal basis thereof. Orthogonal mixing matrices have been considered before, e.g., in [1], [2].
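To make the model concrete, here is a minimal NumPy sketch (our own illustration, not from the paper) that generates a spatially $K$-sparse source matrix and an orthogonal mixture according to (1); the constant sparsity level and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 12, 600, 2   # illustrative: sources, time instants, sparsity level

# Spatially K-sparse sources: exactly K active sources at each time instant.
S = np.zeros((N, T))
for t in range(T):
    support = rng.choice(N, size=K, replace=False)
    S[support, t] = rng.standard_normal(K)

# A random orthogonal mixing matrix Psi (QR factor of a Gaussian matrix).
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

X = Psi @ S   # the observed mixtures, model (1)
```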
We add, in passing, that although it is common practice in some BSS methods to apply a (spatial) pre-whitening stage, which essentially orthogonalizes the mixing matrix, such an option is irrelevant in the context of this work: if $\Psi$ is not orthogonal, a whitening operation would be very likely to ruin the joint sparsity property of the (whitened) sources.

The recovery of sparse sources from their observed mixtures is also known as Sparse Component Analysis (SCA). As opposed to classical Independent Component Analysis (ICA), which relies on statistical assumptions (mutual independence) regarding the sources, SCA does not require such assumptions, as it is based on structural sparsity assumptions. Previous papers on SCA, such as by Georgiev et al. [3] or Gribonval and Lesage [4], propose finding $\Psi$ through a combinatorial search; earlier papers, e.g., by Bofill and Zibulevsky [5] or by Li et al. [6], use approximate maximization programming. Note that in our model the sources are not necessarily assumed to be temporally sparse; they are merely assumed to be jointly (spatially) $K_t$-sparse (with $K_t \ll N$) at each time-instant $t$. While intuitively both kinds of sparsity should be closely related, theoretically the sources can fulfill either sparsity assumption with or without fulfilling the other. In fact, the recovery of a single temporally block-sparse signal from compressed measurements thereof has been treated extensively in recent years; see, e.g., [7], [8], [9], [10]. However, temporal block-sparsity of the individual sources does not necessarily imply spatial block-sparsity, and vice versa.

Our current work is an extension of the work by Mishali and Eldar [1], where an efficient two-stage approach was proposed for SCA with an orthogonal $\Psi$ and with a known, constant sparsity level $K_t = K \;\forall t$ of the signals. In the first stage the support-patterns of the signals are recovered, namely the $K_t$ active sources in each $s[t]$ are identified. This is accomplished by exploiting the orthogonality of $\Psi$ to derive a set of mutual activity rules based on the $T \times T$ temporal sample-covariance matrix of the observed mixtures. Solving these rules reveals the unknown support. In the second stage the values are recovered by solving the remaining set of equations using an alternating minimization procedure.

The original algorithm in [1] encounters difficulties in recovering the full support-patterns when handling natural signals: some typical sparse signals (e.g., text images) contain segments ("blocks") of energy in between segments of silence (zero energy). Our contributions in this work are two major modifications to the original algorithm. First, we add a preprocessing stage to reveal these block segments, thereby improving the ability of the algorithm to recover the support-patterns. Next, we relax the restrictive assumption of a constant, known sparsity level ($K_t = K \;\forall t$) and allow different, unknown sparsity levels, which are estimated at each time-instant $t$ from the correlation patterns of the data.

The paper is structured as follows: Section 2 presents the problem formulation. In Section 3 the support recovery algorithm of [1] is briefly described. In Section 4 we describe the proposed preprocessing stage for addressing natural signals which contain blocks. Section 5 describes the estimation of the sparsity level at each time frame. Section 6 describes the values recovery algorithm. Experimental results are presented and analyzed in Section 7, followed by conclusions in Section 8.

2. PROBLEM FORMULATION

2.1. Model

Expressing (1) in matrix notation, we have

$$X = \Psi S, \tag{2}$$

where $S = [s[1] \; s[2] \cdots s[T]] \in \mathbb{R}^{N \times T}$, with a similar structure for $X \in \mathbb{R}^{N \times T}$. Each column $s[t]$ of $S$ is assumed to have exactly (not more and not less) $K_t$ non-zero values, and we shall assume for now that all $K_t$ are known (an assumption that will be relaxed later on). We denote that property of $s[t]$ as $K_t$-sparsity, expressed mathematically as

$$\|s[t]\|_0 = K_t, \quad t = 1, \ldots, T \tag{3}$$

where the $\ell_0$ pseudo-norm $\|\cdot\|_0$ counts the number of non-zero values of its vector argument. The separation problem can thus be recast as the following optimization program:

$$(\hat\Psi, \hat S) = \arg\min_{\Psi, S} \|X - \Psi S\|_F \quad \text{s.t.} \quad \Psi^T \Psi = I \;\text{ and }\; \|s[t]\|_0 = K_t, \; t = 1, \ldots, T \tag{4}$$

(here $\|\cdot\|_F$ denotes the Frobenius norm). The pair of matrices $\hat\Psi, \hat S$ is the primal solution of (4). Next, we outline the conditions for uniqueness of this solution.

2.2. Uniqueness

The BSS problem as presented in (2) has a well-known inherent permutation and scale indeterminacy (which, when considering the orthogonality constraint on $\Psi$, reduces to sign indeterminacy). Therefore, unique recovery of $\Psi$ and $S$ is not possible. Nevertheless, uniqueness of the solution to (4) up to these permutation and sign ambiguities can be addressed. Here we quote the following result by Aharon et al. [11], which guarantees such uniqueness.

Theorem 1. The factorization $B = AS$, where $A \in \mathbb{R}^{M \times N}$ has normalized columns, and where all columns of $S \in \mathbb{R}^{N \times T}$ are at most $K$-sparse, is unique up to signed permutations under the following conditions:
- Support: Every $2K$ columns of $A$ are linearly independent.
- Richness: Upon clustering the columns of $S$ according to their support, each cluster consists of at least $K + 1$ columns sharing the same support. In addition, for every $1 \le n \le N$, there exist $k, l$ such that the intersection $\mathrm{supp}(S_{:,k}) \cap \mathrm{supp}(S_{:,l}) = \{n\}$, where $\mathrm{supp}(S_{:,k})$ denotes the support of the $k$-th column of $S$.
- Non-degeneracy: Every column subset $S_{:,J}$ of cardinality $|J| = D$, with $R$ rows that are non-identically zero, has $\mathrm{Rank}(S_{:,J}) = \min\{D, R\}$.

In our case we assume that the mixing matrix $A = \Psi$ is orthogonal, which leads to the following corollary (see [12] for details):

Corollary 1. Consider the setting of Theorem 1 with an orthogonal $A$. Then uniqueness holds if $N \ge 2K$ and $S$ is non-degenerate and rich, possibly up to the following richness exceptions: the intersection property may not hold for $n = N$, and for each cluster of columns (sharing the same support), it may also not hold for one entry of the support.
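The richness condition lends itself to a direct computational check. The following sketch is our own illustration (not part of [11]); it verifies the two richness requirements for a given source matrix $S$, assuming exact zeros, and does not handle edge cases such as $K = 1$:

```python
from collections import Counter
from itertools import combinations
import numpy as np

def is_rich(S, K):
    """Check the 'Richness' condition of Theorem 1 (simplified sketch)."""
    N, T = S.shape
    supports = [frozenset(np.flatnonzero(S[:, t])) for t in range(T)]

    # Each support cluster must contain at least K + 1 columns.
    if any(count < K + 1 for count in Counter(supports).values()):
        return False

    # Every row index n must arise as the singleton intersection of two supports.
    singletons = set()
    for s1, s2 in combinations(set(supports), 2):
        if len(s1 & s2) == 1:
            singletons |= (s1 & s2)
    return singletons == set(range(N))
```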
A rich matrix $S$ can be constructed with $T = 2N(K+1)$ columns [11], whereas in our case of an orthogonal mixture, the exceptions of Corollary 1 allow reaching a similar construction with $T = 2(N - N_0)(K+1)$ columns, where $N_0 = 1 + \lfloor (N-1)/K \rfloor$ (here $\lfloor \cdot \rfloor$ denotes the lower integer part of its argument). Both the Theorem and the Corollary are proved by explicitly constructing $\Psi$ and $S$ through a procedure with combinatorial complexity, which, however, is irrelevant for the retrieval of these matrices.

3. SUPPORT RECOVERY

In this section we provide a brief description of the support recovery algorithm proposed by Mishali and Eldar in [1]. The recovery is based on the sample-correlation matrix $C = X^T X \in \mathbb{R}^{T \times T}$, which, thanks to the orthogonality of $\Psi$, provides direct information on the correlation between the sources, since

$$C = X^T X = S^T \Psi^T \Psi S = S^T S. \tag{5}$$

Using $C$, the recovery algorithm relies on the following set of properties, ensuing from (5) and from our model assumptions, in addition to mild statistical assumptions on the distributions of values of the sources (see below):

(P1) $C_{k,l} = 0$ implies that $\mathrm{supp}(S_{:,k})$ and $\mathrm{supp}(S_{:,l})$ are disjoint;
(P2) $C_{k,l} \ne 0$ implies that $\mathrm{supp}(S_{:,k}) \cap \mathrm{supp}(S_{:,l})$ is not empty;
(P3) A column $S_{:,t} = s[t]$ of $S$ contains exactly $K_t$ non-zeros;
(P4) The order of rows of $S$ is immaterial for successful recovery of the support.

Property (P1) holds (deterministically) provided that the non-zero values of all sources are all positive; it also holds probabilistically, with probability 1 (w.p. 1), if these values are drawn independently from any continuous distribution.

The support recovery algorithm is based on defining rules (see below) and on exploiting these rules, in accordance with the four properties above. The algorithm iteratively constructs an $N \times T$ matrix denoted $Z$, which contains activity indicators for the respective elements of $S$. The elements of $Z$ can take three optional values: $Z_{n,t} = 1$, indicating an active signal (namely, that $s_n[t]$ is non-zero); $Z_{n,t} = 0$, indicating inactivity ($s_n[t] = 0$); and $Z_{n,t} = \phi$, indicating that $s_n[t]$ is unresolved. The matrix $Z$ is initialized such that all of its elements are $\phi$, and these elements are updated (inferred) as 1-s or 0-s in an iterative process, as described in Algorithm 1 below.
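Continuing the illustrative sketch from Section 1, relation (5) and properties (P1)-(P2) are easy to verify numerically (again our own illustration under the simulated model, not the authors' code):

```python
# Thanks to Psi's orthogonality, the mixtures' correlation matrix equals S^T S, eq. (5).
C = X.T @ X
assert np.allclose(C, S.T @ S)

# (P1)/(P2): a zero entry of C indicates disjoint supports (w.p. 1 for
# continuously distributed source values); a non-zero entry indicates overlap.
for k, l in [(0, 1), (0, 2)]:
    overlap = not np.isclose(C[k, l], 0.0)
    print(f"columns {k},{l}:", "supports overlap" if overlap else "supports disjoint")
```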

To explain the algorithm, let us first define the term a Rule for Column $t$: a set of row-indices $I$, indicating that column $t$ should have at least one non-zero element with a row-index in that set. The algorithm maintains a list of rules for each column, dynamically removing old rules or creating new rules at each iteration, as described in detail below. A special rule is a rule which is set (for each column) upon initialization, defining $I = \{1, \ldots, N\}$ (the entire set of all possible row-indices); unlike a regular rule (a rule created by the algorithm), a special rule for column $t$ is not removed until $K_t$ 1-s are identified for that column. We say that symmetry holds with respect to a given rule if all elements of the respective set $I$ either are all included in, or are all excluded from, all other rules for the same column. Using this terminology, the algorithm takes the following form:

Algorithm 1 Recover support
Input: $C = X^T X$, sparsity levels $\{K_t\}_{t=1}^T$
Output: Support pattern $Z$
Initialization: $Z_{n,t} = \phi$ for all $n \in \{1,\ldots,N\}$, $t \in \{1,\ldots,T\}$; a special rule $I = \{1,\ldots,N\}$ for each column.
for $t = 1$ to $T$ do
  for all rules $r$ in the list $L_t$ of rules for column $t$ do
    Let $I_r$ denote the set of row-indices pointed to by rule $r \in L_t$
    if $Z_{n,t} = 1$ for some $n \in I_r$ and $r$ is a regular rule then
      Remove $r$ from $L_t$
    end if
    if symmetry holds with respect to $I_r$ then
      Choose $n \in I_r$ and set $Z_{n,t} = 1$
    end if
  end for
  for all $\{n, t' : C_{t,t'} = 0, Z_{n,t} = 1\}$ do
    $Z_{n,t'} = 0$
  end for
  if $Z_{:,t}$ indicates exactly $K_t$ non-zero values then
    $\mathrm{supp}(S_{:,t}) = \{n : Z_{n,t} = 1\}$
    $Z_{m,t} = 0$ for every $m \notin \mathrm{supp}(S_{:,t})$
    Remove the special rule from $L_t$
    for all $\{t' : C_{t,t'} \ne 0\}$ do
      Add a regular rule for column $t'$ with $I = \mathrm{supp}(S_{:,t})$
    end for
  end if
end for

The algorithm iteratively applies the properties (P1)-(P4) until there are no longer any changes to the matrix $Z$. Although convergence (in the sense of meeting the stopping condition in a finite number of iterations) is guaranteed, perfect recovery of the support is not, since $Z$ might still contain a large number of unresolved elements. This usually happens when the sources are characterized by temporal blocks of the same spatial sparsity pattern. In the next section we propose a modification to the algorithm which addresses such cases.
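For concreteness, the following is our own unoptimized Python rendering of Algorithm 1's rule bookkeeping. It is a sketch only: choices the pseudocode leaves open (which $n$ to pick under symmetry, the iteration order, a guard against exceeding $K_t$ actives) are resolved here by simple heuristics, so it should not be read as the authors' implementation:

```python
import numpy as np

PHI = -1  # marker for "unresolved"

def recover_support(C, K, N):
    """Sketch of Algorithm 1. C: T x T correlation matrix, K: length-T
    sparsity levels, N: number of sources. Returns Z with entries in {1, 0, PHI}."""
    T = C.shape[0]
    Z = np.full((N, T), PHI, dtype=int)
    # rules[t]: list of (index_set, is_special); one special rule per column.
    rules = [[(frozenset(range(N)), True)] for _ in range(T)]
    changed = True
    while changed:
        changed = False
        for t in range(T):
            for I, special in list(rules[t]):
                if not special and any(Z[n, t] == 1 for n in I):
                    rules[t].remove((I, special))  # regular rule already satisfied
                    changed = True
                    continue
                others = [J for J, _ in rules[t] if J != I]
                symmetric = all(I <= J or not (I & J) for J in others)
                if symmetric:
                    unresolved = [n for n in sorted(I) if Z[n, t] == PHI]
                    if unresolved and np.sum(Z[:, t] == 1) < K[t]:
                        Z[unresolved[0], t] = 1  # (P4): row order is immaterial
                        changed = True
            # (P1): columns uncorrelated with t cannot share t's active rows.
            for n in np.flatnonzero(Z[:, t] == 1):
                for s in np.flatnonzero(np.isclose(C[t, :], 0.0)):
                    if Z[n, s] != 0:
                        Z[n, s] = 0
                        changed = True
            # (P3): once K_t actives are found, the rest of the column is zero.
            if np.sum(Z[:, t] == 1) == K[t]:
                supp = frozenset(np.flatnonzero(Z[:, t] == 1))
                if np.any(Z[:, t] == PHI):
                    Z[Z[:, t] == PHI, t] = 0
                    changed = True
                if any(sp for _, sp in rules[t]):
                    rules[t] = [r for r in rules[t] if not r[1]]  # drop special rule
                    changed = True
                # (P2): every correlated column must intersect supp.
                for s in np.flatnonzero(~np.isclose(C[t, :], 0.0)):
                    if s != t and (supp, False) not in rules[s]:
                        rules[s].append((supp, False))
                        changed = True
    return Z
```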
4. BLOCKS PRUNING

Some typical natural sparse signals may contain segments ("blocks") of energy in between segments of silence (zero energy). As mentioned above (and demonstrated in simulation in the sequel), Algorithm 1 encounters difficulties in fully recovering the support of such signals, due to the compromised diversity of their structure. Let us define a Block as a subset of columns of $S$ sharing the exact same support,

$$\mathcal{B} = \{t_1, t_2, \ldots, t_L \mid \mathrm{supp}(s[t_1]) = \mathrm{supp}(s[t_2]) = \cdots = \mathrm{supp}(s[t_L])\}, \tag{6}$$

where $L = |\mathcal{B}|$ is the cardinality of the block. The basic reason for the performance degradation of the original algorithm in the support-recovery in the presence of blocks is that since all columns in a block share the exact same support, they also share the same set of rules, so fully resolving one column in the block still does not imply resolution of the other columns. We propose to add a preprocessing stage which helps solve this problem.

Let us first define the correlation indicators matrix $\bar{C} \in \{0,1\}^{T \times T}$, such that $\bar{C}_{k,l} = 1$ wherever $C_{k,l} \ne 0$, and $\bar{C}_{k,l} = 0$ wherever $C_{k,l} = 0$. We make the following observations regarding the matrix $\bar{C}$:

Observation 1. Columns of $\bar{C}$ corresponding to columns of $S$ with the same support are identical.

Observation 2. The probability that two columns of $\bar{C}$ corresponding to columns of $S$ with different supports are identical decreases with $T$, and vanishes for $T \gg N$.

To prove Observation 1 (by contradiction), let $\mathrm{supp}(s[t_1]) = \mathrm{supp}(s[t_2])$ for some columns $t_1$ and $t_2$, and assume that for some $t$, $\bar{C}_{t,t_1} \ne \bar{C}_{t,t_2}$. Without loss of generality assume that $\bar{C}_{t,t_1} = 0$ and $\bar{C}_{t,t_2} = 1$, implying (respectively) that $\mathrm{supp}(s[t]) \cap \mathrm{supp}(s[t_1]) = \emptyset$, whereas $\mathrm{supp}(s[t]) \cap \mathrm{supp}(s[t_2]) \ne \emptyset$, which in turn implies that $\mathrm{supp}(s[t_1]) \ne \mathrm{supp}(s[t_2])$, contradicting our assumption. Observation 2 can be explained by noting that if $T$ is large enough such that all possible supports exist in $S$, then every two columns of $S$ with different supports must lead to different columns in $\bar{C}$.

Based on these two observations, we seek the blocks in the available correlation indicators matrix $\bar{C}$ in order to estimate the blocks in the unknown matrix $S$. Once these blocks are identified, the matrix $X$ is pruned accordingly, leaving one column from each block as the representative of the block. The original algorithm is then applied to the pruned version of $X$, and the resulting estimated supports are then re-expanded by blocks for retrieval of the full support pattern of $S$.

Algorithm 2 Block Pruning
1. Estimate the blocks using the correlation indicators matrix $\bar{C}$.
2. Prune the matrix $X$, leaving only one column representing each block.
3. Recover the support of the pruned version of $X$ using Algorithm 1.
4. Re-expand to recover the support of $S$ using the estimated blocks.
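Steps 1-2 and 4 of Algorithm 2 reduce to grouping identical columns of the correlation indicators matrix (Observation 1). A possible NumPy sketch, assuming exact zeros in $C$:

```python
import numpy as np

def prune_blocks(X, C):
    """Group columns with identical correlation-indicator columns into blocks
    and keep one representative per block (Algorithm 2, steps 1-2)."""
    Cbar = (~np.isclose(C, 0.0)).astype(int)          # correlation indicators
    _, reps, inverse = np.unique(Cbar, axis=1,
                                 return_index=True, return_inverse=True)
    return X[:, reps], inverse

# Step 4 (re-expansion): if Z_pruned is the support recovered on X[:, reps],
# then Z_full = Z_pruned[:, inverse] restores a support column for every t.
```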

In order to obtain an empirical quantification of the percentage of identification errors implied by Observation 2 for moderate values of $T$, we ran the following experiment. We built random block source matrices $S \in \mathbb{R}^{N \times T}$ with constant sparsity levels $K_t = K$, where the indices of the $K$ active signals in each column were randomly drawn, uniformly and independently. The values of these active signals were drawn independently from a standard Normal distribution. The blocking effect was created by temporally convolving each signal with a rectangular window of width $B$, thereby obtaining blocks of typical width $B$. We ran simulations for several values of $N = 10, 12, 14$ and $K = 2, 3, 4$, with $B = 2$. Figure 1 shows the ratio of falsely identified blocks vs. the observation length $T$, averaged over 1000 independent trials. Note that there can be no mis-detected blocks, since Observation 1 holds deterministically.

Fig. 1. The ratio of columns with the same correlation indicators and different support vs. the number of columns.

5. SPARSITY LEVEL ESTIMATION

As explained earlier, the original algorithm in [1] is based on the often unrealistic assumption that the exact sparsity level $K_t$ at each time frame is at least known (if not constant, as assumed in [1]). To relax this restrictive assumption, we tried using statistically-based estimators of the sparsity levels, based on the energy of the columns (which is preserved by the orthogonal mixing matrix). However, such estimators yielded poor results in the support-recovery stage, which in turn resulted in poor separation. Here we present an alternative estimator of the sparsity levels, based on the correlation indicators matrix $\bar{C}$, exploiting the mutual structure information of $S$ rather than the statistical properties of the sources. The basic idea is to use the correlation indicator values to find a lower bound on the number of active signals in each column. Let us define for each column $t$ the set $A_t$ of indices of all columns which are correlated with that column (according to the correlation indicators matrix):

$$A_t = \{s \mid \bar{C}_{s,t} = 1\}. \tag{7}$$

Our proposed estimation scheme is based on the following observation:

Observation 3. $K_t$ is larger than or equal to the size of any subset of $A_t$ containing columns that are uncorrelated with each other. Therefore, it is larger than or equal to the size of the largest of these subsets.

Consequently, we should search for the largest uncorrelated subset of $A_t$, and set $\hat{K}_t$ to be the cardinality of that subset. Observation 3 can be corroborated by a simple example: if $s[t]$ and $s[t_1]$ are correlated, and $s[t]$ and $s[t_2]$ are correlated, but $s[t_1]$ and $s[t_2]$ are uncorrelated, then $s[t]$ has at least two active values. Moreover, it can be shown, using some statistical model on the support distribution, that if the number of available columns is sufficiently large and $S$ is sufficiently rich, then these lower bounds on the sparsity levels become tight and the estimates converge to the true $K_t$.

The main difference between this estimation approach and a naïve energy-based approach is that in the proposed approach the estimation of a column's sparsity level makes use of the inter-relations between that column and all the other columns (as well as of the internal relations between these other columns), as opposed to basing the estimate on the column data alone. We propose a computationally efficient algorithm for computing the proposed estimator:

Algorithm 3 Sparsity estimation
Input: $\bar{C}_t$ - a square sub-matrix of $\bar{C}$, containing only the rows and columns with indices taken from $A_t$.
while $\bar{C}_t \ne I$ do
  Delete from $\bar{C}_t$ the row and column with the largest number of non-zero elements.
end while
Set $\hat{K}_t$ to the dimension of $\bar{C}_t$.
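A direct NumPy transcription of Algorithm 3 (our own sketch; `Cbar` is the 0/1 correlation indicators matrix defined in Section 4):

```python
import numpy as np

def estimate_sparsity(Cbar, t):
    """Greedy lower-bound estimate of K_t from the correlation indicators."""
    A = np.flatnonzero(Cbar[:, t])       # A_t: columns correlated with column t
    Csub = Cbar[np.ix_(A, A)]
    # Delete the most-connected row/column until the remaining columns are
    # pairwise uncorrelated, i.e., until Csub equals the identity matrix.
    while not np.array_equal(Csub, np.eye(Csub.shape[0])):
        worst = np.argmax(Csub.sum(axis=0))
        keep = np.delete(np.arange(Csub.shape[0]), worst)
        Csub = Csub[np.ix_(keep, keep)]
    return Csub.shape[0]                 # the estimate of K_t
```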
To evaluate the accuracy of the sparsity level estimation, we ran an experiment with $N = 15$ sources and varying sparsity levels, drawn uniformly and independently between 1 and 5 (inclusive). The mean percentage of errors (averaged over 200 independent trials) is shown in Figure 2 vs. the number of observations $T$. As expected, the errors vanish when the observation length is sufficiently long.

Fig. 2. Sparsity estimation error percentage vs. the number of columns.

6. SOURCE SEPARATION

In this section we provide a brief description of the algorithm for recovering the sources' values (following the recovery of their support patterns), as proposed in [1]. With the support information available in $Z$, the estimation of the sources (and of the mixing matrix) translates into the following optimization problem:

$$(\hat\Psi, \hat{S}) = \arg\min_{\Psi, S} \|X - \Psi S\|_F \quad \text{s.t.} \quad \Psi^T \Psi = I \;\text{ and }\; S_{n,t} = 0 \;\; \forall \{(n,t) \mid Z_{n,t} = 0\}. \tag{8}$$

Note that we consider the unresolved elements of $Z$ (if any) as indicators of active signals (namely, all $Z_{n,t} = \phi$ are treated as $Z_{n,t} = 1$); the values recovery can still assign very small (or zero) values to mis-identified active locations. Any solution of (8) will minimize the Lagrangian

$$\mathcal{L}(\Psi, S; \Gamma, \Pi) = \|X - \Psi S\|_F^2 + \mathrm{Tr}[\Gamma(\Psi^T \Psi - I)] + \mathrm{Tr}(\Pi^T S), \tag{9}$$

where $\Gamma \in \mathbb{R}^{N \times N}$ and $\Pi \in \mathbb{R}^{N \times T}$ are matrices of Lagrange multipliers, with $\Pi_{n,t} = 0$ for all $\{(n,t) \mid Z_{n,t} \ne 0\}$. Calculating the gradients with respect to each variable yields:

$$\nabla_\Pi \mathcal{L} = 0 \;\Rightarrow\; S_{n,t} = 0 \;\; \forall \{(n,t) \mid Z_{n,t} = 0\} \tag{10a}$$
$$\nabla_\Gamma \mathcal{L} = 0 \;\Rightarrow\; \Psi^T \Psi = I \tag{10b}$$
$$\nabla_\Psi \mathcal{L} = 0 \;\Rightarrow\; (X - \Psi S)S^T = \Psi \Gamma^T \tag{10c}$$
$$\nabla_S \mathcal{L} = 0 \;\Rightarrow\; -2\Psi^T X + 2\Psi^T \Psi S + \Pi = 0 \tag{10d}$$

We use an alternating minimization procedure to find $\Psi$ and $S$ satisfying (10a)-(10d). Keeping $S$ fixed, the solution of (10b)-(10d) in terms of $\Psi$ is given by $\Psi = UV^T$, where $U \in \mathbb{R}^{N \times N}$ and $V \in \mathbb{R}^{N \times N}$ are the orthogonal matrices taken from the singular value decomposition (SVD) of the matrix $XS^T = U\Sigma V^T$ [1]. Keeping $\Psi$ fixed, the solution of (10a), (10c) and (10d) in terms of $S$ is given (as could be expected) by $S = (\Psi^T X) \odot Z$, where $\odot$ denotes Hadamard's (element-wise) product, and where, in case $Z$ contains unresolved elements ($\phi$), these elements are taken as 1. Namely, given $\Psi$, the recovered sources are the result of unmixing $X$, limited to the recovered supports.

The problem as formulated in (8) is not convex, and thus the system (10a)-(10d) only guarantees a stationary point. Moreover, the alternating approach does not guarantee convergence to a true minimum. Nonetheless, empirically (in our experience), Algorithm 4 always converged to an optimal point of (8). In many cases $S$ is recovered correctly even when its support is only partially recovered in the initial stage.

Algorithm 4 Recover values
Input: $X$, support pattern $Z$
Output: $\hat\Psi$, $\hat{S}$
Initialization: $\hat\Psi$ as any $N \times N$ orthogonal matrix
repeat
  1. $\hat{S} = (\hat\Psi^T X) \odot Z$
  2. Decompose $X\hat{S}^T = U\Sigma V^T$ using SVD
  3. $\hat\Psi = UV^T$
until no change
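Algorithm 4 is a few lines of NumPy; this sketch replaces the "until no change" test with a fixed iteration budget (an assumption for simplicity):

```python
import numpy as np

def recover_values(X, Z, n_iter=100):
    """Alternating minimization of (8). Z is the N x T 0/1 support pattern,
    with unresolved entries already set to 1."""
    N = X.shape[0]
    Psi = np.eye(N)                        # any orthogonal initialization
    for _ in range(n_iter):
        S = (Psi.T @ X) * Z                # step 1: unmix, restricted to support
        U, _, Vt = np.linalg.svd(X @ S.T)  # step 2: SVD of X S^T
        Psi = U @ Vt                       # step 3: orthogonal (Procrustes) update
    return Psi, (Psi.T @ X) * Z
```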
7. EXPERIMENTAL RESULTS

In order to examine and compare the overall performance of the proposed algorithm, we ran two experiments. In the first experiment we examine the performance of the algorithm with $N = 12$ sources with a fixed sparsity level $K_t = K = 2$, with and without blocks. In the second experiment we show the recovery rate for a mixture of $N = 12$ natural sparse text images. In both experiments we compare our method to the original algorithm in [1], as well as to K-SVD [11], a leading dictionary learning algorithm. K-SVD was slightly modified by adding an orthogonalization stage between every 5 consecutive iterations; this choice was found to provide fast convergence of K-SVD to an orthogonal solution. The sparsity-level input to the original algorithm was $K = 2$ in both experiments: in the first experiment it is the true constant sparsity level, and in the second experiment it was observed to be the most common sparsity level in the mixtures and yielded the best results for that algorithm. For the second experiment we also compare the results to those of JADE [13], a classical ICA-based algorithm, which exploits the sources' statistical independence. Although the independence assumption happens to be quite justified in this experiment, the JADE results are seen to be inferior to those of our structural-sparsity-based approach.

In the first experiment a $K$-sparse matrix $S \in \mathbb{R}^{N \times T}$ is constructed by selecting the locations of the non-zeros randomly for each column, then drawing the non-zero values from a standard Normal distribution. Another $N \times N$ matrix is drawn from a standard Normal distribution, and we use the left orthogonal matrix of its SVD as the orthogonal mixing matrix $\Psi$, yielding the mixture $X = \Psi S \in \mathbb{R}^{N \times T}$. The four-stage process described above (block-pruning, estimation of sparsity levels, support recovery and source recovery) is applied to the measurements matrix $X$, and the recovered matrices $\hat\Psi$, $\hat{S}$ are compared to the original pair. For Experiment 1, success is counted if the Frobenius norms of all of the following matrix differences (after resolving sign ambiguities and row permutations) are smaller than $10^{-3}$: $\hat{S} - S$, $\hat\Psi - \Psi$ and $X - \hat\Psi\hat{S}$. The success rates of Experiment 1 (averaged over 200 independent trials) are shown in Figure 4. Separation results of Experiment 2 are shown in Figure 3.

Fig. 4. Experiment 1: Recovery percentage of the original algorithm, of the proposed algorithm and of K-SVD vs. the number of columns. B indicates the block-lengths: one (no blocks) or two.

8. CONCLUSION

We extended the method proposed by Mishali and Eldar in [1] for sparse source separation with an orthogonal mixing matrix. We exploit the knowledge from the correlation indicators matrix in order to recover the sparsity level, the block dimensions and the support of each column of the mixed matrix. This knowledge enables us to derive a set of linear equations (of the size of the sparsity level of the original data matrix), without relying on statistical properties of the sources (such as mutual independence, as common in ICA), but only on their spatial sparsity. The improvement over the original algorithm of [1], as well as over other competing methods, was demonstrated in simulation. The algorithm can be applicable not only in the context of BSS, but also to dictionary learning (DL) problems.

Fig. 3. Experiment 2: From left to right: one of the mixed images (first), recovered image using the proposed algorithm (second), recovered image using JADE (third), recovered image using K-SVD (fourth), recovered image using the original algorithm (fifth). For each algorithm we chose to show the one recovered image (among the 12) that best reflects the common quality of images recovered by that algorithm.

9. REFERENCES

[1] M. Mishali and Y. C. Eldar, "Sparse source separation from orthogonal mixtures," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2009.
[2] T. E. Abrudan, J. Eriksson, and V. Koivunen, "Steepest descent algorithms for optimization under unitary matrix constraint," IEEE Trans. Signal Process., vol. 56, no. 3, 2008.
[3] P. Georgiev, F. Theis, and A. Cichocki, "Blind source separation and sparse component analysis of overcomplete mixtures," in Proc. ICASSP, vol. 5, 2004.
[4] R. Gribonval and S. Lesage, "A survey of sparse component analysis for blind source separation: principles, perspectives, and new challenges," in Proc. ESANN, 2006.
[5] P. Bofill and M. Zibulevsky, "Underdetermined blind source separation using sparse representations," Signal Processing, vol. 81, no. 11, 2001.
[6] Y. Li, A. Cichocki, and S. Amari, "Analysis of sparse representation and blind source separation," Neural Computation, vol. 16, no. 6, 2004.
[7] M. Stojnic, F. Parvaresh, and B. Hassibi, "On the reconstruction of block-sparse signals with an optimal number of measurements," IEEE Trans. Signal Process., vol. 57, no. 8, 2009.
[8] Z. Ben-Haim and Y. C. Eldar, "Near-oracle performance of greedy block-sparse estimation techniques from noisy measurements," IEEE J. Sel. Topics Signal Process., vol. 5, no. 5, 2011.
[9] E. Elhamifar and R. Vidal, "Block-sparse recovery via convex optimization," IEEE Trans. Signal Process., vol. 60, no. 8, 2012.
[10] Z. Zhang and B. Rao, "Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation," IEEE Trans. Signal Process., vol. 61, no. 8, 2013.
[11] M. Aharon, M. Elad, and A. M. Bruckstein, "On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them," Linear Algebra and Its Applications, vol. 416, no. 1, 2006.
[12] M. Mishali and Y. C. Eldar, "Sparse source separation from orthogonal mixtures," CCIT Report no. 704, EE Dept., Technion - Israel Institute of Technology, Sept. 2008.
[13] J.-F. Cardoso and A. Souloumiac, "Blind beamforming for non-Gaussian signals," IEE Proceedings-F, vol. 140, no. 6, Dec. 1993.


ROBUST BLIND SPIKES DECONVOLUTION. Yuejie Chi. Department of ECE and Department of BMI The Ohio State University, Columbus, Ohio 43210 ROBUST BLIND SPIKES DECONVOLUTION Yuejie Chi Department of ECE and Department of BMI The Ohio State University, Columbus, Ohio 4 ABSTRACT Blind spikes deconvolution, or blind super-resolution, deals with

More information

Conditions for Robust Principal Component Analysis

Conditions for Robust Principal Component Analysis Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and

More information

Constructing Explicit RIP Matrices and the Square-Root Bottleneck

Constructing Explicit RIP Matrices and the Square-Root Bottleneck Constructing Explicit RIP Matrices and the Square-Root Bottleneck Ryan Cinoman July 18, 2018 Ryan Cinoman Constructing Explicit RIP Matrices July 18, 2018 1 / 36 Outline 1 Introduction 2 Restricted Isometry

More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

An iterative hard thresholding estimator for low rank matrix recovery

An iterative hard thresholding estimator for low rank matrix recovery An iterative hard thresholding estimator for low rank matrix recovery Alexandra Carpentier - based on a joint work with Arlene K.Y. Kim Statistical Laboratory, Department of Pure Mathematics and Mathematical

More information

Bayesian Paradigm. Maximum A Posteriori Estimation

Bayesian Paradigm. Maximum A Posteriori Estimation Bayesian Paradigm Maximum A Posteriori Estimation Simple acquisition model noise + degradation Constraint minimization or Equivalent formulation Constraint minimization Lagrangian (unconstraint minimization)

More information

Greedy Signal Recovery and Uniform Uncertainty Principles

Greedy Signal Recovery and Uniform Uncertainty Principles Greedy Signal Recovery and Uniform Uncertainty Principles SPIE - IE 2008 Deanna Needell Joint work with Roman Vershynin UC Davis, January 2008 Greedy Signal Recovery and Uniform Uncertainty Principles

More information

Robust Principal Component Pursuit via Alternating Minimization Scheme on Matrix Manifolds

Robust Principal Component Pursuit via Alternating Minimization Scheme on Matrix Manifolds Robust Principal Component Pursuit via Alternating Minimization Scheme on Matrix Manifolds Tao Wu Institute for Mathematics and Scientific Computing Karl-Franzens-University of Graz joint work with Prof.

More information

Sparse representation classification and positive L1 minimization

Sparse representation classification and positive L1 minimization Sparse representation classification and positive L1 minimization Cencheng Shen Joint Work with Li Chen, Carey E. Priebe Applied Mathematics and Statistics Johns Hopkins University, August 5, 2014 Cencheng

More information

Compressed Sensing and Robust Recovery of Low Rank Matrices

Compressed Sensing and Robust Recovery of Low Rank Matrices Compressed Sensing and Robust Recovery of Low Rank Matrices M. Fazel, E. Candès, B. Recht, P. Parrilo Electrical Engineering, University of Washington Applied and Computational Mathematics Dept., Caltech

More information

Semi-Blind approaches to source separation: introduction to the special session

Semi-Blind approaches to source separation: introduction to the special session Semi-Blind approaches to source separation: introduction to the special session Massoud BABAIE-ZADEH 1 Christian JUTTEN 2 1- Sharif University of Technology, Tehran, IRAN 2- Laboratory of Images and Signals

More information

On the Estimation of the Mixing Matrix for Underdetermined Blind Source Separation in an Arbitrary Number of Dimensions

On the Estimation of the Mixing Matrix for Underdetermined Blind Source Separation in an Arbitrary Number of Dimensions On the Estimation of the Mixing Matrix for Underdetermined Blind Source Separation in an Arbitrary Number of Dimensions Luis Vielva 1, Ignacio Santamaría 1,Jesús Ibáñez 1, Deniz Erdogmus 2,andJoséCarlosPríncipe

More information

Reconstruction from Anisotropic Random Measurements

Reconstruction from Anisotropic Random Measurements Reconstruction from Anisotropic Random Measurements Mark Rudelson and Shuheng Zhou The University of Michigan, Ann Arbor Coding, Complexity, and Sparsity Workshop, 013 Ann Arbor, Michigan August 7, 013

More information

Sparse Linear Models (10/7/13)

Sparse Linear Models (10/7/13) STA56: Probabilistic machine learning Sparse Linear Models (0/7/) Lecturer: Barbara Engelhardt Scribes: Jiaji Huang, Xin Jiang, Albert Oh Sparsity Sparsity has been a hot topic in statistics and machine

More information

/16/$ IEEE 1728

/16/$ IEEE 1728 Extension of the Semi-Algebraic Framework for Approximate CP Decompositions via Simultaneous Matrix Diagonalization to the Efficient Calculation of Coupled CP Decompositions Kristina Naskovska and Martin

More information