LOW RANK TENSOR DECONVOLUTION


Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki

Brain Science Institute, RIKEN, Wako-shi, Japan; Systems Research Institute, PAS, Warsaw, Poland; Institute of Information Theory and Automation, Prague, Czech Republic

ABSTRACT

In this paper, we propose a low-rank tensor deconvolution problem which seeks multiway replicative patterns and corresponding activating tensors of rank-1. An alternating least squares (ALS) algorithm has been derived for the model to sequentially update the loading components and the patterns. In addition, together with a good initialisation method using tensor diagonalization, the update rules have been implemented at low cost using fast inversion of block Toeplitz matrices as well as an efficient update strategy. Experiments show that the proposed model and algorithm are promising for feature extraction and clustering.

Index Terms: tensor decomposition, tensor deconvolution, tensor diagonalization, CANDECOMP/PARAFAC

1. INTRODUCTION

Tensor decomposition, especially the CANDECOMP/PARAFAC tensor decomposition (CPD), has found a wide range of applications in a variety of areas, such as chemometrics, telecommunications, data mining, neuroscience and blind source separation [1-4]. One important application of CPD is to extract hidden loading components which can provide physical insight into the source data. In order to deal with shifting factors in sequential data, such as time series or spectral data, Harshman et al. [5] proposed the shifted CPD. The model has been applied to inspecting neural activity [6]. FitzGerald et al. extended the shift model to nonnegative tensor factorisation and applied it to music separation [7]. Cemgil et al. [8] introduced a probabilistic framework for non-negative factor deconvolution for audio modeling. Some other existing shifting models have been considered in nonnegative matrix factorisation (NMF), such as the convolutive NMF [9-11], the two-way CNMF, or multichannel NMF [12, 13]. In this paper, we consider a novel
tensor deconvolution problem whose major aim is to represent multiway data by replicative patterns and activating maps following a convolutive model, that is,

    Y ≈ H_1 * M_1 + ... + H_R * M_R,    (1)

where * denotes the tensor convolution, Y is of size I_1 × I_2 × I_3, and H_r and M_r are tensors of size J_{r1} × J_{r2} × J_{r3} and K_{r1} × K_{r2} × K_{r3}, respectively, with J_{rn} + K_{rn} = I_n + 1 and J_{rn} ≤ I_n.

(The work of P. Tichavský was supported by the Czech Science Foundation through project No. 14-13713S.)

Fig. 1. (a) Illustration of CANDECOMP/PARAFAC (CPD) as a tool to extract rank-1 patterns from multiway data Y, and (b) the low rank tensor deconvolution, which represents the data by small patches H_r and maps M_r which are rank-1 tensors.

Although the roles of H_r and M_r are interchangeable because of the commutativity of the convolution, we often consider patterns of relatively small size, i.e., J_{rn} ≪ K_{rn}. More specifically, we consider a rank-constrained model of (1) in which the M_r are rank-1 tensors, for r = 1, ..., R, i.e., M_r = a_r ∘ b_r ∘ c_r, where a_r ∈ R^{K_{r1}}, b_r ∈ R^{K_{r2}} and c_r ∈ R^{K_{r3}} are vectors of unit length, and ∘ represents the outer product. The low rank tensor deconvolution in (1) is rewritten as

    Y ≈ H_1 * (a_1 ∘ b_1 ∘ c_1) + ... + H_R * (a_R ∘ b_R ∘ c_R)    (2)

and illustrated in Fig. 1. When the data is a matrix, i.e., I_3 = 1, the tensor deconvolution becomes the rank-1 blind matrix deconvolution proposed in [14]. In the particular case when the patterns H_r are all scalars, i.e., J_{r1} = J_{r2} = J_{r3} = 1, the tensor deconvolution with R patterns simplifies to the rank-R CP tensor decomposition. In the general case, we will show that this tensor deconvolution can be expressed as a rank-(J_1, J_2, J_3) block tensor decomposition [15] with Toeplitz factor matrices. For simplicity, the patterns H_r are supposed to be of the same size, i.e., J_{r1} = J_1, J_{r2} = J_2 and J_{r3} = J_3 for r = 1, ..., R. It follows that the loading components are also of the same length, i.e., K_{r1} = K_1, K_{r2} = K_2, K_{r3} = K_3 for r = 1, ..., R. In the following section, we derive an ALS algorithm which sequentially updates the
loading components a_r, b_r, c_r and the tensors H_r. Application of the model is then demonstrated for feature extraction and clustering of hand-written digits.

©2015 IEEE    ICASSP 2015
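To make the model concrete, the following sketch synthesises a small tensor Y according to (2) from R random patterns H_r and unit-norm rank-1 activating tensors. All sizes are hypothetical toy values, and scipy.signal.convolve stands in for the tensor convolution:

```python
import numpy as np
from scipy.signal import convolve

rng = np.random.default_rng(0)
R = 2
J = (3, 3, 2)            # pattern sizes J1 x J2 x J3 (toy values)
K = (6, 6, 4)            # sizes K1 x K2 x K3 of the activating maps

Y = np.zeros(tuple(j + k - 1 for j, k in zip(J, K)))   # I_n = J_n + K_n - 1
for r in range(R):
    H = rng.standard_normal(J)                         # pattern H_r
    a, b, c = (rng.standard_normal(k) for k in K)
    a, b, c = a / np.linalg.norm(a), b / np.linalg.norm(b), c / np.linalg.norm(c)
    M = np.einsum('i,j,k->ijk', a, b, c)               # rank-1 tensor a∘b∘c
    Y += convolve(H, M)                                # full N-D convolution

print(Y.shape)   # (8, 8, 5)
```

The full convolution of a length-J_n pattern with a length-K_n map has length I_n = J_n + K_n - 1, which matches the size constraint J_{rn} + K_{rn} = I_n + 1 above.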

2. ALGORITHM

We define a linear operator X = τ_J(x) which maps a vector x of length K to a lower triangular Toeplitz matrix X of size I × J, I = K + J − 1, whose first column is [x^T, 0, ..., 0]^T:

    X = τ_J(x),    vec(X) = T_{K,J} x,    (3)

where T_{K,J}, of size (K + J − 1)J × K, comprises only ones and zeros: it stacks J copies of the identity matrix I_K, the j-th copy shifted down by j − 1 rows within its column block. The tensor decomposition in (2) can be expressed as

    Y ≈ Σ_{r=1}^{R} H_r ×_1 A_r ×_2 B_r ×_3 C_r,    (4)

where ×_n denotes the product between a tensor and a matrix along mode n, and A_r = τ_{J_1}(a_r), B_r = τ_{J_2}(b_r), C_r = τ_{J_3}(c_r) are Toeplitz matrices of size I_1 × J_1, I_2 × J_2 and I_3 × J_3, respectively.

2.1. Update of the loading components a_r

In order to solve the problem (4), we minimise the cost function

    min D = (1/2) || Y − Σ_{r=1}^{R} H_r ×_1 A_r ×_2 B_r ×_3 C_r ||_F^2.    (5)

We denote by Y_(1) the mode-1 matricization of Y. The approximation in (4) can be rewritten in matrix form as

    Y_(1) ≈ Σ_{r=1}^{R} A_r H_{r(1)} (C_r ⊗ B_r)^T,    (6)

where ⊗ denotes the Kronecker product, and H_{r(1)} are the mode-1 matricizations of the tensors H_r, for r = 1, ..., R. The cost function in (5) is therefore expressed in the equivalent form

    D = (1/2) || vec(Y_(1)) − Σ_{r=1}^{R} ((C_r ⊗ B_r) H_{r(1)}^T ⊗ I_{I_1}) T_{K_1,J_1} a_r ||_F^2,

whereas its gradients with respect to a_r, for r = 1, 2, ..., R, are given by

    ∂D/∂a_r = −w_r + Σ_{s=1}^{R} Φ_{r,s} a_s,    (7)

where the vectors w_r of length K_1 and the matrices Φ_{r,s} of size K_1 × K_1 are defined as

    w_r = T_{K_1,J_1}^T (H_{r(1)} (C_r ⊗ B_r)^T ⊗ I_{I_1}) vec(Y_(1))
        = T_{K_1,J_1}^T vec( ⟨Y ×_2 B_r^T ×_3 C_r^T, H_r⟩_{−1} ),    (8)

    Φ_{r,s} = T_{K_1,J_1}^T (Z_{r,s} ⊗ I_{I_1}) T_{K_1,J_1},    (9)

    Z_{r,s} = H_{r(1)} (C_r^T C_s ⊗ B_r^T B_s) H_{s(1)}^T
            = ⟨H_r ×_2 B_s^T B_r ×_3 C_s^T C_r, H_s⟩_{−1},    (10)

i.e., a contraction along all modes but mode-1 between the two tensors H_r ×_2 B_s^T B_r ×_3 C_s^T C_r and H_s, both of size J_1 × J_2 × J_3. We define φ_+^{(r,s)}(j) and φ_−^{(r,s)}(j) as the sums of all entries on the j-th super-diagonal and the j-th sub-diagonal of the (J_1 × J_1) matrices Z_{r,s}, for j = 0, 1, ..., J_1 − 1, that is,

    φ_+^{(r,s)}(j) = z_{1,j+1}^{(r,s)} + z_{2,j+2}^{(r,s)} + ... + z_{J_1−j,J_1}^{(r,s)},    (11)
    φ_−^{(r,s)}(j) = z_{j+1,1}^{(r,s)} + z_{j+2,2}^{(r,s)} + ... + z_{J_1,J_1−j}^{(r,s)}.    (12)

From the definition of the matrices T_{K,J}, we obtain that the Φ_{r,s} in (9) are banded Toeplitz matrices of size K_1 × K_1 whose diagonals within the band |j| ≤ J_1 − 1 carry the diagonal sums φ_±^{(r,s)}(j) of Z_{r,s}, and whose entries vanish outside the band. By setting the gradients in (7) with respect to all a_r to zero, we derive the following update rule for a = [
a_1^T, ..., a_R^T]^T:

    a ← Φ^{−1} w,    (13)

where w = [w_1^T, ..., w_R^T]^T is a vector of length K_1 R, and Φ = [Φ_{r,s}] is an R × R partitioned matrix of the Toeplitz matrices Φ_{r,s}, for 1 ≤ r, s ≤ R. Since Z_{r,s} = Z_{s,r}^T, we have Φ_{r,s} = Φ_{s,r}^T, and the Φ_{r,r} are symmetric. It follows that Φ is a symmetric matrix of size RK_1 × RK_1. We will show that Φ can be expressed as a block Toeplitz matrix after swapping its rows and columns, and thereby the inversion in (13) can be efficiently performed using fast inversion methods, e.g., the block Levinson recursion algorithm [16], or the algorithm in [17]. We define a permutation matrix P_{K_1,R} such that vec(X^T) = P_{K_1,R} vec(X) for any matrix X of size K_1 × R. It can be verified that the permutation of the columns and rows of Φ using P_{K_1,R} yields a block Toeplitz matrix of (R × R) blocks Φ_j, with Φ_{−j} = Φ_j^T, for j = 0, 1, ..., J_1 − 1:

    Φ̃ = P_{K_1,R} Φ P_{K_1,R}^T =
        [ Φ_0        Φ_1     ...    Φ_{J_1−1}            ]
        [ Φ_1^T      Φ_0     Φ_1    ...                  ]
        [ ...                 ...              Φ_1       ]
        [        Φ_{J_1−1}^T  ...    Φ_1^T     Φ_0       ],    (14)

where the R × R blocks are Φ_j = [φ_+^{(r,s)}(j)]_{r,s=1,...,R}, with the φ_± defined in (11) and (12); in particular, Φ_0 = [φ^{(r,s)}(0)]_{r,s}, with φ^{(r,s)}(0) = φ_+^{(r,s)}(0) = φ_−^{(r,s)}(0).

The notation ⟨X, Y⟩_{−1} = X_(1) Y_(1)^T denotes contraction between two tensors along all modes but mode-1. The entries of the vectors w_r are simply sums of entries on the main diagonal or on the k-th sub-diagonal of the (I_1 × J_1) matrices W_r = ⟨Y ×_2 B_r^T ×_3 C_r^T, H_r⟩_{−1}, that is,

    w_r(k) = W_r(k, 1) + W_r(k+1, 2) + ... + W_r(k+J_1−1, J_1),   for k = 1, ..., K_1.

In addition, the matrices Z_{r,s} of size J_1 × J_1 are efficiently computed through contraction along all modes but mode-1, as in (10).
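As a numerical sanity check of this structure, the toy sketch below builds the selection matrix T_{K,J} and verifies that Φ = T^T (Z ⊗ I) T is a banded Toeplitz matrix whose diagonals are the summed diagonals of Z, as in (11)-(12). The sizes are hypothetical, and the orientation of super- versus sub-diagonals depends on indexing conventions:

```python
import numpy as np

K, J = 5, 3
I = K + J - 1                       # I = K + J - 1, as in (3)

# selection matrix T_{K,J}: vec(tau_J(x)) = T @ x (column-major vec)
T = np.zeros((I * J, K))
for j in range(J):
    T[j * I + j: j * I + j + K, :] = np.eye(K)

rng = np.random.default_rng(6)
Z = rng.standard_normal((J, J))
Phi = T.T @ np.kron(Z, np.eye(I)) @ T

for d in range(J):
    # each diagonal of Phi is constant and equals a diagonal sum of Z
    assert np.allclose(np.diag(Phi, d), np.trace(Z, offset=-d))
    assert np.allclose(np.diag(Phi, -d), np.trace(Z, offset=d))
assert np.allclose(np.diag(Phi, J), 0.0)   # zero outside the band
```

Because Φ is fully determined by these 2J − 1 diagonal sums, it never needs to be formed explicitly in an efficient implementation.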

Using the above expression, the update rule in (13) is rewritten as

    a ← P_{K_1,R}^T ( P_{K_1,R} Φ P_{K_1,R}^T )^{−1} ( P_{K_1,R} w ) = P_{K_1,R}^T ( Φ̃^{−1} w̃ ),    (15)

where w̃ = P_{K_1,R} w. This update rule requires a cost of O(K_1^2 R^2) and needs only the J_1 matrices Φ_j of size R × R, i.e., J_1 R^2 coefficients [16, 18]. In the particular case J_1 = 1, the update rule simplifies to

    [a_1, ..., a_R] ← [w_1, ..., w_R] Z^{−1},    (16)

where Z is a matrix of size R × R whose entries Z(r, s) are given in (10). The loading components b_r and c_r are updated in a similar way.

2.2. Update of the core tensors H_r

In order to derive update rules for the H_r, we compute the derivatives of the cost function in (5) with respect to H_r. The derivatives are set to zero to obtain the following update rule:

    [ vec(H_1); ...; vec(H_R) ] ← [ Ψ_{r,s} ]^{−1} [ vec(V_1); ...; vec(V_R) ],    (17)

where V_r = Y ×_1 A_r^T ×_2 B_r^T ×_3 C_r^T are tensors of size J_1 × J_2 × J_3, and Ψ_{r,s} = (C_r^T C_s) ⊗ (B_r^T B_s) ⊗ (A_r^T A_s). The partitioned matrix Ψ = [Ψ_{r,s}] in (17) is of size R(J_1 J_2 J_3) × R(J_1 J_2 J_3). Since in practice we often seek relatively small patterns H_r, the inversion of Ψ can proceed quickly.

2.3. Initialization and efficient implementation

For the noise-less case, we use tensor diagonalization (TEDIA) [19] to seek matrices which transform the data tensor Y into diagonal or block-diagonal form. The loading components a_r, b_r and c_r are then estimated using the Toeplitz matrix factorization [20]. For other cases, TEDIA is used to generate initial points for the deconvolution. The procedure is explained in more detail in the next section.

In the update rules (13) and (17), besides the cost of solving the linear systems, the construction of the vectors w_r in (8) and of the tensors V_r in (17) is expensive, with a cost of O(I_1 I_2 I_3 min(J_2, J_3)). The computations become significantly time-consuming when the tensor sizes are large, because of the large memory operations associated with tensor permutations [21]. Note that the tensor product F_r = Y ×_2 B_r^T ×_3 C_r^T is the common part involved in the construction of both V_r and w_r.
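The Kronecker structure of Ψ above follows from the standard identity vec(H ×_1 A ×_2 B ×_3 C) = (C ⊗ B ⊗ A) vec(H), with column-major vectorization. A quick numerical check with toy sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((2, 3, 4))          # core tensor
A = rng.standard_normal((5, 2))             # mode-1 factor
B = rng.standard_normal((6, 3))             # mode-2 factor
C = rng.standard_normal((7, 4))             # mode-3 factor

# mode-n products H x1 A x2 B x3 C via einsum
T = np.einsum('ijk,ai,bj,ck->abc', H, A, B, C)

# vec identity: vec(T) = (C kron B kron A) vec(H), column-major vec
lhs = T.flatten('F')
rhs = np.kron(np.kron(C, B), A) @ H.flatten('F')
assert np.allclose(lhs, rhs)
```

Squaring this identity is what produces the Gram blocks (C_r^T C_s) ⊗ (B_r^T B_s) ⊗ (A_r^T A_s) in the normal equations, so Ψ can be assembled from small J_n × J_n Gram matrices rather than from the large factors themselves.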
After updating a_r, the tensors V_r are then computed cheaply as V_r = F_r ×_1 A_r^T. We therefore suggest updating the patterns H_r after each update of the loading components a_r, b_r and c_r. The update order is as follows: update [a_r], update [H_r], update [b_r], update [H_r], update [c_r], update [H_r], and so on.

Finally, a direct evaluation of the cost function as a stopping criterion can be the most expensive step. Since we complete each iteration by updating the H_r using (17), the cost value is simply computed, without constructing the approximation to Y, as

    D = (1/2) ||Y||_F^2 − (1/2) Σ_{r=1}^{R} vec(V_r)^T vec(H_r).    (18)

Table 1. Clustering accuracy (%) using the low rank tensor deconvolution with rank R = 2, for pairs of digits. The values in parentheses indicate the percentage improvement of the tensor deconvolution over the approach based on rank-2 CPD.

3. APPLICATIONS TO FEATURE EXTRACTION

This section introduces an application of the tensor deconvolution to feature extraction. Assume that the slices Y_k, for k = 1, ..., I_3, represent samples of a certain entity, e.g., two-dimensional images. Then the third loading components c_1, ..., c_R explain the relation between the I_3 samples. In the particular case J_3 = 1, i.e., when the patterns are matrices of size J_1 × J_2, the vectors c_r, for r = 1, ..., R, represent R feature vectors associated with the R basis components H_r * (a_r b_r^T) = τ_{J_1}(a_r) H_r τ_{J_2}(b_r)^T of rank min(J_1, J_2), respectively. It is worth noting that CPD with rank R also yields R feature vectors; however, its R basis components are only of rank-1 and do not explain structures as complex as those in the tensor deconvolution.

We illustrate the tensor deconvolution in clustering applications on the MNIST handwritten digits. We took the first 100 images of size 28 × 28 for each digit, and applied the tensor deconvolution to data consisting of 200 images for each pair of digits, e.g., 0 and 1, or 2 and 3.
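The subsequent clustering step operates on the extracted features. As a stand-in for that step (the MNIST pipeline itself is not reproduced here), a minimal K-means on a synthetic two-feature matrix, in pure NumPy with our own helper names:

```python
import numpy as np

rng = np.random.default_rng(5)
# synthetic stand-in for the I3 x R feature matrix (rows = samples, R = 2)
F = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
               rng.normal(1.0, 0.1, (100, 2))])

def kmeans(X, k, iters=50):
    """Plain Lloyd iterations: assign to nearest centroid, recompute means.
    Deterministic spread-out initialisation, adequate for this sketch."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(F, 2)
truth = np.repeat([0, 1], 100)
# accuracy up to label permutation
acc = max(np.mean(labels == truth), np.mean(labels != truth))
```

With well-separated synthetic clusters the accuracy is essentially perfect; on real features the quality of the clustering reflects how discriminative the extracted c_r are.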
So far, there does not exist a method to determine the number of patterns, i.e., R, which is related to rank determination in the block tensor decomposition. However, one can select a suitable R by balancing the approximation error against the number of parameters [3], or through a cross-validation technique. In this paper, the deconvolution estimated two common patterns H_1 and H_2 of size J × J × 1, where J varied in the range [1, 10]. Note that for J_3 = 1,

    Y ≈ Σ_{r=1}^{R} H_r * (a_r ∘ b_r ∘ c_r) = Σ_{r=1}^{R} (H_r * (a_r ∘ b_r)) ∘ c_r.    (19)
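The identity above (a flat pattern slides only within each slice, so the third mode contributes a pure outer factor) can be verified numerically with toy sizes:

```python
import numpy as np
from scipy.signal import convolve, convolve2d

rng = np.random.default_rng(4)
H2 = rng.standard_normal((3, 3))         # flat pattern, J3 = 1
a = rng.standard_normal(6)
b = rng.standard_normal(6)
c = rng.standard_normal(5)

# 3-D convolution of H (third dimension of size 1) with a∘b∘c ...
lhs = convolve(H2[:, :, None], np.einsum('i,j,k->ijk', a, b, c))

# ... equals the 2-D basis image (H * a b^T) stacked along c
F = convolve2d(H2, np.outer(a, b))
rhs = F[:, :, None] * c[None, None, :]
assert np.allclose(lhs, rhs)
```

This is why, for J_3 = 1, each term contributes one basis image F_r = H_r * (a_r b_r^T) and one feature vector c_r.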

Fig. 2. Illustration of the tensor deconvolution with two patterns of size J × J × 1 for clustering of the digit pairs 1 and 3 (a), 2 and 6 (b), and 4 and 7 (c). Relative approximation errors and clustering accuracies are obtained with pattern sizes J = 1, 2, ..., 10.

Fig. 3. Approximate images for the two digits 1 and 3 using only two patterns: (a) CPD with J = 1, R = 2; (b) tensor deconvolution with J = 8 and R = 2.

The parameters were initialised using the TEDIA algorithm (including the Tucker compression [22, 23]) for two-sided block-diagonalization of the compressed tensor. The two unconstrained factor matrices A_r^unc and B_r^unc obtained by TEDIA were then factorised to yield the Toeplitz matrices τ_{J_1}(a_r) and τ_{J_2}(b_r) [20]. The initial patterns H_r were estimated such that they minimised the Frobenius norm

    || Y − Σ_{r=1}^{R} H_r ×_1 τ_{J_1}(a_r) ×_2 τ_{J_2}(b_r) ||_F^2,

with a_r and b_r fixed. Finally, we completed the initialisation by performing best rank-1 approximations to the mode-3 matricizations of the H_r, i.e., approximations H_r ≈ H̃_r ∘ c_r, for r = 1, ..., R.

In Fig. 2, the relative approximation errors and the clustering accuracies using K-means are illustrated as functions of the pattern size J for the pairs 1 and 3, 2 and 6, and 4 and 7. The results for CPD with rank 2 are shown at J = 1.

Fig. 4. Illustration of the two basis images extracted for digits 1 and 3 (a), 2 and 6 (b), and 4 and 7 (c) using rank-2 CPD and tensor deconvolution with R = 2 and J_3 = 1.

For these selected pairs of digits, we did not achieve good clustering accuracies using the two features extracted by CPD. However, as seen in Fig. 2, for J = 8, 9 and 10 for digits 1 and 3 and digits 2 and 6, and J = 5 for digits 4 and 7, we obtained much better clustering accuracies (at least 92.5%) using only two features estimated by the tensor deconvolution. The high performance of the tensor deconvolution can be explained by the complex structure of the basis images F_r = H_r * (a_r b_r^T) illustrated in Fig. 4. The rank-1 basis images produced by CPD, i.e., a_r b_r^T, do not express
sufficient structure of the digits. With the more complex basis images, the tensor deconvolution achieved lower approximation errors, as confirmed in both Fig. 2 and Fig. 3. It is clear from Fig. 3 that the reconstructed digits 1 and 3 could not be distinguished under rank-2 CPD, in contrast to those obtained by the tensor deconvolution. The improvement was also observed in clustering of other digit pairs, as summarised in Table 1.

4. CONCLUSIONS

The tensor deconvolution with rank-1 structures considered in this paper has shown an advantage over the CP decomposition in providing basis patterns with complex structure. The tensor deconvolution explains the data better than CPD, while keeping the same number of features R. With the efficient implementation of the update rules, the initialisation and the update strategy, our model and the proposed algorithm can be applied to feature extraction for other kinds of data. A Matlab implementation is provided in the TENSORBOX package, available online at: http://www.bsp.brain.riken.jp/~phan/tensorbox.php

5. REFERENCES

[1] L.-H. Lim and P. Comon, "Blind multilinear identification," CoRR, vol. abs/1212.6663, 2012, preprint.
[2] A. Cichocki, R. Zdunek, A.-H. Phan, and S. Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, Wiley, Chichester, 2009.
[3] A. Cichocki, D. P. Mandic, A.-H. Phan, C. Caiafa, G. Zhou, Q. Zhao, and L. De Lathauwer, "Tensor decompositions for signal processing applications: from two-way to multiway component analysis," IEEE Signal Processing Magazine, accepted, 2014.
[4] P. Comon, "Tensors: a brief introduction," IEEE Signal Processing Magazine, vol. 31, no. 3, pp. 44-53, 2014.
[5] R. A. Harshman, S. Hong, and M. E. Lundy, "Shifted factor analysis - Part I: Models and properties," Journal of Chemometrics, vol. 17, no. 7, 2003.
[6] M. Mørup, L. K. Hansen, S. M. Arnfred, L.-H. Lim, and K. H. Madsen, "Shift-invariant multilinear decomposition of neuroimaging data," NeuroImage, vol. 42, no. 4, 2008.
[7] D. FitzGerald, M. Cranitch, and E. Coyle, "Extended nonnegative tensor factorisation models for musical sound source separation," Computational Intelligence and Neuroscience, 2008.
[8] A. T. Cemgil, U. Simsekli, and Y. C. Subakan, "Probabilistic latent tensor factorization framework for audio modeling," in Applications of Signal Processing to Audio and Acoustics (WASPAA), 2011 IEEE Workshop on, Oct. 2011, pp. 137-140.
[9] P. Smaragdis, "Non-negative matrix factor deconvolution; Extraction of multiple sound sources from monophonic inputs," Lecture Notes in Computer Science, vol. 3195, 2004.
[10] P. Smaragdis, "Convolutive speech bases and their application to speech separation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, pp. 1-12, Jan. 2007.
[11] R. Zdunek, "Improved convolutive and underdetermined blind audio source separation with MRF smoothing," Cognitive Computation, vol. 5, no. 4, 2013.
[12] M. Mørup and M. N. Schmidt, "Sparse non-negative tensor factor double deconvolution (SNTF2D) for multi-channel time-frequency analysis," Tech. Rep., Technical University of Denmark, DTU, 2006.
[13] A.
Ozerov and C. Févotte, "Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 3, March 2010.
[14] A.-H. Phan, P. Tichavský, A. Cichocki, and Z. Koldovský, "Low-rank blind nonnegative matrix deconvolution," in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, 2012.
[15] L. De Lathauwer and D. Nion, "Decompositions of a higher-order tensor in block terms - Part III: Alternating least squares algorithms," SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 3, pp. 1067-1083, 2008, Special Issue on Tensor Decompositions and Applications.
[16] H. Akaike, "Block Toeplitz matrix inversion," SIAM Journal on Applied Mathematics, vol. 24, no. 2, pp. 234-241, 1973.
[17] X.-G. Lv and T.-Z. Huang, "The inverses of block Toeplitz matrices," Journal of Mathematics, vol. 2013, Article ID 207176, 8 pages, 2013.
[18] P. Alonso, J. M. Badía-Contelles, and A. M. Vidal, "Solving the block-Toeplitz least-squares problem in parallel," Concurrency and Computation: Practice and Experience, vol. 17, pp. 49-67, 2005.
[19] P. Tichavský, A.-H. Phan, and A. Cichocki, "Non-orthogonal tensor diagonalization, a tool for block tensor decompositions," arXiv preprint arXiv:1402.1673, 2014.
[20] E. Moulines, P. Duhamel, J.-F. Cardoso, and S. Mayrargue, "Subspace methods for the blind identification of multichannel FIR filters," IEEE Transactions on Signal Processing, vol. 43, no. 2, pp. 516-525, Feb. 1995.
[21] A.-H. Phan, P. Tichavský, and A. Cichocki, "Fast alternating LS algorithms for high order CANDECOMP/PARAFAC tensor factorizations," IEEE Transactions on Signal Processing, vol. 61, no. 19, pp. 4834-4846, 2013.
[22] L. De Lathauwer, B. De Moor, and J. Vandewalle, "On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1324-1342, 2000.
[23] A.-H. Phan, A. Cichocki, and P. Tichavský, "On fast algorithms for orthogonal Tucker decomposition," in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, May
2014.


More information

Cauchy Nonnegative Matrix Factorization

Cauchy Nonnegative Matrix Factorization Cauchy Nonnegative Matrix Factorization Antoine Liutkus, Derry Fitzgerald, Roland Badeau To cite this version: Antoine Liutkus, Derry Fitzgerald, Roland Badeau. Cauchy Nonnegative Matrix Factorization.

More information

Non-Negative Matrix Factorization And Its Application to Audio. Tuomas Virtanen Tampere University of Technology

Non-Negative Matrix Factorization And Its Application to Audio. Tuomas Virtanen Tampere University of Technology Non-Negative Matrix Factorization And Its Application to Audio Tuomas Virtanen Tampere University of Technology tuomas.virtanen@tut.fi 2 Contents Introduction to audio signals Spectrogram representation

More information

KERNEL-BASED TENSOR PARTIAL LEAST SQUARES FOR RECONSTRUCTION OF LIMB MOVEMENTS

KERNEL-BASED TENSOR PARTIAL LEAST SQUARES FOR RECONSTRUCTION OF LIMB MOVEMENTS KERNEL-BASED TENSOR PARTIAL LEAST SQUARES FOR RECONSTRUCTION OF LIMB MOVEMENTS Qibin Zhao, Guoxu Zhou, Tulay Adali 2, Liqing Zhang 3, Andrzej Cichocki RIKEN Brain Science Institute, Japan 2 University

More information

TENLAB A MATLAB Ripoff for Tensors

TENLAB A MATLAB Ripoff for Tensors TENLAB A MATLAB Ripoff for Tensors Y. Cem Sübakan, ys2939 Mehmet K. Turkcan, mkt2126 Dallas Randal Jones, drj2115 February 9, 2016 Introduction MATLAB is a great language for manipulating arrays. However,

More information

SINGLE CHANNEL SPEECH MUSIC SEPARATION USING NONNEGATIVE MATRIX FACTORIZATION AND SPECTRAL MASKS. Emad M. Grais and Hakan Erdogan

SINGLE CHANNEL SPEECH MUSIC SEPARATION USING NONNEGATIVE MATRIX FACTORIZATION AND SPECTRAL MASKS. Emad M. Grais and Hakan Erdogan SINGLE CHANNEL SPEECH MUSIC SEPARATION USING NONNEGATIVE MATRIX FACTORIZATION AND SPECTRAL MASKS Emad M. Grais and Hakan Erdogan Faculty of Engineering and Natural Sciences, Sabanci University, Orhanli

More information

arxiv: v2 [math.na] 13 Dec 2014

arxiv: v2 [math.na] 13 Dec 2014 Very Large-Scale Singular Value Decomposition Using Tensor Train Networks arxiv:1410.6895v2 [math.na] 13 Dec 2014 Namgil Lee a and Andrzej Cichocki a a Laboratory for Advanced Brain Signal Processing,

More information

MULTI-RESOLUTION SIGNAL DECOMPOSITION WITH TIME-DOMAIN SPECTROGRAM FACTORIZATION. Hirokazu Kameoka

MULTI-RESOLUTION SIGNAL DECOMPOSITION WITH TIME-DOMAIN SPECTROGRAM FACTORIZATION. Hirokazu Kameoka MULTI-RESOLUTION SIGNAL DECOMPOSITION WITH TIME-DOMAIN SPECTROGRAM FACTORIZATION Hiroazu Kameoa The University of Toyo / Nippon Telegraph and Telephone Corporation ABSTRACT This paper proposes a novel

More information

Online Tensor Factorization for. Feature Selection in EEG

Online Tensor Factorization for. Feature Selection in EEG Online Tensor Factorization for Feature Selection in EEG Alric Althoff Honors Thesis, Department of Cognitive Science, University of California - San Diego Supervised by Dr. Virginia de Sa Abstract Tensor

More information

Nonnegative Tensor Factorization with Smoothness Constraints

Nonnegative Tensor Factorization with Smoothness Constraints Nonnegative Tensor Factorization with Smoothness Constraints Rafal ZDUNEK 1 and Tomasz M. RUTKOWSKI 2 1 Institute of Telecommunications, Teleinformatics and Acoustics, Wroclaw University of Technology,

More information

ACCOUNTING FOR PHASE CANCELLATIONS IN NON-NEGATIVE MATRIX FACTORIZATION USING WEIGHTED DISTANCES. Sebastian Ewert Mark D. Plumbley Mark Sandler

ACCOUNTING FOR PHASE CANCELLATIONS IN NON-NEGATIVE MATRIX FACTORIZATION USING WEIGHTED DISTANCES. Sebastian Ewert Mark D. Plumbley Mark Sandler ACCOUNTING FOR PHASE CANCELLATIONS IN NON-NEGATIVE MATRIX FACTORIZATION USING WEIGHTED DISTANCES Sebastian Ewert Mark D. Plumbley Mark Sandler Queen Mary University of London, London, United Kingdom ABSTRACT

More information

Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs

Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs Paris Smaragdis TR2004-104 September

More information

Single Channel Music Sound Separation Based on Spectrogram Decomposition and Note Classification

Single Channel Music Sound Separation Based on Spectrogram Decomposition and Note Classification Single Channel Music Sound Separation Based on Spectrogram Decomposition and Note Classification Hafiz Mustafa and Wenwu Wang Centre for Vision, Speech and Signal Processing (CVSSP) University of Surrey,

More information

Probabilistic Latent Variable Models as Non-Negative Factorizations

Probabilistic Latent Variable Models as Non-Negative Factorizations 1 Probabilistic Latent Variable Models as Non-Negative Factorizations Madhusudana Shashanka, Bhiksha Raj, Paris Smaragdis Abstract In this paper, we present a family of probabilistic latent variable models

More information

Jacobi Algorithm For Nonnegative Matrix Factorization With Transform Learning

Jacobi Algorithm For Nonnegative Matrix Factorization With Transform Learning Jacobi Algorithm For Nonnegative Matrix Factorization With Transform Learning Herwig Wt, Dylan Fagot and Cédric Févotte IRIT, Université de Toulouse, CNRS Toulouse, France firstname.lastname@irit.fr Abstract

More information

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 2, FEBRUARY

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 2, FEBRUARY IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 54, NO 2, FEBRUARY 2006 423 Underdetermined Blind Source Separation Based on Sparse Representation Yuanqing Li, Shun-Ichi Amari, Fellow, IEEE, Andrzej Cichocki,

More information

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS Russell H. Lambert RF and Advanced Mixed Signal Unit Broadcom Pasadena, CA USA russ@broadcom.com Marcel

More information

ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition

ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University

More information

PROBLEM of online factorization of data matrices is of

PROBLEM of online factorization of data matrices is of 1 Online Matrix Factorization via Broyden Updates Ömer Deniz Akyıldız arxiv:1506.04389v [stat.ml] 6 Jun 015 Abstract In this paper, we propose an online algorithm to compute matrix factorizations. Proposed

More information

c 2008 Society for Industrial and Applied Mathematics

c 2008 Society for Industrial and Applied Mathematics SIAM J MATRIX ANAL APPL Vol 30, No 3, pp 1219 1232 c 2008 Society for Industrial and Applied Mathematics A JACOBI-TYPE METHOD FOR COMPUTING ORTHOGONAL TENSOR DECOMPOSITIONS CARLA D MORAVITZ MARTIN AND

More information

EUSIPCO

EUSIPCO EUSIPCO 213 1569744273 GAMMA HIDDEN MARKOV MODEL AS A PROBABILISTIC NONNEGATIVE MATRIX FACTORIZATION Nasser Mohammadiha, W. Bastiaan Kleijn, Arne Leijon KTH Royal Institute of Technology, Department of

More information

Using Matrix Decompositions in Formal Concept Analysis

Using Matrix Decompositions in Formal Concept Analysis Using Matrix Decompositions in Formal Concept Analysis Vaclav Snasel 1, Petr Gajdos 1, Hussam M. Dahwa Abdulla 1, Martin Polovincak 1 1 Dept. of Computer Science, Faculty of Electrical Engineering and

More information

SPARSE TENSORS DECOMPOSITION SOFTWARE

SPARSE TENSORS DECOMPOSITION SOFTWARE SPARSE TENSORS DECOMPOSITION SOFTWARE Papa S Diaw, Master s Candidate Dr. Michael W. Berry, Major Professor Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville

More information

On Spectral Basis Selection for Single Channel Polyphonic Music Separation

On Spectral Basis Selection for Single Channel Polyphonic Music Separation On Spectral Basis Selection for Single Channel Polyphonic Music Separation Minje Kim and Seungjin Choi Department of Computer Science Pohang University of Science and Technology San 31 Hyoja-dong, Nam-gu

More information

Non-negative matrix factorization and term structure of interest rates

Non-negative matrix factorization and term structure of interest rates Non-negative matrix factorization and term structure of interest rates Hellinton H. Takada and Julio M. Stern Citation: AIP Conference Proceedings 1641, 369 (2015); doi: 10.1063/1.4906000 View online:

More information

Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing

Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing 1 Zhong-Yuan Zhang, 2 Chris Ding, 3 Jie Tang *1, Corresponding Author School of Statistics,

More information

Fast Nonnegative Tensor Factorization with an Active-Set-Like Method

Fast Nonnegative Tensor Factorization with an Active-Set-Like Method Fast Nonnegative Tensor Factorization with an Active-Set-Like Method Jingu Kim and Haesun Park Abstract We introduce an efficient algorithm for computing a low-rank nonnegative CANDECOMP/PARAFAC(NNCP)

More information

Acoustic MIMO Signal Processing

Acoustic MIMO Signal Processing Yiteng Huang Jacob Benesty Jingdong Chen Acoustic MIMO Signal Processing With 71 Figures Ö Springer Contents 1 Introduction 1 1.1 Acoustic MIMO Signal Processing 1 1.2 Organization of the Book 4 Part I

More information

OPTIMAL WEIGHT LEARNING FOR COUPLED TENSOR FACTORIZATION WITH MIXED DIVERGENCES

OPTIMAL WEIGHT LEARNING FOR COUPLED TENSOR FACTORIZATION WITH MIXED DIVERGENCES OPTIMAL WEIGHT LEARNING FOR COUPLED TENSOR FACTORIZATION WITH MIXED DIVERGENCES Umut Şimşekli, Beyza Ermiş, A. Taylan Cemgil Boğaziçi University Dept. of Computer Engineering 34342, Bebek, İstanbul, Turkey

More information

Dept. Electronics and Electrical Engineering, Keio University, Japan. NTT Communication Science Laboratories, NTT Corporation, Japan.

Dept. Electronics and Electrical Engineering, Keio University, Japan. NTT Communication Science Laboratories, NTT Corporation, Japan. JOINT SEPARATION AND DEREVERBERATION OF REVERBERANT MIXTURES WITH DETERMINED MULTICHANNEL NON-NEGATIVE MATRIX FACTORIZATION Hideaki Kagami, Hirokazu Kameoka, Masahiro Yukawa Dept. Electronics and Electrical

More information

A Brief Guide for TDALAB Ver 1.1. Guoxu Zhou and Andrzej Cichocki

A Brief Guide for TDALAB Ver 1.1. Guoxu Zhou and Andrzej Cichocki A Brief Guide for TDALAB Ver 1.1 Guoxu Zhou and Andrzej Cichocki April 30, 2013 Contents 1 Preliminary 2 1.1 Highlights of TDALAB...................... 2 1.2 Install and Run TDALAB....................

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

where A 2 IR m n is the mixing matrix, s(t) is the n-dimensional source vector (n» m), and v(t) is additive white noise that is statistically independ

where A 2 IR m n is the mixing matrix, s(t) is the n-dimensional source vector (n» m), and v(t) is additive white noise that is statistically independ BLIND SEPARATION OF NONSTATIONARY AND TEMPORALLY CORRELATED SOURCES FROM NOISY MIXTURES Seungjin CHOI x and Andrzej CICHOCKI y x Department of Electrical Engineering Chungbuk National University, KOREA

More information

A new truncation strategy for the higher-order singular value decomposition

A new truncation strategy for the higher-order singular value decomposition A new truncation strategy for the higher-order singular value decomposition Nick Vannieuwenhoven K.U.Leuven, Belgium Workshop on Matrix Equations and Tensor Techniques RWTH Aachen, Germany November 21,

More information

Z-eigenvalue methods for a global polynomial optimization problem

Z-eigenvalue methods for a global polynomial optimization problem Math. Program., Ser. A (2009) 118:301 316 DOI 10.1007/s10107-007-0193-6 FULL LENGTH PAPER Z-eigenvalue methods for a global polynomial optimization problem Liqun Qi Fei Wang Yiju Wang Received: 6 June

More information

Dimitri Nion & Lieven De Lathauwer

Dimitri Nion & Lieven De Lathauwer he decomposition of a third-order tensor in block-terms of rank-l,l, Model, lgorithms, Uniqueness, Estimation of and L Dimitri Nion & Lieven De Lathauwer.U. Leuven, ortrijk campus, Belgium E-mails: Dimitri.Nion@kuleuven-kortrijk.be

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

Blind Spectral-GMM Estimation for Underdetermined Instantaneous Audio Source Separation

Blind Spectral-GMM Estimation for Underdetermined Instantaneous Audio Source Separation Blind Spectral-GMM Estimation for Underdetermined Instantaneous Audio Source Separation Simon Arberet 1, Alexey Ozerov 2, Rémi Gribonval 1, and Frédéric Bimbot 1 1 METISS Group, IRISA-INRIA Campus de Beaulieu,

More information

Sparse filter models for solving permutation indeterminacy in convolutive blind source separation

Sparse filter models for solving permutation indeterminacy in convolutive blind source separation Sparse filter models for solving permutation indeterminacy in convolutive blind source separation Prasad Sudhakar, Rémi Gribonval To cite this version: Prasad Sudhakar, Rémi Gribonval. Sparse filter models

More information

Dealing with curse and blessing of dimensionality through tensor decompositions

Dealing with curse and blessing of dimensionality through tensor decompositions Dealing with curse and blessing of dimensionality through tensor decompositions Lieven De Lathauwer Joint work with Nico Vervliet, Martijn Boussé and Otto Debals June 26, 2017 2 Overview Curse of dimensionality

More information

Automatic Rank Determination in Projective Nonnegative Matrix Factorization

Automatic Rank Determination in Projective Nonnegative Matrix Factorization Automatic Rank Determination in Projective Nonnegative Matrix Factorization Zhirong Yang, Zhanxing Zhu, and Erkki Oja Department of Information and Computer Science Aalto University School of Science and

More information

Fast Nonnegative Matrix Factorization with Rank-one ADMM

Fast Nonnegative Matrix Factorization with Rank-one ADMM Fast Nonnegative Matrix Factorization with Rank-one Dongjin Song, David A. Meyer, Martin Renqiang Min, Department of ECE, UCSD, La Jolla, CA, 9093-0409 dosong@ucsd.edu Department of Mathematics, UCSD,

More information

Scalable audio separation with light Kernel Additive Modelling

Scalable audio separation with light Kernel Additive Modelling Scalable audio separation with light Kernel Additive Modelling Antoine Liutkus 1, Derry Fitzgerald 2, Zafar Rafii 3 1 Inria, Université de Lorraine, LORIA, UMR 7503, France 2 NIMBUS Centre, Cork Institute

More information

Tensor decompositions for feature extraction and classification of high dimensional datasets

Tensor decompositions for feature extraction and classification of high dimensional datasets Invited Paper Tensor decompositions for feature extraction and classification of high dimensional datasets Anh Huy Phan a) and Andrzej Cichocki,b) Brain Science Institute, RIKEN, - Hirosawa, Wakoshi, Saitama

More information

Non-Negative Matrix Factorization

Non-Negative Matrix Factorization Chapter 3 Non-Negative Matrix Factorization Part 2: Variations & applications Geometry of NMF 2 Geometry of NMF 1 NMF factors.9.8 Data points.7.6 Convex cone.5.4 Projections.3.2.1 1.5 1.5 1.5.5 1 3 Sparsity

More information

Hands-on Matrix Algebra Using R

Hands-on Matrix Algebra Using R Preface vii 1. R Preliminaries 1 1.1 Matrix Defined, Deeper Understanding Using Software.. 1 1.2 Introduction, Why R?.................... 2 1.3 Obtaining R.......................... 4 1.4 Reference Manuals

More information

arxiv: v1 [math.ra] 13 Jan 2009

arxiv: v1 [math.ra] 13 Jan 2009 A CONCISE PROOF OF KRUSKAL S THEOREM ON TENSOR DECOMPOSITION arxiv:0901.1796v1 [math.ra] 13 Jan 2009 JOHN A. RHODES Abstract. A theorem of J. Kruskal from 1977, motivated by a latent-class statistical

More information

Nesterov-based Alternating Optimization for Nonnegative Tensor Completion: Algorithm and Parallel Implementation

Nesterov-based Alternating Optimization for Nonnegative Tensor Completion: Algorithm and Parallel Implementation Nesterov-based Alternating Optimization for Nonnegative Tensor Completion: Algorithm and Parallel Implementation Georgios Lourakis and Athanasios P. Liavas School of Electrical and Computer Engineering,

More information

A FLEXIBLE MODELING FRAMEWORK FOR COUPLED MATRIX AND TENSOR FACTORIZATIONS

A FLEXIBLE MODELING FRAMEWORK FOR COUPLED MATRIX AND TENSOR FACTORIZATIONS A FLEXIBLE MODELING FRAMEWORK FOR COUPLED MATRIX AND TENSOR FACTORIZATIONS Evrim Acar, Mathias Nilsson, Michael Saunders University of Copenhagen, Faculty of Science, Frederiksberg C, Denmark University

More information

c Springer, Reprinted with permission.

c Springer, Reprinted with permission. Zhijian Yuan and Erkki Oja. A FastICA Algorithm for Non-negative Independent Component Analysis. In Puntonet, Carlos G.; Prieto, Alberto (Eds.), Proceedings of the Fifth International Symposium on Independent

More information

Efficient CP-ALS and Reconstruction From CP

Efficient CP-ALS and Reconstruction From CP Efficient CP-ALS and Reconstruction From CP Jed A. Duersch & Tamara G. Kolda Sandia National Laboratories Livermore, CA Sandia National Laboratories is a multimission laboratory managed and operated by

More information

Coupled Matrix/Tensor Decompositions:

Coupled Matrix/Tensor Decompositions: Coupled Matrix/Tensor Decompositions: An Introduction Laurent Sorber Mikael Sørensen Marc Van Barel Lieven De Lathauwer KU Leuven Belgium Lieven.DeLathauwer@kuleuven-kulak.be 1 Canonical Polyadic Decomposition

More information

Blind Extraction of Singularly Mixed Source Signals

Blind Extraction of Singularly Mixed Source Signals IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL 11, NO 6, NOVEMBER 2000 1413 Blind Extraction of Singularly Mixed Source Signals Yuanqing Li, Jun Wang, Senior Member, IEEE, and Jacek M Zurada, Fellow, IEEE Abstract

More information

Vandermonde-form Preserving Matrices And The Generalized Signal Richness Preservation Problem

Vandermonde-form Preserving Matrices And The Generalized Signal Richness Preservation Problem Vandermonde-form Preserving Matrices And The Generalized Signal Richness Preservation Problem Borching Su Department of Electrical Engineering California Institute of Technology Pasadena, California 91125

More information

Convention Paper Presented at the 128th Convention 2010 May London, UK

Convention Paper Presented at the 128th Convention 2010 May London, UK Audio Engineering Society Convention Paper Presented at the 128th Convention 2010 May 22 25 London, UK 8130 The papers at this Convention have been selected on the basis of a submitted abstract and extended

More information

EXTENSION OF NESTED ARRAYS WITH THE FOURTH-ORDER DIFFERENCE CO-ARRAY ENHANCEMENT

EXTENSION OF NESTED ARRAYS WITH THE FOURTH-ORDER DIFFERENCE CO-ARRAY ENHANCEMENT EXTENSION OF NESTED ARRAYS WITH THE FOURTH-ORDER DIFFERENCE CO-ARRAY ENHANCEMENT Qing Shen,2, Wei Liu 2, Wei Cui, Siliang Wu School of Information and Electronics, Beijing Institute of Technology Beijing,

More information

Available Ph.D position in Big data processing using sparse tensor representations

Available Ph.D position in Big data processing using sparse tensor representations Available Ph.D position in Big data processing using sparse tensor representations Research area: advanced mathematical methods applied to big data processing. Keywords: compressed sensing, tensor models,

More information

Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm

Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm Duy-Khuong Nguyen 13, Quoc Tran-Dinh 2, and Tu-Bao Ho 14 1 Japan Advanced Institute of Science and Technology, Japan

More information

FREQUENCY-DOMAIN IMPLEMENTATION OF BLOCK ADAPTIVE FILTERS FOR ICA-BASED MULTICHANNEL BLIND DECONVOLUTION

FREQUENCY-DOMAIN IMPLEMENTATION OF BLOCK ADAPTIVE FILTERS FOR ICA-BASED MULTICHANNEL BLIND DECONVOLUTION FREQUENCY-DOMAIN IMPLEMENTATION OF BLOCK ADAPTIVE FILTERS FOR ICA-BASED MULTICHANNEL BLIND DECONVOLUTION Kyungmin Na, Sang-Chul Kang, Kyung Jin Lee, and Soo-Ik Chae School of Electrical Engineering Seoul

More information

A concise proof of Kruskal s theorem on tensor decomposition

A concise proof of Kruskal s theorem on tensor decomposition A concise proof of Kruskal s theorem on tensor decomposition John A. Rhodes 1 Department of Mathematics and Statistics University of Alaska Fairbanks PO Box 756660 Fairbanks, AK 99775 Abstract A theorem

More information

744 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 4, MAY 2011

744 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 4, MAY 2011 744 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 4, MAY 2011 NMF With Time Frequency Activations to Model Nonstationary Audio Events Romain Hennequin, Student Member, IEEE,

More information

Author manuscript, published in "7th International Symposium on Computer Music Modeling and Retrieval (CMMR 2010), Málaga : Spain (2010)"

Author manuscript, published in 7th International Symposium on Computer Music Modeling and Retrieval (CMMR 2010), Málaga : Spain (2010) Author manuscript, published in "7th International Symposium on Computer Music Modeling and Retrieval (CMMR 2010), Málaga : Spain (2010)" Notes on nonnegative tensor factorization of the spectrogram for

More information