B-Spline Smoothing of Feature Vectors in Nonnegative Matrix Factorization


B-Spline Smoothing of Feature Vectors in Nonnegative Matrix Factorization

Rafal Zdunek 1, Andrzej Cichocki 2,3,4, and Tatsuya Yokota 2,5

1 Department of Electronics, Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, Wroclaw, Poland, rafal.zdunek@pwr.wroc.pl
2 Laboratory for Advanced Brain Signal Processing, RIKEN BSI, Wako-shi, Japan
3 Warsaw University of Technology, Poland
4 Systems Research Institute, Polish Academy of Sciences (PAN), Poland
5 Tokyo Institute of Technology, Japan

Abstract. Nonnegative Matrix Factorization (NMF) captures nonnegative, sparse, parts-based feature vectors from a set of observed nonnegative vectors. In many applications, the features are also expected to be locally smooth. To incorporate the information on local smoothness into the optimization process, we assume that the feature vectors are conical combinations of higher-degree B-splines with a given number of knots. With this approach, the computational complexity of the optimization process does not increase considerably with respect to the standard NMF model. The numerical experiments, which were carried out for the blind spectral unmixing problem, demonstrate the robustness of the proposed method.

Keywords: NMF, B-Splines, Feature Extraction, Spectral Unmixing.

1 Introduction

Nonnegative Matrix Factorization (NMF) [1,2] is a key tool for feature extraction and dimensionality reduction of nonnegative data. However, in many practical scenarios the features extracted with NMF may not have an easy interpretation or meaning due to various ambiguities that are intrinsic to the assumed factorization model. An extensive analysis of non-uniqueness issues in NMF can be found in [3]. To relieve these problems, prior knowledge on the factors to be estimated is usually imposed. Commonly used priors assume sparsity, which is typically incorporated into the factor updating process through $l_1$-norm-based penalty or regularization terms. Pascual-Montano et al.
[4] reported that sparsity and smoothness constraints are closely related in NMF applications. Typically, when one of the factors is sparse, the other tends to be locally smooth. It is also well known that smoothness of a signal in the time domain implies its sparseness in the frequency domain, and vice versa. Hence, the smoothness constraints are also important

L. Rutkowski et al. (Eds.): ICAISC 2014, Part II, LNAI 8468, © Springer International Publishing Switzerland 2014

and might considerably relax the factor ambiguity problems if some of the estimated factors are expected to be locally smooth.

The smoothness constraints are motivated by many practical applications of NMF. For example, the temporal as well as frequency profiles extracted from magnitude spectrograms of speech or audio signals are locally smooth [5-7]. A similar smoothness behavior is also observed in the parts-based features extracted from a set of facial images. In supervised classification performed with NMF, the encoding vectors obtained from the training vectors that belong to one class may also exhibit local smoothness. Smooth signals may also be found in spectral unmixing problems, e.g. in hyperspectral imaging or Raman spectroscopy [8-13]. Depending on the observed data, the smoothness may occur in the spectral signatures (endmembers) or in the abundance maps. These applications motivate the proposed method.

Smoothness constraints can be imposed on the estimated factors in many ways. It is well known that Gaussian priors lead to smooth estimates; this issue has been widely discussed in the literature (see e.g. [9,14]). Smoothness can also be enforced in a similar way by applying Markov Random Field (MRF) models [7]. These models are usually more robust to overfitting than the Gaussian ones, but they are more difficult to handle numerically. In all these approaches, if the factor updating process is performed with gradient descent algorithms, the hyperparameters associated with the priors are difficult to estimate. Another approach to smoothness is to assume that the feature vectors in NMF can be expressed as a linear combination of basis functions that are locally smooth, bounded, and nonnegative in the whole domain. This model was proposed in [15], where the basis functions are determined by Gaussian Radial Basis Functions (GRBF).
Next, Yokota et al. [16] improved this model in terms of computational complexity and extended it to work efficiently with 2D features and with multi-linear array decomposition models. Motivated by the efficiency of the family of GRBF-NMF algorithms, we extend the concept of approximating the feature vectors in NMF to a more general case in which the basis functions are piecewise-smooth nonnegative B-spline functions. This idea is partially motivated by the piecewise smoothness constraints proposed in [11], but our method does not use an MRF model to enforce the smoothness. B-splines are piecewise polynomials that are widely used in many applications such as curve fitting, interpolation, and approximation [17]. Their use gives us more possibilities for model adaptation and for regularization with a Total Variation (TV) term. The degree of the splines and the knots can be easily adapted, and the coefficients of the linear combinations of B-splines can be readily estimated since the basis matrix reveals a block structure. In this paper, we also propose a multiplicative algorithm for estimating the coefficients, assuming that each feature vector is a superposition of B-splines.

The paper is organized as follows: Section 2 discusses the application of B-splines to approximating the feature vectors in NMF. The optimization algorithm

for estimating the underlying factors is presented in Section 3. The experiments carried out for the linear spectral unmixing problem are described in Section 4. Finally, the conclusions are drawn in Section 5.

2 Model

The aim of NMF is to find lower-rank nonnegative matrices $A = [a_{ij}] \in \mathbb{R}_+^{I \times J}$ and $X = [x_{jt}] \in \mathbb{R}_+^{J \times T}$ such that $Y = [y_{it}] = AX \in \mathbb{R}_+^{I \times T}$, given the data matrix $Y$, the lower rank $J$, and possibly some prior knowledge on the matrices $A$ or $X$. The set of nonnegative real numbers is denoted by $\mathbb{R}_+$. For high redundancy $J \ll \frac{IT}{I+T}$, but in our considerations we only assume $J \leq \min\{I, T\}$.

For the linear spectral unmixing problem, we assume that the column vectors of $Y = [y_1, \ldots, y_T]$ represent the observed mixed spectra, where $T$ is the number of registered mixed spectra or the number of pixels in a remotely observed hyperspectral image. The observed spectral signals are divided into $I$ adjacent subbands. Using the basic NMF model, the column vectors of $A = [a_1, \ldots, a_J]$ are feature vectors that contain the pure spectra (spectral signatures), and $X$ may represent the mixing matrix or vectorized abundance maps. The rank of factorization $J$ determines the number of pure spectra, which can be estimated with various techniques such as cross-validation or automatic relevance determination. In this paper, we assume that all the feature vectors $\{a_j\}$ are locally smooth.

2.1 B-Splines

A spline is a piecewise-polynomial real function $F : [\xi_{\min}, \xi_{\max}] \to \mathbb{R}$ defined on the interval $[\xi_{\min}, \xi_{\max}]$, which is divided into $P$ subintervals $[\xi_{n-1}, \xi_n]$, where $\xi_{\min} = \xi_0 < \xi_1 < \cdots < \xi_{P-1} < \xi_P = \xi_{\max}$ for $n = 1, \ldots, P$. For each $n$: $F(\xi) = S_n(\xi)$ for $\xi_{n-1} \leq \xi < \xi_n$, where $S_n : [\xi_{n-1}, \xi_n] \to \mathbb{R}$. Any spline function $F$ can be uniquely expressed in terms of a linear combination of so-called B-splines, i.e.
the basis splines:
$$F(\xi) = \sum_{n=0}^{N} \alpha_n S_n^{(k)}(\xi), \qquad (1)$$
where $S_n^{(k)}(\xi)$ is the B-spline of order $k$ determined on the subinterval $\xi_{n-1} \leq \xi < \xi_n$, and $\{\alpha_n\}$ are coefficients of a linear combination of $N$ B-splines, where $N \geq k - 1$. The points $\{\xi_0, \xi_1, \ldots, \xi_N\}$ are known as knots. The B-splines can be determined by the Cox-de Boor recursive formula:
$$S_n^{(k)}(\xi) = \frac{\xi - \xi_n}{\xi_{n+k-1} - \xi_n} \, S_n^{(k-1)}(\xi) + \frac{\xi_{n+k} - \xi}{\xi_{n+k} - \xi_{n+1}} \, S_{n+1}^{(k-1)}(\xi), \qquad (2)$$
where
$$S_n^{(1)}(\xi) = \begin{cases} 1 & \text{if } \xi_n \leq \xi < \xi_{n+1}, \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

are B-splines of the first order. A B-spline is a continuous function at the knots, and $\sum_n S_n^{(1)}(\xi) = 1$ for all $\xi$. If $k = 2$, $F(\xi)$ is composed of piecewise linear functions, hence the degree of the polynomial is equal to one. For $k = 3$, the B-splines are formed by piecewise quadratic functions. In general, the degree of the polynomial $F$ is equal to $k - 1$.

Note that the derivative of $F(\xi)$ at $\xi$ can be easily determined from the B-splines of order $k - 1$. Since
$$F'(\xi) = (k-1) \sum_{n=0}^{N} \frac{\alpha_n - \alpha_{n-1}}{\xi_{n+k-1} - \xi_n} \, S_n^{(k-1)}(\xi), \qquad (4)$$
the TV-based regularization term $TV(F(\xi)) = \int_{\xi_{\min}}^{\xi_{\max}} |F'(\xi)| \, d\xi$ can be readily calculated.

2.2 Smoothing

If $k > 2$, the spline $F(\xi)$ is a smooth function on the interval $[\xi_{\min}, \xi_{\max}]$. In this paper, we use the model (1) for approximating the feature vectors in NMF. Thus:
$$a_j = \sum_{n=0}^{N} w_{nj} \, s_n^{(k)}, \qquad (5)$$
where $\{w_{nj}\} \subset \mathbb{R}_+$ are coefficients of a conical combination of B-splines of order $k$, and $s_n^{(k)} = [S_n^{(k)}(\xi_i)] \in \mathbb{R}_+^I$ for $i = 1, \ldots, I$. Following [15,16], the NMF model with the B-spline smoothing has the form:
$$Y = SWX, \qquad (6)$$
where $S = [s_1^{(k)}, \ldots, s_N^{(k)}] \in \mathbb{R}_+^{I \times N}$, $W = [w_{nj}] \in \mathbb{R}_+^{N \times J}$, and $X = [x_{jt}] \in \mathbb{R}_+^{J \times T}$. Assuming $A = SW$, the model (6) boils down to the standard NMF model.

The model (6) is more flexible than the GRBF-NMF model proposed in [15]. First, the knot sequence $\{\xi_n\}$ does not have to be uniformly distributed in the interval $[\xi_{\min}, \xi_{\max}]$; moreover, the positions of the knots can also be estimated. The number of knots can be adjusted across the alternating iterations or can be regarded as a regularization parameter. The degree of the approximating polynomial can also be adaptively selected: for intervals where the feature vectors do not show meaningful variability, low-order B-splines can be used; higher variation may require a higher order, but cubic B-splines should be sufficient for many practical applications. Using low-order B-splines, the regularization is also easier.
To model relatively narrow spectral peaks, the number and the order of the knots can be determined adaptively. Note that the model (6) can also be extended to work with multi-linear array decompositions, similarly as in [16]. Moreover, when the extracted features are 2D images, 2D B-splines can be used.
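To make the construction concrete, the following is a minimal sketch of building the basis matrix $S$ in (6) from the Cox-de Boor recursion (2)-(3). The knot sequence, the number of knots, and the sampling grid are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def bspline_basis(xi, knots, k):
    """Evaluate all B-splines of order k (degree k - 1) on the knot
    sequence `knots` at the points `xi`, via the Cox-de Boor recursion."""
    xi = np.asarray(xi, dtype=float)
    t = np.asarray(knots, dtype=float)
    # order 1: indicator functions of the knot spans, cf. (3)
    B = [((t[n] <= xi) & (xi < t[n + 1])).astype(float)
         for n in range(len(t) - 1)]
    # raise the order one step at a time, cf. (2)
    for order in range(2, k + 1):
        B_new = []
        for n in range(len(t) - order):
            left = np.zeros_like(xi)
            den = t[n + order - 1] - t[n]
            if den > 0.0:
                left = (xi - t[n]) / den * B[n]
            right = np.zeros_like(xi)
            den = t[n + order] - t[n + 1]
            if den > 0.0:
                right = (t[n + order] - xi) / den * B[n + 1]
            B_new.append(left + right)
        B = B_new
    return np.column_stack(B)  # columns are the sampled vectors s_n^{(k)}

# Basis matrix S for I = 224 subbands and cubic splines (k = 4),
# with a uniform knot sequence chosen purely for illustration.
knots = np.linspace(0.0, 1.0, 12)
S = bspline_basis(np.linspace(0.0, 1.0, 224, endpoint=False), knots, k=4)
```

Each column of $S$ is nonnegative and piecewise cubic; a non-uniform knot sequence is obtained simply by passing a different `knots` vector.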

3 Algorithm

To estimate the matrices $W$ and $X$ from (6), we assume that the objective function is the squared Euclidean distance:
$$\Psi(W, X) = \frac{1}{2} \left\| Y - SWX \right\|_F^2. \qquad (7)$$
From the stationarity of (7) we have:
$$\nabla_W \Psi(W, X) = S^T (SWX - Y) X^T = 0, \qquad (8)$$
$$\nabla_X \Psi(W, X) = (SW)^T (SWX - Y) = 0. \qquad (9)$$
Hence, the projected ALS updating rules have the form:
$$W = (S^T S)^{-1} S^T Y X^T (XX^T)^{-1}, \qquad (10)$$
$$A = [SW]_+, \qquad (11)$$
$$X = \left[ (W^T S^T S W)^{-1} W^T S^T Y \right]_+, \qquad (12)$$
where $[\xi]_+ = \max\{0, \xi\}$. The ALS algorithm does not satisfy the KKT optimality conditions, which may lead to non-monotonic convergence. Obviously, monotonicity is not a crucial condition in certain applications, and hence the ALS algorithm may work quite efficiently for some classes of data. However, if monotonic convergence is expected, a better choice seems to be the multiplicative algorithm presented below.

Theorem 1. Given $S \geq 0$ and the initial guesses $w_{nj}, x_{jt} \geq 0$, the objective function in (7) is non-increasing under the update rules:
$$w_{nj} \leftarrow w_{nj} \frac{\left[ S^T Y X^T \right]_{nj}}{\left[ S^T S W X X^T \right]_{nj}}, \qquad x_{jt} \leftarrow x_{jt} \frac{\left[ W^T S^T Y \right]_{jt}}{\left[ W^T S^T S W X \right]_{jt}}. \qquad (13)$$

Lemma 1. The function
$$G(W, \widetilde{W}) = \frac{1}{2} \sum_{n,j} \frac{\left[ S^T S \widetilde{W} X X^T \right]_{nj} w_{nj}^2}{\widetilde{w}_{nj}} - \sum_{n,j} \left[ S^T Y X^T \right]_{nj} \widetilde{w}_{nj} \left( 1 + \ln \frac{w_{nj}}{\widetilde{w}_{nj}} \right) + \frac{1}{2} \mathrm{tr}\left\{ Y^T Y \right\} \qquad (14)$$
is an auxiliary function to the objective function in (7), i.e. it satisfies the conditions:
$$G(\widetilde{W}, \widetilde{W}) = \Psi(\widetilde{W}, X), \qquad (15)$$
$$G(W, \widetilde{W}) \geq \Psi(W, X). \qquad (16)$$
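Before turning to the proof, the multiplicative updates (13) can be sketched in code. This is a minimal illustration only: the random data, initializations, and iteration count are assumptions, and the basis matrix $S$ is taken as given.

```python
import numpy as np

def bs_nmf(Y, S, J, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative updates (13) for Psi(W, X) = 0.5 * ||Y - S W X||_F^2.
    Y: I x T nonnegative data, S: I x N nonnegative basis, J: rank."""
    rng = np.random.default_rng(seed)
    N, T = S.shape[1], Y.shape[1]
    W = rng.random((N, J))
    X = rng.random((J, T))
    StS, StY = S.T @ S, S.T @ Y      # fixed throughout the iterations
    for _ in range(n_iter):
        SW = S @ W
        # update X, then W; eps guards against division by zero
        X *= (SW.T @ Y) / (SW.T @ SW @ X + eps)
        W *= (StY @ X.T) / (StS @ W @ (X @ X.T) + eps)
    return W, X

# sanity check on synthetic data generated exactly by the model (6)
rng = np.random.default_rng(1)
S = rng.random((30, 8))
Y = S @ rng.random((8, 3)) @ rng.random((3, 40))
W, X = bs_nmf(Y, S, J=3)
err = np.linalg.norm(Y - S @ W @ X) / np.linalg.norm(Y)
```

Note that the updates keep $W$ and $X$ nonnegative automatically, since they only multiply nonnegative factors by nonnegative ratios.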

Proof. The objective function in (7) can be rewritten as:
$$\Psi(W, X) = \frac{1}{2} \mathrm{tr}\left\{ W^T S^T S W X X^T \right\} - \mathrm{tr}\left\{ X Y^T S W \right\} + \frac{1}{2} \mathrm{tr}\left\{ Y^T Y \right\}. \qquad (17)$$
It is easy to show that the condition (15) holds. Ding et al. [18] proved that for any symmetric matrices $U \in \mathbb{R}_+^{N \times N}$ and $V \in \mathbb{R}_+^{J \times J}$, and matrices $Z = [z_{nj}] \in \mathbb{R}_+^{N \times J}$, $\widetilde{Z} = [\widetilde{z}_{nj}] \in \mathbb{R}_+^{N \times J}$, the following inequality is satisfied:
$$\sum_{n=1}^{N} \sum_{j=1}^{J} \frac{\left[ U \widetilde{Z} V \right]_{nj} z_{nj}^2}{\widetilde{z}_{nj}} \geq \mathrm{tr}\left( Z^T U Z V \right). \qquad (18)$$
Replacing $Z \to W$, $\widetilde{Z} \to \widetilde{W}$, $U \to S^T S$ and $V \to X X^T$, we get:
$$\mathrm{tr}\left\{ W^T S^T S W X X^T \right\} \leq \sum_{n,j} \frac{\left[ S^T S \widetilde{W} X X^T \right]_{nj} w_{nj}^2}{\widetilde{w}_{nj}}. \qquad (19)$$
From the concavity of the log function: $\xi \geq 1 + \ln(\xi)$ for $\xi > 0$. Assuming $\xi = \frac{w_{nj}}{\widetilde{w}_{nj}}$, we have:
$$\mathrm{tr}\left\{ X Y^T S W \right\} \geq \sum_{n,j} \left[ S^T Y X^T \right]_{nj} \widetilde{w}_{nj} \left( 1 + \ln \frac{w_{nj}}{\widetilde{w}_{nj}} \right). \qquad (20)$$
Considering (19) and (20), the condition (16) is satisfied. Q.E.D.

Assuming $W^{(m+1)} = \arg\min_W G(W, \widetilde{W})$ and setting $\widetilde{W} \leftarrow W^{(m)}$ for $m = 0, 1, \ldots$, we obtain:
$$\frac{\partial G(W, \widetilde{W})}{\partial w_{nj}} = \frac{\left[ S^T S \widetilde{W} X X^T \right]_{nj} w_{nj}}{\widetilde{w}_{nj}} - \frac{\left[ S^T Y X^T \right]_{nj} \widetilde{w}_{nj}}{w_{nj}} = 0, \qquad (21)$$
which gives the update rule for $W$ in Theorem 1. Since $A = SW$, the update rule for $X$ is the same as proposed and proved in [19]. From Lemma 1, we have:
$$\Psi(W^{(0)}, X^{(0)}) \geq \ldots \geq \Psi(W^{(m)}, X^{(m)}) \geq \Psi(W^{(m+1)}, X^{(m)}) \geq \Psi(W^{(m+1)}, X^{(m+1)}) \geq \ldots, \qquad (22)$$
which proves Theorem 1.

4 Experiments

The experiments were carried out for the blind linear spectral unmixing problem, using a benchmark of 4 real reflectance signals taken from the U.S. Geological Survey (USGS) database. We selected 4 spectral signatures that are illustrated

in Fig. 1 (in blue). These signals are divided into 224 bands that cover the range of wavelengths from 400 nm to 2.5 μm. The angle between any pair of the signals is greater than 15 degrees. These signals form the matrix $A \in \mathbb{R}_+^{I \times J}$, where $I = 224$ and $J = 4$. Note that all the reflectance signals are strictly positive and slowly varying, with strong local smoothness. Thus, this benchmark is very difficult to estimate blindly with nearly all Blind Source Separation (BSS) methods. The entries of the mixing matrix $X \in \mathbb{R}_+^{J \times T}$ were generated randomly from a normal distribution $\mathcal{N}(0, 1)$, and then the negative entries were replaced with zeros. The columns with all-zero entries were removed. The mixed spectra were corrupted with additive zero-mean Gaussian noise with the variance adapted to a given noise level.

Fig. 1. Spectral signatures (endmembers): (blue) original; (red) estimated with the BS-NMF. For the estimated spectra: SIR = 17.7 dB.

To estimate the matrices $A$ and $X$ from $Y$, we used several NMF algorithms for comparing the results. The proposed algorithm is denoted by the acronym BS-NMF (B-Spline NMF). The algorithms are listed as follows: the standard Lee-Seung algorithm [1] for minimizing the Euclidean distance (denoted here by the acronym MUE), projected ALS [2], HALS [2], Lin's Projected Gradient (LPG) [20], FC-NNLS [21], and BS-NMF. All the tested algorithms were initialized by the same random initializer generated from a uniform distribution. To analyze the

efficiency of the discussed methods, Monte Carlo (MC) runs of the NMF algorithms were performed, each time with different initial matrices $A$ and $X$. All the algorithms, except for the ALS, were implemented using the same computational strategy as in the LPG, i.e. the same stopping criteria are applied to all the algorithms, and the same maximum number of inner iterations for updating the factor $A$ or $X$ is used. The ALS performs only one inner iterative step. In the BS-NMF, we set $k = 4$, and the number of B-splines varies with the alternating steps according to the rule $N = \max(N_{\min}, \min(N_{\max}, [m/5]))$, where $m$ stands for the alternating step, $[\cdot]$ rounds towards the nearest integer, and $N_{\min}$, $N_{\max}$ are lower and upper bounds on the number of B-splines.

The performance of the NMF algorithms was evaluated with the Signal-to-Interference Ratio (SIR) [2] between the estimated signals and the true ones. Fig. 2 shows the SIR statistics for estimating the spectral signatures in the matrix $A$ for two cases of the noise level and the number of observations.

Fig. 2. SIR statistics for estimating the spectral signatures in the matrix A from noisy observations (algorithms: MUE, ALS, HALS, LPG, FC-NNLS, BS-NMF): (a) SNR = 30 dB, T = …; (b) SNR = … dB, T = …; (c) SNR = 30 dB, T = …; (d) SNR = … dB, T = …
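The data-generation protocol and the SIR metric described above can be sketched as follows. The smooth stand-in endmembers, the value of $T$, and the exact SIR formula (a common BSS definition with optimal scaling) are illustrative assumptions; the paper itself uses USGS reflectance spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, T = 224, 4, 100   # T = 100 is an illustrative choice

# Stand-in endmembers: strictly positive, slowly varying curves
# (placeholders for the USGS reflectance spectra used in the paper).
grid = np.linspace(0.0, 1.0, I)
A = np.column_stack([1.0 + np.exp(-((grid - c) ** 2) / 0.05)
                     for c in (0.2, 0.4, 0.6, 0.8)])

# Mixing matrix: N(0, 1) samples with negative entries zeroed,
# then all-zero columns removed, as described above.
X = np.maximum(rng.standard_normal((J, T)), 0.0)
X = X[:, X.sum(axis=0) > 0]

# Additive zero-mean Gaussian noise at a prescribed SNR (30 dB here).
Y0 = A @ X
snr_db = 30.0
sigma = np.sqrt(np.mean(Y0 ** 2) / 10.0 ** (snr_db / 10.0))
Y = Y0 + sigma * rng.standard_normal(Y0.shape)

def sir_db(a_true, a_est):
    """SIR [dB] of one estimated column after optimal scaling
    (a common BSS definition, assumed here)."""
    s = (a_true @ a_est) / (a_est @ a_est)
    return 10.0 * np.log10(np.sum(a_true ** 2)
                           / np.sum((a_true - s * a_est) ** 2))
```

In a full evaluation, the columns of the estimate would additionally be permuted to best match the true endmembers before averaging the per-column SIRs.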

Table 1. Mean-SIR [dB] for estimating the matrices A and X, and the runtime [in seconds], for the case SNR = 30 dB, T = … (rows: SIR for A, SIR for X, Time; columns: MUE, ALS, HALS, LPG, FC-NNLS, BS-NMF)

5 Conclusions

The paper shows the possibility of using B-splines for modeling the feature vectors in NMF. The proposed method is suitable for recovering locally smooth features, such as those presented in Fig. 1. These signals are strictly positive and their variability is very low, which makes them very difficult to estimate with nearly all BSS methods. The statistics presented in Fig. 2 demonstrate that for this kind of data nearly all the NMF algorithms fail. Only the proposed BS-NMF gives mean-SIRs greater than 15 dB, for both the weakly and the strongly redundant observations. When the data are corrupted with strong noise, the mean-SIR of the BS-NMF decreases below the level of 15 dB, but not considerably. The results given in Table 1 show that the runtime of the proposed method is nearly the same as for the FC-NNLS, and not much longer than for the others. Summing up, the proposed BS-NMF method seems to be efficient for estimating locally smooth feature vectors in NMF. Moreover, it may be easily modified and adapted to the data.

References

1. Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401 (1999)
2. Cichocki, A., Zdunek, R., Phan, A.H., Amari, S.I.: Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. Wiley and Sons (2009)
3. Huang, K., Sidiropoulos, N.D., Swami, A.: NMF revisited: New uniqueness results and algorithms. In: Proc. of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013), Vancouver, Canada (2013)
4. Pascual-Montano, A., Carazo, J.M., Kochi, K., Lehmann, D., Pascual-Marqui, R.: Nonsmooth nonnegative matrix factorization (nsNMF).
IEEE Transactions on Pattern Analysis and Machine Intelligence 28(3) (2006)
5. Févotte, C., Bertin, N., Durrieu, J.L.: Nonnegative matrix factorization with the Itakura-Saito divergence: With application to music analysis. Neural Computation 21(3) (2009)

6. Mohammadiha, N., Smaragdis, P., Leijon, A.: Prediction based filtering and smoothing to exploit temporal dependencies in NMF. In: Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013), Vancouver, Canada (2013)
7. Zdunek, R.: Improved convolutive and under-determined blind audio source separation with MRF smoothing. Cognitive Computation 5(4) (2013)
8. Sajda, P., Du, S., Brown, T.R., Stoyanova, R., Shungu, D.C., Mao, X., Parra, L.C.: Nonnegative matrix factorization for rapid recovery of constituent spectra in magnetic resonance chemical shift imaging of the brain. IEEE Transactions on Medical Imaging 23(12) (2004)
9. Pauca, V.P., Piper, J., Plemmons, R.J.: Nonnegative matrix factorization for spectral data analysis. Linear Algebra and its Applications 416(1) (2006)
10. Miao, L., Qi, H.: Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization. IEEE Transactions on Geoscience and Remote Sensing 45(3) (2007)
11. Jia, S., Qian, Y.: Constrained nonnegative matrix factorization for hyperspectral unmixing. IEEE Transactions on Geoscience and Remote Sensing 47(1) (2009)
12. Iordache, M., Dias, J., Plaza, A.: Total variation spatial regularization for sparse hyperspectral unmixing. IEEE Transactions on Geoscience and Remote Sensing 50(11) (2012)
13. Wang, N., Du, B., Zhang, L.: An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 6(2) (2013)
14. Schmidt, M.N., Laurberg, H.: Nonnegative matrix factorization with Gaussian process priors. Computational Intelligence and Neuroscience (361705) (2008)
15. Zdunek, R.: Approximation of feature vectors in nonnegative matrix factorization with Gaussian radial basis functions. In: Huang, T., Zeng, Z., Li, C., Leung, C.S. (eds.)
ICONIP 2012, Part I. LNCS, vol. 7663. Springer, Heidelberg (2012)
16. Yokota, T., Cichocki, A., Yamashita, Y., Zdunek, R.: Fast algorithms for smooth nonnegative matrix and tensor factorizations. Neural Computation (submitted)
17. Schumaker, L.L.: Spline Functions: Basic Theory. John Wiley and Sons, New York (1981)
18. Ding, C., Li, T., Peng, W., Park, H.: Orthogonal nonnegative matrix tri-factorizations for clustering. In: KDD 2006: Proc. of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM Press, New York (2006)
19. Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. In: Advances in Neural Information Processing Systems, NIPS, vol. 13. MIT Press (2001)
20. Lin, C.J.: Projected gradient methods for non-negative matrix factorization. Neural Computation 19 (2007)
21. Kim, H., Park, H.: Non-negative matrix factorization based on alternating nonnegativity constrained least squares and active set method. SIAM Journal on Matrix Analysis and Applications 30(2) (2008)


More information

NONNEGATIVE MATRIX FACTORIZATION WITH TRANSFORM LEARNING. Dylan Fagot, Herwig Wendt and Cédric Févotte

NONNEGATIVE MATRIX FACTORIZATION WITH TRANSFORM LEARNING. Dylan Fagot, Herwig Wendt and Cédric Févotte NONNEGATIVE MATRIX FACTORIZATION WITH TRANSFORM LEARNING Dylan Fagot, Herwig Wendt and Cédric Févotte IRIT, Université de Toulouse, CNRS, Toulouse, France firstname.lastname@irit.fr ABSTRACT Traditional

More information

Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing

Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing Note on Algorithm Differences Between Nonnegative Matrix Factorization And Probabilistic Latent Semantic Indexing 1 Zhong-Yuan Zhang, 2 Chris Ding, 3 Jie Tang *1, Corresponding Author School of Statistics,

More information

THE NONNEGATIVE matrix factorization (NMF) has

THE NONNEGATIVE matrix factorization (NMF) has Kernel nonnegative matrix factorization without the curse of the pre-image Fei Zhu, Paul Honeine, Member, IEEE, Maya Kallas arxiv:7.44v [cs.cv] 6 Jul 4 Abstract The nonnegative matrix factorization (NMF)

More information

Nonnegative Matrix Factorization with Markov-Chained Bases for Modeling Time-Varying Patterns in Music Spectrograms

Nonnegative Matrix Factorization with Markov-Chained Bases for Modeling Time-Varying Patterns in Music Spectrograms Nonnegative Matrix Factorization with Markov-Chained Bases for Modeling Time-Varying Patterns in Music Spectrograms Masahiro Nakano 1, Jonathan Le Roux 2, Hirokazu Kameoka 2,YuKitano 1, Nobutaka Ono 1,

More information

Single Channel Music Sound Separation Based on Spectrogram Decomposition and Note Classification

Single Channel Music Sound Separation Based on Spectrogram Decomposition and Note Classification Single Channel Music Sound Separation Based on Spectrogram Decomposition and Note Classification Hafiz Mustafa and Wenwu Wang Centre for Vision, Speech and Signal Processing (CVSSP) University of Surrey,

More information

EUSIPCO

EUSIPCO EUSIPCO 213 1569744273 GAMMA HIDDEN MARKOV MODEL AS A PROBABILISTIC NONNEGATIVE MATRIX FACTORIZATION Nasser Mohammadiha, W. Bastiaan Kleijn, Arne Leijon KTH Royal Institute of Technology, Department of

More information

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 2, FEBRUARY

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 2, FEBRUARY IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 54, NO 2, FEBRUARY 2006 423 Underdetermined Blind Source Separation Based on Sparse Representation Yuanqing Li, Shun-Ichi Amari, Fellow, IEEE, Andrzej Cichocki,

More information

LOW RANK TENSOR DECONVOLUTION

LOW RANK TENSOR DECONVOLUTION LOW RANK TENSOR DECONVOLUTION Anh-Huy Phan, Petr Tichavský, Andrze Cichocki Brain Science Institute, RIKEN, Wakoshi, apan Systems Research Institute PAS, Warsaw, Poland Institute of Information Theory

More information

Jacobi Algorithm For Nonnegative Matrix Factorization With Transform Learning

Jacobi Algorithm For Nonnegative Matrix Factorization With Transform Learning Jacobi Algorithm For Nonnegative Matrix Factorization With Transform Learning Herwig Wt, Dylan Fagot and Cédric Févotte IRIT, Université de Toulouse, CNRS Toulouse, France firstname.lastname@irit.fr Abstract

More information

Dimension Reduction Using Nonnegative Matrix Tri-Factorization in Multi-label Classification

Dimension Reduction Using Nonnegative Matrix Tri-Factorization in Multi-label Classification 250 Int'l Conf. Par. and Dist. Proc. Tech. and Appl. PDPTA'15 Dimension Reduction Using Nonnegative Matrix Tri-Factorization in Multi-label Classification Keigo Kimura, Mineichi Kudo and Lu Sun Graduate

More information

Cholesky Decomposition Rectification for Non-negative Matrix Factorization

Cholesky Decomposition Rectification for Non-negative Matrix Factorization Cholesky Decomposition Rectification for Non-negative Matrix Factorization Tetsuya Yoshida Graduate School of Information Science and Technology, Hokkaido University N-14 W-9, Sapporo 060-0814, Japan yoshida@meme.hokudai.ac.jp

More information

Discovering Convolutive Speech Phones using Sparseness and Non-Negativity Constraints

Discovering Convolutive Speech Phones using Sparseness and Non-Negativity Constraints Discovering Convolutive Speech Phones using Sparseness and Non-Negativity Constraints Paul D. O Grady and Barak A. Pearlmutter Hamilton Institute, National University of Ireland Maynooth, Co. Kildare,

More information

Constrained Nonnegative Matrix Factorization with Applications to Music Transcription

Constrained Nonnegative Matrix Factorization with Applications to Music Transcription Constrained Nonnegative Matrix Factorization with Applications to Music Transcription by Daniel Recoskie A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the

More information

Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm

Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm Duy-Khuong Nguyen 13, Quoc Tran-Dinh 2, and Tu-Bao Ho 14 1 Japan Advanced Institute of Science and Technology, Japan

More information

arxiv: v3 [cs.lg] 18 Mar 2013

arxiv: v3 [cs.lg] 18 Mar 2013 Hierarchical Data Representation Model - Multi-layer NMF arxiv:1301.6316v3 [cs.lg] 18 Mar 2013 Hyun Ah Song Department of Electrical Engineering KAIST Daejeon, 305-701 hyunahsong@kaist.ac.kr Abstract Soo-Young

More information

c Springer, Reprinted with permission.

c Springer, Reprinted with permission. Zhijian Yuan and Erkki Oja. A FastICA Algorithm for Non-negative Independent Component Analysis. In Puntonet, Carlos G.; Prieto, Alberto (Eds.), Proceedings of the Fifth International Symposium on Independent

More information

To be published in Optics Letters: Blind Multi-spectral Image Decomposition by 3D Nonnegative Tensor Title: Factorization Authors: Ivica Kopriva and A

To be published in Optics Letters: Blind Multi-spectral Image Decomposition by 3D Nonnegative Tensor Title: Factorization Authors: Ivica Kopriva and A o be published in Optics Letters: Blind Multi-spectral Image Decomposition by 3D Nonnegative ensor itle: Factorization Authors: Ivica Kopriva and Andrzej Cichocki Accepted: 21 June 2009 Posted: 25 June

More information

Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation Mikkel N. Schmidt and Morten Mørup Technical University of Denmark Informatics and Mathematical Modelling Richard

More information

Tutorial on Blind Source Separation and Independent Component Analysis

Tutorial on Blind Source Separation and Independent Component Analysis Tutorial on Blind Source Separation and Independent Component Analysis Lucas Parra Adaptive Image & Signal Processing Group Sarnoff Corporation February 09, 2002 Linear Mixtures... problem statement...

More information

Sparseness Constraints on Nonnegative Tensor Decomposition

Sparseness Constraints on Nonnegative Tensor Decomposition Sparseness Constraints on Nonnegative Tensor Decomposition Na Li nali@clarksonedu Carmeliza Navasca cnavasca@clarksonedu Department of Mathematics Clarkson University Potsdam, New York 3699, USA Department

More information

Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy

Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Caroline Chaux Joint work with X. Vu, N. Thirion-Moreau and S. Maire (LSIS, Toulon) Aix-Marseille

More information

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS

POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS Russell H. Lambert RF and Advanced Mixed Signal Unit Broadcom Pasadena, CA USA russ@broadcom.com Marcel

More information

Independent Component Analysis and Unsupervised Learning. Jen-Tzung Chien

Independent Component Analysis and Unsupervised Learning. Jen-Tzung Chien Independent Component Analysis and Unsupervised Learning Jen-Tzung Chien TABLE OF CONTENTS 1. Independent Component Analysis 2. Case Study I: Speech Recognition Independent voices Nonparametric likelihood

More information

Non-Negative Matrix Factorization And Its Application to Audio. Tuomas Virtanen Tampere University of Technology

Non-Negative Matrix Factorization And Its Application to Audio. Tuomas Virtanen Tampere University of Technology Non-Negative Matrix Factorization And Its Application to Audio Tuomas Virtanen Tampere University of Technology tuomas.virtanen@tut.fi 2 Contents Introduction to audio signals Spectrogram representation

More information

where A 2 IR m n is the mixing matrix, s(t) is the n-dimensional source vector (n» m), and v(t) is additive white noise that is statistically independ

where A 2 IR m n is the mixing matrix, s(t) is the n-dimensional source vector (n» m), and v(t) is additive white noise that is statistically independ BLIND SEPARATION OF NONSTATIONARY AND TEMPORALLY CORRELATED SOURCES FROM NOISY MIXTURES Seungjin CHOI x and Andrzej CICHOCKI y x Department of Electrical Engineering Chungbuk National University, KOREA

More information

SOUND SOURCE SEPARATION BASED ON NON-NEGATIVE TENSOR FACTORIZATION INCORPORATING SPATIAL CUE AS PRIOR KNOWLEDGE

SOUND SOURCE SEPARATION BASED ON NON-NEGATIVE TENSOR FACTORIZATION INCORPORATING SPATIAL CUE AS PRIOR KNOWLEDGE SOUND SOURCE SEPARATION BASED ON NON-NEGATIVE TENSOR FACTORIZATION INCORPORATING SPATIAL CUE AS PRIOR KNOWLEDGE Yuki Mitsufuji Sony Corporation, Tokyo, Japan Axel Roebel 1 IRCAM-CNRS-UPMC UMR 9912, 75004,

More information

Nonlinear Support Vector Machines through Iterative Majorization and I-Splines

Nonlinear Support Vector Machines through Iterative Majorization and I-Splines Nonlinear Support Vector Machines through Iterative Majorization and I-Splines P.J.F. Groenen G. Nalbantov J.C. Bioch July 9, 26 Econometric Institute Report EI 26-25 Abstract To minimize the primal support

More information

ACCOUNTING FOR PHASE CANCELLATIONS IN NON-NEGATIVE MATRIX FACTORIZATION USING WEIGHTED DISTANCES. Sebastian Ewert Mark D. Plumbley Mark Sandler

ACCOUNTING FOR PHASE CANCELLATIONS IN NON-NEGATIVE MATRIX FACTORIZATION USING WEIGHTED DISTANCES. Sebastian Ewert Mark D. Plumbley Mark Sandler ACCOUNTING FOR PHASE CANCELLATIONS IN NON-NEGATIVE MATRIX FACTORIZATION USING WEIGHTED DISTANCES Sebastian Ewert Mark D. Plumbley Mark Sandler Queen Mary University of London, London, United Kingdom ABSTRACT

More information

Iterative Laplacian Score for Feature Selection

Iterative Laplacian Score for Feature Selection Iterative Laplacian Score for Feature Selection Linling Zhu, Linsong Miao, and Daoqiang Zhang College of Computer Science and echnology, Nanjing University of Aeronautics and Astronautics, Nanjing 2006,

More information

Independent Component Analysis and Unsupervised Learning

Independent Component Analysis and Unsupervised Learning Independent Component Analysis and Unsupervised Learning Jen-Tzung Chien National Cheng Kung University TABLE OF CONTENTS 1. Independent Component Analysis 2. Case Study I: Speech Recognition Independent

More information

ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition

ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University

More information

A State-Space Approach to Dynamic Nonnegative Matrix Factorization

A State-Space Approach to Dynamic Nonnegative Matrix Factorization 1 A State-Space Approach to Dynamic Nonnegative Matrix Factorization Nasser Mohammadiha, Paris Smaragdis, Ghazaleh Panahandeh, Simon Doclo arxiv:179.5v1 [cs.lg] 31 Aug 17 Abstract Nonnegative matrix factorization

More information

Numerical Methods. Rafał Zdunek Underdetermined problems (2h.) Applications) (FOCUSS, M-FOCUSS,

Numerical Methods. Rafał Zdunek Underdetermined problems (2h.) Applications) (FOCUSS, M-FOCUSS, Numerical Methods Rafał Zdunek Underdetermined problems (h.) (FOCUSS, M-FOCUSS, M Applications) Introduction Solutions to underdetermined linear systems, Morphological constraints, FOCUSS algorithm, M-FOCUSS

More information

Dept. Electronics and Electrical Engineering, Keio University, Japan. NTT Communication Science Laboratories, NTT Corporation, Japan.

Dept. Electronics and Electrical Engineering, Keio University, Japan. NTT Communication Science Laboratories, NTT Corporation, Japan. JOINT SEPARATION AND DEREVERBERATION OF REVERBERANT MIXTURES WITH DETERMINED MULTICHANNEL NON-NEGATIVE MATRIX FACTORIZATION Hideaki Kagami, Hirokazu Kameoka, Masahiro Yukawa Dept. Electronics and Electrical

More information

CVPR A New Tensor Algebra - Tutorial. July 26, 2017

CVPR A New Tensor Algebra - Tutorial. July 26, 2017 CVPR 2017 A New Tensor Algebra - Tutorial Lior Horesh lhoresh@us.ibm.com Misha Kilmer misha.kilmer@tufts.edu July 26, 2017 Outline Motivation Background and notation New t-product and associated algebraic

More information

Compressed Sensing and Neural Networks

Compressed Sensing and Neural Networks and Jan Vybíral (Charles University & Czech Technical University Prague, Czech Republic) NOMAD Summer Berlin, September 25-29, 2017 1 / 31 Outline Lasso & Introduction Notation Training the network Applications

More information

Sparse filter models for solving permutation indeterminacy in convolutive blind source separation

Sparse filter models for solving permutation indeterminacy in convolutive blind source separation Sparse filter models for solving permutation indeterminacy in convolutive blind source separation Prasad Sudhakar, Rémi Gribonval To cite this version: Prasad Sudhakar, Rémi Gribonval. Sparse filter models

More information

1 Non-negative Matrix Factorization (NMF)

1 Non-negative Matrix Factorization (NMF) 2018-06-21 1 Non-negative Matrix Factorization NMF) In the last lecture, we considered low rank approximations to data matrices. We started with the optimal rank k approximation to A R m n via the SVD,

More information

1 Introduction Blind source separation (BSS) is a fundamental problem which is encountered in a variety of signal processing problems where multiple s

1 Introduction Blind source separation (BSS) is a fundamental problem which is encountered in a variety of signal processing problems where multiple s Blind Separation of Nonstationary Sources in Noisy Mixtures Seungjin CHOI x1 and Andrzej CICHOCKI y x Department of Electrical Engineering Chungbuk National University 48 Kaeshin-dong, Cheongju Chungbuk

More information

NON-NEGATIVE SPARSE CODING

NON-NEGATIVE SPARSE CODING NON-NEGATIVE SPARSE CODING Patrik O. Hoyer Neural Networks Research Centre Helsinki University of Technology P.O. Box 9800, FIN-02015 HUT, Finland patrik.hoyer@hut.fi To appear in: Neural Networks for

More information

COPYRIGHTED MATERIAL. Introduction Problem Statements and Models

COPYRIGHTED MATERIAL. Introduction Problem Statements and Models 1 Introduction Problem Statements and Models Matrix factorization is an important and unifying topic in signal processing and linear algebra, which has found numerous applications in many other areas.

More information

Regularized Multiplicative Algorithms for Nonnegative Matrix Factorization

Regularized Multiplicative Algorithms for Nonnegative Matrix Factorization Regularized Multiplicative Algorithms for Nonnegative Matrix Factorization Christine De Mol (joint work with Loïc Lecharlier) Université Libre de Bruxelles Dept Math. and ECARES MAHI 2013 Workshop Methodological

More information

Environmental Sound Classification in Realistic Situations

Environmental Sound Classification in Realistic Situations Environmental Sound Classification in Realistic Situations K. Haddad, W. Song Brüel & Kjær Sound and Vibration Measurement A/S, Skodsborgvej 307, 2850 Nærum, Denmark. X. Valero La Salle, Universistat Ramon

More information

Constrained Projection Approximation Algorithms for Principal Component Analysis

Constrained Projection Approximation Algorithms for Principal Component Analysis Constrained Projection Approximation Algorithms for Principal Component Analysis Seungjin Choi, Jong-Hoon Ahn, Andrzej Cichocki Department of Computer Science, Pohang University of Science and Technology,

More information

Block Coordinate Descent for Regularized Multi-convex Optimization

Block Coordinate Descent for Regularized Multi-convex Optimization Block Coordinate Descent for Regularized Multi-convex Optimization Yangyang Xu and Wotao Yin CAAM Department, Rice University February 15, 2013 Multi-convex optimization Model definition Applications Outline

More information

ONLINE NONNEGATIVE MATRIX FACTORIZATION BASED ON KERNEL MACHINES. Fei Zhu, Paul Honeine

ONLINE NONNEGATIVE MATRIX FACTORIZATION BASED ON KERNEL MACHINES. Fei Zhu, Paul Honeine ONLINE NONNEGATIVE MATRIX FACTORIZATION BASED ON KERNEL MACINES Fei Zhu, Paul oneine Institut Charles Delaunay (CNRS), Université de Technologie de Troyes, France ABSTRACT Nonnegative matrix factorization

More information

SUPERVISED NON-EUCLIDEAN SPARSE NMF VIA BILEVEL OPTIMIZATION WITH APPLICATIONS TO SPEECH ENHANCEMENT

SUPERVISED NON-EUCLIDEAN SPARSE NMF VIA BILEVEL OPTIMIZATION WITH APPLICATIONS TO SPEECH ENHANCEMENT SUPERVISED NON-EUCLIDEAN SPARSE NMF VIA BILEVEL OPTIMIZATION WITH APPLICATIONS TO SPEECH ENHANCEMENT Pablo Sprechmann, 1 Alex M. Bronstein, 2 and Guillermo Sapiro 1 1 Duke University, USA; 2 Tel Aviv University,

More information

Slice Oriented Tensor Decomposition of EEG Data for Feature Extraction in Space, Frequency and Time Domains

Slice Oriented Tensor Decomposition of EEG Data for Feature Extraction in Space, Frequency and Time Domains Slice Oriented Tensor Decomposition of EEG Data for Feature Extraction in Space, and Domains Qibin Zhao, Cesar F. Caiafa, Andrzej Cichocki, and Liqing Zhang 2 Laboratory for Advanced Brain Signal Processing,

More information

Interaction on spectral data with: Kira Abercromby (NASA-Houston) Related Papers at:

Interaction on spectral data with: Kira Abercromby (NASA-Houston) Related Papers at: Low-Rank Nonnegative Factorizations for Spectral Imaging Applications Bob Plemmons Wake Forest University Collaborators: Christos Boutsidis (U. Patras), Misha Kilmer (Tufts), Peter Zhang, Paul Pauca, (WFU)

More information

Neural Networks Lecture 4: Radial Bases Function Networks

Neural Networks Lecture 4: Radial Bases Function Networks Neural Networks Lecture 4: Radial Bases Function Networks H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011. A. Talebi, Farzaneh Abdollahi

More information

EUSIPCO

EUSIPCO EUSIPCO 013 1569746769 SUBSET PURSUIT FOR ANALYSIS DICTIONARY LEARNING Ye Zhang 1,, Haolong Wang 1, Tenglong Yu 1, Wenwu Wang 1 Department of Electronic and Information Engineering, Nanchang University,

More information

arxiv: v1 [stat.ml] 6 Nov 2018

arxiv: v1 [stat.ml] 6 Nov 2018 A Quasi-Newton algorithm on the orthogonal manifold for NMF with transform learning arxiv:1811.02225v1 [stat.ml] 6 Nov 2018 Pierre Ablin, Dylan Fagot, Herwig Wendt, Alexandre Gramfort, and Cédric Févotte

More information

Factor Analysis (FA) Non-negative Matrix Factorization (NMF) CSE Artificial Intelligence Grad Project Dr. Debasis Mitra

Factor Analysis (FA) Non-negative Matrix Factorization (NMF) CSE Artificial Intelligence Grad Project Dr. Debasis Mitra Factor Analysis (FA) Non-negative Matrix Factorization (NMF) CSE 5290 - Artificial Intelligence Grad Project Dr. Debasis Mitra Group 6 Taher Patanwala Zubin Kadva Factor Analysis (FA) 1. Introduction Factor

More information

Fast Bregman Divergence NMF using Taylor Expansion and Coordinate Descent

Fast Bregman Divergence NMF using Taylor Expansion and Coordinate Descent Fast Bregman Divergence NMF using Taylor Expansion and Coordinate Descent Liangda Li, Guy Lebanon, Haesun Park School of Computational Science and Engineering Georgia Institute of Technology Atlanta, GA

More information

Bayesian Regularization and Nonnegative Deconvolution for Time Delay Estimation

Bayesian Regularization and Nonnegative Deconvolution for Time Delay Estimation University of Pennsylvania ScholarlyCommons Departmental Papers (ESE) Department of Electrical & Systems Engineering December 24 Bayesian Regularization and Nonnegative Deconvolution for Time Delay Estimation

More information

Group Sparse Non-negative Matrix Factorization for Multi-Manifold Learning

Group Sparse Non-negative Matrix Factorization for Multi-Manifold Learning LIU, LU, GU: GROUP SPARSE NMF FOR MULTI-MANIFOLD LEARNING 1 Group Sparse Non-negative Matrix Factorization for Multi-Manifold Learning Xiangyang Liu 1,2 liuxy@sjtu.edu.cn Hongtao Lu 1 htlu@sjtu.edu.cn

More information

Non-negative matrix factorization with fixed row and column sums

Non-negative matrix factorization with fixed row and column sums Available online at www.sciencedirect.com Linear Algebra and its Applications 9 (8) 5 www.elsevier.com/locate/laa Non-negative matrix factorization with fixed row and column sums Ngoc-Diep Ho, Paul Van

More information

ORTHOGONALITY-REGULARIZED MASKED NMF FOR LEARNING ON WEAKLY LABELED AUDIO DATA. Iwona Sobieraj, Lucas Rencker, Mark D. Plumbley

ORTHOGONALITY-REGULARIZED MASKED NMF FOR LEARNING ON WEAKLY LABELED AUDIO DATA. Iwona Sobieraj, Lucas Rencker, Mark D. Plumbley ORTHOGONALITY-REGULARIZED MASKED NMF FOR LEARNING ON WEAKLY LABELED AUDIO DATA Iwona Sobieraj, Lucas Rencker, Mark D. Plumbley University of Surrey Centre for Vision Speech and Signal Processing Guildford,

More information

Analysis of polyphonic audio using source-filter model and non-negative matrix factorization

Analysis of polyphonic audio using source-filter model and non-negative matrix factorization Analysis of polyphonic audio using source-filter model and non-negative matrix factorization Tuomas Virtanen and Anssi Klapuri Tampere University of Technology, Institute of Signal Processing Korkeakoulunkatu

More information

A Coupled Helmholtz Machine for PCA

A Coupled Helmholtz Machine for PCA A Coupled Helmholtz Machine for PCA Seungjin Choi Department of Computer Science Pohang University of Science and Technology San 3 Hyoja-dong, Nam-gu Pohang 79-784, Korea seungjin@postech.ac.kr August

More information

Robust extraction of specific signals with temporal structure

Robust extraction of specific signals with temporal structure Robust extraction of specific signals with temporal structure Zhi-Lin Zhang, Zhang Yi Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science

More information

Matrix Factorization with Applications to Clustering Problems: Formulation, Algorithms and Performance

Matrix Factorization with Applications to Clustering Problems: Formulation, Algorithms and Performance Date: Mar. 3rd, 2017 Matrix Factorization with Applications to Clustering Problems: Formulation, Algorithms and Performance Presenter: Songtao Lu Department of Electrical and Computer Engineering Iowa

More information

Machine Learning for Signal Processing Sparse and Overcomplete Representations

Machine Learning for Signal Processing Sparse and Overcomplete Representations Machine Learning for Signal Processing Sparse and Overcomplete Representations Abelino Jimenez (slides from Bhiksha Raj and Sourish Chaudhuri) Oct 1, 217 1 So far Weights Data Basis Data Independent ICA

More information

When Dictionary Learning Meets Classification

When Dictionary Learning Meets Classification When Dictionary Learning Meets Classification Bufford, Teresa 1 Chen, Yuxin 2 Horning, Mitchell 3 Shee, Liberty 1 Mentor: Professor Yohann Tendero 1 UCLA 2 Dalhousie University 3 Harvey Mudd College August

More information

ON SOME EXTENSIONS OF THE NATURAL GRADIENT ALGORITHM. Brain Science Institute, RIKEN, Wako-shi, Saitama , Japan

ON SOME EXTENSIONS OF THE NATURAL GRADIENT ALGORITHM. Brain Science Institute, RIKEN, Wako-shi, Saitama , Japan ON SOME EXTENSIONS OF THE NATURAL GRADIENT ALGORITHM Pando Georgiev a, Andrzej Cichocki b and Shun-ichi Amari c Brain Science Institute, RIKEN, Wako-shi, Saitama 351-01, Japan a On leave from the Sofia

More information

Sparse Sensing in Colocated MIMO Radar: A Matrix Completion Approach

Sparse Sensing in Colocated MIMO Radar: A Matrix Completion Approach Sparse Sensing in Colocated MIMO Radar: A Matrix Completion Approach Athina P. Petropulu Department of Electrical and Computer Engineering Rutgers, the State University of New Jersey Acknowledgments Shunqiao

More information

Nonnegative matrix factorization (NMF) for determined and underdetermined BSS problems

Nonnegative matrix factorization (NMF) for determined and underdetermined BSS problems Nonnegative matrix factorization (NMF) for determined and underdetermined BSS problems Ivica Kopriva Ruđer Bošković Institute e-mail: ikopriva@irb.hr ikopriva@gmail.com Web: http://www.lair.irb.hr/ikopriva/

More information