
A Brief Guide for TDALAB Ver 1.1
Guoxu Zhou and Andrzej Cichocki
April 30, 2013

Contents

1 Preliminary
  1.1 Highlights of TDALAB
  1.2 Install and Run TDALAB
  1.3 Basic Notations
  1.4 Tucker Decomposition
  1.5 CP Decomposition
2 General Tensor Data Decomposition
3 Multiway Blind Source Separation (MBSS)
  3.1 MBSS Based on Tucker Model
  3.2 Number of Components and Dimensionality Reduction
4 2D Blind Source Separation
5 Algorithms Comparison and Evaluation
  5.1 Simulations By Using Synthetic Data
  5.2 Monte-Carlo Tests
  5.3 Which Algorithm Should I Use?
6 Applications
  6.1 Tensor Discriminant Analysis
  6.2 Clustering Analysis
7 How To Cite TDALAB
8 DISCLAIMER
9 FAQ

1 Preliminary

1.1 Highlights of TDALAB

- It provides a friendly graphical user interface (GUI) for tensor decompositions, through which we can easily select proper decomposition models, algorithms, and parameters; this greatly facilitates practical multiway data analysis tasks.
- It provides a platform to call, compare, and evaluate a large number of state-of-the-art tensor decomposition algorithms, with friendly GUI access to the widely used functions included in the N-way Toolbox [10] and the Tensor Toolbox [11], as well as some of the latest developments in tensor decompositions.
- It allows us to perform constrained tensor decomposition by incorporating standard 2D Penalized Matrix Factorization (PMF) methods in order to impose diversity/constraints on the components (columns of factor matrices), such as orthogonality, statistical independence, sparsity, and nonnegativity. This topic is often referred to as Multilinear Blind Source Separation (MBSS) [4].
- It also allows us to perform 2D Penalized Matrix Factorization (PMF) directly in TDALAB, which is also referred to as 2D Blind Source Separation (BSS). (On this topic you may also turn to the ICALAB and NMFLAB toolboxes, which focus on 2D BSS/ICA/NMF.)
- Several visualization approaches for tensor objects are provided, with which users can explore the components and their connections.
- It provides useful applications of tensor decompositions, namely Tucker discriminant analysis and clustering analysis.

In comparison with other related toolboxes for tensor decompositions, the Tensor Toolbox is basically programmer-oriented, in the sense that it mainly provides a package of fundamental data structures and operations for tensor data. In contrast, TDALAB attempts to provide an easy-to-use, end-user-oriented toolbox for experimental and practical tensor decomposition and analysis tasks.

1.2 Install and Run TDALAB

System Requirements: TDALAB is developed in 64-bit MATLAB 2008a on Windows 7 (64-bit). However, TDALAB requires the support of MATLAB Tensor Toolbox Version 2.5 (tgkolda/tensortoolbox/index-2.5.html), which may in part require a higher version of MATLAB. It also works fine under OS X Mountain Lion.

To run TDALAB, simply type tdalab in the MATLAB command line. The graphical user interface (GUI) of TDALAB will then appear; see Figure 1. At the same time, all subfolders are added to the MATLAB path automatically, so you do not need to set the path for TDALAB manually. (The added paths can also be removed automatically after you quit TDALAB, if you choose to do so.)

Important: As TDALAB is built on the basic tensor operations provided by the Tensor Toolbox, before running TDALAB you MUST install MATLAB Tensor Toolbox Version 2.5 and make sure its path is added to MATLAB.

1.3 Basic Notations

In this document, an $N$th-order tensor is an $N$-way array and is denoted by calligraphic capital letters, e.g., $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$. Matrices (2nd-order tensors) are denoted by boldface capital letters, e.g., $\mathbf{A}$; vectors (1st-order tensors) are denoted by boldface lowercase letters, e.g., the $r$th column of the matrix $\mathbf{A} \in \mathbb{R}^{I \times R}$ is denoted by $\mathbf{a}_r$.

The mode-$n$ product $\mathcal{Y} = \mathcal{G} \times_n \mathbf{A}$ of a tensor $\mathcal{G} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_N}$ and a matrix $\mathbf{A} \in \mathbb{R}^{I \times J_n}$ is a tensor $\mathcal{Y} \in \mathbb{R}^{J_1 \times \cdots \times J_{n-1} \times I \times J_{n+1} \times \cdots \times J_N}$ with elements
$y_{j_1,\ldots,j_{n-1},i,j_{n+1},\ldots,j_N} = \sum_{j_n=1}^{J_n} g_{j_1,j_2,\ldots,j_N}\, a_{i,j_n}.$
The symbol $\otimes$ denotes the Kronecker product, i.e., $\mathbf{A} \otimes \mathbf{B} = [a_{ij}\mathbf{B}]$, and the symbol $\odot$ denotes the Khatri-Rao product, or column-wise Kronecker product, i.e., $\mathbf{A} \odot \mathbf{B} = [\mathbf{a}_1 \otimes \mathbf{b}_1 \ \cdots \ \mathbf{a}_J \otimes \mathbf{b}_J]$. The mode-$n$ unfolding (matricization, flattening) of a tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is denoted by $\mathbf{Y}_{(n)} \in \mathbb{R}^{I_n \times \prod_{p \neq n} I_p}$ and consists of arranging all possible mode-$n$ tubes (vectors) as the columns of a matrix [12]. Readers are referred to [1], [12] for more details about the notations and tensor operations.
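The notation above maps directly onto Tensor Toolbox operations. The following minimal sketch (assuming Tensor Toolbox 2.5 is on the MATLAB path; the sizes are arbitrary) illustrates the mode-n product, the two matrix products, and mode-n unfolding:

    % Minimal sketch of the notation, assuming Tensor Toolbox 2.5 is on the path.
    G  = tenrand([3 4 5]);      % 3rd-order tensor G in R^{3x4x5}
    A  = rand(6, 4);            % matrix A in R^{6x4}, so J_2 = 4
    Y  = ttm(G, A, 2);          % mode-2 product Y = G x_2 A, size 3x6x5
    Y2 = tenmat(Y, 2);          % mode-2 unfolding Y_(2), size 6x15
    B  = rand(6, 4);
    K1 = kron(A, B);            % Kronecker product, size 36x16
    K2 = khatrirao(A, B);       % Khatri-Rao (column-wise Kronecker), size 36x4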

1.4 Tucker Decomposition

In the Tucker decomposition, a given tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is decomposed as
$\mathcal{Y} \approx \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)},$ (1)
where $\mathcal{G} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_N}$ is the core tensor and the $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times J_n}$ $(n = 1, 2, \ldots, N)$ are called factor (mode, loading, component) matrices. Tucker decompositions are not unique if we do not impose any constraints on the factors (components). Consider the mode-$n$ matricization of (1):
$\mathbf{Y}_{(n)} = \mathbf{A}^{(n)} \mathbf{G}_{(n)} \mathbf{B}^{(n)T},$ (2)
where
$\mathbf{B}^{(n)} = \mathbf{A}^{(N)} \otimes \mathbf{A}^{(N-1)} \otimes \cdots \otimes \mathbf{A}^{(n+1)} \otimes \mathbf{A}^{(n-1)} \otimes \cdots \otimes \mathbf{A}^{(1)}.$ (3)
Matricization is widely used in the development of Tucker decomposition algorithms. In the Tensor Toolbox, a tensor $\mathcal{Y}$ in the Tucker model (1) is stored as a ttensor structure: Y = ttensor(G, A), where A = {A^(1), A^(2), ..., A^(N)} is a cell array of the factor matrices. This allows us to use Y.core and Y.U to access the core tensor and the factor matrices of a ttensor Y, respectively.

1.5 CP Decomposition

In the CP decomposition, a given tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is decomposed as
$\mathcal{Y} \approx \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)} = \sum_{j=1}^{J} \lambda_j\, \mathbf{a}^{(1)}_j \circ \mathbf{a}^{(2)}_j \circ \cdots \circ \mathbf{a}^{(N)}_j.$ (4)
Compared with (1), in the CP model the number of components in each factor matrix $\mathbf{A}^{(n)}$ $(n = 1, 2, \ldots, N)$ is the same, i.e., $J_1 = J_2 = \cdots = J_N = J$, and the core tensor $\mathcal{G}$ is super-diagonal, i.e., $g_{j_1 j_2 \cdots j_N} \neq 0$ if and only if $j_1 = j_2 = \cdots = j_N = j$. We define $\lambda_j = g_{j,j,\ldots,j}$. The mode-$n$ matricization plays the central role in CPD algorithms:
$\mathbf{Y}_{(n)} = \mathbf{A}^{(n)} \mathbf{B}^{(n)T} + \mathbf{E}_{(n)},$ (5)
where $\mathbf{B}^{(n)}$ is defined by
$\mathbf{B}^{(n)} = \bigodot_{p \neq n} \mathbf{A}^{(p)} \in \mathbb{R}^{\prod_{p \neq n} I_p \times J}.$ (6)
Basically, $\mathbf{A}^{(n)}$ can be solved for by alternating least squares based on (5). In the Tensor Toolbox, a tensor $\mathcal{Y}$ in the CP model (4) is stored as a ktensor structure: Y = ktensor(lambda, A), where A = {A^(1), A^(2), ..., A^(N)} is a cell array of the factor matrices and lambda is a column vector consisting of the diagonal elements of $\mathcal{G}$. We use Y.lambda and Y.U to access the vector $\boldsymbol{\lambda}$ and the component matrices of a ktensor Y in MATLAB.
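As a quick illustration of the two data structures, and of one alternating least squares step based on (5)-(6), consider the following sketch. It assumes Tensor Toolbox 2.5 and a 3-way tensor; the ALS line is a bare least-squares update, not TDALAB's actual implementation:

    % Build a ttensor and a ktensor and access their fields.
    G  = tenrand([2 2 2]);
    A  = {rand(10,2), rand(12,2), rand(14,2)};
    Yt = ttensor(G, A);                 % Tucker model (1)
    core = Yt.core;  U1 = Yt.U{1};      % access core tensor and factors
    Yk = ktensor([1; 2], A);            % CP model (4) with lambda = [1; 2]
    lam = Yk.lambda; V1 = Yk.U{1};      % access lambda and factors
    % One least-squares ALS update of A^(1) based on (5)-(6):
    Y  = full(Yk);                      % a data tensor following the CP model
    B1 = khatrirao(A{3}, A{2});         % B^(1) = A^(3) kr A^(2)
    A1 = double(tenmat(Y, 1)) * B1 * pinv(B1' * B1);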

2 General Tensor Data Decomposition

For simplicity, hereafter we use "label" to denote a control item of the GUI (it can be, for example, a menu item, button, or checkbox).

Purpose: Decompose a tensor for further data analysis (visualization, clustering analysis, discriminant analysis, etc.).

Basic Steps:

1. Run tdalab from the MATLAB command line. We will see the GUI of TDALAB shown in Figure 1.
2. Load data. Access the menu File -> Load to load a -mat file which contains the tensor to be decomposed. The data can be a multi-way array or a tensor. As an illustrative example, we load the benchmark claus.mat.
3. Select data for decomposition. (If the opened file contains only one valid tensor, that data is loaded automatically. A valid tensor can be of any of the types double, tensor, ttensor, and ktensor.) Here X is automatically loaded and saved to the tensor Y for decomposition.
4. Select a suitable decomposition model. Currently TDALAB supports CP, Tucker, and Block Component Decomposition (BCD) (only limited support so far). Here CP is selected.
5. Select a CPD algorithm. Invalid algorithms are hidden automatically (check the box "Show all" to show all the algorithms included in the toolbox). Here we select the SWATLD algorithm [13].
6. Click "Advanced Options" to set parameters for the selected algorithm. Here we set the parameter NumOfComp, i.e., the rank of the output ktensor, to 3. We can check the box "Help>>" to see the help information of the currently selected algorithm. Finally, click "OK" to finish setting the parameters. See Figure 2.
7. Click "Run Now!" to run the selected algorithm with the specified parameters. In this step the main window of TDALAB becomes invisible and the running information is displayed in the MATLAB command window until the decomposition is finished. Whenever an error occurs and the main window of TDALAB does not reappear, type tdalab in the command line and press the Enter key to recover the main window.
8. Visualize the results. See Figure 3 for some examples.
9. Save the results. Click "Save" or access the menu File -> Save Results to save the decomposed tensor (only the output tensor will be saved), or access the menu File -> Save Workspace to save the workspace (including all information such as the algorithm used, parameters, and data set). A saved workspace can be loaded again later from File -> Load Workspace.

Similarly, we can perform Tucker decomposition, nonnegative CP/Tucker decomposition (by checking the box "Nonnegative"), and Block Component Decomposition. The same decompositions can also be run without the GUI; see the sketch below.
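For reference, the GUI-free equivalent of the steps above (see also FAQ (4)) can look as follows. This sketch uses the Tensor Toolbox routine cp_als in place of SWATLD, since TDALAB's internal algorithm names and option fields may differ; only the variable name X inside claus.mat is taken from the guide:

    % GUI-free sketch: rank-3 CP decomposition of the claus benchmark.
    data = load('claus.mat');        % benchmark file shipped with TDALAB
    Y    = tensor(data.X);           % wrap the raw array as a tensor object
    Yhat = cp_als(Y, 3);             % rank-3 CPD; returns a ktensor
    lam  = Yhat.lambda;              % scaling vector lambda
    A1   = Yhat.U{1};                % mode-1 factor matrix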

Figure 1: GUI of TDALAB.

Figure 2: Interface for setting the parameters of the selected algorithm.

Figure 3: Visualization of the results. (a)-(c) show different visualizations of the components in mode 2. (d) Visualization of the CPD result as the sum of rank-1 terms: $\mathcal{Y} = \sum_{j=1}^{3} \mathbf{a}^{(1)}_j \circ \mathbf{a}^{(2)}_j \circ \mathbf{a}^{(3)}_j$.

3 Multiway Blind Source Separation (MBSS)

3.1 MBSS Based on Tucker Model

Here we show how to find a unique and physically meaningful Tucker representation of a given data tensor by incorporating proper a priori information.

We have two ways to perform MBSS:

1. Run an unconstrained Tucker decomposition first by applying, e.g., the HOSVD [14], HOOI [15], or ALS algorithms [10], and then use Penalized/constrained Matrix Factorization (PMF) methods to refine the factors. Let
$\mathcal{Y} = \mathcal{G} \times_1 \mathbf{A}^{(1)} \times_2 \mathbf{A}^{(2)} \cdots \times_N \mathbf{A}^{(N)}$ (7)
be an unconstrained Tucker decomposition and $\Psi_n$ be a proper PMF algorithm. We run a specific 2D BSS algorithm $\Psi_n$ on $\mathbf{A}^{(n)}$ to extract the desired latent components $\mathbf{S}^{(n)}$:
$\mathbf{S}^{(n)} = \Psi_n(\mathbf{A}^{(n)}), \quad n = 1, 2, \ldots, N.$ (8)
The final output is
$\mathcal{Y} = \mathcal{M} \times_1 \mathbf{S}^{(1)} \times_2 \mathbf{S}^{(2)} \cdots \times_N \mathbf{S}^{(N)},$ (9)
where the columns of $\mathbf{S}^{(n)}$ consist of the components with the desired properties and diversities. In TDALAB we click "Penalized Matrix Factorization" to select PMF algorithms for each factor matrix $\mathbf{A}^{(n)}$. The major limitation of this approach is that the first step (7) may destroy the physical meaning of the data, for example nonnegativity, which hampers the subsequent data analysis. (A sketch of this two-step procedure is given after this list.)

2. Run MBSS by using the MBSS algorithm. From Eq. (9) and by matrix unfolding we know that
$\mathbf{Y}_{(n)} = \mathbf{S}^{(n)} \mathbf{M}^{(n)}, \quad n = 1, 2, \ldots, N,$ (10)
where $\mathbf{M}^{(n)} = \mathbf{M}_{(n)}\big(\mathbf{S}^{(N)} \otimes \cdots \otimes \mathbf{S}^{(n+1)} \otimes \mathbf{S}^{(n-1)} \otimes \cdots \otimes \mathbf{S}^{(1)}\big)^{T}$ plays the role of a mixing matrix. This motivates us to run the PMF algorithm $\Psi_n$ directly on the mode-$n$ unfolding matrices $\mathbf{Y}_{(n)}$. This approach is quite flexible and generally more efficient than the first one [4]. In TDALAB, we select the MBSS TuckerALS (MBSS) algorithm to perform MBSS in this way.

As an example, we access the menu File -> Load, load the benchmark ssvep2.mat, and select the Tucker model. Then we select the MBSS TuckerALS (MBSS) algorithm. Click "Advanced Options" to set the parameters for MBSS; see Figure 4. In Figure 4(a) we click PMFalgIDs and the GUI of Figure 4(b) appears, from which we can select the PMF algorithms for each mode and their parameters as well. We used the DNNMF algorithm [16] to extract the components in modes 1-3 and lraNMF [5] in mode 4. More details about this data and experiment can be found in [4].
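The two-step procedure (7)-(8) of the first approach can be sketched as follows. Here tucker_als comes from the Tensor Toolbox, while pmf_refine is a hypothetical placeholder standing for whatever 2D PMF/BSS routine $\Psi_n$ you select (ICA, NMF, etc.):

    % Way 1 of MBSS: unconstrained Tucker first, then refine each factor.
    Y = tensor(randn(30, 40, 50));          % toy data tensor
    T = tucker_als(Y, [3 3 3]);             % unconstrained Tucker (7): ttensor
    N = 3;  S = cell(1, N);
    for n = 1:N
        S{n} = pmf_refine(T.U{n});          % (8): hypothetical PMF routine Psi_n
    end
    % The constrained model (9) is Y = M x_1 S{1} x_2 S{2} x_3 S{3}; how M is
    % obtained from the core and the refinement transforms is algorithm-specific.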

Figure 4: Graphical user interface for setting the parameters of MBSS.

3.2 Number of Components and Dimensionality Reduction

In practice, how to detect the number of components, i.e., the dimensions of the core tensor $\mathcal{G}$, is quite critical, and many approaches have been proposed for this important topic. In TDALAB we use the Second ORder statistic of the Eigenvalues (SORTE) method [8] to initially detect the number of components in each mode. The SORTE method detects the GAP in the eigenvalues of $\mathbf{Y}_{(n)}\mathbf{Y}_{(n)}^{T}$ between the significant eigenvalues corresponding to the signal subspace and the trivial ones corresponding to noise. In TDALAB we click NumOfComp and then Figure 6 appears, where the eigenvalues and GAPs are plotted for further analysis of the number of components. The main limitations of SORTE are: 1) it seems to work only for Gaussian noise; 2) its performance is not satisfactory under very heavy noise.
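The full SORTE statistic is given in [8]; the sketch below only illustrates the underlying idea of inspecting the eigenvalue gaps of $\mathbf{Y}_{(n)}\mathbf{Y}_{(n)}^{T}$, taking the largest gap as a crude estimate (SORTE itself is built on second-order statistics of these gaps):

    % Crude eigenvalue-gap inspection behind SORTE for a given mode n.
    Yn   = double(tenmat(Y, n));            % mode-n unfolding Y_(n)
    ev   = sort(eig(Yn * Yn'), 'descend');  % eigenvalues of Y_(n) Y_(n)^T
    gaps = ev(1:end-1) - ev(2:end);         % gaps between adjacent eigenvalues
    [gmax, Jn] = max(gaps);                 % Jn: rough number of components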

Another practical issue of MBSS is dimensionality reduction. From (10), the number of mixtures is generally significantly larger than the number of sources $J_n$, so it is quite critical to perform dimensionality reduction on $\mathbf{Y}_{(n)}$ to improve efficiency. In this regard we may apply, for example, the Fiber Sampling Tensor Decomposition (FSTD) methods [9] or PCA to $\mathbf{Y}_{(n)}$. In TDALAB you can run the FSTD methods independently or run the MBSS algorithms, in which these functions have been integrated. In [4], random fiber sampling was proposed.

Figure 5: Four-way EEG spectrum tensor (frequency x time x channel x trial) factorization by MBSS. The example includes 12 trials recorded from eight channels (P7, P3, Pz, P4, P8, O1, Oz, and O2) during 6.5 Hz and 10.5 Hz flickering visual stimulation (6 trials each). Frequency components between 5 Hz and 50 Hz with 0.5 Hz resolution (i.e., 91 frequency bins) were analyzed, and the time length was 4 s (i.e., 1000 sample points). Each trial is represented by a 3-way tensor. See [4] for more details.

Figure 6: Illustration of how to estimate the number of components (dimensions of the core tensor) using the SORTE method [8].

4 2D Blind Source Separation

In TDALAB we can also perform 2D Blind Source Separation (BSS). In BSS we generally have
$\mathbf{X} = \mathbf{S}\mathbf{A},$ (11)
where the columns of $\mathbf{S} \in \mathbb{R}^{T \times R}$ are the latent signals, $\mathbf{X} \in \mathbb{R}^{T \times M}$ contains the observations, and $\mathbf{A} \in \mathbb{R}^{R \times M}$ is the mixing matrix, with $R \leq M$. Generally only $\mathbf{S}$ is of interest.

In TDALAB we select the PMF model (i.e., penalized matrix factorization) to perform blind source separation (including independent component analysis, nonnegative matrix factorization, etc.). This is the easiest way to perform BSS. Let $\mathbf{A} = \mathbf{G}\mathbf{B}^{T}$, where $\mathbf{B} \in \mathbb{R}^{M \times R}$ and $\mathbf{G} \in \mathbb{R}^{R \times R}$ is any invertible matrix. Consequently, Eq. (11) can be rewritten as
$\mathbf{X} = \mathbf{G} \times_1 \mathbf{S} \times_2 \mathbf{B},$ (12)
where $\mathbf{X}$ and $\mathbf{G}$ are viewed as 2D tensors. Consequently, we can use MBSS to retrieve the desired factor $\mathbf{S}$, and $\mathbf{G} \times_2 \mathbf{B}$ serves as the ordinary mixing matrix of BSS.

Another way is to use the method named CP with One Single Mode BSS (CP-SMBSS). In general, the CP-SMBSS method imposes constraints (by performing BSS) on one pre-selected mode, and the remaining factors are extracted by Khatri-Rao product structure projection (approximation); see [6]. Due to the unavoidable scale ambiguity of BSS, Eq. (11) can be rewritten as
$\mathbf{X} = \mathbf{S}\boldsymbol{\Lambda}\tilde{\mathbf{A}} = \boldsymbol{\Lambda} \times_1 \mathbf{S} \times_2 \tilde{\mathbf{A}}^{T},$ (13)
where $\boldsymbol{\Lambda} = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_R)$ reflects the scale ambiguity. We apply BSS to the mode-1 unfolding matrix, and hence $\mathbf{S}$ is estimated.

As an example of 2D BSS, we load the benchmark mat_sin10d.mat, where the mixtures are saved in the variable x. We select PMF as the TD model and select the SOBI method [17] to perform BSS. (A synthetic instance of the model (11) is sketched below.)
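To make the model (11) concrete, the following sketch generates $T = 1000$ samples of $R = 3$ toy sources and mixes them into $M = 8$ observations; the separation itself is left to any PMF/ICA algorithm (e.g., SOBI in TDALAB), and the source signals here are arbitrary:

    % Synthetic instance of the 2D BSS model (11): X = S*A.
    T = 1000; R = 3; M = 8;
    t = (1:T)' / 100;
    S = [sin(2*t), cos(5*t), sin(9*t + 1)];   % latent sources, T x R
    A = randn(R, M);                          % mixing matrix, R x M
    X = S * A;                                % observations, T x M
    % A BSS/PMF algorithm recovers S from X only up to the usual scale
    % (see (13)) and permutation ambiguities.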

Figure 7: Blind source separation in TDALAB using the benchmark mat_sin10d.mat.

5 Algorithms Comparison and Evaluation

5.1 Simulations By Using Synthetic Data

Before applying an algorithm to real data, we often test it on synthetic data to see whether it is able to give the desired representation of the data. For this purpose, we can generate a data tensor using the Tucker or CP model and then decompose it using proper algorithms to observe and compare their efficiency, accuracy, and robustness to noise. The following is a simple illustrative example.

1. Load the benchmark kt_ica3b_2bottlenecks.mat. For this data the components in modes 1 and 2 are highly correlated. Details about this data set can be found in [6].
2. Select the distribution of lambda to generate a new ktensor.
3. Check the box "Full tensor" to generate a full tensor, i.e., $\mathcal{Y}$.
4. Add noise with the selected noise type, SNR, and sparseness of noise.
5. Set the TD model to CP.
6. In the GUI of TDALAB, check the box "Show all" and select the SOBI algorithm. Then click "Advanced Options" to select the parameters for the SOBI algorithm. Finally, click "Save" to save the settings of the SOBI algorithm and close the algorithm options window.
7. Uncheck the box "Show all". Select the CP-SMBSS algorithm and set the parameters: NumOfComp=10, BSSmode=3. Click the field PMFalgFile; a browse window will appear. Load the mat file saved in the previous step, so that the SOBI algorithm will be used to perform BSS.
8. Click "Run Now!" to run the algorithm.
9. Visualize and save the results. The SIR values will be calculated to evaluate how well the extracted components match the original components (see the sketch below). See Figure 8 for the visualization results.
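TDALAB reports performance as SIR values. A common definition, used here as a hedged sketch after resolving the scale and sign ambiguities (TDALAB's exact implementation may differ), is SIR = 10 log10(||s||^2 / ||s - s_hat||^2) for each normalized component:

    % SIR (in dB) between a true component s and its estimate shat;
    % e.g., saved as sir_db.m.
    function sir = sir_db(s, shat)
        s    = s / norm(s);                  % fix the scale ambiguity
        shat = shat / norm(shat);
        shat = sign(s' * shat) * shat;       % fix the sign ambiguity
        sir  = 10 * log10(1 / norm(s - shat)^2);
    end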

Figure 8: Visualization of SIRs using synthetic data.

5.2 Monte-Carlo Tests

It is possible to perform Monte-Carlo tests in TDALAB. Click "Settings..." in the "Algorithm analysis" area; the window of Figure 9 will appear. Select an algorithm from the left list box and click ">>" to add it to the list of algorithms for comparison. Then set the "Repeated times" and click "OK". At this point you will be asked to set the parameters for each algorithm. Finally, click "Compare" in the main window of TDALAB to perform the Monte-Carlo tests. (If you accidentally click "Run Now!" instead, an ordinary tensor decomposition will be performed using the first selected algorithm.) Both a text report and a visualization of the results are available. Figure 9(b) is an example of the visualization of SIRs in a Monte-Carlo run.

Figure 9: GUI for configuring the Monte-Carlo tests.

5.3 Which Algorithm Should I Use?

As each algorithm has its own assumptions, bias, limitations, advantages, and disadvantages, it is important to compare algorithms and select the proper one for your own purposes. Here we briefly introduce several algorithms developed by the authors.

- The MRCPD algorithm (the extended version of the N3CP method [3]): a fast algorithm that performs CPD of very-high-order tensors. It converts a higher-order tensor into a 3-way tensor; the CPD is then realized by applying any pre-specified 3-way CPD algorithm to the 3-way tensor, followed by a Khatri-Rao product projection procedure [2]. This algorithm significantly reduces the unfolding operations and ALS iterations and hence often enjoys fast convergence, especially for large-scale, high-dimensional data. The key point is to select a proper way of 3-way unfolding, named mode reduction in the paper.
- The HALS CPD algorithm: performs CPD using HALS iterations, where low-rank approximation is incorporated to reduce the computational complexity and where we can easily impose nonnegativity or sparseness on the components. See [5, 7] for details.
- The CPD with Single Mode BSS (CP-SMBSS) algorithm: performs CPD by first running a BSS algorithm on one mode to estimate the corresponding factor matrix, followed by a Khatri-Rao product projection procedure. This result is published in IEEE Signal Processing Letters [6].

- The Fast Nonnegative CPD algorithm based on the accelerated proximal gradient (FastNCP_APG): performs fast nonnegative CPD by incorporating low-rank approximation techniques and the accelerated proximal gradient. You may also find helpful information in [5].
- The lraSNTD/lraNMF algorithms: perform fast nonnegative Tucker/matrix decomposition based on low-rank approximation techniques. See [5] for details.
- The FSTD algorithms (FSTD1 and FSTD2): perform Tucker decomposition based on fiber sampling, which can be viewed as the extension of the matrix CUR algorithm to the tensor case [9].

6 Applications

In this section we show two important applications of tensor decompositions using TDALAB.

6.1 Tensor Discriminant Analysis

Tensor discriminant analysis is a powerful tool for multi-way data discriminant analysis. For multi-way data, the traditional approach is to vectorize the samples and then use ordinary 2D discriminant analysis methods, such as linear discriminant analysis (LDA), k-nearest neighbors (KNN), or support vector machines (SVM). However, this approach often causes overfitting when the dimension of the features is higher than the number of samples. Tensor discriminant analysis is a promising way to overcome this problem; see [18, 19] for details. In TDALAB we take the following steps to perform tensor discriminant analysis:

1. Click Applications -> Tucker discriminant analysis. The GUI of Figure 10 will appear.
2. Click "Load data" to load a -mat file for discriminant analysis. Here we load the benchmark EEG_classify.mat. Note that this file contains four variables (a sketch of this layout follows the steps below):
   - sample: an N-way array consisting of the samples;

   - training: an N-way array consisting of the training data. Note that sample and training should have the same dimensionality except for their last dimension, which indicates the number of samples and of training data, respectively;
   - group: a vector whose distinct values define the grouping of the training data. The number of entries of group should equal the last dimension of training;
   - label: a vector (if available) whose distinct values define the grouping of the samples. In practice this is unknown and is to be estimated.
   A valid -mat file for discriminant analysis must contain the first three variables: sample, training, and group. label is optional, since for practical data it is to be estimated; once it is provided, we are able to evaluate the classification performance.
3. Set the parameters, where "Dim. of Features" denotes the reduced dimensionality of the samples under the Tucker model. If sample is an N-way array, it is a vector of size (N-1) x 1. Here we set it to [2, 2]. We can also select a classifier from KNN (if available in your MATLAB version) and LDA.
4. Click "Run" to perform discriminant analysis. See the command window for detailed results.
5. Click "Save results" to save the results. Besides the estimated labels of the samples, the features of the samples and of the training data extracted by tensor discriminant analysis are also saved, in the variables sample_fea and training_fea, respectively. You can load them and use your own classifiers to perform classification.
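The expected -mat layout can be summarized by the following sketch; all sizes and the file name are made up for illustration, and only the four variable names come from the guide:

    % Hypothetical data file for Tucker discriminant analysis.
    training = randn(16, 16, 40);          % 40 training samples of size 16x16
    group    = repmat([1; 2], 20, 1);      % class of each training sample
    sample   = randn(16, 16, 10);          % 10 test samples, same leading dims
    label    = [ones(5,1); 2*ones(5,1)];   % optional ground truth
    save('my_tda_data.mat', 'sample', 'training', 'group', 'label');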

Figure 10: GUI for Tucker discriminant analysis.

6.2 Clustering Analysis

TDALAB allows us to perform simple clustering analysis using the K-means method included in MATLAB. To do this, we run a tensor decomposition first (otherwise, the matricization of the original high-order tensor will be used for clustering analysis instead). Then we access the menu Applications -> Clustering analysis and the window of Figure 11 (left) will appear. We take the following steps:

1. Select a mode. If mode $n$ is selected, the factor $\mathbf{F} = \mathbf{U}^{(n)}$ will be used as the features for clustering (if the original tensor has not been decomposed yet, $\mathbf{F} = \mathbf{Y}_{(n)}$ is used instead).
2. Check the box "Decomposed data" if you want to use the decomposed data for clustering; otherwise the original data will be used. It is enabled only if the tensor has already been decomposed.
3. Check the box "with core" if the core tensor should be incorporated into the features (valid only if the data has been decomposed). If it is checked, the features $\mathbf{F}$ are computed as follows: for the CP model, $\mathbf{F} = \mathbf{U}^{(n)}\boldsymbol{\Lambda}$; for the Tucker model, $\mathbf{F}$ is computed from
$\mathcal{G}' = \mathcal{G} \times_n \mathbf{U}^{(n)}, \quad \mathbf{F} = \mathbf{G}'_{(n)}.$ (14)
4. Check the box "Dimensionality reduction" to apply dimensionality reduction to the features $\mathbf{F}$. If it is checked, you will be asked to select the dimensionality reduction method (currently t-SNE, PCA, ISOMAP, and LLE are supported; these functions come from the Matlab Toolbox for Dimensionality Reduction) and the dimension of the reduced features.

5. Click "Load true labels" to load the true labels from a -mat file. This step is optional and is used to evaluate the clustering results, provided the true labels are available. The -mat file should contain a variable named label indicating the true class of each row of F.
6. Click "KMeans configuration..." to configure the K-means method.
7. Click "Run" to run the K-means method. If the dimension of the features (i.e., the number of columns of F) is 2 or 3, the features will be visualized together with the label information, where each color corresponds to one class and 'o' denotes the true classes if true labels are available. See Figure 11 (right) for an example.
8. Click "Save" to save the results (including the estimated labels est_label and the features used for clustering, Feas).

Figure 11: Left: GUI for tensor-based clustering analysis. Right: Visualization of the features, provided the dimension of the features is 2 or 3.
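The feature construction in (14) can also be reproduced outside the GUI. The following minimal sketch assumes a Tucker decomposition already stored in a ttensor T and uses MATLAB's kmeans (Statistics Toolbox); the mode and cluster count are arbitrary:

    % Features per (14) for a decomposed ttensor T, clustered with K-means.
    n   = 3;                                % selected mode
    Gp  = ttm(T.core, T.U{n}, n);           % G' = G x_n U^(n)
    F   = double(tenmat(Gp, n));            % F = G'_(n); one row per entity
    idx = kmeans(F, 2);                     % estimated labels, 2 clusters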

7 How To Cite TDALAB

Guoxu Zhou and Andrzej Cichocki, "Matlab Toolbox for Tensor Decomposition & Analysis Ver. 1.1." [Online]. Available: jp/tdalab. Also visit our website to find the full publication list related to this topic.

8 DISCLAIMER

NEITHER THE AUTHORS NOR THEIR EMPLOYERS ACCEPT ANY RESPONSIBILITY OR LIABILITY FOR LOSS OR DAMAGE OCCASIONED TO ANY PERSON OR PROPERTY THROUGH USING SOFTWARE, MATERIALS, INSTRUCTIONS, METHODS OR IDEAS CONTAINED HEREIN, OR ACTING OR REFRAINING FROM ACTING AS A RESULT OF SUCH USE. THE AUTHORS EXPRESSLY DISCLAIM ALL IMPLIED WARRANTIES, INCLUDING MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE. THERE WILL BE NO DUTY ON THE AUTHORS TO CORRECT ANY ERRORS OR DEFECTS IN THE SOFTWARE. THIS SOFTWARE AND THE DOCUMENTATION ARE THE PROPERTY OF THE AUTHORS AND SHOULD ONLY BE USED FOR SCIENTIFIC AND EDUCATIONAL PURPOSES. ALL SOFTWARE IS PROVIDED FREE AND IT IS NOT SUPPORTED. THE AUTHORS ARE, HOWEVER, HAPPY TO RECEIVE COMMENTS, CRITICISM AND SUGGESTIONS ADDRESSED TO zhouguoxu@brain.riken.jp.

9 FAQ

(1) Q: My main window of TDALAB disappeared. Where is it?
A: Due to some error, the main window of TDALAB may fail to appear. Whenever you want to call back the main window of TDALAB, simply run tdalab from the MATLAB command window. In fact, you may use tdalab('hide') to hide the GUI and then use tdalab('show') to show it again.

(2) Q: I received the error message "Error: ... Unbalanced or unexpected parenthesis or bracket."
A: Mostly this is caused by calling a function that uses the syntax [~,b]=func(...); this feature is not supported by your current MATLAB version. Please replace ~ with any unused variable name.

(3) Q: May I add new algorithms to TDALAB for tensor decompositions?
A: Sure. It is relatively easy to add your own algorithms. Please define your algorithm in the form Ydec=algName(Y,opts), where Y can be of any of the types double/tensor/ktensor/ttensor. Then register your algorithm in the file algsinitialization.m. A simpler way is perhaps to send us your algorithms and we will help add them to the toolbox. (A skeleton is sketched below.)

(4) Q: Is it possible to call the tensor decomposition algorithms included in TDALAB without the GUI?
A: Yes. You can call most algorithms like this: Ycap=TDAlgName(Y,opts); where opts is a structure that contains the parameters required by the algorithm TDAlgName. It may be more convenient to save the parameters via "Advanced Options" in TDALAB in advance and then load the file in your own function.
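A hedged skeleton of such a user-defined algorithm is shown below; it delegates to cp_als for concreteness, and the option field NumOfComp mirrors the GUI parameter of the same name, but the fields TDALAB actually passes may differ:

    % Skeleton of a user-defined decomposition algorithm for TDALAB.
    function Ydec = myCPD(Y, opts)
        if ~isa(Y, 'tensor'), Y = tensor(Y); end   % accept double/tensor input
        R = opts.NumOfComp;                        % rank set via Advanced Options
        Ydec = cp_als(Y, R);                       % delegate to any CPD routine
    end
    % Register 'myCPD' in algsinitialization.m to expose it in the GUI.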

Acknowledgement

Some code is used with the permission of its original authors. Some code was included because its authors declared that users are free to use, modify, or redistribute it for non-commercial purposes under open source licenses. We would like to thank all the authors for their contributions. The copyright of all algorithms belongs to their original authors; please access "Advanced Options" and check "Help>>" to see their copyright declarations and references. We would also like to thank Dr. Peter Jurica, who created and maintains the web pages supporting this toolbox.

Selected Publications From Our Laboratory

[1] A. Cichocki, R. Zdunek, A.-H. Phan, and S. Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation. Chichester: Wiley, 2009.

[2] G. Zhou, A. Cichocki, and S. Xie, "Accelerated canonical polyadic decomposition by using mode reduction," arXiv preprint.

[3] G. Zhou, Z. He, Y. Zhang, Q. Zhao, and A. Cichocki, "Canonical polyadic decomposition: From 3-way to N-way," in Eighth International Conference on Computational Intelligence and Security (CIS 2012), Nov. 2012.

[4] G. Zhou and A. Cichocki, "Fast and unique Tucker decompositions via multiway blind source separation," Bulletin of the Polish Academy of Sciences - Technical Sciences, vol. 60, no. 3, 2012.

[5] G. Zhou, A. Cichocki, and S. Xie, "Fast nonnegative matrix/tensor factorization based on low-rank approximation," IEEE Transactions on Signal Processing, vol. 60, no. 6, June 2012.

[6] G. Zhou and A. Cichocki, "Canonical polyadic decomposition based on a single mode blind source separation," IEEE Signal Processing Letters, vol. 19, no. 8, Aug. 2012.

[7] A. Cichocki and A.-H. Phan, "Fast local algorithms for large scale nonnegative matrix and tensor factorizations," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences (invited paper), vol. E92-A, no. 3, 2009.

[8] Z. He, A. Cichocki, S. Xie, and K. Choi, "Detecting the number of clusters in n-way probabilistic clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 11, 2010.

[9] C. F. Caiafa and A. Cichocki, "Generalizing the column-row matrix decomposition to multi-way arrays," Linear Algebra and its Applications, vol. 433, no. 3, 2010.

Other References

[10] C. A. Andersson and R. Bro, "The N-way toolbox for MATLAB." [Online]. Available: nwaytoolbox/

[11] B. W. Bader and T. G. Kolda, "MATLAB Tensor Toolbox version 2.5," Feb. 2012. [Online]. Available: tgkolda/TensorToolbox/

[12] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455-500, 2009.

[13] Z.-P. Chen, H.-L. Wu, and R.-Q. Yu, "On the self-weighted alternating trilinear decomposition algorithm - the property of being insensitive to excess factors used in calculation," Journal of Chemometrics, vol. 15, no. 5, 2001.

[14] L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253-1278, 2000.

[15] L. De Lathauwer, B. De Moor, and J. Vandewalle, "On the best rank-1 and rank-(R1,R2,...,RN) approximation of higher-order tensors," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1324-1342, 2000.

[16] R. Zdunek, A.-H. Phan, and A. Cichocki, "Damped Newton iterations for nonnegative matrix factorization," Australian Journal of Intelligent Information Processing Systems, vol. 12, no. 1, 2010.

[17] A. Belouchrani, K. Abed-Meraim, J.-F. Cardoso, and E. Moulines, "A blind source separation technique using second-order statistics," IEEE Transactions on Signal Processing, vol. 45, no. 2, pp. 434-444, Feb. 1997.

[18] S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.-J. Zhang, "Multilinear discriminant analysis for face recognition," IEEE Transactions on Image Processing, vol. 16, no. 1, Jan. 2007.

[19] D. Tao, X. Li, X. Wu, and S. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, Oct. 2007.


More information

Kernel Methods. Machine Learning A W VO

Kernel Methods. Machine Learning A W VO Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance

More information

Fast Nonnegative Tensor Factorization with an Active-Set-Like Method

Fast Nonnegative Tensor Factorization with an Active-Set-Like Method Fast Nonnegative Tensor Factorization with an Active-Set-Like Method Jingu Kim and Haesun Park Abstract We introduce an efficient algorithm for computing a low-rank nonnegative CANDECOMP/PARAFAC(NNCP)

More information

CityGML XFM Application Template Documentation. Bentley Map V8i (SELECTseries 2)

CityGML XFM Application Template Documentation. Bentley Map V8i (SELECTseries 2) CityGML XFM Application Template Documentation Bentley Map V8i (SELECTseries 2) Table of Contents Introduction to CityGML 1 CityGML XFM Application Template 2 Requirements 2 Finding Documentation 2 To

More information

Problem Set 2. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 30

Problem Set 2. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 30 Problem Set 2 MAS 622J/1.126J: Pattern Recognition and Analysis Due: 5:00 p.m. on September 30 [Note: All instructions to plot data or write a program should be carried out using Matlab. In order to maintain

More information

Blind Source Separation of Single Channel Mixture Using Tensorization and Tensor Diagonalization

Blind Source Separation of Single Channel Mixture Using Tensorization and Tensor Diagonalization Blind Source Separation of Single Channel Mixture Using Tensorization and Tensor Diagonalization Anh-Huy Phan 1, Petr Tichavský 2(B), and Andrzej Cichocki 1,3 1 Lab for Advanced Brain Signal Processing,

More information

Collaborative topic models: motivations cont

Collaborative topic models: motivations cont Collaborative topic models: motivations cont Two topics: machine learning social network analysis Two people: " boy Two articles: article A! girl article B Preferences: The boy likes A and B --- no problem.

More information

Pattern Recognition and Machine Learning

Pattern Recognition and Machine Learning Christopher M. Bishop Pattern Recognition and Machine Learning ÖSpri inger Contents Preface Mathematical notation Contents vii xi xiii 1 Introduction 1 1.1 Example: Polynomial Curve Fitting 4 1.2 Probability

More information

1. Introduction. Hang Qian 1 Iowa State University

1. Introduction. Hang Qian 1 Iowa State University Users Guide to the VARDAS Package Hang Qian 1 Iowa State University 1. Introduction The Vector Autoregression (VAR) model is widely used in macroeconomics. However, macroeconomic data are not always observed

More information

Recovering Tensor Data from Incomplete Measurement via Compressive Sampling

Recovering Tensor Data from Incomplete Measurement via Compressive Sampling Recovering Tensor Data from Incomplete Measurement via Compressive Sampling Jason R. Holloway hollowjr@clarkson.edu Carmeliza Navasca cnavasca@clarkson.edu Department of Electrical Engineering Clarkson

More information

While using the input and output data fu(t)g and fy(t)g, by the methods in system identification, we can get a black-box model like (In the case where

While using the input and output data fu(t)g and fy(t)g, by the methods in system identification, we can get a black-box model like (In the case where ESTIMATE PHYSICAL PARAMETERS BY BLACK-BOX MODELING Liang-Liang Xie Λ;1 and Lennart Ljung ΛΛ Λ Institute of Systems Science, Chinese Academy of Sciences, 100080, Beijing, China ΛΛ Department of Electrical

More information

CS145: INTRODUCTION TO DATA MINING

CS145: INTRODUCTION TO DATA MINING CS145: INTRODUCTION TO DATA MINING Text Data: Topic Model Instructor: Yizhou Sun yzsun@cs.ucla.edu December 4, 2017 Methods to be Learnt Vector Data Set Data Sequence Data Text Data Classification Clustering

More information

arxiv: v1 [math.ra] 13 Jan 2009

arxiv: v1 [math.ra] 13 Jan 2009 A CONCISE PROOF OF KRUSKAL S THEOREM ON TENSOR DECOMPOSITION arxiv:0901.1796v1 [math.ra] 13 Jan 2009 JOHN A. RHODES Abstract. A theorem of J. Kruskal from 1977, motivated by a latent-class statistical

More information

Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm

Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm Simplicial Nonnegative Matrix Tri-Factorization: Fast Guaranteed Parallel Algorithm Duy-Khuong Nguyen 13, Quoc Tran-Dinh 2, and Tu-Bao Ho 14 1 Japan Advanced Institute of Science and Technology, Japan

More information

CS534 Machine Learning - Spring Final Exam

CS534 Machine Learning - Spring Final Exam CS534 Machine Learning - Spring 2013 Final Exam Name: You have 110 minutes. There are 6 questions (8 pages including cover page). If you get stuck on one question, move on to others and come back to the

More information

Automated Unmixing of Comprehensive Two-Dimensional Chemical Separations with Mass Spectrometry. 1 Introduction. 2 System modeling

Automated Unmixing of Comprehensive Two-Dimensional Chemical Separations with Mass Spectrometry. 1 Introduction. 2 System modeling Automated Unmixing of Comprehensive Two-Dimensional Chemical Separations with Mass Spectrometry Min Chen Stephen E. Reichenbach Jiazheng Shi Computer Science and Engineering Department University of Nebraska

More information

Recent Advances in Bayesian Inference Techniques

Recent Advances in Bayesian Inference Techniques Recent Advances in Bayesian Inference Techniques Christopher M. Bishop Microsoft Research, Cambridge, U.K. research.microsoft.com/~cmbishop SIAM Conference on Data Mining, April 2004 Abstract Bayesian

More information

Multilinear Singular Value Decomposition for Two Qubits

Multilinear Singular Value Decomposition for Two Qubits Malaysian Journal of Mathematical Sciences 10(S) August: 69 83 (2016) Special Issue: The 7 th International Conference on Research and Education in Mathematics (ICREM7) MALAYSIAN JOURNAL OF MATHEMATICAL

More information

ISSP User Guide CY3207ISSP. Revision C

ISSP User Guide CY3207ISSP. Revision C CY3207ISSP ISSP User Guide Revision C Cypress Semiconductor 198 Champion Court San Jose, CA 95134-1709 Phone (USA): 800.858.1810 Phone (Intnl): 408.943.2600 http://www.cypress.com Copyrights Copyrights

More information

Self-Tuning Spectral Clustering

Self-Tuning Spectral Clustering Self-Tuning Spectral Clustering Lihi Zelnik-Manor Pietro Perona Department of Electrical Engineering Department of Electrical Engineering California Institute of Technology California Institute of Technology

More information

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 2, FEBRUARY

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 2, FEBRUARY IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 54, NO 2, FEBRUARY 2006 423 Underdetermined Blind Source Separation Based on Sparse Representation Yuanqing Li, Shun-Ichi Amari, Fellow, IEEE, Andrzej Cichocki,

More information