Extension of the Semi-Algebraic Framework for Approximate CP Decompositions via Simultaneous Matrix Diagonalization to the Efficient Calculation of Coupled CP Decompositions

Kristina Naskovska and Martin Haardt
Communications Research Laboratory, Ilmenau University of Technology
P. O. Box 100565, D-98684 Ilmenau, Germany
{kristina.naskovska, martin.haardt}@tu-ilmenau.de

Abstract: Several combined signal processing applications, such as the joint processing of EEG and MEG data, can benefit from coupled tensor decompositions, for instance, the coupled CP (Canonical Polyadic) decomposition. The coupled CP decomposition jointly decomposes tensors that have at least one factor matrix in common. The SECSI (SEmi-algebraic framework for approximate CP decompositions via SImultaneous matrix diagonalization) framework is an efficient tool for the calculation of the CP decomposition based on matrix diagonalizations. It provides a semi-algebraic solution for the CP decomposition even in ill-posed scenarios, e.g., if the columns of a factor matrix are highly correlated. Moreover, the SECSI framework provides an adjustable complexity-accuracy trade-off. In this paper, we present an extension of the SECSI framework to the efficient calculation of coupled CP decompositions and show its advantages compared to the traditional solution via alternating least squares (ALS) and other state-of-the-art algorithms.

Index Terms: Coupled CP, PARAFAC, tensor decomposition, semi-algebraic framework, SECSI, simultaneous matrix diagonalization

I. INTRODUCTION

Tensors provide a useful tool for the analysis of multidimensional data. A comprehensive review of tensor concepts is provided in [1], [2]. Tensors have a very broad range of applications, such as compressed sensing, processing of big data, blind source separation, and many more. Often a tensor should be decomposed into the minimum number of rank-one components. This decomposition is known as PARAFAC (PARallel FACtors), CANDECOMP (Canonical Decomposition), or CP (CANDECOMP/PARAFAC).
The CP decomposition is often calculated via the iterative ALS (Alternating Least Squares) algorithm. ALS-based algorithms require a lot of iterations to calculate the CP decomposition, and there is no convergence guarantee. Moreover, ALS-based algorithms are less accurate in ill-conditioned scenarios, for instance, if the columns of the factor matrices are highly correlated. There are already many ALS-based algorithms for calculating the CP decomposition, such as the ones presented in [3] and [4], which either introduce constraints to reduce the number of iterations or are based on line search, respectively. Alternatively, semi-algebraic solutions based on SMDs (Simultaneous Matrix Diagonalizations) have been proposed in the literature. Such examples include [5], [6], [7], [8], and [9]. The SECSI framework [8] calculates all possible SMDs and then selects the best available solution in a final step via appropriate heuristics. However, recent combined signal processing applications, such as the joint processing of EEG and MEG data, can benefit from coupled tensor decompositions, such as a coupled CP decomposition. The coupled CP decomposition jointly decomposes tensors that have at least one factor matrix in common. In order to jointly decompose tensors, the existing algorithms have to be modified. Therefore, in this paper we propose an extension of the SECSI framework for the calculation of coupled CP decompositions and compare it to the coupled ALS.

We use the following notation. Scalars are denoted as capital or lower-case italic letters, A, a. Vectors and matrices are denoted as bold-face lower-case and capital letters, a, A, respectively. Finally, tensors are represented by bold-face calligraphic letters, \mathcal{A}. The superscripts T, H, -1, and + denote transposition, Hermitian transposition, matrix inversion, and the Moore-Penrose pseudo inverse, respectively. The outer product, Kronecker product, and Khatri-Rao product are denoted as \circ, \otimes, and \diamond, respectively.
The operators ||.||_F and ||.||_H denote the Frobenius norm and the higher-order norm, respectively. Moreover, an n-mode product between a tensor \mathcal{A} in C^{I_1 x I_2 x ... x I_N} and a matrix B in C^{J x I_n} is defined as \mathcal{A} x_n B, for n = 1, 2, ..., N [10]. A super-diagonal or identity N-way tensor of dimensions R x R x ... x R is denoted as \mathcal{I}_{N,R}.

II. TENSOR DECOMPOSITIONS

For simplicity, in our derivation we take into account two low-rank tensors, X_0^(1) in C^{M_1 x M_2^(1) x M_3^(1)} and X_0^(2) in C^{M_1 x M_2^(2) x M_3^(2)}. Moreover, the two tensors have only one common mode, namely the first mode. Therefore, the two tensors have the first factor matrix in common. The CP decomposition of the low-rank tensors X_0^(1) and X_0^(2) is defined as

    X_0^(1) = I_{3,R} x_1 A x_2 B^(1) x_3 C^(1),   (1)
    X_0^(2) = I_{3,R} x_1 A x_2 B^(2) x_3 C^(2),   (2)

where the tensor rank of both tensors is equal to R. The CP decomposition is essentially unique under mild conditions, which means that the factor matrices (i.e., A, B^(1), B^(2), C^(1), and C^(2)) can be identified up to a permutation and scaling ambiguity. Another tensor decomposition, which is much easier to calculate, is the HOSVD (Higher Order Singular Value Decomposition) [10]. The HOSVD of the tensors X_0^(1) and X_0^(2) is given by

    X_0^(1) = S^(1) x_1 U_1^(1) x_2 U_2^(1) x_3 U_3^(1),   (3)
    X_0^(2) = S^(2) x_1 U_1^(2) x_2 U_2^(2) x_3 U_3^(2),   (4)

where S^(1) in C^{M_1 x M_2^(1) x M_3^(1)} and S^(2) in C^{M_1 x M_2^(2) x M_3^(2)} are the core tensors. The matrices U_1^(i) in C^{M_1 x M_1}, U_2^(i) in C^{M_2^(i) x M_2^(i)}, and U_3^(i) in C^{M_3^(i) x M_3^(i)} are unitary.

(c) 2016 IEEE, Asilomar 2016
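For concreteness, the pair of coupled rank-R tensors in (1) and (2) can be generated numerically. The following minimal Python/NumPy sketch (not the authors' code; all dimensions, seeds, and variable names are chosen only for illustration) builds two third-order CP tensors that share the first factor matrix A:

```python
import numpy as np

def cp_tensor(A, B, C):
    # X = I_{3,R} x_1 A x_2 B x_3 C, i.e. the sum of R rank-one terms
    # a_r o b_r o c_r built from the factor columns.
    R = A.shape[1]
    assert B.shape[1] == R and C.shape[1] == R
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
R = 3                                    # common tensor rank
A = rng.standard_normal((4, R))          # common first-mode factor
B1, C1 = rng.standard_normal((5, R)), rng.standard_normal((6, R))
B2, C2 = rng.standard_normal((7, R)), rng.standard_normal((2, R))
X1 = cp_tensor(A, B1, C1)                # X^(1), shape (4, 5, 6)
X2 = cp_tensor(A, B2, C2)                # X^(2), shape (4, 7, 2)
```

Both tensors have rank R and, since they share A, their 1-mode unfoldings span the same R-dimensional column space.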
Moreover, the truncated HOSVD is defined as

    X_0^(1) = S^{s,(1)} x_1 U_1^s x_2 U_2^{s,(1)} x_3 U_3^{s,(1)},   (5)
    X_0^(2) = S^{s,(2)} x_1 U_1^s x_2 U_2^{s,(2)} x_3 U_3^{s,(2)},   (6)

where S^{s,(1)}, S^{s,(2)} in C^{R x R x R} are the truncated core tensors, and the loading matrices U_1^s in C^{M_1 x R}, U_2^{s,(i)} in C^{M_2^(i) x R}, and U_3^{s,(i)} in C^{M_3^(i) x R} have unitary columns and span the column space of the n-mode unfolding of X_0^(i), for n = 1, 2, 3 and i = 1, 2, respectively. Note that the matrices U_1^s and A span the same column space. Since the tensors X_0^(1) and X_0^(2) have the factor matrix A in common, the unitary matrix U_1^s is the same for both HOSVDs in equations (5) and (6). In practice we can only observe a noise-corrupted version of the low-rank tensor, X^(i) = X_0^(i) + N^(i), where N^(i) contains uncorrelated zero-mean circularly symmetric complex Gaussian noise. Hence, we have to calculate a rank-R approximation of X^(i):

    X^(i) ~ S^{s,(i)} x_1 U_1^s x_2 U_2^{s,(i)} x_3 U_3^{s,(i)}.   (7)

Note that (7) holds exactly in the absence of noise and if R is the true rank of the tensor X^(i). For the following derivations we assume that this is true and hence write equalities. In the presence of noise, all the following relations still hold approximately.

III. COUPLED SECSI FRAMEWORK

In this section we derive the coupled SECSI framework for two tensors of order three and tensor rank R, denoted by X^(i), i = 1, 2, which have the first factor matrix in common. An extension to tensors of order N is straightforward by using the concept of generalized unfoldings as described in [9]. Moreover, an extension to multiple common factor matrices is also straightforward. Our goal is to jointly provide an estimate of the factor matrices of both tensors. The SECSI framework starts by computing the truncated HOSVD. Since the first factor matrix is common to both tensors, the column space of the corresponding 1-mode unfolding is calculated jointly, and independently for the remaining modes (i.e., m = 2, 3), via the following SVDs (Singular Value Decompositions):

    [ X^(1)_(1)  X^(2)_(1) ] = U_1^s Sigma_1^s V_1^{s,H},
    X^(i)_(m) = U_m^{s,(i)} Sigma_m^{s,(i)} V_m^{s,(i),H},  m = 2, 3;  i = 1, 2.
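This joint subspace estimation step can be sketched numerically: the common mode-1 basis U_1^s is taken from the SVD of the concatenated 1-mode unfoldings, while the other modes are truncated per tensor. A minimal NumPy sketch under a row-major unfolding convention (not the authors' implementation; names are illustrative):

```python
import numpy as np

def unfold(X, mode):
    # n-mode unfolding: the mode-n index becomes the row index
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def coupled_truncated_hosvd(X1, X2, R):
    # joint column-space estimate for the common first mode
    U1 = np.linalg.svd(np.hstack([unfold(X1, 0), unfold(X2, 0)]))[0][:, :R]
    parts = []
    for X in (X1, X2):
        U2 = np.linalg.svd(unfold(X, 1))[0][:, :R]
        U3 = np.linalg.svd(unfold(X, 2))[0][:, :R]
        # truncated core S^{s,(i)} = X x_1 U1^H x_2 U2^H x_3 U3^H
        S = np.einsum('ijk,ia,jb,kc->abc', X, U1.conj(), U2.conj(), U3.conj())
        parts.append((S, U2, U3))
    return U1, parts
```

For an exact rank-R tensor the truncation is lossless, which is the noise-free case in which (7) holds with equality.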
Inserting equations (5) and (6) into (1) and (2), we get

    X^(i) = S^{s,(i)} x_1 U_1^s x_2 U_2^{s,(i)} x_3 U_3^{s,(i)}   (8)
          = I_{3,R} x_1 (U_1^s T_1^(i)) x_2 (U_2^{s,(i)} T_2^(i)) x_3 (U_3^{s,(i)} T_3^(i)),   (9)

where A = U_1^s T_1^(i), B^(i) = U_2^{s,(i)} T_2^(i), and C^(i) = U_3^{s,(i)} T_3^(i). Equations (8) and (9) represent the fundamental link between the HOSVD and the CP decomposition, as well as the coupling between the two tensors. The invertible matrices T_1^(i), T_2^(i), and T_3^(i) of size R x R diagonalize the core tensors S^{s,(i)}, i = 1, 2, as previously shown in [6] and [8]. Therefore, after multiplying equations (8) and (9) by U_1^{s,H} and U_2^{s,(i),H}, we obtain the following tensors

    S_3^(i) = C~^(i) x_1 T_1^(i) x_2 T_2^(i),  i = 1, 2,   (10)

where S_3^(i) = S^{s,(i)} x_3 U_3^{s,(i)} in C^{R x R x M_3^(i)} and C~^(i) = I_{3,R} x_3 C^(i) in C^{R x R x M_3^(i)}. A visualization of equation (10) is given in Fig. 1.

Fig. 1: Diagonalization of the tensor S_3^(i), i = 1, 2.

Equation (10) represents a non-symmetric SMD, while in this paper we recommend to diagonalize the core tensors via symmetric SMDs, solved for instance as in [11]. The extension of the SECSI framework based on non-symmetric SMDs was presented in [12]. However, instead of non-symmetric SMDs we recommend to use symmetric SMDs so that the coupling between the two tensors can be better exploited. Therefore, we convert the non-symmetric SMD problem into a symmetric SMD. In order to do so, one of the diagonalization matrices has to be eliminated. Hence, as shown in [6], we multiply equation (10) by one pivoting slice from the right- and left-hand side, respectively:

    S_{3,k}^{rhs,(i)} = S_{3,k}^(i) (S_{3,p_i}^(i))^{-1}   (11)
                      = T_1^(i) diag(C^(i)(k,:) ./ C^(i)(p_i,:)) (T_1^(i))^{-1},   (12)
    S_{3,k}^{lhs,(i)} = (S_{3,p_i}^(i))^{-1} S_{3,k}^(i)   (13)
                      = (T_2^(i))^{-T} diag(C^(i)(k,:) ./ C^(i)(p_i,:)) (T_2^(i))^T,   (14)

where S_{3,k}^(i) is the k-th slice of the tensor S_3^(i), and C^(i)(k,:) represents the k-th row of the factor matrix C^(i). Furthermore, p_i can be any arbitrary pivoting slice, p_i in {1, 2, ..., M_3^(i)}. However, since this slice has to be inverted, the best choice is the slice with the smallest condition number. Note that a different pivoting slice p_i can be chosen for the different tensors. Equation (12) represents two symmetric SMDs, one for each of the two tensors S_3^(1) and S_3^(2).
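The slicing-and-pivoting step in (11)-(14) can be illustrated numerically. The sketch below (illustrative only; all names and sizes are made up) builds a tensor whose frontal slices follow (10), i.e. S_3[:,:,k] = T_1 diag(C(k,:)) T_2^T, and then forms the right-hand-side matrices of (11); every resulting matrix is diagonalized by the same T_1:

```python
import numpy as np

def rhs_smd_matrices(S3):
    # Pivot slice p: the frontal slice with the smallest condition
    # number, since it has to be inverted (cf. the choice of p_i).
    conds = [np.linalg.cond(S3[:, :, k]) for k in range(S3.shape[2])]
    p = int(np.argmin(conds))
    Sp_inv = np.linalg.inv(S3[:, :, p])
    # S_{3,k}^{rhs} = S_{3,k} S_{3,p}^{-1} = T1 diag(C(k,:)./C(p,:)) T1^{-1}
    return [S3[:, :, k] @ Sp_inv for k in range(S3.shape[2]) if k != p]

rng = np.random.default_rng(2)
R, M3 = 3, 5
T1, T2 = rng.standard_normal((R, R)), rng.standard_normal((R, R))
C = rng.standard_normal((M3, R))
# frontal slices of S_3 as in equation (10)
S3 = np.stack([T1 @ np.diag(C[k]) @ T2.T for k in range(M3)], axis=2)
rhs = rhs_smd_matrices(S3)
```

Because every matrix in `rhs` equals T_1 times a diagonal matrix times T_1^{-1}, the family is simultaneously diagonalizable by T_1. This is exactly what a symmetric SMD solver exploits, and it is why a common A (hence a common T_1) allows the two tensors' matrix families to be merged into one SMD.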
Moreover, the two SMDs have the same diagonalization matrix, since T_1^(1) = T_1^(2) = T_1, which means that we can concatenate the two equations and solve one diagonalization problem instead. Hence,

    S_{3,k}^{rhs,(1)} = T_1 diag(C^(1)(k,:) ./ C^(1)(p_1,:)) T_1^{-1},
    S_{3,k}^{rhs,(2)} = T_1 diag(C^(2)(k,:) ./ C^(2)(p_2,:)) T_1^{-1}   (15)

is a coupled symmetric SMD, which allows us to diagonalize both core tensors jointly. From the coupled SMD, we can estimate the first factor matrix as A^_I = U_1^s T_1, guaranteeing that even in a noisy scenario the common mode has the same factor matrix estimate for the tensors X^(1) and X^(2). Next, from the diagonal elements of the diagonalized tensors, the factor matrices C^^(1)_I and C^^(2)_I are estimated [8]. Finally, the last factor matrices, B^^(1)_I and B^^(2)_I, are estimated via an LS (Least Squares) solution using the corresponding estimates of the other two factor matrices. Note that equation (14) does not depend on the common mode. Therefore, those two SMDs cannot be combined and have to be solved separately. Similarly to the coupled SMD, estimates of the matrices B^^(1)_II, B^^(2)_II, C^^(1)_II, and C^^(2)_II can be provided. The common
factor matrix is then estimated from the following joint LS problem:

    A^_II = [ X^(1)_(1)  X^(2)_(1) ] [ (B^^(1)_II \diamond C^^(1)_II)^T  (B^^(2)_II \diamond C^^(2)_II)^T ]^+.

Up to this point we have diagonalized the tensors along the third mode, as depicted in Fig. 1, but the remaining modes can also be used in order to obtain more estimates, as explained in [8]. Another two sets of estimates can be obtained by diagonalizing the tensors along the second mode, based on the following SMDs:

    S_{2,k}^{rhs,(1)} = T_1 diag(B^(1)(k,:) ./ B^(1)(p_1,:)) T_1^{-1},
    S_{2,k}^{rhs,(2)} = T_1 diag(B^(2)(k,:) ./ B^(2)(p_2,:)) T_1^{-1},   (16)

and

    S_{2,k}^{lhs,(i)} = (T_3^(i))^{-T} diag(B^(i)(k,:) ./ B^(i)(p_i,:)) (T_3^(i))^T.   (17)

The estimates obtained from (16) are given by A^_III = U_1^s T_1 from the transform matrix, B^^(1)_III and B^^(2)_III from the diagonal elements of the diagonalized tensors, and C^^(1)_III and C^^(2)_III based on an LS solution using the corresponding estimates of the other two factor matrices. Moreover, from (17) the following estimates are obtained. The factor matrices C^^(1)_IV = U_3^{s,(1)} T_3^(1) and C^^(2)_IV = U_3^{s,(2)} T_3^(2) are obtained from the transform matrices. Moreover, B^^(1)_IV and B^^(2)_IV are obtained from the diagonal elements of the diagonalized tensors, and A^_IV is estimated based on the following joint LS problem:

    A^_IV = [ X^(1)_(1)  X^(2)_(1) ] [ (B^^(1)_IV \diamond C^^(1)_IV)^T  (B^^(2)_IV \diamond C^^(2)_IV)^T ]^+.

Finally, the following SMDs are defined for the diagonalization of the tensors along the first mode:

    S_{1,k}^{rhs,(i)} = T_2^(i) diag(A(k,:) ./ A(p_i,:)) (T_2^(i))^{-1},
    S_{1,k}^{lhs,(i)} = (T_3^(i))^{-T} diag(A(k,:) ./ A(p_i,:)) (T_3^(i))^T.

Here the coupled mode appears in the diagonal elements of the diagonalized tensors; therefore, a joint SMD cannot be calculated. From the four SMDs presented above, four different estimates of the coupled mode are obtained. The additional estimates obtained from the diagonalization along the first mode are summarized in Table I.

                          rhs, i = 1    rhs, i = 2    lhs, i = 1    lhs, i = 2
    Transform matrix      B^^(1)_V      B^^(2)_V      C^^(1)_VI     C^^(2)_VI
    Diagonalized tensor   A^^(1)_V      A^^(2)_V      A^^(1)_VI     A^^(2)_VI
    LS                    C^^(1)_V      C^^(2)_V      B^^(1)_VI     B^^(2)_VI

TABLE I: Estimates of the factor matrices obtained from the diagonalization along the first mode.
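The joint LS estimate of the common factor matrix can be sketched as follows, using the unfolding X_(1) = A (B \diamond C)^T (a row-major convention; \diamond is the Khatri-Rao product). This is an illustrative sketch under that convention, not the authors' code:

```python
import numpy as np

def kr(U, V):
    # Khatri-Rao (column-wise Kronecker) product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def joint_ls_common_factor(X1, X2, B1, C1, B2, C2):
    # A_hat = [X1_(1)  X2_(1)] [ (B1 kr C1)^T  (B2 kr C2)^T ]^+
    X_cat = np.hstack([X1.reshape(X1.shape[0], -1),
                       X2.reshape(X2.shape[0], -1)])
    K_cat = np.hstack([kr(B1, C1).T, kr(B2, C2).T])
    return X_cat @ np.linalg.pinv(K_cat)
```

Because both unfoldings are stacked before the pseudo-inverse is applied, the two tensors contribute jointly to a single estimate of A, which is what distinguishes this update from two independent LS fits.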
To summarize, the coupled SECSI framework for two tensors of order three with N_c common modes, N_c = 1, 2, 3, results in 6 + 2 N_c sets of estimates of the factor matrices. For the scenario presented in this paper, two tensors of order three with one mode in common, 8 different sets of estimates can therefore be obtained with the coupled SECSI framework. As a comparison, the original SECSI framework calculates 6 sets of estimates [8]. The two additional sets are obtained in the case of the diagonalization along the coupled mode: the estimate of the common mode that comes from the tensor X^(1) can be considered as a possible solution for the tensor X^(2). However, when using the common factor matrix that is estimated from another tensor for calculating the joint LS solution, the permutation and scaling ambiguity has to be taken into account. The estimates that are based on different SMDs have an arbitrary permutation, which can be eliminated via a comparison if one estimate is taken as a reference. For simplicity, the final estimate is selected via the BM (Best Matching) scheme [8]. The BM scheme solves all the SMDs, and the final estimate is the one that leads to the lowest reconstruction error after calculating all possible combinations. The reconstruction error is calculated according to

    RSE = || X^ - X ||_H / || X ||_H.   (18)

Different heuristics that lead to different complexity-accuracy trade-offs have been presented in [8]. They lead to a comparable performance at a significantly reduced computational complexity and are also applicable to the coupled SECSI framework described here.

IV. COUPLED ALTERNATING LEAST SQUARES

We want to compare the performance of the coupled SECSI framework with another algorithm for the coupled CP decomposition. Therefore, we summarize a very simple extension of ALS to coupled ALS. Similar to ALS, the coupled ALS also takes into account all unfoldings of the tensors and iteratively updates each of the factor matrices, starting from a random initialization.
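One sweep of such a coupled ALS jointly updates the common factor from the concatenated 1-mode unfoldings and then updates the remaining factors per tensor. A minimal NumPy sketch under a row-major unfolding convention, X_(1) = A (B \diamond C)^T (no normalization, initialization strategy, or stopping rule, and not the authors' implementation):

```python
import numpy as np

def kr(U, V):
    # Khatri-Rao (column-wise Kronecker) product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def coupled_als_sweep(X1, X2, A, B1, C1, B2, C2):
    # joint update of the common first-mode factor
    X_cat = np.hstack([X1.reshape(X1.shape[0], -1),
                       X2.reshape(X2.shape[0], -1)])
    A = X_cat @ np.linalg.pinv(np.hstack([kr(B1, C1).T, kr(B2, C2).T]))

    def update_rest(X, C):
        # B from the 2-mode unfolding, then C from the 3-mode unfolding
        X2_ = np.moveaxis(X, 1, 0).reshape(X.shape[1], -1)
        B = X2_ @ np.linalg.pinv(kr(A, C).T)
        X3_ = np.moveaxis(X, 2, 0).reshape(X.shape[2], -1)
        C = X3_ @ np.linalg.pinv(kr(A, B).T)
        return B, C

    B1, C1 = update_rest(X1, C1)
    B2, C2 = update_rest(X2, C2)
    return A, B1, C1, B2, C2
```

Iterating this sweep until the reconstruction error stagnates yields the coupled ALS baseline used in the comparison; as noted above, there is no convergence guarantee.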
Based on the three unfoldings of the given tensors of order three, X^(1) and X^(2), the estimates of the factor matrices can be defined as follows. For the coupled mode, the u-th update of the corresponding factor matrix is jointly calculated from

    A^_u = [ X^(1)_(1)  X^(2)_(1) ] [ (B^^(1)_{u-1} \diamond C^^(1)_{u-1})^T  (B^^(2)_{u-1} \diamond C^^(2)_{u-1})^T ]^+.

Moreover, the u-th updates for the other two factor matrices are given by

    B^^(i)_u = X^(i)_(2) ((C^^(i)_{u-1} \diamond A^_u)^T)^+,
    C^^(i)_u = X^(i)_(3) ((A^_u \diamond B^^(i)_u)^T)^+.

V. SIMULATION RESULTS

In this section the proposed extension of SECSI for coupled CP decompositions, denoted as C-SECSI, is compared to the original SECSI framework, to coupled ALS, denoted as C-ALS, and to coupled CPD by unconstrained nonlinear optimization as well as coupled/symmetric CPD by nonlinear least squares from [13], denoted as CCPD MINF and CCPD NLS, respectively. In each case we have computed Monte Carlo simulations using 1000 realizations. For simulation purposes, two different tensors with a common first mode and tensor rank R have been designed. Each of the tensors is generated according to the CP decomposition

    X^(1)_0 = I_{3,R} x_1 A x_2 B^(1) x_3 C^(1),
    X^(2)_0 = I_{3,R} x_1 A x_2 B^(2) x_3 C^(2),

where the factor matrices A, B^(i), and C^(i) have i.i.d. zero-mean Gaussian distributed random entries with variance one, if not otherwise stated. Moreover, for some simulation scenarios we want the tensors to have correlated factor matrices; therefore, we add correlation via A <- A R(rho), with

    R(rho) = (1 - rho) I_{R x R} + rho 1_{R x R},
where R(rho) is the correlation matrix with correlation factor rho and 1_{R x R} denotes the R x R matrix of ones. Finally, the synthetic data is generated by adding i.i.d. zero-mean Gaussian noise with variance sigma_n^2. In the simulation results, the TMSFE (Total relative Mean Square Factor Error)

    TMSFE = E{ min_{P in M_PD(R)} sum_n || F^_n P - F_n ||_F^2 / || F_n ||_F^2 },  F^_n = A^, B^, C^,

is used as an accuracy measure, where M_PD(R) is a set of permuted diagonal matrices of size R x R that resolve the permutation and scaling ambiguity of the CP decomposition, and F_n is equal to A, B, or C.

First, we compare the performance of C-SECSI, SECSI, C-ALS, CCPD NLS, and CCPD MINF for two real-valued tensors, X^(1) and X^(2). The two tensors have the first factor matrix in common, and additionally the common factor matrix has collinear columns with correlation factor rho. The CCDF (Complementary Cumulative Distribution Function) of the TMSFE for SNR = 30 dB is presented in Fig. 2. We present the CCDF of the error since we are also interested in the convergence of the algorithms in addition to the mean error. Moreover, the mean value of the error for SNR = 30 dB is shown for each curve as a vertical line. From Fig. 2, it is easy to observe that C-SECSI outperforms the rest of the algorithms; however, there is no performance gain compared to the original SECSI framework.

Fig. 2: CCDF of the TMSFE for real-valued tensors, tensor rank R = 4 and factor matrices with mutually correlated columns, SNR = 30 dB.

Next, in Fig. 3 the CCDF of the TMSFE is presented for two tensors with a common first factor matrix. For this scenario, the common factor matrix is chosen according to (19), with A = C^(1) = C^(2). This factor matrix is ill-conditioned and has highly correlated columns, and a CP decomposition containing this factor matrix is very difficult to calculate. From Fig. 3 it is noticeable that C-ALS fails in most of the attempts to decompose the given tensors. However, the SECSI and the C-SECSI frameworks are still able to decompose the tensors. Moreover, the C-SECSI algorithm shows a better performance than the CCPD MINF and CCPD NLS algorithms.

Fig. 3: CCDF of the TMSFE for complex-valued tensors with dimensions 4 x 8 x 7, tensor rank R = 3 and factor matrices with mutually correlated columns, SNR = 45 dB.

Similarly, in Fig. 4 we compare the performance of the above-discussed algorithms for an ill-conditioned scenario, where the third factor matrices are chosen as C^(1) and C^(2) from (19). The two tensors still have the first mode in common. In the scenario where the ill-conditioned factor matrix is not the common mode, we are able to observe an accuracy gain compared to the uncoupled SECSI framework, as depicted in Fig. 4.

Fig. 4: CCDF of the TMSFE for complex-valued tensors with dimensions 7 x 8 x 4, tensor rank R = 3 and factor matrices with mutually correlated columns, SNR = 45 dB.

Moreover, since the SECSI framework is able to estimate the factor matrices even in a degenerate case, when the rank R of the tensor exceeds the tensor size in one of the modes, we have also simulated such a scenario. The tensors are of size 7 x 3 x 4 with rank R = 4; hence the two tensors are degenerate in mode two, but still have the first factor matrix in common. The CCDF of the TMSFE for SNR = 30 dB is visualized in Fig. 5. Moreover, in this figure we show the performance of the C-SECSI algorithm plus one additional iteration, denoted as C-SECSI + x. In this case the C-SECSI framework outperforms the SECSI framework. If they converge, the C-ALS, CCPD NLS, and CCPD MINF algorithms provide a more accurate estimate, but in some cases they do not converge at all. Therefore, their mean error is larger than for the SECSI frameworks. Furthermore, already a single additional iteration improves the accuracy of the C-SECSI framework.

Fig. 5: CCDF of the TMSFE for complex-valued tensors with dimensions 7 x 3 x 4, tensor rank R = 4 and factor matrices with mutually correlated columns, SNR = 30 dB.

Furthermore, it was suggested in [14] that if two tensors with different noise variances are coupled, a normalization with respect to the different SNRs is required. To investigate this effect, we assume the following scenario. Two tensors, X^(1) and X^(2), with tensor rank R = 3, sizes 3 x 8 x 7, and the first mode in common have the SNRs SNR^(1) = 40 dB and SNR^(2) = 0 : 60 dB. The resulting TMSFE as a function of SNR^(2) is depicted in Fig. 6. Based on these results, it can be concluded that C-SECSI does not require a normalization of the noise variance, in contrast to C-ALS.

Fig. 6: TMSFE as a function of SNR^(2) for complex-valued tensors with dimensions 3 x 8 x 7, tensor rank R = 3, SNR^(1) = 40 dB, SNR^(2) = 0 : 60 dB.

VI. CONCLUSIONS

In this paper, we have presented an extension of the SECSI framework to the efficient computation of the coupled CP decomposition. For tensors of order three, the coupled SECSI framework results in 6 + 2 N_c sets of estimates of the factor matrices, where N_c = 1, 2, 3 is the number of common modes. The final estimate can be selected based on different heuristics, as discussed in [8], that lead to different complexity-accuracy trade-offs of the coupled SECSI framework. The coupled SECSI framework exploits the fact that the tensors have at least one factor matrix in common and guarantees that even in noisy scenarios the common mode will have the same factor matrix estimate for the different tensors.
We have compared the coupled SECSI framework with the original SECSI framework as well as with other state-of-the-art algorithms and have shown that it outperforms these algorithms. Moreover, we have shown that it provides a better accuracy than the original SECSI framework in challenging scenarios. C-SECSI performs coupled SMDs and joint LS estimates, which lead to an improved accuracy when those solutions are chosen as final estimates. Furthermore, even for tensors with different SNRs, no normalization is required, which makes the framework significantly more robust than coupled ALS. Extensions to coupled matrix-tensor decompositions are straightforward.

REFERENCES

[1] T. Kolda and B. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455-500, 2009.
[2] A. Cichocki, D. Mandic, A. Phan, C. Caiafa, G. Zhou, Q. Zhao, and L. de Lathauwer, "Tensor decompositions for signal processing applications: From two-way to multiway component analysis," IEEE Signal Processing Magazine, vol. 32, pp. 145-163, 2015.
[3] R. Bro, N. Sidiropoulos, and G. B. Giannakis, "A fast least squares algorithm for separating trilinear mixtures," in Proc. Int. Workshop on Independent Component Analysis and Blind Signal Separation, January 1999.
[4] M. Rajih, P. Comon, and R. Harshman, "Enhanced line search: A novel method to accelerate PARAFAC," SIAM Journal on Matrix Analysis and Applications, vol. 30, pp. 1128-1147, September 2008.
[5] L. de Lathauwer, "Parallel factor analysis by means of simultaneous matrix decompositions," in Proc. First IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2005), pp. 125-128, December 2005.
[6] F. Roemer and M. Haardt, "A closed-form solution for parallel factor (PARAFAC) analysis," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2008), April 2008.
[7] X. Luciani and L. Albera, "Semi-algebraic canonical decomposition of multi-way arrays and joint eigenvalue decomposition," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2011), May 2011.
[8] F. Roemer and M. Haardt, "A semi-algebraic framework for approximate CP decompositions via simultaneous matrix diagonalizations (SECSI)," Signal Processing, vol. 93, pp. 2722-2738, September 2013.
[9] F. Roemer, C. Schroeter, and M. Haardt, "A semi-algebraic framework for approximate CP decompositions via joint matrix diagonalization and generalized unfoldings," in Proc. 46th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 2012.
[10] L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM J. Matrix Anal. Appl. (SIMAX), vol. 21, pp. 1253-1278, 2000.
[11] T. Fu and X. Gao, "Simultaneous diagonalization with similarity transformation for non-defective matrices," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2006), vol. 4, May 2006.
[12] K. Naskovska, M. Haardt, P. Tichavsky, G. Chabriel, and J. Barrere, "Extension of the semi-algebraic framework for approximate CP decomposition via non-symmetric simultaneous matrix diagonalization," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2016), March 2016.
[13] N. Vervliet, O. Debals, L. Sorber, M. Van Barel, and L. De Lathauwer, Tensorlab, Release 3.0, KU Leuven, March 2016.
[14] J. Cohen, R. Farias, and P. Comon, "Joint tensor compression for coupled canonical polyadic decompositions," in Proc. 24th European Signal Processing Conference (EUSIPCO 2016), August 2016.
More informationCORE CONSISTENCY DIAGNOSTIC AIDED BY RECONSTRUCTION ERROR FOR ACCURATE ENUMERATION OF THE NUMBER OF COMPONENTS IN PARAFAC MODELS
CORE CONSISTENCY DIAGNOSTIC AIDED BY RECONSTRUCTION ERROR FOR ACCURATE ENUMERATION OF THE NUMBER OF COMPONENTS IN PARAFAC MODELS Kefei Liu 1, H.C. So 1, João Paulo C. L. da Costa and Lei Huang 3 1 Department
More informationSubtracting a best rank-1 approximation may increase tensor rank
Subtracting a best rank- approximation may increase tensor rank Alwin Stegeman, Pierre Comon To cite this version: Alwin Stegeman, Pierre Comon. Subtracting a best rank- approximation may increase tensor
More informationTensor Decompositions and Applications
Tamara G. Kolda and Brett W. Bader Part I September 22, 2015 What is tensor? A N-th order tensor is an element of the tensor product of N vector spaces, each of which has its own coordinate system. a =
More informationA new truncation strategy for the higher-order singular value decomposition
A new truncation strategy for the higher-order singular value decomposition Nick Vannieuwenhoven K.U.Leuven, Belgium Workshop on Matrix Equations and Tensor Techniques RWTH Aachen, Germany November 21,
More informationCVPR A New Tensor Algebra - Tutorial. July 26, 2017
CVPR 2017 A New Tensor Algebra - Tutorial Lior Horesh lhoresh@us.ibm.com Misha Kilmer misha.kilmer@tufts.edu July 26, 2017 Outline Motivation Background and notation New t-product and associated algebraic
More informationc 2008 Society for Industrial and Applied Mathematics
SIAM J MATRIX ANAL APPL Vol 30, No 3, pp 1219 1232 c 2008 Society for Industrial and Applied Mathematics A JACOBI-TYPE METHOD FOR COMPUTING ORTHOGONAL TENSOR DECOMPOSITIONS CARLA D MORAVITZ MARTIN AND
More informationCOMPARISON OF MODEL ORDER SELECTION TECHNIQUES FOR HIGH-RESOLUTION PARAMETER ESTIMATION ALGORITHMS
COMPARISON OF MODEL ORDER SELECTION TECHNIQUES FOR HIGH-RESOLUTION PARAMETER ESTIMATION ALGORITHMS João Paulo C. L. da Costa, Arpita Thare 2, Florian Roemer, and Martin Haardt Ilmenau University of Technology
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE
More informationTHE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR
THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},
More informationDimitri Nion & Lieven De Lathauwer
he decomposition of a third-order tensor in block-terms of rank-l,l, Model, lgorithms, Uniqueness, Estimation of and L Dimitri Nion & Lieven De Lathauwer.U. Leuven, ortrijk campus, Belgium E-mails: Dimitri.Nion@kuleuven-kortrijk.be
More informationLecture 7 MIMO Communica2ons
Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10
More informationAn Effective Tensor Completion Method Based on Multi-linear Tensor Ring Decomposition
An Effective Tensor Completion Method Based on Multi-linear Tensor Ring Decomposition Jinshi Yu, Guoxu Zhou, Qibin Zhao and Kan Xie School of Automation, Guangdong University of Technology, Guangzhou,
More informationPostgraduate Course Signal Processing for Big Data (MSc)
Postgraduate Course Signal Processing for Big Data (MSc) Jesús Gustavo Cuevas del Río E-mail: gustavo.cuevas@upm.es Work Phone: +34 91 549 57 00 Ext: 4039 Course Description Instructor Information Course
More informationLecture 4. CP and KSVD Representations. Charles F. Van Loan
Structured Matrix Computations from Structured Tensors Lecture 4. CP and KSVD Representations Charles F. Van Loan Cornell University CIME-EMS Summer School June 22-26, 2015 Cetraro, Italy Structured Matrix
More informationDecomposing a three-way dataset of TV-ratings when this is impossible. Alwin Stegeman
Decomposing a three-way dataset of TV-ratings when this is impossible Alwin Stegeman a.w.stegeman@rug.nl www.alwinstegeman.nl 1 Summarizing Data in Simple Patterns Information Technology collection of
More informationTENLAB A MATLAB Ripoff for Tensors
TENLAB A MATLAB Ripoff for Tensors Y. Cem Sübakan, ys2939 Mehmet K. Turkcan, mkt2126 Dallas Randal Jones, drj2115 February 9, 2016 Introduction MATLAB is a great language for manipulating arrays. However,
More informationTensor Analysis. Topics in Data Mining Fall Bruno Ribeiro
Tensor Analysis Topics in Data Mining Fall 2015 Bruno Ribeiro Tensor Basics But First 2 Mining Matrices 3 Singular Value Decomposition (SVD) } X(i,j) = value of user i for property j i 2 j 5 X(Alice, cholesterol)
More informationDecompositions of Higher-Order Tensors: Concepts and Computation
L. De Lathauwer Decompositions of Higher-Order Tensors: Concepts and Computation Lieven De Lathauwer KU Leuven Belgium Lieven.DeLathauwer@kuleuven-kulak.be 1 L. De Lathauwer Canonical Polyadic Decomposition
More informationDOA Estimation of Quasi-Stationary Signals Using a Partly-Calibrated Uniform Linear Array with Fewer Sensors than Sources
Progress In Electromagnetics Research M, Vol. 63, 185 193, 218 DOA Estimation of Quasi-Stationary Signals Using a Partly-Calibrated Uniform Linear Array with Fewer Sensors than Sources Kai-Chieh Hsu and
More informationPascal Lab. I3S R-PC
Pascal Lab. I3S R-PC-2004-06 Canonical Tensor Decompositions Presented at ARCC Workshop on Tensor Decompositions, American Institute of Mathematics, Palo Alto, CA, July 18 24 2004 Pierre Comon, Laboratoire
More informationLow Complexity Closed-form Solution to Semi-blind Joint Channel and Symbol Estimation in MIMO-OFDM
Low Complexity Closed-form Solution to Semi-blind Joint Channel and Symbol Estimation in IO-OD João Paulo C L da Costa 1, André L de Almeida 2, Walter C reitas Jr 2 and Daniel Valle de Lima 1 1 Department
More informationFitting a Tensor Decomposition is a Nonlinear Optimization Problem
Fitting a Tensor Decomposition is a Nonlinear Optimization Problem Evrim Acar, Daniel M. Dunlavy, and Tamara G. Kolda* Sandia National Laboratories Sandia is a multiprogram laboratory operated by Sandia
More informationA variable projection method for block term decomposition of higher-order tensors
A variable projection method for block term decomposition of higher-order tensors Guillaume Olikier 1, P.-A. Absil 1, and Lieven De Lathauwer 2 1- Université catholique de Louvain - ICTEAM Institute B-1348
More informationA BLIND SPARSE APPROACH FOR ESTIMATING CONSTRAINT MATRICES IN PARALIND DATA MODELS
2th European Signal Processing Conference (EUSIPCO 22) Bucharest, Romania, August 27-3, 22 A BLIND SPARSE APPROACH FOR ESTIMATING CONSTRAINT MATRICES IN PARALIND DATA MODELS F. Caland,2, S. Miron 2 LIMOS,
More informationAn Introduction to Hierachical (H ) Rank and TT Rank of Tensors with Examples
An Introduction to Hierachical (H ) Rank and TT Rank of Tensors with Examples Lars Grasedyck and Wolfgang Hackbusch Bericht Nr. 329 August 2011 Key words: MSC: hierarchical Tucker tensor rank tensor approximation
More information2542 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 62, NO. 10, MAY 15, Keyong Han, Student Member, IEEE, and Arye Nehorai, Fellow, IEEE
2542 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 62, NO. 10, MAY 15, 2014 Nested Vector-Sensor Array Processing via Tensor Modeling Keyong Han, Student Member, IEEE, and Arye Nehorai, Fellow, IEEE Abstract
More informationA Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 1215 A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing Da-Zheng Feng, Zheng Bao, Xian-Da Zhang
More informationarxiv: v4 [math.na] 10 Nov 2014
NEWTON-BASED OPTIMIZATION FOR KULLBACK-LEIBLER NONNEGATIVE TENSOR FACTORIZATIONS SAMANTHA HANSEN, TODD PLANTENGA, TAMARA G. KOLDA arxiv:134.4964v4 [math.na] 1 Nov 214 Abstract. Tensor factorizations with
More informationFROM BASIS COMPONENTS TO COMPLEX STRUCTURAL PATTERNS Anh Huy Phan, Andrzej Cichocki, Petr Tichavský, Rafal Zdunek and Sidney Lehky
FROM BASIS COMPONENTS TO COMPLEX STRUCTURAL PATTERNS Anh Huy Phan, Andrzej Cichocki, Petr Tichavský, Rafal Zdunek and Sidney Lehky Brain Science Institute, RIKEN, Wakoshi, Japan Institute of Information
More informationModeling Parallel Wiener-Hammerstein Systems Using Tensor Decomposition of Volterra Kernels
Modeling Parallel Wiener-Hammerstein Systems Using Tensor Decomposition of Volterra Kernels Philippe Dreesen 1, David T. Westwick 2, Johan Schoukens 1, Mariya Ishteva 1 1 Vrije Universiteit Brussel (VUB),
More informationA Brief Guide for TDALAB Ver 1.1. Guoxu Zhou and Andrzej Cichocki
A Brief Guide for TDALAB Ver 1.1 Guoxu Zhou and Andrzej Cichocki April 30, 2013 Contents 1 Preliminary 2 1.1 Highlights of TDALAB...................... 2 1.2 Install and Run TDALAB....................
More informationENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition
ENGG5781 Matrix Analysis and Computations Lecture 10: Non-Negative Matrix Factorization and Tensor Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University
More informationRecovering Tensor Data from Incomplete Measurement via Compressive Sampling
Recovering Tensor Data from Incomplete Measurement via Compressive Sampling Jason R. Holloway hollowjr@clarkson.edu Carmeliza Navasca cnavasca@clarkson.edu Department of Electrical Engineering Clarkson
More informationJOS M.F. TEN BERGE SIMPLICITY AND TYPICAL RANK RESULTS FOR THREE-WAY ARRAYS
PSYCHOMETRIKA VOL. 76, NO. 1, 3 12 JANUARY 2011 DOI: 10.1007/S11336-010-9193-1 SIMPLICITY AND TYPICAL RANK RESULTS FOR THREE-WAY ARRAYS JOS M.F. TEN BERGE UNIVERSITY OF GRONINGEN Matrices can be diagonalized
More informationLarge Scale Data Analysis Using Deep Learning
Large Scale Data Analysis Using Deep Learning Linear Algebra U Kang Seoul National University U Kang 1 In This Lecture Overview of linear algebra (but, not a comprehensive survey) Focused on the subset
More informationNOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION. M. Schwab, P. Noll, and T. Sikora. Technical University Berlin, Germany Communication System Group
NOISE ROBUST RELATIVE TRANSFER FUNCTION ESTIMATION M. Schwab, P. Noll, and T. Sikora Technical University Berlin, Germany Communication System Group Einsteinufer 17, 1557 Berlin (Germany) {schwab noll
More information18 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 1, JANUARY 2008
18 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 1, JANUARY 2008 MPCA: Multilinear Principal Component Analysis of Tensor Objects Haiping Lu, Student Member, IEEE, Konstantinos N. (Kostas) Plataniotis,
More informationWindow-based Tensor Analysis on High-dimensional and Multi-aspect Streams
Window-based Tensor Analysis on High-dimensional and Multi-aspect Streams Jimeng Sun Spiros Papadimitriou Philip S. Yu Carnegie Mellon University Pittsburgh, PA, USA IBM T.J. Watson Research Center Hawthorne,
More informationBlind Identification of Underdetermined Mixtures Based on the Hexacovariance and Higher-Order Cyclostationarity
Blind Identification of Underdetermined Mixtures Based on the Hexacovariance and Higher-Order Cyclostationarity André De Almeida, Xavier Luciani, Pierre Comon To cite this version: André De Almeida, Xavier
More informationMultilinear Singular Value Decomposition for Two Qubits
Malaysian Journal of Mathematical Sciences 10(S) August: 69 83 (2016) Special Issue: The 7 th International Conference on Research and Education in Mathematics (ICREM7) MALAYSIAN JOURNAL OF MATHEMATICAL
More informationHigher-Order Singular Value Decomposition (HOSVD) for structured tensors
Higher-Order Singular Value Decomposition (HOSVD) for structured tensors Definition and applications Rémy Boyer Laboratoire des Signaux et Système (L2S) Université Paris-Sud XI GDR ISIS, January 16, 2012
More informationTHE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR
THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional
More informationFrom Matrix to Tensor. Charles F. Van Loan
From Matrix to Tensor Charles F. Van Loan Department of Computer Science January 28, 2016 From Matrix to Tensor From Tensor To Matrix 1 / 68 What is a Tensor? Instead of just A(i, j) it s A(i, j, k) or
More informationFACTORIZATION STRATEGIES FOR THIRD-ORDER TENSORS
FACTORIZATION STRATEGIES FOR THIRD-ORDER TENSORS MISHA E. KILMER AND CARLA D. MARTIN Abstract. Operations with tensors, or multiway arrays, have become increasingly prevalent in recent years. Traditionally,
More informationMultidimensional Sinusoidal Frequency Estimation Using Subspace and Projection Separation Approaches
Multidimensional Sinusoidal Frequency Estimation Using Subspace and Projection Separation Approaches 1 Longting Huang, Yuntao Wu, H. C. So, Yanduo Zhang and Lei Huang Department of Electronic Engineering,
More informationTo be published in Optics Letters: Blind Multi-spectral Image Decomposition by 3D Nonnegative Tensor Title: Factorization Authors: Ivica Kopriva and A
o be published in Optics Letters: Blind Multi-spectral Image Decomposition by 3D Nonnegative ensor itle: Factorization Authors: Ivica Kopriva and Andrzej Cichocki Accepted: 21 June 2009 Posted: 25 June
More informationMULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES
MULTICHANNEL SIGNAL PROCESSING USING SPATIAL RANK COVARIANCE MATRICES S. Visuri 1 H. Oja V. Koivunen 1 1 Signal Processing Lab. Dept. of Statistics Tampere Univ. of Technology University of Jyväskylä P.O.
More informationA Multi-Affine Model for Tensor Decomposition
Yiqing Yang UW Madison breakds@cs.wisc.edu A Multi-Affine Model for Tensor Decomposition Hongrui Jiang UW Madison hongrui@engr.wisc.edu Li Zhang UW Madison lizhang@cs.wisc.edu Chris J. Murphy UC Davis
More informationPermutation transformations of tensors with an application
DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn
More informationThe Singular Value Decomposition
The Singular Value Decomposition We are interested in more than just sym+def matrices. But the eigenvalue decompositions discussed in the last section of notes will play a major role in solving general
More informationWeighted Singular Value Decomposition for Folded Matrices
Weighted Singular Value Decomposition for Folded Matrices SÜHA TUNA İstanbul Technical University Informatics Institute Maslak, 34469, İstanbul TÜRKİYE (TURKEY) suha.tuna@be.itu.edu.tr N.A. BAYKARA Marmara
More informationA FLEXIBLE MODELING FRAMEWORK FOR COUPLED MATRIX AND TENSOR FACTORIZATIONS
A FLEXIBLE MODELING FRAMEWORK FOR COUPLED MATRIX AND TENSOR FACTORIZATIONS Evrim Acar, Mathias Nilsson, Michael Saunders University of Copenhagen, Faculty of Science, Frederiksberg C, Denmark University
More informationNovel Alternating Least Squares Algorithm for Nonnegative Matrix and Tensor Factorizations
Novel Alternating Least Squares Algorithm for Nonnegative Matrix and Tensor Factorizations Anh Huy Phan 1, Andrzej Cichocki 1,, Rafal Zdunek 1,2,andThanhVuDinh 3 1 Lab for Advanced Brain Signal Processing,
More informationSingular value decomposition. If only the first p singular values are nonzero we write. U T o U p =0
Singular value decomposition If only the first p singular values are nonzero we write G =[U p U o ] " Sp 0 0 0 # [V p V o ] T U p represents the first p columns of U U o represents the last N-p columns
More informationKronecker Product Approximation with Multiple Factor Matrices via the Tensor Product Algorithm
Kronecker Product Approximation with Multiple actor Matrices via the Tensor Product Algorithm King Keung Wu, Yeung Yam, Helen Meng and Mehran Mesbahi Department of Mechanical and Automation Engineering,
More informationTwo-View Segmentation of Dynamic Scenes from the Multibody Fundamental Matrix
Two-View Segmentation of Dynamic Scenes from the Multibody Fundamental Matrix René Vidal Stefano Soatto Shankar Sastry Department of EECS, UC Berkeley Department of Computer Sciences, UCLA 30 Cory Hall,
More informationDISTRIBUTED LARGE-SCALE TENSOR DECOMPOSITION. s:
Author manuscript, published in "2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), lorence : Italy (2014)" DISTRIBUTED LARGE-SCALE TENSOR DECOMPOSITION André L..
More informationEstimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition
Estimation of the Optimum Rotational Parameter for the Fractional Fourier Transform Using Domain Decomposition Seema Sud 1 1 The Aerospace Corporation, 4851 Stonecroft Blvd. Chantilly, VA 20151 Abstract
More informationNear Optimal Adaptive Robust Beamforming
Near Optimal Adaptive Robust Beamforming The performance degradation in traditional adaptive beamformers can be attributed to the imprecise knowledge of the array steering vector and inaccurate estimation
More informationIndex. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)
page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation
More informationEfficient CP-ALS and Reconstruction From CP
Efficient CP-ALS and Reconstruction From CP Jed A. Duersch & Tamara G. Kolda Sandia National Laboratories Livermore, CA Sandia National Laboratories is a multimission laboratory managed and operated by
More informationResearch Article Blind PARAFAC Signal Detection for Polarization Sensitive Array
Hindawi Publishing Corporation EURASIP Journal on Advances in Signal Processing Volume 2007, Article ID 12025, 7 pages doi:10.1155/2007/12025 Research Article Blind PARAFAC Signal Detection for Polarization
More informationarxiv: v1 [cs.lg] 18 Nov 2018
THE CORE CONSISTENCY OF A COMPRESSED TENSOR Georgios Tsitsikas, Evangelos E. Papalexakis Dept. of Computer Science and Engineering University of California, Riverside arxiv:1811.7428v1 [cs.lg] 18 Nov 18
More information1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data.
Lect. 4. Toward MLA in tensor-product formats B. Khoromskij, Leipzig 2007(L4) 1 Contents of Lecture 4 1. Structured representation of high-order tensors revisited. - Tucker model. - Canonical (PARAFAC)
More informationEfficient Low Rank Tensor Ring Completion
1 Efficient Low Rank Tensor Ring Completion Wenqi Wang, Vaneet Aggarwal, and Shuchin Aeron arxiv:1707.08184v1 [cs.lg] 23 Jul 2017 Abstract Using the matrix product state (MPS) representation of the recently
More information