The multiple-vector tensor-vector product


1 The multiple-vector tensor-vector product. KU Leuven, August 29, 2013. In collaboration with: N. Vanbaelen, K. Meerbergen, and R. Vandebril.

2 Overview: 1. Introduction (Inspiring example; Notation). 2. Tensor decompositions (The CP decomposition; The orthogonal Tucker decomposition). 3. Multiple-vector tensor-vector product (Traditional approach; Successive contractions; Blocking). 4. Conclusions.

3 Overview (outline repeated at the start of Section 1: Introduction).

4 Inspiring example. [Figure: image courtesy of NASA.]

5 Inspiring example. [Figure: image courtesy of Hossain et al. (2013).]

6 Inspiring example. [Figure: image courtesy of Hossain et al. (2013).]

7 Inspiring example. A numerical simulation yields, at every grid point, the pressure, velocity, and temperature. It is usually repeated for different values of the angle of attack, the Reynolds number (viscosity of the fluid), and the Mach number (speed). Hence, the data form a multiway array with 5 indices.
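
As a concrete illustration of this data layout, here is a minimal NumPy sketch; all sizes are made up and the variable names are purely illustrative.

```python
import numpy as np

# Hypothetical sizes: grid points, physical quantities (pressure, velocity
# components, temperature), and sampled values of the three flow parameters.
n_grid, n_quantity = 1000, 5
n_alpha, n_reynolds, n_mach = 10, 6, 4

# The 5-way array described on the slide: entry A[i, q, a, r, m] would hold
# quantity q at grid point i for the (angle of attack, Reynolds, Mach) sample (a, r, m).
A = np.zeros((n_grid, n_quantity, n_alpha, n_reynolds, n_mach))
```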

8 Tensors. A tensor A is an element of R^{n_1} \otimes R^{n_2} \otimes \cdots \otimes R^{n_d}:
A = \sum_{i_1=1}^{n_1} \sum_{i_2=1}^{n_2} \cdots \sum_{i_d=1}^{n_d} a_{i_1, i_2, \ldots, i_d} \, e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_d}.
[Figure: the mode-1 vectors (in R^{n_1}), mode-2 vectors (in R^{n_2}), and mode-3 vectors (in R^{n_3}) of a third-order tensor.]

9 Unfolding. For A \in R^{n_1 \times n_2 \times n_3}, the mode-2 unfolding A_{(2)} \in R^{n_2 \times n_1 n_3} arranges the mode-2 vectors of A as the columns of a matrix. [Figure: A and its mode-2 unfolding A_{(2)}.]
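
A minimal NumPy sketch of a mode-k unfolding; the helper name and the column ordering are my own choices, and other papers order the columns differently.

```python
import numpy as np

def unfold(A, k):
    """Mode-k unfolding: an n_k x (prod of the other n_j) matrix whose columns
    are the mode-k vectors of A.  Column order follows NumPy's row-major layout."""
    return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

A = np.random.rand(4, 5, 6)   # an n_1 x n_2 x n_3 tensor
A2 = unfold(A, 1)             # mode-2 unfolding, shape (5, 24)
```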

10 Overview (outline repeated at the start of Section 2: Tensor decompositions).

11 Reduced parameter representation. Low-rank representation of matrices by the SVD: [Figure: A written as a sum of rank-1 terms.]

12 Reduced parameter representation. Low-rank representation of tensors by the rank decomposition: [Figure: A written as a sum of rank-1 terms.] Low-rank representation by the orthogonal Tucker decomposition: [Figure: A written as a core tensor S multiplied by a factor matrix in every mode.]

13 CANDECOMP/PARAFAC decomposition. Hitchcock (1927) proposed the rank decomposition
A = \sum_{i=1}^{r} a_i^{(1)} \otimes a_i^{(2)} \otimes \cdots \otimes a_i^{(d)}.
[Figure: A written as a sum of rank-1 terms.]
Various applications in chemometrics, signal processing (blind source separation), and psychometrics.
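
A small NumPy sketch (with illustrative helper names) that evaluates this sum of outer products for given CP factor matrices:

```python
import numpy as np

def cp_to_tensor(factors):
    """factors[j] is an n_j x r matrix whose i-th column is a_i^(j+1)."""
    r = factors[0].shape[1]
    A = np.zeros(tuple(F.shape[0] for F in factors))
    for i in range(r):
        term = factors[0][:, i]
        for F in factors[1:]:
            term = np.multiply.outer(term, F[:, i])   # build a_i^(1) (x) a_i^(2) (x) ...
        A += term                                     # accumulate the r rank-1 terms
    return A

A1, A2, A3 = np.random.rand(4, 3), np.random.rand(5, 3), np.random.rand(6, 3)
A = cp_to_tensor([A1, A2, A3])    # a 4 x 5 x 6 tensor of rank at most 3
```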

14 Algorithms for CP decompositions. Let * be the elementwise (Hadamard) product and \odot the Khatri-Rao product:
A \odot B = [ a_1 \otimes b_1,  a_2 \otimes b_2,  \ldots,  a_n \otimes b_n ].
Let A_i = [ a_1^{(i)}  a_2^{(i)}  \cdots  a_r^{(i)} ].
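
In code, both products are short one-liners; this NumPy sketch uses my own helper names:

```python
import numpy as np

def khatri_rao(A, B):
    """Khatri-Rao (columnwise Kronecker) product of m x n and p x n matrices -> mp x n."""
    assert A.shape[1] == B.shape[1]
    return np.vstack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])]).T

A, B = np.random.rand(3, 4), np.random.rand(5, 4)
C = khatri_rao(A, B)        # shape (15, 4)
H = (A.T @ A) * (B.T @ B)   # elementwise (Hadamard) product of two 4 x 4 matrices
```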

15 Algorithms for CP decompositions. Gradient-based optimization algorithms[2] minimize the objective function
f(A_1, \ldots, A_d) = \frac{1}{2} \| A_{(1)} - A_1 (A_2 \odot \cdots \odot A_d)^T \|_F^2,
whose gradient is \nabla f = [ \partial f / \partial A_1, \ldots, \partial f / \partial A_d ] with
\partial f / \partial A_i = -A_{(i)} (A_1 \odot \cdots \odot A_{i-1} \odot A_{i+1} \odot \cdots \odot A_d) + A_i J
and
J = (A_1^T A_1) * \cdots * (A_{i-1}^T A_{i-1}) * (A_{i+1}^T A_{i+1}) * \cdots * (A_d^T A_d).
[2] Hayashi and Hayashi (1982), Paatero (1997), Tomasi and Bro (2005), Acar et al. (2011), and Sorber et al. (2013), among others.
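
A sketch of the objective and of one gradient block in NumPy. The helper names are mine, and the unfolding and Khatri-Rao ordering conventions must match each other (they do below for NumPy's row-major layout); the slide's own convention may differ.

```python
import numpy as np

def unfold(A, k):
    return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

def khatri_rao(A, B):
    return np.vstack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])]).T

def cp_objective_and_gradient(A, factors, i):
    """f = 0.5 * ||A_(i) - A_i K^T||_F^2 and its gradient with respect to factors[i],
    where K is the Khatri-Rao product of all other factors (in increasing mode order)."""
    K = None
    J = np.ones((factors[0].shape[1],) * 2)
    for j, F in enumerate(factors):
        if j == i:
            continue
        K = F if K is None else khatri_rao(K, F)
        J = J * (F.T @ F)                       # elementwise product of Gram matrices
    R = unfold(A, i) - factors[i] @ K.T         # residual of the mode-i unfolding
    f = 0.5 * np.linalg.norm(R) ** 2
    grad_i = factors[i] @ J - unfold(A, i) @ K  # equals -R @ K
    return f, grad_i

A = np.random.rand(4, 5, 6)
factors = [np.random.rand(n, 3) for n in A.shape]
f, g1 = cp_objective_and_gradient(A, factors, 0)   # g1 has shape (4, 3)
```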

16 Orthogonal Tucker decomposition. The multilinear multiplication of A with \{U_i\}_{i=1}^{d} is
(U_1, \ldots, U_d) \cdot A := \sum_{i_1=1}^{n_1} \cdots \sum_{i_d=1}^{n_d} a_{i_1, \ldots, i_d} \, (U_1 e_{i_1}) \otimes \cdots \otimes (U_d e_{i_d}).
Tucker (1966) proposed the decomposition A = (U_1, \ldots, U_d) \cdot S, [Figure: A = S multiplied by U_1, U_2, U_3 in each mode,] where S is r_1 \times \cdots \times r_d and A is n_1 \times \cdots \times n_d with r_i \leq n_i.
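
A NumPy sketch of the multilinear multiplication as a sequence of mode products (helper names are illustrative):

```python
import numpy as np

def mode_multiply(A, U, k):
    """Mode-k product: apply U (of size m x n_k) to every mode-k vector of A."""
    return np.moveaxis(np.tensordot(U, A, axes=(1, k)), 0, k)

def multilinear_multiply(Us, A):
    """(U_1, ..., U_d) . A computed one mode at a time."""
    for k, U in enumerate(Us):
        A = mode_multiply(A, U, k)
    return A

# Tucker reconstruction: A = (U_1, U_2, U_3) . S with a small core S.
S = np.random.rand(2, 3, 2)
U1, U2, U3 = np.random.rand(6, 2), np.random.rand(7, 3), np.random.rand(5, 2)
A = multilinear_multiply([U1, U2, U3], S)   # shape (6, 7, 5)
```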

17 Algorithms for OT decompositions. Core idea: use your favorite method to compute a (reduced) basis U_i for every factor R^{n_i}, then project to obtain S. Several approaches exist: quasi-optimal direct methods,[3] iterative refinement (HOOI),[4] optimization-based methods,[5] iterative methods,[6] and randomized methods.
[3] Tucker (1966), De Lathauwer et al. (2000a), Vannieuwenhoven et al. (2012).
[4] Kroonenberg and de Leeuw (1980), De Lathauwer et al. (2000b).
[5] Eldén and Savas (2009), Ishteva et al. (2009, 2011), Savas and Lim (2010).
[6] Goreinov et al. (2012), Savas and Eldén (2013).
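
A minimal sketch of this core idea, using the simplest quasi-optimal direct method (a plain truncated HOSVD: leading left singular vectors of each unfolding, then projection). This is only one of the options listed above, and the helper names are mine.

```python
import numpy as np

def unfold(A, k):
    return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

def mode_multiply(A, U, k):
    return np.moveaxis(np.tensordot(U, A, axes=(1, k)), 0, k)

def truncated_hosvd(A, ranks):
    """Basis per mode from the SVD of A_(k), then project: S = (U_1^T, ..., U_d^T) . A."""
    Us = []
    for k, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(A, k), full_matrices=False)
        Us.append(U[:, :r])
    S = A
    for k, U in enumerate(Us):
        S = mode_multiply(S, U.T, k)
    return S, Us

A = np.random.rand(10, 12, 8)
S, Us = truncated_hosvd(A, (3, 4, 2))   # core S has shape (3, 4, 2)
```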

18 The tensor Krylov method of Eldén and Savas. To generate a dominant subspace for mode i, what about Golub and Kahan's bidiagonalization algorithm?
    \beta_0^{(i)} \gets 0;  v_0^{(i)} \gets 0
    for k = 1, 2, \ldots do
        \alpha_k^{(i)} v_k^{(i)} \gets A_{(i)}^T u_k^{(i)} - \beta_k^{(i)} v_{k-1}^{(i)}
        \beta_{k+1}^{(i)} u_{k+1}^{(i)} \gets A_{(i)} v_k^{(i)} - \alpha_k^{(i)} u_k^{(i)}
    end for
While feasible,[7] this may be impractical because v_k^{(i)} is of length n_1 \cdots n_{i-1} n_{i+1} \cdots n_d.
[7] To the best of my knowledge, this has not been investigated. Note that only the last vector is required, so memory requirements are quite modest.
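
For concreteness, a NumPy sketch of these recurrences applied to the unfolding A_(i) (no reorthogonalization, random start vector; all names are illustrative). Note how only the most recent long vector v_k is kept, matching the footnote's remark on memory.

```python
import numpy as np

def unfold(A, k):
    return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

def golub_kahan(Ai, u1, steps):
    """A few Golub-Kahan bidiagonalization steps on Ai = A_(i); returns u_1, ..., u_{steps+1}."""
    us = [u1 / np.linalg.norm(u1)]
    v, beta = np.zeros(Ai.shape[1]), 0.0
    for _ in range(steps):
        w = Ai.T @ us[-1] - beta * v     # alpha_k v_k <- A_(i)^T u_k - beta_k v_{k-1}
        alpha = np.linalg.norm(w)
        v = w / alpha                    # only the latest long vector is stored
        z = Ai @ v - alpha * us[-1]      # beta_{k+1} u_{k+1} <- A_(i) v_k - alpha_k u_k
        beta = np.linalg.norm(z)
        us.append(z / beta)
    return np.column_stack(us)

A = np.random.rand(8, 9, 7)
U1 = golub_kahan(unfold(A, 0), np.random.rand(8), steps=3)   # 8 x 4 basis for mode 1
```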

19 The tensor Krylov method of Eldén and Savas. The key idea of Eldén and Savas (2013) was to interlace the computation of the bases U_i and to require
v_k^{(i)} = a_{i,k}^{(1)} \otimes a_{i,k}^{(2)} \otimes \cdots \otimes a_{i,k}^{(i-1)} \otimes a_{i,k}^{(i+1)} \otimes \cdots \otimes a_{i,k}^{(d)},
which is somehow constructed from \{v_j^{(i)}\}_{j=1,\ldots,k;\, i=1,\ldots,d} (several options exist). Hence, the key step in the algorithm becomes
A_{(i)} \big( a_{i,k}^{(1)} \otimes a_{i,k}^{(2)} \otimes \cdots \otimes a_{i,k}^{(i-1)} \otimes a_{i,k}^{(i+1)} \otimes \cdots \otimes a_{i,k}^{(d)} \big).

20 Overview (outline repeated at the start of Section 3: Multiple-vector tensor-vector product).

21 Multiple-vector tensor-vector product. It is known that
v = A_{(k)} (v_1 \otimes v_2 \otimes \cdots \otimes v_{k-1} \otimes v_{k+1} \otimes \cdots \otimes v_d)
is equivalent to the multilinear multiplication
v = (v_1^T, v_2^T, \ldots, v_{k-1}^T, I, v_{k+1}^T, \ldots, v_d^T) \cdot A := \Big( \prod_{j=1, j \neq k}^{d} v_j^T \cdot_j \Big) A.
We call this a mode-k tensor-vector product (k-TVP). [Figure: v_3 obtained by contracting A with v_1 and v_2.]
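
The stated equivalence is easy to check numerically; a NumPy sketch (my own helper names; the Kronecker ordering matches the row-major unfolding used here):

```python
import numpy as np

def unfold(A, k):
    return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

def ktvp(A, vs, k):
    """Mode-k TVP as a multilinear multiplication: contract every mode j != k with vs[j]."""
    out = A
    for j in range(A.ndim - 1, -1, -1):   # contracting from the last mode down keeps
        if j != k:                        # the remaining axis numbers unchanged
            out = np.tensordot(out, vs[j], axes=(j, 0))
    return out

A = np.random.rand(4, 5, 6)
vs = [np.random.rand(n) for n in A.shape]
v = ktvp(A, vs, 1)                                # contraction definition
v_ref = unfold(A, 1) @ np.kron(vs[0], vs[2])      # traditional unfolding formulation
assert np.allclose(v, v_ref)
```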

22 M1: The traditional approach. The traditional way of computing a k-TVP is
v = A_{(k)} (v_1 \otimes v_2 \otimes \cdots \otimes v_{k-1} \otimes v_{k+1} \otimes \cdots \otimes v_d),   (Ref M1)
which requires \prod_{j \neq k} n_j values of temporary memory. In general, the multiple-vector case is
V = A_{(k)} (V_1 \odot V_2 \odot \cdots \odot V_{k-1} \odot V_{k+1} \odot \cdots \odot V_d),   V_i \in R^{n_i \times r},
which requires r \prod_{j \neq k} n_j values of temporary memory.
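
A sketch of the reference method M1 in NumPy for the multiple-vector case. I assume the product of the V_i is the Khatri-Rao (columnwise Kronecker) product, which is consistent with the r \prod_{j \neq k} n_j temporary memory quoted above; the helper names are mine.

```python
import numpy as np

def unfold(A, k):
    return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

def khatri_rao(A, B):
    return np.vstack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])]).T

def mtvp_traditional(A, Vs, k):
    """M1: build the large Khatri-Rao factor explicitly, then one matrix-matrix product."""
    K = None
    for j, Vj in enumerate(Vs):
        if j != k:
            K = Vj if K is None else khatri_rao(K, Vj)   # r * prod_{j != k} n_j temporaries
    return unfold(A, k) @ K                              # n_k x r result

A = np.random.rand(4, 5, 6)
Vs = [np.random.rand(n, 2) for n in A.shape]             # r = 2 vectors per mode
V = mtvp_traditional(A, Vs, 1)                           # shape (5, 2)
```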

23 M1: The traditional approach. The major problem for small r is the cost of unfolding. [Figure: normalized execution times over all modes for eight tensor sizes (200x150x15x70, 200x50x32x100, ..., 75x75x75x75), compared with the ideal; r = 2 in these experiments.]

24 M3: Successive contractions. [Figure: the first contraction, computed as (A_{(3)})^T z.]

25 M3: Successive contractions. [Figure: the second contraction, computed as (A_{(2)})^T y, yielding x^T.]

26 M3: Successive contractions. Formally, v_1 \gets (I, v_2^T, \ldots, v_d^T) \cdot A. Then, letting A^0 := A, we may compute
A^1 \gets (A^0_{(d)})^T v_d;   A^2 \gets (A^1_{(d-1)})^T v_{d-1};   \ldots;   v_1 \gets (A^{d-2}_{(2)})^T v_2,
where each intermediate result A^s is reinterpreted as a tensor with one mode fewer. This is called a Right-To-Left (RTL) contraction. One can show that explicit unfoldings are unnecessary!
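
A NumPy sketch of the RTL contraction. With a row-major (C-ordered) array the trailing mode is contiguous, so every step is a reshape (a view, no copy) followed by a matrix-vector product, illustrating that no explicit unfolding needs to be formed; helper names are mine.

```python
import numpy as np

def tvp_rtl(A, vs):
    """Compute v_1 = (I, v_2^T, ..., v_d^T) . A by contracting modes d, d-1, ..., 2."""
    out = A
    for v in reversed(vs[1:]):
        out = out.reshape(-1, v.size) @ v   # peel off the (current) last mode
    return out                              # vector of length n_1

A = np.random.rand(4, 5, 6, 3)
vs = [None] + [np.random.rand(n) for n in A.shape[1:]]   # v_1 is not used
v1 = tvp_rtl(A, vs)                                      # shape (4,)
assert np.allclose(v1, np.einsum('ijkl,j,k,l->i', A, vs[1], vs[2], vs[3]))
```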

27 M3: Successive contractions. One can prove an analogous result for v_d \gets (v_1^T, v_2^T, \ldots, v_{d-1}^T, I) \cdot A, yielding a Left-To-Right (LTR) contraction. Consequently, no unfoldings are necessary if
v_k = \Big( \prod_{j=1, j \neq k}^{d} v_j^T \cdot_j \Big) A = \Big( \prod_{j=k+1}^{d} v_j^T \cdot_j \Big) \Big( \prod_{j=1}^{k-1} v_j^T \cdot_j \, A \Big)
is computed by an LTR followed by an RTL contraction.
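
Combining both directions gives a reshape-only mode-k TVP; a NumPy sketch (0-based mode index k, illustrative names):

```python
import numpy as np

def tvp_mode_k(A, vs, k):
    """k-TVP via an LTR contraction over the leading modes followed by an RTL
    contraction over the trailing modes; only reshapes and mat-vec products."""
    out = A
    for v in vs[:k]:                       # LTR: peel off modes 1, ..., k-1
        out = v @ out.reshape(v.size, -1)
    for v in reversed(vs[k + 1:]):         # RTL: peel off modes d, ..., k+1
        out = out.reshape(-1, v.size) @ v
    return out                             # vector of length n_k

A = np.random.rand(4, 5, 6, 3)
vs = [np.random.rand(n) for n in A.shape]
v3 = tvp_mode_k(A, vs, 2)                  # mode-3 result, shape (6,)
assert np.allclose(v3, np.einsum('ijkl,i,j,l->k', A, vs[0], vs[1], vs[3]))
```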

28 M3: Successive contractions. The multiple-vector case is handled by B^1_{(1)} \gets V_1^T A_{(1)} and by then processing its rows using tensor-vector products.
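
A sketch of this multiple-vector variant in NumPy (assuming k is not the first mode, so mode 1 can be contracted first; the helpers reuse the conventions of the earlier sketches and the names are mine):

```python
import numpy as np

def unfold(A, k):
    return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

def tvp_mode_k(T, vs, k):
    out = T
    for v in vs[:k]:
        out = v @ out.reshape(v.size, -1)
    for v in reversed(vs[k + 1:]):
        out = out.reshape(-1, v.size) @ v
    return out

def mtvp_m3(A, Vs, k):
    """One GEMM B = V_1^T A_(1), then one (d-1)-way tensor-vector product per row of B."""
    B = Vs[0].T @ unfold(A, 0)                # r x (n_2 ... n_d)
    out = np.empty((A.shape[k], Vs[0].shape[1]))
    for i, row in enumerate(B):
        Ti = row.reshape(A.shape[1:])         # row i viewed as a (d-1)-way tensor
        out[:, i] = tvp_mode_k(Ti, [V[:, i] for V in Vs[1:]], k - 1)
    return out

A = np.random.rand(4, 5, 6, 3)
Vs = [np.random.rand(n, 2) for n in A.shape]
V = mtvp_m3(A, Vs, 2)                         # mode-3 result, shape (6, 2)
```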

29 M3: Successive contractions. [Figure: normalized execution times over all modes for M3 and M1 on the 200x50x32x100 and 75x75x75x75 tensors; r = 2 in these experiments.]

30 M3: Successive contractions. [Figure: normalized execution times over all modes and additional memory consumption (MB) for M3 and M1 on the 200x50x32x100 and 75x75x75x75 tensors; r = 300 in these experiments.]

31 M3B: Successive contractions with blocking. To reduce memory consumption, subdivide the tensor and compute k-TVPs with the subtensors. [Figure: a tensor partitioned into blocks, together with the matching pieces v_{1,1}, v_{1,2}, ... of the vectors; the block contributions are summed.] Nearly as easy as it sounds, though some care must be taken.
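
A minimal sketch of the blocking idea for a single k-TVP (NumPy, my own names): slice the tensor along one mode different from k, compute the k-TVP of each slab with the matching pieces of the vectors, and accumulate the contributions.

```python
import numpy as np

def ktvp(T, vs, k):
    out = T
    for j in range(T.ndim - 1, -1, -1):
        if j != k:
            out = np.tensordot(out, vs[j], axes=(j, 0))
    return out

def ktvp_blocked(A, vs, k, mode=0, block=16):
    """Blocked k-TVP (assumes mode != k): each slab only needs a small temporary."""
    assert mode != k
    out = np.zeros(A.shape[k])
    for start in range(0, A.shape[mode], block):
        sl = [slice(None)] * A.ndim
        sl[mode] = slice(start, start + block)
        vs_b = list(vs)
        vs_b[mode] = vs[mode][start:start + block]
        out += ktvp(A[tuple(sl)], vs_b, k)    # contribution of this slab
    return out

A = np.random.rand(40, 5, 6)
vs = [np.random.rand(n) for n in A.shape]
assert np.allclose(ktvp_blocked(A, vs, k=1), ktvp(A, vs, k=1))
```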

32 M3B: Successive contractions with blocking. Some heuristics: If possible, permute the modes once, prior to all computations, so that n_1 n_d \geq n_2 n_{d-1}. Slicing on modes 1 and 2 appears to be the most effective blocking strategy (!). See our upcoming report for the motivation.

33 [Figure: execution time (s) and additional memory consumption (MB) for Ref M3B, Ref M3, and Ref M1 on a test tensor with r = 100 vectors.]

34 [Figure: execution time (s) and additional memory consumption (MB) for Ref M3B, Ref M3, and Ref M1 on a test tensor with r = 300 vectors.]

35 Overview (outline repeated at the start of Section 4: Conclusions).

36 Conclusions. Unfoldings may be very useful for theoretical developments, but practical algorithms benefit from combining blocking with successive contractions.

37 Thank you for your attention!

38 References.
E. Acar, D.M. Dunlavy, and T.G. Kolda, A scalable optimization approach for fitting canonical tensor decompositions, J. Chemometrics 25(2), 2011.
J. Carroll and J.-J. Chang, Analysis of individual differences in multidimensional scaling via an n-way generalization of Eckart-Young decomposition, Psychometrika 35(3), 1970.
L. De Lathauwer, B. De Moor, and J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl. 21(4), 2000a.
L. De Lathauwer, B. De Moor, and J. Vandewalle, On the best rank-1 and rank-(R_1, R_2, ..., R_N) approximation of higher-order tensors, SIAM J. Matrix Anal. Appl. 21(4), 2000b.
L. Eldén and B. Savas, A Newton-Grassmann method for computing the best multilinear rank-(r_1, r_2, r_3) approximation of a tensor, SIAM J. Matrix Anal. Appl. 31(2), 2009.
S.A. Goreinov, I.V. Oseledets, and D.V. Savostyanov, Wedderburn rank reduction and Krylov subspace methods for tensor approximation. Part 1: Tucker case, SIAM J. Sci. Comput. 34(1), 2012.
R.A. Harshman, Foundations of the PARAFAC procedure: Models and conditions for an explanatory multi-modal factor analysis, UCLA Working Papers in Phonetics 16, 1970.

39 References.
C. Hayashi and F. Hayashi, A new algorithm to solve PARAFAC-model, Behaviormetrika 11, 1982.
F.L. Hitchcock, The expression of a tensor or a polyadic as a sum of products, J. Math. Phys. 6(1), 1927.
M.A. Hossain, Z. Huque, R.R. Kammalapati, and S. Khan, Numerical investigation of compressible flow over NREL Phase VI airfoil, IJERT 2(2), 2013.
M. Ishteva, L. De Lathauwer, P.-A. Absil, and S. Van Huffel, Differential-geometric Newton method for the best rank-(R_1, R_2, R_3) approximation of tensors, Numer. Algorithms 51, 2009.
M. Ishteva, P.-A. Absil, S. Van Huffel, and L. De Lathauwer, Best low multilinear rank approximation of higher-order tensors, based on the Riemannian trust-region scheme, SIAM J. Matrix Anal. Appl. 32, 2011.
P. Kroonenberg and J. de Leeuw, Principal component analysis of three-mode data by means of alternating least squares algorithms, Psychometrika 45(1), 1980.
P. Paatero, A weighted non-negative least squares algorithm for three-way PARAFAC factor analysis, Chemometrics and Intelligent Laboratory Systems 38, 1997.

40 References.
B. Savas and L. Eldén, Krylov-type methods for tensor computations I, Linear Algebra Appl. 438(2), 2013.
B. Savas and L.-H. Lim, Quasi-Newton methods on Grassmannians and multilinear approximations of tensors, SIAM J. Sci. Comput. 32, 2010.
L. Sorber, M. Van Barel, and L. De Lathauwer, Optimization-based algorithms for tensor decompositions: canonical polyadic decomposition, decomposition in rank-(L_r, L_r, 1) terms, and a new generalization, SIAM J. Optim. 23(2), 2013.
G. Tomasi and R. Bro, PARAFAC and missing values, Chemometrics and Intelligent Laboratory Systems 75, 2005.
L.R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika 31(3), 1966.
N. Vannieuwenhoven, R. Vandebril, and K. Meerbergen, A new truncation strategy for the higher-order singular value decomposition, SIAM J. Sci. Comput. 34(2), 2012.
N. Vannieuwenhoven, N. Vanbaelen, K. Meerbergen, and R. Vandebril, The dense tensor-vector product: an initial study, 2013 (in preparation).
