Dealing with the curse and blessing of dimensionality through tensor decompositions
Lieven De Lathauwer. Joint work with Nico Vervliet, Martijn Boussé and Otto Debals. June 26, 2017.
Overview
- Curse of dimensionality
- Algorithms
- Variants and applications
Curse of dimensionality
- The curse
- Tensor decompositions as a remedy
- Immunization by low rank
The curse: Tensors
A tensor is a multidimensional array of numerical values. A general Nth-order tensor $\mathcal{T} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$ has $O(I^N)$ elements.
Curse of dimensionality: the problems arising from the exponential increase in memory and computational requirements. Example: the number of entries in an Nth-order tensor of size … exceeds the number of atoms in the observable universe for N > 41.
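To make the exponential growth concrete, here is a small illustrative Python sketch (not from the talk) of the element count and memory footprint:

```python
# Illustration of the curse of dimensionality: an Nth-order tensor with
# I entries per mode has I**N elements, which grows exponentially in N.

def num_entries(I, N):
    """Number of elements in an I x I x ... x I tensor of order N."""
    return I ** N

# At 8 bytes per real double entry, even modest mode sizes explode:
for N in (3, 6, 9, 12):
    n = num_entries(10, N)
    print(f"N = {N:2d}: {n:>15,d} entries = {n * 8 / 2**30:,.3f} GiB")
```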
The curse: Alleviating or breaking the curse of dimensionality
Use decompositions:
- canonical polyadic decomposition (CPD)
- low multilinear rank approximation (Tucker, MLSVD)
- tensor trains
- hierarchical Tucker
- tensor networks
(Scientific computing vs. signal processing/data analysis.)
Use incomplete tensors:
- because we do not have the full tensor
- because we do not want the full tensor
Decompositions as a remedy: Low multilinear rank approximation
Multilinear transform of a core tensor (diagram: $\mathcal{T} = \mathcal{G} \cdot_1 A^{(1)} \cdot_2 A^{(2)} \cdot_3 A^{(3)}$). Mathematically, for a general Nth-order tensor $\mathcal{T} \in \mathbb{C}^{I \times \cdots \times I}$:
$$\mathcal{T} = \mathcal{G} \cdot_1 A^{(1)} \cdot_2 A^{(2)} \cdots \cdot_N A^{(N)} = [\![\mathcal{G}; A^{(1)}, A^{(2)}, \ldots, A^{(N)}]\!]$$
Number of variables: $O(NIR + R^N)$.
Curse not broken, but the decomposition can be computed via QR/SVD.
Truncation error bound:
$$\|\mathcal{T} - \hat{\mathcal{T}}_{\text{MLSVD,trunc}}\|_F^2 \le N \min_{\operatorname{rank}(\hat{\mathcal{T}}) \le (R_1, R_2, \ldots, R_N)} \|\mathcal{T} - \hat{\mathcal{T}}\|_F^2$$
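As a concrete sketch (an illustrative NumPy implementation, not Tensorlab's), a truncated MLSVD can be computed from the SVDs of the mode-n unfoldings:

```python
import numpy as np

def nmode_product(T, M, n):
    """Mode-n product T x_n M: contract the columns of M with mode n of T."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, n, 0), axes=1), 0, n)

def truncated_mlsvd(T, ranks):
    """Truncated MLSVD: factor U^(n) from the leading left singular vectors
    of the mode-n unfolding; core G by multilinear projection of T."""
    U = []
    for n in range(T.ndim):
        unfolding = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        Un, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        U.append(Un[:, :ranks[n]])
    G = T
    for n, Un in enumerate(U):
        G = nmode_product(G, Un.conj().T, n)
    return G, U

def expand_mlsvd(G, U):
    """Reassemble T_hat = G x_1 U^(1) x_2 U^(2) ... x_N U^(N)."""
    T = G
    for n, Un in enumerate(U):
        T = nmode_product(T, Un, n)
    return T
```

If the multilinear rank of T is at most (R_1, ..., R_N) the reconstruction is exact; otherwise the error obeys the truncation bound above.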
Decompositions as a remedy: Canonical polyadic decomposition
Sum of rank-1 terms (diagram: $\mathcal{T} = a_1 \circ b_1 \circ c_1 + \cdots + a_R \circ b_R \circ c_R$). Mathematically, for a general Nth-order tensor $\mathcal{T} \in \mathbb{C}^{I \times \cdots \times I}$:
$$\mathcal{T} = \sum_{r=1}^{R} a_r^{(1)} \circ a_r^{(2)} \circ \cdots \circ a_r^{(N)} = [\![A^{(1)}, A^{(2)}, \ldots, A^{(N)}]\!]$$
Number of variables: $O(NIR)$.
Curse broken, but possibly an ill-conditioned/ill-posed problem.
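A minimal NumPy sketch of what a CPD represents: a full tensor assembled as a sum of outer products of factor columns, with only $O(NIR)$ parameters stored.

```python
import numpy as np

def cpd_expand(factors):
    """Expand a polyadic decomposition [[A^(1), ..., A^(N)]] to a full
    tensor: sum over r of the outer products of the r-th columns."""
    T = np.zeros(tuple(A.shape[0] for A in factors))
    for r in range(factors[0].shape[1]):
        term = factors[0][:, r]
        for A in factors[1:]:
            term = np.multiply.outer(term, A[:, r])
        T += term
    return T

# N*I*R parameters versus I**N tensor entries:
I, N, R = 10, 4, 3
factors = [np.random.rand(I, R) for _ in range(N)]
T = cpd_expand(factors)
print(T.shape, "parameters:", N * I * R, "entries:", I ** N)
```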
Decompositions as a remedy: Tensor trains (matrix product states)
Write the tensor as a train of lower-order tensors [Oseledets 2011] (diagram: $A_1 - A_2 - A_3 - A_4$). Mathematically, for a general Nth-order tensor $\mathcal{T} \in \mathbb{C}^{I \times \cdots \times I}$:
$$t_{i_1 i_2 \cdots i_N} = \sum_{r_1, r_2, \ldots, r_{N-1}} a^{(1)}_{i_1 r_1} a^{(2)}_{r_1 i_2 r_2} \cdots a^{(N)}_{r_{N-1} i_N}$$
Number of variables: $O(2IR + (N-2)IR^2)$.
Curse broken, and the decomposition can be computed via QR/SVD.
Truncation error bound:
$$\|\mathcal{T} - \hat{\mathcal{T}}_{\text{TT,trunc}}\|_F^2 \le (N-1) \min_{\operatorname{rank}_{\text{TT}}(\hat{\mathcal{T}}) \le (R_1, R_2, \ldots, R_{N-1})} \|\mathcal{T} - \hat{\mathcal{T}}\|_F^2$$
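A minimal TT-SVD sketch in NumPy (assuming a simple singular-value threshold; production algorithms are more careful):

```python
import numpy as np

def tt_svd(T, tol=1e-12):
    """Decompose T into TT cores of shape (r_{n-1}, I_n, r_n) by
    sequential truncated SVDs [Oseledets 2011]."""
    shape = T.shape
    cores, r_prev = [], 1
    C = T.reshape(shape[0], -1)
    for n in range(len(shape) - 1):
        C = C.reshape(r_prev * shape[n], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))  # numerical rank
        cores.append(U[:, :r].reshape(r_prev, shape[n], r))
        C = s[:r, None] * Vt[:r]                 # carry the remainder along
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_expand(cores):
    """Contract the train back into a full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.reshape(T.shape[1:-1])
```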
Immunization by low rank
Key assumption: low rank.
Matrix: decaying singular value spectrum; power law, exponential, polynomial structure (see further).
Rank-1 terms: if $\mathcal{T} = \sum_{r=1}^{R} u_r^{(1)} \circ u_r^{(2)} \circ \cdots \circ u_r^{(N)}$, then
$$T_{[1,2,\ldots,n;\, n+1, n+2, \ldots]} = (U^{(1)} \odot U^{(2)} \odot \cdots)(U^{(n+1)} \odot U^{(n+2)} \odot \cdots)^{\mathsf{T}},$$
so all matrix representations (matricizations) have rank at most R.
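This is easy to check numerically; a small sketch (Khatri-Rao product built with einsum, an illustration rather than the talk's code):

```python
import numpy as np

def khatri_rao(A, B):
    """Columnwise Kronecker product of A (I x R) and B (J x R) -> (IJ x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

rng = np.random.default_rng(3)
I, R = 4, 2
U = [rng.random((I, R)) for _ in range(4)]           # 4th-order, rank-2 CPD
# Matricization grouping modes {1, 2} against modes {3, 4}:
M = khatri_rao(U[0], U[1]) @ khatri_rao(U[2], U[3]).T
print(M.shape, np.linalg.matrix_rank(M))             # 16 x 16, yet rank <= 2
```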
Algorithms for large-scale tensors
- Missing entries / partially sampled tensors and CPD
- Randomized block sampling for CPD
How to handle large tensors?
Use incomplete tensors (missing entries / partially sampled tensors and CPD):
- CPWOPT [Acar et al. 2011]
- CPDI NLS [Vervliet et al. 2014; Vervliet et al. 2016a]
Exploit sparsity:
- GigaTensor [Kang et al. 2012]
- ParCube [Papalexakis et al. 2012]
Compress the tensor:
- PARACOMP [Sidiropoulos et al. 2014]
- tensor trains [Oseledets and Tyrtyshnikov 2010]
Decompose subtensors and combine the results:
- ParCube [Papalexakis et al. 2012]
- Grid PARAFAC [Phan and Cichocki 2011]
- parallel ADMoM [Liavas and Sidiropoulos 2015]
- most of the above
Missing entries / partially sampled tensors and CPD: Optimization for CPD
Optimization problem:
$$\min_{A^{(1)}, A^{(2)}, \ldots, A^{(N)}} \tfrac{1}{2} \big\| \mathcal{W} \ast \big(\mathcal{T} - [\![A^{(1)}, A^{(2)}, \ldots, A^{(N)}]\!]\big) \big\|_F^2,$$
where $\mathcal{W}$ is the sampling (weight) tensor and $\ast$ the elementwise product.
Algorithms:
- CPWOPT [Acar et al. 2011]: nonlinear conjugate gradients
- INDAFAC [Tomasi and Bro 2005]: Gauss-Newton
- CPD/SDF [Sorber et al. 2015]: quasi-Newton and (approximate) inexact Gauss-Newton (Tensorlab: cpd_nls, sdf_nls)
- CPD(L)I [Vervliet et al. 2016a; Vervliet et al. 2016d]: inexact Gauss-Newton with possible linear constraints (Tensorlab: cpd_nls with the UseCPDI option, cpdli_nls)
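The weighted objective is easy to state in code; a hedged NumPy sketch (the mask handling mirrors the formula above, not any particular solver):

```python
import numpy as np

def cpd_expand(factors):
    """Full tensor of [[A^(1), ..., A^(N)]]: sum of column outer products."""
    T = np.zeros(tuple(A.shape[0] for A in factors))
    for r in range(factors[0].shape[1]):
        term = factors[0][:, r]
        for A in factors[1:]:
            term = np.multiply.outer(term, A[:, r])
        T += term
    return T

def masked_cpd_cost(T, W, factors):
    """0.5 * || W .* (T - [[A^(1), ..., A^(N)]]) ||_F^2: with a binary
    sampling tensor W, only observed entries contribute to the cost."""
    E = W * (T - cpd_expand(factors))
    return 0.5 * np.sum(E * E)

rng = np.random.default_rng(4)
factors = [rng.random((6, 2)) for _ in range(3)]
T = cpd_expand(factors)
W = (rng.random(T.shape) < 0.3).astype(float)   # ~30% of entries observed
print(masked_cpd_cost(T, W, factors))           # exact model: cost 0.0
```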
Randomized block sampling for CPD: idea [Vervliet and De Lathauwer 2016]
(diagram) After an initialization, repeat until convergence: take a random block sample of the tensor, compute a step for the factor rows involved, and update them.
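A toy sketch of the idea (random block plus local least-squares update; the actual algorithm in [Vervliet and De Lathauwer 2016] uses NLS steps and step-size restriction, so treat this purely as an illustration):

```python
import numpy as np

def als_sweep(block, factors, idx):
    """One ALS sweep restricted to a sampled block: only the factor rows
    indexed by the block are updated, one mode at a time."""
    N = len(factors)
    for n in range(N):
        KR = None  # Khatri-Rao product of the sampled rows of the other modes
        for m in range(N):
            if m == n:
                continue
            Am = factors[m][idx[m]]
            KR = Am if KR is None else np.einsum(
                'ir,jr->ijr', KR, Am).reshape(-1, Am.shape[1])
        Tn = np.moveaxis(block, n, 0).reshape(block.shape[n], -1)
        factors[n][idx[n]] = Tn @ np.linalg.pinv(KR).T  # local least squares

rng = np.random.default_rng(5)
I, R, B = 15, 3, 6
true = [rng.random((I, R)) for _ in range(3)]
T = np.einsum('ir,jr,kr->ijk', *true)            # exact rank-3 tensor
est = [rng.random((I, R)) for _ in range(3)]
for _ in range(300):                              # sample block, step, update
    idx = [rng.choice(I, B, replace=False) for _ in range(3)]
    als_sweep(T[np.ix_(*idx)], est, idx)
```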
Randomized block sampling for CPD: Detection of hazardous gases using e-noses
Classify 900 experiments, each containing 72 time series with … samples each. [Vervliet and De Lathauwer 2016]
Classify hazardous gases: does the sample contain CO, acetaldehyde or ammonia?
Tensor modes: sensor, experiment, time; the class of each experiment is unknown.
Strategy: classify using the coefficients of spatiotemporal patterns (R = 5).
Results
Resulting factor matrices in the time, sensor and experiment modes (figure).
Performance after clustering:
                 Iterations   Time (s)   Error (%)
No restriction       …           …           …
Restriction          …           …           …
Variants and applications
- Compression as preprocessing in the computation of unconstrained and constrained decompositions
- Thermodynamic data and the curse of dimensionality
- Quantization and the blessing of dimensionality
Compression as preprocessing: Exploiting low multilinear rank for tensor decompositions
Strategy without constraints:
1. Compress the tensor, e.g., using a (randomized) MLSVD [Vervliet et al. 2016c]
2. Compute the CPD of the core tensor
3. Expand the CPD using the factor matrices of the compression
4. Refine the result if necessary
Orthogonal factor matrices preserve lengths and distances under compression.
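Step 3 rests on a simple identity: if the core has CPD $[\![B^{(1)}, \ldots, B^{(N)}]\!]$, multiplying each factor by the corresponding compression matrix gives a CPD of the full tensor. A small NumPy check (illustrative sizes, not from the talk):

```python
import numpy as np
rng = np.random.default_rng(6)

R, full_size, core_size = 2, (8, 9, 10), (3, 4, 5)
B = [rng.random((p, R)) for p in core_size]          # CPD factors of the core
U = [np.linalg.qr(rng.standard_normal((i, p)))[0]    # orthonormal compression
     for i, p in zip(full_size, core_size)]

G = np.einsum('pr,qr,sr->pqs', *B)                   # core = [[B1, B2, B3]]
T = np.einsum('ip,jq,ks,pqs->ijk', U[0], U[1], U[2], G)

# Step 3: expand the CPD of the core with the compression matrices.
expanded = np.einsum('ir,jr,kr->ijk', *[Un @ Bn for Un, Bn in zip(U, B)])
print(np.allclose(T, expanded))  # True
```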
Compression as preprocessing: Exploiting low multilinear rank for tensor decompositions
Strategy with constraints:
1. Compute an LMLRA
2. Decompose while exploiting structure
Core operations such as norms, inner products and mtkrprod exploit the structure of the tensor [Vervliet et al. 2016c]. Many combinations of structures and decompositions are possible.
Compression as preprocessing: Exploiting efficient representations in tensor decompositions
Tensorlab can decompose tensors given in an efficient format (CPD, LMLRA, tensor train, Hankel, Löwner, ...), using possibly coupled and/or symmetric decompositions (CPD, LL1, LMLRA, BTD) with possible constraints (nonnegativity, Hankel, Vandermonde, polynomial, orthogonal, ...).
Example: compute a nonnegative rank-5 CPD of a tensor after randomized MLSVD compression using mlsvd_rsi.
Compression as preprocessing: Nonnegative CPD using MLSVD compression
Compute a nonnegative rank-10 CPD of an I x I x I tensor with SNR 20 dB:
Time (s)      Projected GN   Parametric GN (SDF)
Full          7.6            …
Compressed    …              …
(figure: time versus tensor size I) [Vervliet et al. 2016b]
Compression as preprocessing: Nonnegative CPD using TT compression
Compute a nonnegative rank-5 CPD of an Nth-order tensor with SNR 20 dB:
Time (s)
Full          10.8
Compressed    …
(figure: time versus order N) [Vervliet et al. 2016b]
Thermodynamic data: Modeling multiway thermodynamic data
Modes: fraction of atom/molecule n in a multi-component material. Value: Gibbs free energy, chemical potential, melting temperature (e.g., computed using thermodynamic software).
Second-order example: an alloy with c1% iron, c2% carbon and (100 - c1 - c2)% nickel; discretize c1 and c2 in 100 steps each, giving a grid of size … [Vervliet et al. 2014]
Thermodynamic data: Multiway dataset
Number of constituent materials: 10 (thus N = 9). Size: … elements. Number of samples: …, of which … are validation samples.
Model: $\mathcal{T} = \sum_{r=1}^{R} a_r^{(1)} \circ a_r^{(2)} \circ \cdots \circ a_r^{(N)}$
Algorithm: Tensorlab 3.0 with cpd_nls and the UseCPDI option [Vervliet et al. 2014; Vervliet et al. 2016d]. Initialization: optimally scaled best-out-of-five strategy.
Thermodynamic data: Visualization
(figure: melting temperature Tmelt, roughly 1,000 to 1,600, as a function of c1 and c2)
Thermodynamic data: Fitting the model
Figure: errors on the training set E_tr and validation set E_val and the 99% quantile error E_quant for CPDs of different rank R. The computation time for each model is indicated on the right y-axis.
Thermodynamic data: From a discrete model $t_{i_1 \cdots i_9}$ to a continuous (e.g., polynomial) model
$$f(c_1, \ldots, c_N) = \sum_{r=1}^{R} \prod_{n=1}^{N} a_r^{(n)}(c_n)$$
Advantages: allows interpolation, differentiation and integration, parameter reduction, ...
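A small sketch of such a continuous model with hypothetical polynomial factor functions (all coefficients are made up for illustration); sampling it on a grid gives back a tensor whose matricizations have rank at most R:

```python
import numpy as np
rng = np.random.default_rng(7)

N, R, degree = 3, 2, 3
coeffs = rng.random((N, R, degree + 1))   # one polynomial per mode and term

def f(c):
    """Continuous CPD model f(c) = sum_r prod_n a_r^(n)(c_n)."""
    return sum(
        np.prod([np.polyval(coeffs[n, r], c[n]) for n in range(N)])
        for r in range(R)
    )

# Sampling f on a grid reproduces a low-rank (discrete) tensor:
grid = np.linspace(0.0, 1.0, 8)
T = np.array([[[f((x, y, z)) for z in grid] for y in grid] for x in grid])
print(np.linalg.matrix_rank(T.reshape(8, -1)))  # at most R = 2
```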
Thermodynamic data: Recap
From a ninth-order tensor with … elements, we took … samples to get a rank-1 model with … parameters, and finally a continuous model with O(100) parameters, all in 3 min.
Quantization and blessing of dimensionality
Low-rank matrices can be used as compact models for large-scale vectors: matricize a length-IJ vector into an I x J matrix M and approximate it with rank P; storage drops from IJ to P(I + J) parameters (vectorize to go back).
Quantization and blessing of dimensionality
The approach holds exactly for (exponential) polynomials. For example, with $f = (1, z, z^2, z^3, z^4, z^5)^{\mathsf{T}}$, matricization gives the rank-1 matrix
$$R = \begin{pmatrix} 1 & z^3 \\ z & z^4 \\ z^2 & z^5 \end{pmatrix} = \begin{pmatrix} 1 \\ z \\ z^2 \end{pmatrix} \begin{pmatrix} 1 & z^3 \end{pmatrix}.$$
R can be interpreted as a compact form of the Hankel matrix
$$H = \begin{pmatrix} 1 & z & z^2 & z^3 \\ z & z^2 & z^3 & z^4 \\ z^2 & z^3 & z^4 & z^5 \end{pmatrix} = \begin{pmatrix} 1 \\ z \\ z^2 \end{pmatrix} \begin{pmatrix} 1 & z & z^2 & z^3 \end{pmatrix}.$$
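A numerical check of the rank-1 structure (plain NumPy; the value of z is arbitrary):

```python
import numpy as np

z = 0.9
f = z ** np.arange(6)                 # [1, z, z^2, z^3, z^4, z^5]

# Column-major reshape gives R = [1 z z^2]^T [1 z^3]:
R = f.reshape(3, 2, order='F')
# Hankel matrix H[i, j] = z^(i + j):
H = z ** (np.arange(3)[:, None] + np.arange(4)[None, :])

print(np.linalg.matrix_rank(R), np.linalg.matrix_rank(H))  # 1 1
```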
Quantization and blessing of dimensionality
Rank $r(H)$ of the Hankel representation for common signal models [Boussé et al. 2017]:
f(t)                                          r(H)
$a z^t$                                       1
$a \sin(bt)$, $a \cos(bt)$                    2
$a z^t \sin(bt)$                              2
$p(t) = \sum_{q=0}^{Q} a_q t^q$               Q + 1
$p(t) z^t$                                    Q + 1
$\sum_{r=1}^{R} a_r z_r^t$                    R
$\sum_{r=1}^{R} a_r \sin(b_r t)$              2R
$\sum_{r=1}^{R} a_r z_r^t \sin(b_r t)$        2R
$\sum_{r=1}^{R} p_r(t)$                       $\sum_{r=1}^{R} Q_r + R$
$\sum_{r=1}^{R} p_r(t) z_r^t$                 $\sum_{r=1}^{R} Q_r + R$
Quantization and blessing of dimensionality
Periodic signals can be reshaped into low-rank matrices: segmenting the signal into single periods gives r(R) = 1, regardless of the type of signal (discontinuities, for example, are allowed).
Quantization and blessing of dimensionality
Segmenting the same periodic signal into half periods gives r(R) = 2, again regardless of the type of signal.
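Both statements are easy to verify numerically (an arbitrary period with a jump, chosen for illustration):

```python
import numpy as np

period = np.array([0.0, 1.0, 3.0, -2.0])   # arbitrary, with a discontinuity
f = np.tile(period, 5)                      # five periods of the signal

R1 = f.reshape(5, 4)    # one period per row -> all rows identical
R2 = f.reshape(10, 2)   # half a period per row -> rows alternate between two patterns
print(np.linalg.matrix_rank(R1), np.linalg.matrix_rank(R2))  # 1 2
```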
Quantization and blessing of dimensionality
The approach also works well for more general compressible functions: with $\epsilon = \| f - \operatorname{vec}(\hat{R}) \|_F^2$ measuring the error between the underlying function f(t) and a low-rank approximation $\hat{R}$ of R = reshape(f), functions with a rapidly converging Taylor series admit an approximate low-rank model, with $\|f(t) - p(t)\| \le \epsilon_{\text{Taylor}}$ for a Taylor polynomial p(t). [Grasedyck et al. 2013; Boussé et al. 2017]
Quantization and blessing of dimensionality
The singular values of R often decay fast; hence, f often admits a good representation already for low rank values. (figure: a Gaussian is captured well by a rank-1 model; a sigmoid and a rational function by rank-2 models.)
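An illustration (grid and functions assumed for the sketch; the point is the decay, not the exact numbers): reshape samples of a smooth function and inspect the low-rank approximation error.

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 1024)
signals = {
    "gaussian": np.exp(-t**2),
    "sigmoid": 1.0 / (1.0 + np.exp(-t)),
    "rational": 1.0 / (1.0 + t**2),
}
errors = {}
for name, f in signals.items():
    M = f.reshape(32, 32)            # quantize: length-1024 vector -> 32 x 32
    s = np.linalg.svd(M, compute_uv=False)
    # Relative error of the best rank-5 approximation (Eckart-Young):
    errors[name] = np.sqrt(np.sum(s[5:] ** 2)) / np.linalg.norm(f)
    print(name, errors[name])
```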
Tensorlab 3.0: A MATLAB toolbox for tensor decompositions
- Variety of tensor decompositions: CPD, LMLRA, MLSVD, BTD, LL1, ...
- Support for large-scale and incomplete tensors: randomized block sampling, MLSVD-RSI, structured tensors
- Constrained, symmetric and coupled decompositions: structured data fusion framework
- Tensorization of data: segmentation, Hankelization, cumulants, ...
- Tools: tensor visualization, estimating a tensor's rank or multilinear rank, ...
Conclusion
- Tensor problems are often large-scale.
- Alleviate or break the curse of dimensionality: using decompositions, for analysis, compression, ...; by computations using randomization, incompleteness, efficient representations, ...
- Dimensionality is also a blessing: segmentation/quantization.
Survey papers
Cichocki, A. et al. (2015). Tensor Decompositions for Signal Processing Applications: From two-way to multiway component analysis. IEEE Signal Processing Magazine 32.2.
Sidiropoulos, N.D., L. De Lathauwer, X. Fu, K. Huang, E.E. Papalexakis, and C. Faloutsos (2017). Tensor Decomposition for Signal Processing and Machine Learning. IEEE Transactions on Signal Processing 65.13.
References
Acar, E., D.M. Dunlavy, T.G. Kolda, and M. Mørup (2011). Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems 106.1.
Boussé, M., O. Debals, and L. De Lathauwer (2017). A Tensor-Based Method for Large-Scale Blind Source Separation using Segmentation. IEEE Transactions on Signal Processing 65.2.
Grasedyck, L., D. Kressner, and C. Tobler (2013). A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen 36.1.
Kang, U., E. Papalexakis, A. Harpale, and C. Faloutsos (2012). GigaTensor: scaling tensor analysis up by 100 times: algorithms and discoveries. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.
Liavas, A. and N. Sidiropoulos (2015). Parallel Algorithms for Constrained Tensor Factorization via the Alternating Direction Method of Multipliers. IEEE Transactions on Signal Processing.
Oseledets, I.V. (2011). Tensor-Train Decomposition. SIAM Journal on Scientific Computing 33.5.
Oseledets, I.V. and E.E. Tyrtyshnikov (2010). TT-cross approximation for multidimensional arrays. Linear Algebra and its Applications 432.1.
Papalexakis, E., C. Faloutsos, and N. Sidiropoulos (2012). ParCube: Sparse Parallelizable Tensor Decompositions. In Machine Learning and Knowledge Discovery in Databases, ed. by P.A. Flach, T. De Bie, and N. Cristianini. Lecture Notes in Computer Science. Springer Berlin Heidelberg.
Phan, A.-H. and A. Cichocki (2011). PARAFAC algorithms for large-scale problems. Neurocomputing 74.11.
Sidiropoulos, N., E. Papalexakis, and C. Faloutsos (2014). Parallel randomly compressed cubes: A scalable distributed architecture for big tensor decomposition. IEEE Signal Processing Magazine 31.5.
Sorber, L., M. Van Barel, and L. De Lathauwer (2015). Structured Data Fusion. IEEE Journal of Selected Topics in Signal Processing 9.4.
Tomasi, G. and R. Bro (2005). PARAFAC and missing values. Chemometrics and Intelligent Laboratory Systems 75.2.
Vervliet, N. and L. De Lathauwer (2016). A Randomized Block Sampling Approach to Canonical Polyadic Decomposition of Large-Scale Tensors. IEEE Journal of Selected Topics in Signal Processing 10.2.
Vervliet, N., O. Debals, and L. De Lathauwer (2016a). Canonical polyadic decomposition of incomplete tensors with linearly constrained factors. Technical Report, ESAT-STADIUS, KU Leuven, Belgium.
Vervliet, N., O. Debals, and L. De Lathauwer (2016b). Exploiting efficient data representations in tensor decompositions. Technical Report, ESAT-STADIUS, KU Leuven, Belgium.
Vervliet, N., O. Debals, and L. De Lathauwer (2016c). Tensorlab 3.0: Numerical optimization strategies for large-scale constrained and coupled matrix/tensor factorization. Technical Report, ESAT-STADIUS, KU Leuven, Belgium.
Vervliet, N., O. Debals, L. Sorber, and L. De Lathauwer (2014). Breaking the Curse of Dimensionality Using Decompositions of Incomplete Tensors: Tensor-based scientific computing in big data analysis. IEEE Signal Processing Magazine 31.5.
Vervliet, N., O. Debals, L. Sorber, M. Van Barel, and L. De Lathauwer (2016d). Tensorlab 3.0. Available online at …
More informationTENSOR LAYERS FOR COMPRESSION OF DEEP LEARNING NETWORKS. Cris Cecka Senior Research Scientist, NVIDIA GTC 2018
TENSOR LAYERS FOR COMPRESSION OF DEEP LEARNING NETWORKS Cris Cecka Senior Research Scientist, NVIDIA GTC 2018 Tensors Computations and the GPU AGENDA Tensor Networks and Decompositions Tensor Layers in
More informationNEW TENSOR DECOMPOSITIONS IN NUMERICAL ANALYSIS AND DATA PROCESSING
NEW TENSOR DECOMPOSITIONS IN NUMERICAL ANALYSIS AND DATA PROCESSING Institute of Numerical Mathematics of Russian Academy of Sciences eugene.tyrtyshnikov@gmail.com 11 October 2012 COLLABORATION MOSCOW:
More informationMobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti
Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes
More informationAn Introduction to Hierachical (H ) Rank and TT Rank of Tensors with Examples
An Introduction to Hierachical (H ) Rank and TT Rank of Tensors with Examples Lars Grasedyck and Wolfgang Hackbusch Bericht Nr. 329 August 2011 Key words: MSC: hierarchical Tucker tensor rank tensor approximation
More information3D INTERPOLATION USING HANKEL TENSOR COMPLETION BY ORTHOGONAL MATCHING PURSUIT A. Adamo, P. Mazzucchelli Aresys, Milano, Italy
3D INTERPOLATION USING HANKEL TENSOR COMPLETION BY ORTHOGONAL MATCHING PURSUIT A. Adamo, P. Mazzucchelli Aresys, Milano, Italy Introduction. Seismic data are often sparsely or irregularly sampled along
More informationRank Determination for Low-Rank Data Completion
Journal of Machine Learning Research 18 017) 1-9 Submitted 7/17; Revised 8/17; Published 9/17 Rank Determination for Low-Rank Data Completion Morteza Ashraphijuo Columbia University New York, NY 1007,
More informationNon-negative Tensor Factorization with missing data for the modeling of gene expressions in the Human Brain
Downloaded from orbit.dtu.dk on: Dec 05, 2018 Non-negative Tensor Factorization with missing data for the modeling of gene expressions in the Human Brain Nielsen, Søren Føns Vind; Mørup, Morten Published
More informationSparseness Constraints on Nonnegative Tensor Decomposition
Sparseness Constraints on Nonnegative Tensor Decomposition Na Li nali@clarksonedu Carmeliza Navasca cnavasca@clarksonedu Department of Mathematics Clarkson University Potsdam, New York 3699, USA Department
More informationARestricted Boltzmann machine (RBM) [1] is a probabilistic
1 Matrix Product Operator Restricted Boltzmann Machines Cong Chen, Kim Batselier, Ching-Yun Ko, and Ngai Wong chencong@eee.hku.hk, k.batselier@tudelft.nl, cyko@eee.hku.hk, nwong@eee.hku.hk arxiv:1811.04608v1
More informationNonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy
Nonnegative Tensor Factorization using a proximal algorithm: application to 3D fluorescence spectroscopy Caroline Chaux Joint work with X. Vu, N. Thirion-Moreau and S. Maire (LSIS, Toulon) Aix-Marseille
More informationMulti-Way Compressed Sensing for Big Tensor Data
Multi-Way Compressed Sensing for Big Tensor Data Nikos Sidiropoulos Dept. ECE University of Minnesota MIIS, July 1, 2013 Nikos Sidiropoulos Dept. ECE University of Minnesota ()Multi-Way Compressed Sensing
More informationData Mining and Matrices
Data Mining and Matrices 6 Non-Negative Matrix Factorization Rainer Gemulla, Pauli Miettinen May 23, 23 Non-Negative Datasets Some datasets are intrinsically non-negative: Counters (e.g., no. occurrences
More informationMATRIX COMPLETION AND TENSOR RANK
MATRIX COMPLETION AND TENSOR RANK HARM DERKSEN Abstract. In this paper, we show that the low rank matrix completion problem can be reduced to the problem of finding the rank of a certain tensor. arxiv:1302.2639v2
More informationarxiv: v4 [math.na] 10 Nov 2014
NEWTON-BASED OPTIMIZATION FOR KULLBACK-LEIBLER NONNEGATIVE TENSOR FACTORIZATIONS SAMANTHA HANSEN, TODD PLANTENGA, TAMARA G. KOLDA arxiv:134.4964v4 [math.na] 1 Nov 214 Abstract. Tensor factorizations with
More informationTensorlab. User Guide Getting started 1
Tensorlab User Guide 2014-05-07 Laurent Sorber * S Marc Van Barel * Lieven De Lathauwer S Contents 1 Getting started 1 2 Data sets: dense, incomplete and sparse tensors 5 2.1 Representation................................
More informationJOS M.F. TEN BERGE SIMPLICITY AND TYPICAL RANK RESULTS FOR THREE-WAY ARRAYS
PSYCHOMETRIKA VOL. 76, NO. 1, 3 12 JANUARY 2011 DOI: 10.1007/S11336-010-9193-1 SIMPLICITY AND TYPICAL RANK RESULTS FOR THREE-WAY ARRAYS JOS M.F. TEN BERGE UNIVERSITY OF GRONINGEN Matrices can be diagonalized
More informationc 2008 Society for Industrial and Applied Mathematics
SIAM J MATRIX ANAL APPL Vol 30, No 3, pp 1219 1232 c 2008 Society for Industrial and Applied Mathematics A JACOBI-TYPE METHOD FOR COMPUTING ORTHOGONAL TENSOR DECOMPOSITIONS CARLA D MORAVITZ MARTIN AND
More informationUsing Hankel structured low-rank approximation for sparse signal recovery
Using Hankel structured low-rank approximation for sparse signal recovery Ivan Markovsky 1 and Pier Luigi Dragotti 2 Department ELEC Vrije Universiteit Brussel (VUB) Pleinlaan 2, Building K, B-1050 Brussels,
More informationTensors and graphical models
Tensors and graphical models Mariya Ishteva with Haesun Park, Le Song Dept. ELEC, VUB Georgia Tech, USA INMA Seminar, May 7, 2013, LLN Outline Tensors Random variables and graphical models Tractable representations
More informationLOW RANK TENSOR DECONVOLUTION
LOW RANK TENSOR DECONVOLUTION Anh-Huy Phan, Petr Tichavský, Andrze Cichocki Brain Science Institute, RIKEN, Wakoshi, apan Systems Research Institute PAS, Warsaw, Poland Institute of Information Theory
More informationNumerical tensor methods and their applications
Numerical tensor methods and their applications 8 May 2013 All lectures 4 lectures, 2 May, 08:00-10:00: Introduction: ideas, matrix results, history. 7 May, 08:00-10:00: Novel tensor formats (TT, HT, QTT).
More informationIntroduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder
Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Bastian Steder Reference Book Thrun, Burgard, and Fox: Probabilistic Robotics Vectors Arrays of numbers Vectors represent
More information1 Number Systems and Errors 1
Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........
More informationarxiv: v2 [math.sp] 23 May 2018
ON THE LARGEST MULTILINEAR SINGULAR VALUES OF HIGHER-ORDER TENSORS IGNAT DOMANOV, ALWIN STEGEMAN, AND LIEVEN DE LATHAUWER Abstract. Let σ n denote the largest mode-n multilinear singular value of an I
More informationProbabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms
Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms François Caron Department of Statistics, Oxford STATLEARN 2014, Paris April 7, 2014 Joint work with Adrien Todeschini,
More informationNumerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems
1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of
More informationWindow-based Tensor Analysis on High-dimensional and Multi-aspect Streams
Window-based Tensor Analysis on High-dimensional and Multi-aspect Streams Jimeng Sun Spiros Papadimitriou Philip S. Yu Carnegie Mellon University Pittsburgh, PA, USA IBM T.J. Watson Research Center Hawthorne,
More informationOnline Tensor Factorization for. Feature Selection in EEG
Online Tensor Factorization for Feature Selection in EEG Alric Althoff Honors Thesis, Department of Cognitive Science, University of California - San Diego Supervised by Dr. Virginia de Sa Abstract Tensor
More informationSupport Vector Machines: Maximum Margin Classifiers
Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and Fu-Jie Huang 1 Outline What is behind
More informationShaden Smith * George Karypis. Nicholas D. Sidiropoulos. Kejun Huang * Abstract
Streaming Tensor Factorization for Infinite Data Sources Downloaded 0/5/8 to 60.94.64.33. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php Shaden Smith * shaden.smith@intel.com
More informationModel-Driven Sparse CP Decomposition for Higher-Order Tensors
7 IEEE International Parallel and Distributed Processing Symposium Model-Driven Sparse CP Decomposition for Higher-Order Tensors Jiajia Li, Jee Choi, Ioakeim Perros, Jimeng Sun, Richard Vuduc Computational
More informationThe power of low rank TENSOR Approximations in Smart Patient Monitoring
The power of low rank TENSOR Approximations in Smart Patient Monitoring Prof. Sabine Van Huffel EUSIPCO 2017: 25 th European Signal Processing Conference Kos Island, Greece August 28-September 2, 2017
More informationHigh Performance Parallel Tucker Decomposition of Sparse Tensors
High Performance Parallel Tucker Decomposition of Sparse Tensors Oguz Kaya INRIA and LIP, ENS Lyon, France SIAM PP 16, April 14, 2016, Paris, France Joint work with: Bora Uçar, CNRS and LIP, ENS Lyon,
More informationA Block-Jacobi Algorithm for Non-Symmetric Joint Diagonalization of Matrices
A Block-Jacobi Algorithm for Non-Symmetric Joint Diagonalization of Matrices ao Shen and Martin Kleinsteuber Department of Electrical and Computer Engineering Technische Universität München, Germany {hao.shen,kleinsteuber}@tum.de
More informationIntroduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello
Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello Vectors Arrays of numbers Vectors represent a point
More informationTruncation Strategy of Tensor Compressive Sensing for Noisy Video Sequences
Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 207-4212 Ubiquitous International Volume 7, Number 5, September 2016 Truncation Strategy of Tensor Compressive Sensing for Noisy
More informationA Randomized Approach for Crowdsourcing in the Presence of Multiple Views
A Randomized Approach for Crowdsourcing in the Presence of Multiple Views Presenter: Yao Zhou joint work with: Jingrui He - 1 - Roadmap Motivation Proposed framework: M2VW Experimental results Conclusion
More informationTensor Network Computations in Quantum Chemistry. Charles F. Van Loan Department of Computer Science Cornell University
Tensor Network Computations in Quantum Chemistry Charles F. Van Loan Department of Computer Science Cornell University Joint work with Garnet Chan, Department of Chemistry and Chemical Biology, Cornell
More informationTensor Networks for Dimensionality Reduction and Large-Scale Optimization Part 1 Low-Rank Tensor Decompositions
Foundations and Trends R in Machine Learning Vol. 9, No. 4-5 (2016) 249 429 c 2017. Cichocki et al. DOI: 10.1561/2200000059 Tensor Networks for Dimensionality Reduction and Large-Scale Optimization Part
More informationIntroduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz
Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional space
More informationAll-at-once Decomposition of Coupled Billion-scale Tensors in Apache Spark
All-at-once Decomposition of Coupled Billion-scale Tensors in Apache Spark Aditya Gudibanda, Tom Henretty, Muthu Baskaran, James Ezick, Richard Lethin Reservoir Labs 632 Broadway Suite 803 New York, NY
More informationUncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization
Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization Haiping Lu 1 K. N. Plataniotis 1 A. N. Venetsanopoulos 1,2 1 Department of Electrical & Computer Engineering,
More informationCS60021: Scalable Data Mining. Dimensionality Reduction
J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org 1 CS60021: Scalable Data Mining Dimensionality Reduction Sourangshu Bhattacharya Assumption: Data lies on or near a
More informationNovel Alternating Least Squares Algorithm for Nonnegative Matrix and Tensor Factorizations
Novel Alternating Least Squares Algorithm for Nonnegative Matrix and Tensor Factorizations Anh Huy Phan 1, Andrzej Cichocki 1,, Rafal Zdunek 1,2,andThanhVuDinh 3 1 Lab for Advanced Brain Signal Processing,
More informationNumerical optimization
Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal
More informationMax Planck Institute Magdeburg Preprints
Thomas Mach Computing Inner Eigenvalues of Matrices in Tensor Train Matrix Format MAX PLANCK INSTITUT FÜR DYNAMIK KOMPLEXER TECHNISCHER SYSTEME MAGDEBURG Max Planck Institute Magdeburg Preprints MPIMD/11-09
More informationTensor Decomposition Theory and Algorithms in the Era of Big Data
Tensor Decomposition Theory and Algorithms in the Era of Big Data Nikos Sidiropoulos, UMN EUSIPCO Inaugural Lecture, Sep. 2, 2014, Lisbon Nikos Sidiropoulos Tensor Decomposition in the Era of Big Data
More informationarxiv: v2 [math.na] 13 Dec 2014
Very Large-Scale Singular Value Decomposition Using Tensor Train Networks arxiv:1410.6895v2 [math.na] 13 Dec 2014 Namgil Lee a and Andrzej Cichocki a a Laboratory for Advanced Brain Signal Processing,
More informationMatrix assembly by low rank tensor approximation
Matrix assembly by low rank tensor approximation Felix Scholz 13.02.2017 References Angelos Mantzaflaris, Bert Juettler, Boris Khoromskij, and Ulrich Langer. Matrix generation in isogeometric analysis
More informationMath 411 Preliminaries
Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector
More information