A new truncation strategy for the higher-order singular value decomposition

1 A new truncation strategy for the higher-order singular value decomposition
Nick Vannieuwenhoven, K.U.Leuven, Belgium
Workshop on Matrix Equations and Tensor Techniques, RWTH Aachen, Germany, November 21, 2011
Joint work with Raf Vandebril and Karl Meerbergen

2 Contents
1 Introduction: Context; Notation
2 Orthogonal Tucker approximation
3 Truncated HOSVD
4 Sequentially truncated HOSVD: Definition; Operation count; Approximation error
5 Numerical examples: Compression of images; Handwritten digit classification; Compression of simulation results
6 Conclusions

5 Context I: Everything is a tensor
d = 0: scalar a
d = 1: vector a
d = 2: matrix A
d ≥ 3: tensor $\mathcal{A}$

6 Context II: Everything is a tensor
Multidimensional data appears in many applications: image and signal processing, pattern recognition, data mining and machine learning, chemometrics, biomedicine, psychometrics, ...
Two major problems are associated with such data:
1 the storage cost is very high, and
2 the patterns and phenomena in the data are hard to analyze and interpret.

7 Context III: Tensor decompositions for compression
Stable tensor decompositions suitable for compression:
Low-dimensional case: Tucker (Tucker 1966), Higher-Order SVD (De Lathauwer et al. 2000), Cross-approximation (Oseledets et al. 2008), Sequentially Truncated HOSVD (Vannieuwenhoven et al. 2011).
High-dimensional case: Hierarchical Tucker (Hackbusch and Kühn 2009, Grasedyck 2010), Tensor-Train (Oseledets 2011).

8 Context IV: Compression
Low-rank representation of matrices by the SVD: $A \approx U S V^T =: (U, V) \cdot S$.
Low-rank representation of tensors by Tucker: $\mathcal{A} \approx (\hat{U}_1, \hat{U}_2, \hat{U}_3) \cdot \hat{\mathcal{S}}$.

10 Notation I: Mode-k vector space
A tensor $\mathcal{A}$ of order d is an object in the tensor product of d vector spaces:
$\mathcal{A} \in \mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2} \otimes \cdots \otimes \mathbb{R}^{n_d} \cong \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$
A 3rd-order tensor has 3 associated vector spaces: mode-1 vectors ($\mathbb{R}^{n_1}$), mode-2 vectors ($\mathbb{R}^{n_2}$), and mode-3 vectors ($\mathbb{R}^{n_3}$).

11 Notation II: Unfolding
For $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the mode-2 unfolding $A_{(2)} \in \mathbb{R}^{n_2 \times n_1 n_3}$ is the matrix whose columns are the mode-2 vectors of $\mathcal{A}$.
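
As a concrete illustration, here is a minimal numpy sketch of mode-k unfolding and its inverse. The names unfold/fold and the column ordering are my own choices; orderings differ across conventions, but the mode-k singular subspaces used below do not depend on them.

    import numpy as np

    def unfold(A, k):
        """Mode-k unfolding: the mode-k vectors of A become the columns."""
        return np.moveaxis(A, k, 0).reshape(A.shape[k], -1)

    def fold(M, k, shape):
        """Undo unfold(..., k) for a tensor of the given shape."""
        rest = [s for i, s in enumerate(shape) if i != k]
        return np.moveaxis(M.reshape([shape[k]] + rest), 0, k)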

12 Notation III
Frobenius norm: $\|\mathcal{A}\|^2 := \sum_{i_1, i_2, i_3} \mathcal{A}_{i_1, i_2, i_3}^2$.
Multilinear multiplication: $[(I, M_2, I) \cdot \mathcal{A}]_{(2)} := M_2 A_{(2)}$ and $(M_1, M_2, M_3) \cdot \mathcal{A} := (M_1, I, I) \cdot (I, M_2, I) \cdot (I, I, M_3) \cdot \mathcal{A}$.
Projection of the mode-2 vectors onto the span of $U_2$ (orthonormal columns): $\pi_2 \mathcal{A} := (I, U_2 U_2^T, I) \cdot \mathcal{A}$.
Projection of the mode-2 vectors onto the complement of $U_2$: $\pi_2^\perp \mathcal{A} := \mathcal{A} - \pi_2 \mathcal{A}$.
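
Building on unfold/fold above, a sketch of the mode-k product and the two projections (the helper names proj and proj_comp are assumptions of mine, not the talk's notation):

    def mode_mult(A, M, k):
        """Mode-k product (I, ..., M, ..., I) . A: apply M to the mode-k vectors."""
        shape = list(A.shape)
        shape[k] = M.shape[0]
        return fold(M @ unfold(A, k), k, shape)

    def proj(A, U, k):
        """pi_k A: project the mode-k vectors onto span(U); U has orthonormal columns."""
        return mode_mult(A, U @ U.T, k)

    def proj_comp(A, U, k):
        """pi_k^perp A := A - pi_k A."""
        return A - proj(A, U, k)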

14 Orthogonal Tucker approximation problem
Best rank-$(r_1, r_2, r_3)$ approximation problem:
$\min_{\operatorname{rank}(\mathcal{B}) \le (r_1, r_2, r_3)} \|\mathcal{A} - \mathcal{B}\|_F = \min_{U_i \in O(n_i, r_i)} \|\mathcal{A} - (U_1 U_1^T, U_2 U_2^T, \ldots, U_d U_d^T) \cdot \mathcal{A}\|_F,$
with $O(n_i, r_i)$ the set of $n_i \times r_i$ matrices with orthonormal columns.
The optimum is found by orthogonal projection onto a new, optimal tensor basis, but no closed-form solution is known.

15 Orthogonal Tucker approximation I: Definition
Rank-$(r_1, r_2, r_3)$ orthogonal Tucker approximation to $\mathcal{A}$: $\mathcal{A} \approx (\hat{U}_1, \hat{U}_2, \hat{U}_3) \cdot \hat{\mathcal{S}}$.
The columns of $\hat{U}_1 \in \mathbb{R}^{n_1 \times r_1}$, $\hat{U}_2 \in \mathbb{R}^{n_2 \times r_2}$, and $\hat{U}_3 \in \mathbb{R}^{n_3 \times r_3}$ can be extended to a basis of $\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2} \otimes \mathbb{R}^{n_3}$.

17 Orthogonal Tucker approximation II: Error [VVM2011]
If $\hat{\mathcal{A}} := \pi_1 \pi_2 \pi_3 \mathcal{A} = (U_1 U_1^T, U_2 U_2^T, U_3 U_3^T) \cdot \mathcal{A}$, then an error expression is
$\|\mathcal{A} - \pi_1 \pi_2 \pi_3 \mathcal{A}\|^2 = \|\pi_1^\perp \mathcal{A}\|^2 + \|\pi_2^\perp \pi_1 \mathcal{A}\|^2 + \|\pi_3^\perp \pi_1 \pi_2 \mathcal{A}\|^2,$
with upper bound
$\|\mathcal{A} - \pi_1 \pi_2 \pi_3 \mathcal{A}\|^2 \le \|\pi_1^\perp \mathcal{A}\|^2 + \|\pi_2^\perp \mathcal{A}\|^2 + \|\pi_3^\perp \mathcal{A}\|^2.$
Note the dependence of the error expression on the processing order of the modes.
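
The error expression is easy to check numerically. A small sketch reusing the proj helper above, with random orthonormal $U_k$ of illustrative sizes:

    rng = np.random.default_rng(0)
    n, r = (6, 7, 8), (2, 3, 4)
    A = rng.standard_normal(n)
    U = [np.linalg.qr(rng.standard_normal((n[k], r[k])))[0] for k in range(3)]

    P1 = proj(A, U[0], 0)       # pi_1 A
    P12 = proj(P1, U[1], 1)     # pi_1 pi_2 A (projections in different modes commute)
    P123 = proj(P12, U[2], 2)   # pi_1 pi_2 pi_3 A

    lhs = np.sum((A - P123) ** 2)
    rhs = (np.sum((A - P1) ** 2)          # ||pi_1^perp A||^2
           + np.sum((P1 - P12) ** 2)      # ||pi_2^perp pi_1 A||^2
           + np.sum((P12 - P123) ** 2))   # ||pi_3^perp pi_1 pi_2 A||^2
    assert np.isclose(lhs, rhs)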

19 Truncated Higher-Order SVD (T-HOSVD) [DDV2000]
Recall the upper bound
$\|\mathcal{A} - \pi_1 \pi_2 \pi_3 \mathcal{A}\|^2 \le \sum_{k=1}^{3} \|\pi_k^\perp \mathcal{A}\|^2.$
Minimize it:
$\min_{\pi_1, \pi_2, \pi_3} \|\mathcal{A} - \pi_1 \pi_2 \pi_3 \mathcal{A}\|^2 \le \min_{\pi_1, \pi_2, \pi_3} \sum_{k=1}^{3} \|\pi_k^\perp \mathcal{A}\|^2 = \sum_{k=1}^{3} \min_{\pi_k} \|\pi_k^\perp \mathcal{A}\|^2.$
The minimum is attained by the $r_k$ dominant left singular vectors of $A_{(k)}$ in every mode $k$!

20 Algorithm
Rank-$(r_1, r_2, r_3)$ T-HOSVD:
for every mode k do
  Compute the rank-$r_k$ truncated SVD: $A_{(k)} = [\bar{U}_k \ \bar{U}_k^\perp] \, \operatorname{diag}(\bar{S}_k, \bar{S}_k^\perp) \, [\bar{V}_k \ \bar{V}_k^\perp]^T$
end for
Project: $\mathcal{S} = (\bar{U}_1^T, \bar{U}_2^T, \bar{U}_3^T) \cdot \mathcal{A}$
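
A direct numpy transcription might look as follows; this is a sketch reusing unfold and mode_mult from above, not the authors' implementation:

    def t_hosvd(A, ranks):
        """T-HOSVD: SVD of each unfolding of the *full* tensor, then one projection."""
        U = [np.linalg.svd(unfold(A, k), full_matrices=False)[0][:, :r]
             for k, r in enumerate(ranks)]
        S = A
        for k, Uk in enumerate(U):
            S = mode_mult(S, Uk.T, k)   # core S = (U1^T, ..., Ud^T) . A
        return S, U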

23 Sequentially truncated HOSVD (ST-HOSVD) [VVM2011]
Recall the error expression
$\|\mathcal{A} - \pi_1 \pi_2 \pi_3 \mathcal{A}\|^2 = \|\pi_1^\perp \mathcal{A}\|^2 + \|\pi_2^\perp \pi_1 \mathcal{A}\|^2 + \|\pi_3^\perp \pi_1 \pi_2 \mathcal{A}\|^2.$
(Try to) minimize it:
$\min_{\pi_1, \pi_2, \pi_3} \|\mathcal{A} - \pi_1 \pi_2 \pi_3 \mathcal{A}\|^2 = \min_{\pi_1} \Big[ \|\pi_1^\perp \mathcal{A}\|^2 + \min_{\pi_2} \Big[ \|\pi_2^\perp \pi_1 \mathcal{A}\|^2 + \min_{\pi_3} \|\pi_3^\perp \pi_1 \pi_2 \mathcal{A}\|^2 \Big] \Big].$

24 Sequentially truncated HOSVD (ST-HOSVD) [VVM2011]
The sequentially truncated HOSVD computes the solution to
$\hat{\pi}_1 = \arg\min_{\pi_1} \|\pi_1^\perp \mathcal{A}\|^2,$
$\hat{\pi}_2 = \arg\min_{\pi_2} \|\pi_2^\perp \hat{\pi}_1 \mathcal{A}\|^2,$
$\hat{\pi}_3 = \arg\min_{\pi_3} \|\pi_3^\perp \hat{\pi}_1 \hat{\pi}_2 \mathcal{A}\|^2.$
$\hat{\pi}_k$ is given by the $r_k$ dominant left singular vectors of $[\hat{\pi}_1 \cdots \hat{\pi}_{k-1} \mathcal{A}]_{(k)}$!

25 Algorithm
Rank-$(r_1, r_2, r_3)$ ST-HOSVD:
$\hat{\mathcal{S}} = \mathcal{A}$
for every mode k do
  Compute the rank-$r_k$ truncated SVD: $\hat{S}_{(k)} = [\hat{U}_k \ \hat{U}_k^\perp] \, \operatorname{diag}(\hat{S}_k, \hat{S}_k^\perp) \, [\hat{V}_k \ \hat{V}_k^\perp]^T$
  Project: $\hat{S}_{(k)} \leftarrow \hat{U}_k^T \hat{S}_{(k)}$
end for
The core shrinks after every step: $\hat{\mathcal{S}} = \mathcal{A}$, then $\hat{S}_{(1)} = \hat{U}_1^T \hat{S}_{(1)}$, $\hat{S}_{(2)} = \hat{U}_2^T \hat{S}_{(2)}$, $\hat{S}_{(3)} = \hat{U}_3^T \hat{S}_{(3)}$.
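
The ST-HOSVD analogue of the sketch above, truncating each mode before the next SVD is computed (the processing order is fixed to 1, ..., d here); reconstruct is a hypothetical helper, added for the experiments below, that maps the core back to a full tensor:

    def st_hosvd(A, ranks):
        """ST-HOSVD: each SVD acts on the already partially truncated tensor."""
        S, U = A, []
        for k, r in enumerate(ranks):
            Uk = np.linalg.svd(unfold(S, k), full_matrices=False)[0][:, :r]
            U.append(Uk)
            S = mode_mult(S, Uk.T, k)   # shrink mode k to r_k right away
        return S, U

    def reconstruct(S, U):
        """The approximation itself: (U1, ..., Ud) . S."""
        B = S
        for k, Uk in enumerate(U):
            B = mode_mult(B, Uk, k)
        return B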

27 Operation count
Theorem. Let $\mathcal{A} \in \mathbb{R}^{n \times n \times \cdots \times n}$ be truncated to rank $(r, r, \ldots, r)$ by the ST-HOSVD and the T-HOSVD. Assume an $O(m^2 n)$ algorithm to compute the SVD of an $m \times n$ matrix, $m \le n$. Then the ST-HOSVD requires
$O\left( \sum_{k=1}^{d} r^{k-1} n^{d-k+2} + \sum_{k=1}^{d} r^{k} n^{d-k} \right)$
operations, and the T-HOSVD requires
$O\left( d n^{d+1} + \sum_{k=1}^{d} r^{k} n^{d-k+1} \right)$
operations to compute the approximation.
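
Ignoring the hidden constants, the two leading-order counts can be compared directly. A sketch with illustrative n and r (these parameters are my own, not the talk's):

    def st_hosvd_count(n, r, d):
        return (sum(r**(k - 1) * n**(d - k + 2) for k in range(1, d + 1))
                + sum(r**k * n**(d - k) for k in range(1, d + 1)))

    def t_hosvd_count(n, r, d):
        return d * n**(d + 1) + sum(r**k * n**(d - k + 1) for k in range(1, d + 1))

    for d in (3, 4, 5):
        print(d, round(t_hosvd_count(100, 10, d) / st_hosvd_count(100, 10, d), 2))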

28 Operation count: Speedup
[Plot: speedup of the ST-HOSVD over the T-HOSVD as a function of the truncation rank r, for orders d = 3, 4, 5, for an order-d tensor truncated to rank (r, r, ..., r); the tensor size and the numerical values did not survive transcription.]
Speedups greater than d are possible with non-cubic tensors.

30 Approximation error I: Hypothesis
Hypothesis. Let $\hat{\mathcal{A}}$ be an ST-HOSVD approximation of $\mathcal{A}$, and $\bar{\mathcal{A}}$ the T-HOSVD approximation of corresponding rank. Then,
$\|\mathcal{A} - \hat{\mathcal{A}}\|_F \overset{?}{\le} \|\mathcal{A} - \bar{\mathcal{A}}\|_F.$
Not valid in general.

31 Approximation error II: Counterexample
Rank-$(1, 1, 1, 1)$ approximation of a $2 \times 2 \times 2 \times 2$ tensor $\mathcal{A}$, given by its four slices $\mathcal{A}_{:,:,1,1}$, $\mathcal{A}_{:,:,2,1}$, $\mathcal{A}_{:,:,1,2}$, $\mathcal{A}_{:,:,2,2}$. [The numerical entries of $\mathcal{A}$, of the T-HOSVD approximation $\bar{\mathcal{A}}$, of the ST-HOSVD approximation $\hat{\mathcal{A}}$, and of the two errors did not survive transcription.] The approximation errors satisfy $\|\mathcal{A} - \bar{\mathcal{A}}\|_F^2 < \|\mathcal{A} - \hat{\mathcal{A}}\|_F^2$: the T-HOSVD is strictly better on this tensor, disproving the hypothesis.
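
Since the entries of the example were lost, one can search for a counterexample directly; the following sketch (reusing the helpers above) draws random $2 \times 2 \times 2 \times 2$ tensors and compares the two errors. In my experience a hit tends to appear within a few trials, though nothing guarantees it for a given seed:

    rng = np.random.default_rng(1)
    for trial in range(10000):
        A = rng.standard_normal((2, 2, 2, 2))
        e_t = np.linalg.norm(A - reconstruct(*t_hosvd(A, (1, 1, 1, 1))))
        e_s = np.linalg.norm(A - reconstruct(*st_hosvd(A, (1, 1, 1, 1))))
        if e_s > e_t + 1e-10:
            print(f"trial {trial}: ST-HOSVD error {e_s:.6f} > T-HOSVD error {e_t:.6f}")
            break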

32 Approximation error III: Sufficient condition
Theorem. Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$. Let $\hat{\mathcal{A}}$ be the rank-$(1, r, r)$ ST-HOSVD of $\mathcal{A}$ and let $\bar{\mathcal{A}}$ be the T-HOSVD of $\mathcal{A}$ of the same rank. Then,
$\|\mathcal{A} - \hat{\mathcal{A}}\|_F \le \|\mathcal{A} - \bar{\mathcal{A}}\|_F.$
The ST-HOSVD thus yields a better rank-1 approximation for order-3 tensors.

34 Approximation error IV: Bounds
ST-HOSVD error expression:
$\|\mathcal{A} - \pi_1 \pi_2 \pi_3 \mathcal{A}\|^2 = \|\pi_1^\perp \mathcal{A}\|^2 + \|\pi_2^\perp \pi_1 \mathcal{A}\|^2 + \|\pi_3^\perp \pi_1 \pi_2 \mathcal{A}\|^2$
T-HOSVD error bound:
$\|\mathcal{A} - \pi_1 \pi_2 \pi_3 \mathcal{A}\|^2 \le \|\pi_1^\perp \mathcal{A}\|^2 + \|\pi_2^\perp \mathcal{A}\|^2 + \|\pi_3^\perp \mathcal{A}\|^2$
Both are bounded by:
$\|\mathcal{A} - \hat{\mathcal{A}}\|_F \le \sqrt{d}\, \|\mathcal{A} - \mathcal{A}_{\mathrm{opt}}\|_F.$

37 Compression of images I
Compression of the Olivetti Research Laboratory faces database [SH1994, VT2003].
Tensor of size Texel × Subject × Expression. Unstructured, 4.1 million nonzeros.
HOOI, T-HOSVD and ST-HOSVD approximations at 6560 different ranks.
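
HOOI, the comparison baseline, is not spelled out in the slides. For reference, a common formulation of the higher-order orthogonal iteration is sketched below (reusing the helpers above), initialized with the ST-HOSVD factors and run for a fixed number of sweeps instead of a convergence test:

    def hooi(A, ranks, sweeps=10):
        """Higher-order orthogonal iteration for the best rank-(r1, ..., rd) problem."""
        _, U = st_hosvd(A, ranks)
        d = len(ranks)
        for _ in range(sweeps):
            for k in range(d):
                B = A
                for j in range(d):
                    if j != k:
                        B = mode_mult(B, U[j].T, j)   # contract all modes except k
                U[k] = np.linalg.svd(unfold(B, k), full_matrices=False)[0][:, :ranks[k]]
        S = A
        for k in range(d):
            S = mode_mult(S, U[k].T, k)
        return S, U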

38 Compression of images II
Relative difference between the approximation error of the T-HOSVD and HOOI: (err_T-HOSVD − err_HOOI) / err_HOOI.
[Two heat maps over the subject-mode and expression-mode ranks, one per fixed texel-mode rank; color scale 0%–7%.]
Average relative error: 2.115%. Maximum relative error: 6.340%.

39 Compression of images II
Relative difference between the approximation error of the ST-HOSVD and HOOI: (err_ST-HOSVD − err_HOOI) / err_HOOI.
[Two heat maps over the subject-mode and expression-mode ranks, one per fixed texel-mode rank; color scale 0%–7%.]
Average relative error: 0.099%. Maximum relative error: 1.012%.

40 Dinner tonight La Finestra at 19h30!

41 Compression of images III
Total compression time (min) of HOOI, ST-HOSVD and T-HOSVD. [Chart did not survive transcription.]

43 Handwritten digit classification I
Classification of handwritten digits by the T-HOSVD [SE2007].
Tensor of size Texel × Example × Digit. Unstructured, 42.6 million non-zeros.
T-HOSVD and ST-HOSVD truncated to a relative error of 10%.

                   T-HOSVD        ST-HOSVD
Rel. model error   9.90%          9.68%
Model rank         (94, 511, 10)  (94, 511, 10)

44 Handwritten digit classification II

                      T-HOSVD    ST-HOSVD
Classification error  4.94%      4.94%
Factorization time    49m 26.0s  1m 8.7s

A 43x speedup!

45 Handwritten digit classification III
Why? Recall the truncation rank: (94, 511, 10).
The T-HOSVD requires the SVD of all three unfoldings of the full tensor. [The matrix dimensions did not survive transcription.]
The ST-HOSVD (only) requires the SVD of successively smaller matrices: once a mode has been truncated, every subsequent unfolding is much smaller.

47 Compression of simulation results I
Compression of the numerical solution of the heat equation on a square domain, computed by explicit Euler. Inspired by [LVV2010].
Tensor of size x × y × t. Partially symmetric, [count lost] million non-zeros.
T-HOSVD and ST-HOSVD truncated to an absolute error of $10^{-4}$ (the discretization accuracy).
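
A hedged miniature of this kind of experiment, reusing st_hosvd from above. The grid size, number of time steps, and diffusion coefficient below are placeholders of my own, not the talk's; the truncation ranks are those reported on the next slide:

    nx = ny = 30
    nt = 200
    alpha = 0.1                    # this explicit Euler scheme is stable for alpha <= 0.25
    u = np.zeros((nx, ny, nt))
    u[nx // 3 : 2 * nx // 3, ny // 3 : 2 * ny // 3, 0] = 1.0   # initial heat patch
    for t in range(1, nt):
        v = u[:, :, t - 1]
        lap = np.zeros_like(v)     # 5-point Laplacian with zero boundary values
        lap[1:-1, 1:-1] = (v[2:, 1:-1] + v[:-2, 1:-1] + v[1:-1, 2:] + v[1:-1, :-2]
                           - 4 * v[1:-1, 1:-1])
        u[:, :, t] = v + alpha * lap   # one explicit Euler step
    S, U = st_hosvd(u, (22, 21, 19))
    print("compression ratio:", u.size / (S.size + sum(Uk.size for Uk in U)))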

48 Compression of simulation results II

                         T-HOSVD       ST-HOSVD
Abs. error               [lost]        [lost]
Rank                     (22, 22, 20)  (22, 21, 19)
Storage (nb. of values)  [lost]        [lost]
Factorization time       2h 46m        1m 14.7s

A 133x speedup!

50 Conclusions Early projection, as in ST-HOSVD, can greatly improve the performance of T-HOSVD.

51 Thank you for your attention.

52 References
[DDV2000] L. De Lathauwer, B. De Moor, and J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl., 21 (2000).
[LVV2010] L.S. Lorente, J.M. Vega, and A. Velazquez, Compression of aerodynamic databases using higher-order singular value decomposition, Aerosp. Sci. Technol., 14 (2010).
[SE2007] B. Savas and L. Eldén, Handwritten digit classification using higher-order singular value decomposition, Pattern Recognition, 40 (2007).
[VVM2011] N. Vannieuwenhoven, R. Vandebril, and K. Meerbergen, A new truncation strategy for the higher-order singular value decomposition, 2011, submitted. Also: On the truncated multilinear singular value decomposition, Tech. Rep. TW589, K.U.Leuven, March 2011.

53 References
[SH1994] F.S. Samaria and A.C. Harter, Parameterization of a stochastic model of human face identification, Proc. Second IEEE Workshop on Applications of Computer Vision, 1994.
[VT2003] M.A.O. Vasilescu and D. Terzopoulos, Multilinear subspace analysis of image ensembles, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2003.
