TENSOR APPROXIMATION TOOLS FREE OF THE CURSE OF DIMENSIONALITY


1 TENSOR APPROXIMATION TOOLS FREE OF THE CURSE OF DIMENSIONALITY
Eugene Tyrtyshnikov, Institute of Numerical Mathematics, Russian Academy of Sciences (joint work with Ivan Oseledets)

2 WHAT ARE TENSORS? Tensors = d-dimensional arrays: A = [a_{ij...k}], i ∈ I, j ∈ J, ..., k ∈ K. Tensor A has: dimensionality (order) d = number of indices (modes, axes, directions, ways); size n_1 × ... × n_d (number of nodes along each axis).

3 WHAT IS THE PROBLEM? NUMBER OF TENSOR ELEMENTS = n^d, WHICH GROWS EXPONENTIALLY IN d. WATER AND THE UNIVERSE: an H_2O molecule has 18 electrons, and each electron has 3 coordinates, so we have 18 · 3 = 54 axes. If we take 32 nodes on each axis, we obtain 32^54 ≈ 10^81 points, which is close to the number of atoms in the universe. CURSE OF DIMENSIONALITY.

4 WE SURVIVE WITH: COMPACT (LOW-PARAMETRIC) REPRESENTATIONS FOR TENSORS, and METHODS FOR COMPUTATIONS IN THESE COMPACT REPRESENTATIONS.

5 TUCKER DECOMPOSITION
a(i_1, ..., i_d) = Σ_{α_1=1}^{r_1} ... Σ_{α_d=1}^{r_d} g(α_1, ..., α_d) q_1(i_1, α_1) ... q_d(i_d, α_d)
L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika, V. 31 (1966).
COMPONENTS: 2D arrays q_1, ..., q_d with dnr entries; a d-dimensional core array g(α_1, ..., α_d) with r^d entries. CURSE OF DIMENSIONALITY REMAINS.

6 CANONICAL DECOMPOSITION (PARAFAC, CANDECOMP)
a(i_1, ..., i_d) = Σ_{α=1}^{R} u_1(i_1, α) ... u_d(i_d, α)
Number of defining parameters is dRn.
DRAWBACKS: INSTABILITY (cf. de Silva, Lim). Take linearly independent vectors x_1, ..., x_d, y_1, ..., y_d and set
a = Σ_{t=1}^{d} z_1^t ⊗ ... ⊗ z_d^t,   z_k^t = x_k for k ≠ t,   z_k^t = y_k for k = t.
Then
a = (1/ε) (x_1 + εy_1) ⊗ ... ⊗ (x_d + εy_d) − (1/ε) x_1 ⊗ ... ⊗ x_d + O(ε),
so a tensor of canonical rank d is approximated arbitrarily well by tensors of rank 2.
EVENTUAL LACK OF ROBUST ALGORITHMS.
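A small numerical check of this instability example, as a sketch (d = 3 and random vectors, not taken from the slides): the rank-2 surrogate reproduces the canonical-rank-d tensor with error O(ε).

```python
import numpy as np

# Sketch: rank-2 approximation of the canonical-rank-d tensor from the slide.
rng = np.random.default_rng(0)
d, n = 3, 4
X = [rng.standard_normal(n) for _ in range(d)]
Y = [rng.standard_normal(n) for _ in range(d)]

def outer(vectors):
    t = vectors[0]
    for v in vectors[1:]:
        t = np.multiply.outer(t, v)
    return t

# a = sum_t z_1^t x ... x z_d^t with z_k^t = y_k if k == t else x_k
a = sum(outer([Y[k] if k == t else X[k] for k in range(d)]) for t in range(d))

for eps in (1e-1, 1e-2, 1e-3):
    b = (outer([x + eps * y for x, y in zip(X, Y)]) - outer(X)) / eps
    print(eps, np.linalg.norm(a - b))   # error decreases like O(eps)
```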

7 TUCKER DECOMPOSITION
a(i_1, ..., i_d) = Σ_{α_1=1}^{r_1} ... Σ_{α_d=1}^{r_d} g(α_1, ..., α_d) q_1(i_1, α_1) ... q_d(i_d, α_d)

8 CANONICAL DECOMPOSITION (PARAFAC, CANDECOMP)
a(i_1, ..., i_d) = Σ_{α=1}^{R} u_1(i_1, α) ... u_d(i_d, α)

9 TENSOR-TRAIN DECOMPOSITION
a(i_1, ..., i_d) = Σ_{α_1, ..., α_{d−1}} g_1(i_1, α_1) g_2(α_1, i_2, α_2) ... g_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) g_d(α_{d−1}, i_d)

10 TENSORS AND MATRICES
Let A = [a_{ijklm}]. Take a pair of mutually complementary long indices: (ij) and (klm), or (kl) and (ijm), and so on. Tensor A gives rise to unfolding matrices B_1 = [b_{(ij),(klm)}], B_2 = [b_{(kl),(ijm)}], ...
By definition, b_{(ij),(klm)} = b_{(kl),(ijm)} = ... = a_{ijklm}.
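A minimal sketch of how such unfoldings can be formed with NumPy for a hypothetical 5-way tensor (sizes chosen only for illustration):

```python
import numpy as np

# A hypothetical 5-way tensor A = [a_{ijklm}] with axes i, j, k, l, m.
A = np.random.rand(3, 4, 2, 5, 6)

# B1 = [b_{(ij),(klm)}]: rows indexed by the long index (ij), columns by (klm).
B1 = A.reshape(3 * 4, 2 * 5 * 6)

# B2 = [b_{(kl),(ijm)}]: bring k, l to the front, then reshape.
B2 = np.transpose(A, (2, 3, 0, 1, 4)).reshape(2 * 5, 3 * 4 * 6)

print(B1.shape, B2.shape)   # (12, 60) (10, 72)
```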

11 DIMENSIONALITY CAN BE DECREASED
a(i_1, ..., i_d) = a(i_1, ..., i_k; i_{k+1}, ..., i_d) = Σ_{s=1}^{r} u(i_1, ..., i_k; s) v(i_{k+1}, ..., i_d; s)
Dimension d reduces to dimensions k + 1 and d − k + 1. Proceed by recursion; a binary tree arises.
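One step of this recursion can be realized by a truncated SVD of the chosen unfolding; the following is a sketch under that assumption (the function name and tolerance are illustrative):

```python
import numpy as np

def split(a, k, eps=1e-12):
    """Factor a d-way tensor through a truncated SVD of the unfolding
    a(i_1..i_k; i_{k+1}..i_d): returns a (k+1)-way tensor u and a
    (d-k+1)-way tensor v sharing one rank index s."""
    shape = a.shape
    Ak = a.reshape(int(np.prod(shape[:k])), -1)
    U, S, Vt = np.linalg.svd(Ak, full_matrices=False)
    r = max(1, int(np.sum(S > eps * S[0])))          # numerical rank
    u = (U[:, :r] * S[:r]).reshape(*shape[:k], r)    # u(i_1,...,i_k; s)
    v = Vt[:r].T.reshape(*shape[k:], r)              # v(i_{k+1},...,i_d; s)
    return u, v

# Example: a 4-way tensor of exact rank 1 splits with r = 1.
a = np.einsum('i,j,k,l->ijkl', *[np.random.rand(3) for _ in range(4)])
u, v = split(a, 2)
print(u.shape, v.shape)                              # (3, 3, 1) (3, 3, 1)
```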

12 TUCKER VIA RECURSION
[Tree diagram: recursive splitting of the index set with auxiliary indices α_1, ..., α_5.]
a(i_1, i_2, i_3, i_4, i_5) = Σ_{α_1, α_2, α_3, α_4, α_5} g(α_1, α_2, α_3, α_4, α_5) q_1(i_1, α_1) q_2(i_2, α_2) q_3(i_3, α_3) q_4(i_4, α_4) q_5(i_5, α_5)

13 BINARY TREE IMPLIES: any auxiliary index belongs to exactly two leaf tensors; the tensor is the sum over all auxiliary indices of the product of elements of the leaf tensors.
HOW TO AVOID r^d PARAMETERS: let any leaf tensor have at most one spatial index, and let any leaf tensor have at most two (or three) auxiliary indices.

14 TREE WITHOUT TUCKER
[Tree diagram: a binary tree whose leaves carry one spatial index and at most two auxiliary indices.]
TENSOR-TRAIN DECOMPOSITION
a(i_1, i_2, i_3, i_4, i_5) = Σ_{α_1, α_2, α_3, α_4} g_1(i_1, α_1) g_2(α_1, i_3, α_3) g_3(α_3, i_5, α_4) g_4(α_4, i_4, α_2) g_5(α_2, i_2)

15 HOW MANY PARAMETERS
NUMBER OF TT PARAMETERS = 2nr + (d − 2)nr^2
EXTENDED TT DECOMPOSITION
[Tree diagram for the extended TT decomposition with third-order transfer cores.]
NUMBER OF EXTENDED TT PARAMETERS = dnr + (d − 2)r^3

16 TREE IS NOT NEEDED! EVERYTHING IS DEFINED BY A PERMUTATION OF THE SPATIAL INDICES.
TENSOR-TRAIN DECOMPOSITION
a(i_1, i_2, i_3, i_4, i_5) = Σ_{β_1, β_2, β_3, β_4} g_1(i_σ(1), β_1) g_2(β_1, i_σ(2), β_2) g_3(β_2, i_σ(3), β_3) g_4(β_3, i_σ(4), β_4) g_5(β_4, i_σ(5))
TT = Tree Tucker, yet neither a tree nor Tucker: a TENSOR TRAIN.

17 MINIMAL TT DECOMPOSITION
Let 1 ≤ β_k ≤ r_k. What are the minimal values of the compression ranks r_k?
r_k ≥ rank A_k^σ,   A_k^σ = [ a^σ(i_σ(1), ..., i_σ(k); i_σ(k+1), ..., i_σ(d)) ],
where a^σ(i_σ(1), ..., i_σ(k); i_σ(k+1), ..., i_σ(d)) = a(i_1, ..., i_d).

18 GENERAL PROPERTIES
THEOREM 1. Assume that a tensor a(i_1, ..., i_d) possesses a canonical decomposition with R terms. Then a(i_1, ..., i_d) admits a TT decomposition with ranks R or less.
THEOREM 2. Assume that for every small ε > 0 the tensor a(i_1, ..., i_d) admits an ε-perturbation possessing a canonical decomposition with R terms. Then a(i_1, ..., i_d) itself admits a TT decomposition with ranks R or less.

19 FROM CANONICAL TO TENSOR TRAIN
a(i_1, ..., i_d) = Σ_{s=1}^{R} u_1(i_1, s) ... u_d(i_d, s)
 = Σ_{α_1, ..., α_{d−1}} u_1(i_1, α_1) δ(α_1, α_2) u_2(i_2, α_2) ... δ(α_{d−2}, α_{d−1}) u_{d−1}(i_{d−1}, α_{d−1}) u_d(i_d, α_{d−1})
The conversion is FREE: no computation is needed.
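A sketch of this "free" conversion in NumPy (the function name and the verification at the end are illustrative, not from the slides): the middle TT cores simply carry the factor matrices on their diagonals.

```python
import numpy as np

def cp_to_tt(factors):
    """Convert canonical factors U_k (each n_k x R) into TT cores of shape
    (r_{k-1}, n_k, r_k), all ranks equal to R; middle cores carry the
    delta(alpha_{k-1}, alpha_k) structure from the slide."""
    d = len(factors)
    R = factors[0].shape[1]
    cores = [factors[0][None, :, :]]                     # g_1(i_1, alpha_1)
    for k in range(1, d - 1):
        n = factors[k].shape[0]
        g = np.zeros((R, n, R))
        g[np.arange(R), :, np.arange(R)] = factors[k].T  # delta(a, b) u_k(i, b)
        cores.append(g)
    cores.append(factors[-1].T[:, :, None])              # g_d(alpha_{d-1}, i_d)
    return cores

# Quick check for d = 4: contract the train and compare with the CP sum.
U = [np.random.rand(5, 3) for _ in range(4)]
cores = cp_to_tt(U)
full_tt = np.einsum('aib,bjc,ckd,dle->ijkl', *cores)
full_cp = np.einsum('ir,jr,kr,lr->ijkl', *U)
print(np.linalg.norm(full_tt - full_cp))                 # ~ 1e-15
```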

20 EFFECTIVE RANK OF A TENSOR
ERank(a) = lim sup_{ε → +0}  min { RANK(b) : ‖b − a‖ ≤ ε, b ∈ C(n_1, ..., n_d) }
F(n_1, ..., n_d): all tensors of size n_1 × ... × n_d with entries from F. Let a ∈ F(n_1, ..., n_d) ⊂ C(n_1, ..., n_d). Then the canonical rank over F depends on F, while the effective rank does not. The notion is close to the border-rank concept (Bini, Capovani), which still depends on F.
THEOREM 2 (reformulated). Let a ∈ F(n_1, ..., n_d). Then there exists a TT decomposition of this tensor with ranks r ≤ ERank(a) and with the entries of all its cores belonging to F.

21 EXAMPLE 1
The d-dimensional tensor in matrix form
A = Λ ⊗ I ⊗ ... ⊗ I + I ⊗ Λ ⊗ ... ⊗ I + ... + I ⊗ ... ⊗ I ⊗ Λ.
Set P(h) = ⊗_{s=1}^{d} (I + hΛ) = I + hA + O(h^2). Then
A = (1/h) P(h) − (1/h) P(0) + O(h),
hence ERank(A) = 2.
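A small sketch (not from the slides) that verifies this two-term finite-difference argument numerically for small d and n:

```python
import numpy as np

# The Kronecker sum A = sum_k I x ... x Lambda x ... x I is reproduced to O(h)
# by (P(h) - P(0)) / h with P(h) = (I + h*Lambda) x ... x (I + h*Lambda).
d, n = 4, 3
Lam = np.diag(np.random.rand(n))          # any Lambda; diagonal for simplicity
I = np.eye(n)

def kron_all(mats):
    out = mats[0]
    for M in mats[1:]:
        out = np.kron(out, M)
    return out

A = sum(kron_all([Lam if k == j else I for k in range(d)]) for j in range(d))
for h in (1e-1, 1e-2, 1e-3):
    P = kron_all([I + h * Lam] * d)
    print(h, np.linalg.norm(A - (P - kron_all([I] * d)) / h))   # O(h) error
```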

22 EXAMPLE 2
A real-valued tensor F generated by the function f(x_1, ..., x_d) = sin(x_1 + ... + x_d) on some 1D grids for x_1, ..., x_d.
Beylkin et al.: the canonical rank of F over R does not exceed d (and is likely to be exactly d). However,
sin x = (exp(ix) − exp(−ix)) / (2i),
so ERank(F) = 2.
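A sketch (not from the slides) checking numerically that every unfolding of the sine tensor has rank 2, which is what drives the small TT ranks; it follows from sin(s + t) = sin s cos t + cos s sin t.

```python
import numpy as np

# Build F(i_1,...,i_d) = sin(x_1(i_1) + ... + x_d(i_d)) on small grids and
# inspect the ranks of its unfolding matrices.
d, n = 6, 5
grids = [np.linspace(0, 1, n) for _ in range(d)]
S = grids[0]
for g in grids[1:]:
    S = S[..., None] + g                   # d-way array of x_1 + ... + x_d
F = np.sin(S)
print([np.linalg.matrix_rank(F.reshape(n ** k, -1), tol=1e-10)
       for k in range(1, d)])              # -> [2, 2, 2, 2, 2]
```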

23 EXAMPLE 3
The d-dimensional tensor A arising from the discretization of the operator
A = Σ_{1 ≤ i ≤ j ≤ d} a_ij ∂^2 / (∂x_i ∂x_j)
on a tensor grid for the variables x_1, ..., x_d. Its canonical rank is about d^2/2. However,
ERank(A) ≤ (3/2) d + 1
(N. Zamarashkin, I. Oseledets, E. Tyrtyshnikov).

24 TENSOR TRAIN DECOMPOSITION
a(i_1, ..., i_d) = Σ_{α_0, ..., α_d} g_1(α_0, i_1, α_1) g_2(α_1, i_2, α_2) ... g_d(α_{d−1}, i_d, α_d)
MATRIX FORM: a(i_1, ..., i_d) = G_1^{i_1} G_2^{i_2} ... G_d^{i_d}
MINIMAL TT COMPRESSION RANKS: r_k = rank A_k,   A_k = [ a_{(i_1 ... i_k),(i_{k+1} ... i_d)} ],   0 ≤ k ≤ d
size(G_k^{i_k}) = r_{k−1} × r_k
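The matrix form translates directly into code: an entry of the tensor is a product of d small matrix slices. A minimal sketch (core layout and names are assumptions):

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate a(i_1,...,i_d) = G_1^{i_1} G_2^{i_2} ... G_d^{i_d} for TT cores
    stored as arrays of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1."""
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]                 # multiply by the i_k-th matrix slice
    return v[0, 0]

# Example with random cores of TT-ranks (1, 2, 3, 2, 1).
shapes = [(1, 4, 2), (2, 4, 3), (3, 4, 2), (2, 4, 1)]
cores = [np.random.rand(*s) for s in shapes]
print(tt_entry(cores, (0, 1, 2, 3)))
```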

25 THE KEY TO EVERYTHING
PROBLEM OF RECOMPRESSION: given a tensor train with large ranks, find in its ε-vicinity a tensor train with smaller compression ranks.
METHOD OF TT RECOMPRESSION (I. V. Oseledets): the number of operations is linear in the dimensionality d and the mode size n, and the result has guaranteed approximation accuracy.

26 METHOD OF TENSOR TRAIN RECOMPRESSION
Minimal TT compression ranks = ranks of the unfolding matrices A_k. The matrices A_k are of size n^k × n^{d−k}, but they never appear as full arrays of n^d elements. Nevertheless, the SVDs of the A_k are constructed, with the orthogonal (unitary) factors kept in a compact factorized form. When neglecting the smallest singular values, we obtain GUARANTEED ACCURACY.
To show the idea, consider a TT decomposition
a(i_1, i_2, i_3) = Σ_{α_1, α_2} g_1(i_1, α_1) g_2(α_1, i_2, α_2) g_3(α_2, i_3).

27 TENSOR TRAIN RECOMPRESSION: RIGHT TO LEFT by QR
a(i_1, i_2, i_3) = Σ_{α_1, α_2} g_1(i_1, α_1) g_2(α_1, i_2, α_2) g_3(α_2; i_3)
 = Σ_{α_1, α_2} g_1(i_1, α_1) ĝ_2(α_1, i_2; α_2) q_3(α_2; i_3)
 = Σ_{α_1, α_2} ĝ_1(i_1; α_1) q_2(α_1; i_2, α_2) q_3(α_2; i_3)
The matrices q_2(α_1; i_2, α_2) and q_3(α_2; i_3) obtain orthonormal rows. Two QR factorizations are used:
g_3(α_2; i_3) = Σ_{α'_2} r_3(α_2; α'_2) q_3(α'_2; i_3),     ĝ_2(α_1, i_2; α_2) = Σ_{α'_2} g_2(α_1, i_2; α'_2) r_3(α'_2, α_2)     (QR)
ĝ_2(α_1; i_2, α_2) = Σ_{α'_1} r_2(α_1; α'_1) q_2(α'_1; i_2, α_2),     ĝ_1(i_1; α_1) = Σ_{α'_1} g_1(i_1; α'_1) r_2(α'_1; α_1)     (QR)

28 TENSOR TRAIN RECOMPRESSION: LEFT TO RIGHT by SVD
a(i_1, i_2, i_3) = Σ_{α_1, α_2} ĝ_1(i_1; α_1) q_2(α_1; i_2, α_2) q_3(α_2, i_3)
 = Σ_{α_1, α_2} z_1(i_1; α_1) ĝ_2(α_1; i_2, α_2) q_3(α_2, i_3)
 = Σ_{α_1, α_2} z_1(i_1; α_1) z_2(α_1; i_2, α_2) ĝ_3(α_2, i_3)
The matrices z_1(i_1; α_1) and z_2(α_1, i_2; α_2) obtain orthonormal columns.

29 LEMMA ON ORTHONORMALITY
Let k ≤ l and let the matrices q_k(α_{k−1}; i_k, α_k), ..., q_l(α_{l−1}; i_l, α_l) have orthonormal rows. Then the matrix
Q_k(α_{k−1}; i) ≡ Q_k(α_{k−1}; i_k, ..., i_l, α_l) = Σ_{α_k, ..., α_{l−1}} q_k(α_{k−1}; i_k, α_k) ... q_l(α_{l−1}; i_l, α_l)
has orthonormal rows as well.
PROOF BY INDUCTION. Write Q_k(α_{k−1}; i_k, i) = Σ_{α_k} q_k(α_{k−1}; i_k, α_k) Q_{k+1}(α_k; i). Then
Σ_{i_k, i} Q_k(α; i_k, i) Q_k(β; i_k, i) = Σ_{i_k, i} Σ_{µ, ν} q_k(α; i_k, µ) Q_{k+1}(µ; i) q_k(β; i_k, ν) Q_{k+1}(ν; i)
 = Σ_{i_k} Σ_{µ, ν} q_k(α; i_k, µ) q_k(β; i_k, ν) δ(µ, ν) = Σ_{i_k, α_k} q_k(α; i_k, α_k) q_k(β; i_k, α_k) = δ(α, β).

30 TENSOR TRAIN RECOMPRESSION
a(i_1, i_2, i_3) = Σ_{α_1, α_2} ĝ_1(i_1, α_1) q_2(α_1, i_2, α_2) q_3(α_2, i_3)
 = Σ_{α_1, α_2} z_1(i_1, α_1) ĝ_2(α_1, i_2, α_2) q_3(α_2, i_3)
 = Σ_{α_1, α_2} z_1(i_1, α_1) z_2(α_1, i_2, α_2) ĝ_3(α_2, i_3)
rank A_1 = rank [ ĝ_1(α_0, i_1; α_1) ],   rank A_2 = rank [ ĝ_2(α_1, i_2; α_2) ],   rank A_3 = rank [ ĝ_3(α_2, i_3; α_3) ]
The cost of computing the compression ranks is linear in d; truncation is performed in the SVDs of small matrices.
NUMBER OF OPERATIONS = O(dnr^3)
GUARANTEED ACCURACY = √d · ε (in the Frobenius norm)
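The following is a minimal sketch of such a recompression (rounding) routine, assuming cores of shape (r_{k−1}, n_k, r_k) with r_0 = r_d = 1 and a simple relative truncation threshold; it is not the reference implementation, only an illustration of the QR-then-SVD sweeps described above.

```python
import numpy as np

def tt_round(cores, eps=1e-10):
    """TT recompression: a right-to-left QR sweep that orthogonalizes the
    trailing cores, then a left-to-right sweep that truncates small singular
    values and carries the remainder to the next core."""
    d = len(cores)
    cores = [G.copy() for G in cores]

    # Right-to-left: make each trailing core's unfolding have orthonormal rows.
    for k in range(d - 1, 0, -1):
        r0, n, r1 = cores[k].shape
        Q, R = np.linalg.qr(cores[k].reshape(r0, n * r1).T)
        cores[k] = Q.T.reshape(-1, n, r1)                 # orthonormal rows
        cores[k - 1] = np.tensordot(cores[k - 1], R.T, axes=(2, 0))

    # Left-to-right: truncate by SVD, absorbing the factor into the next core.
    for k in range(d - 1):
        r0, n, r1 = cores[k].shape
        U, S, Vt = np.linalg.svd(cores[k].reshape(r0 * n, r1),
                                 full_matrices=False)
        r = max(1, int(np.sum(S > eps * S[0])))           # simple threshold
        cores[k] = U[:, :r].reshape(r0, n, r)
        cores[k + 1] = np.tensordot(S[:r, None] * Vt[:r], cores[k + 1],
                                    axes=(1, 0))
    return cores
```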

31 TT APPROXIMATION FOR THE LAPLACIAN
[Table: for several values of d, the TT recompression time (sec), the canonical rank, and the compression rank; the numerical values were lost in extraction.]
1D grids are of size 32; the tensor has modes of size n = 1024.

32 WHAT CAN WE DO WITH TENSOR TRAINS?
a(i_1, ..., i_d) = Σ_{α_1, ..., α_{d−1}} g_1(i_1, α_1) g_2(α_1, i_2, α_2) ... g_d(α_{d−1}, i_d)
RECOMPRESSION: given a tensor train with TT-ranks r, we can approximate it by another tensor train with guaranteed accuracy using O(dnr^3) operations.
QUASI-OPTIMALITY OF RECOMPRESSION: ERROR ≤ √(d − 1) × BEST APPROXIMATION ERROR WITH THE SAME TT-RANKS.
EFFICIENT APPROXIMATE MATRIX OPERATIONS.

33 CANONICAL VERSUS TENSOR-TRAIN
                             Canonical             Tensor-Train
Number of parameters         O(dnR)                O(dnr + (d − 2)r^3)
Matrix-by-vector             O(dn^2 R^2)           O(dn^2 r^2 + dr^6)
Addition                     O(dnR)                O(dnr)
Recompression                O(dnR^2 + d^3 R^3)    O(dnr^2 + dr^4)
Tensor-vector contraction    O(dnR)                O(dnr + dr^3)

34 TENSOR-VECTOR CONTRACTION
γ = Σ_{i_1, ..., i_d} a(i_1, ..., i_d) x_1(i_1) ... x_d(i_d)
ALGORITHM: compute the matrices Z_k = Σ_{i_k} g_k(α_{k−1}, i_k, α_k) x_k(i_k), then multiply them: γ = Z_1 Z_2 ... Z_d.
NUMBER OF OPERATIONS = O(dnr^2)
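A minimal sketch of this contraction in NumPy (core layout assumed as before):

```python
import numpy as np

def tt_contract(cores, vectors):
    """gamma = sum a(i_1,...,i_d) x_1(i_1) ... x_d(i_d) for a TT tensor with
    cores of shape (r_{k-1}, n_k, r_k); the cost is O(d n r^2)."""
    v = np.ones((1, 1))
    for G, x in zip(cores, vectors):
        Z = np.einsum('anb,n->ab', G, x)   # Z_k = sum_{i_k} g_k(., i_k, .) x_k(i_k)
        v = v @ Z                          # accumulate the product Z_1 Z_2 ...
    return v[0, 0]
```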

35 RECOVER A d-DIMENSIONAL TENSOR FROM A SMALL PORTION OF ITS ELEMENTS
Given a procedure that computes any entry a(i_1, ..., i_d), we need to choose a suitable set of elements and use them to construct a TT approximation of this tensor. A TT decomposition with maximal compression rank r can be constructed from some O(dnr^2) elements.

36 HOW THIS PROBLEM IS SOLVED FOR MATRICES
Let A be close to a matrix of rank r: σ_{r+1}(A) ≤ ε. Then there exists a cross of r columns C and r rows R such that
|(A − C G^{−1} R)_{ij}| ≤ (r + 1) ε,
where G is the r × r matrix on the intersection of C and R. Take G of maximal volume among all r × r submatrices of A.
S. A. Goreinov, E. E. Tyrtyshnikov, The maximal-volume concept in approximation by low-rank matrices, Contemporary Mathematics, Vol. 208 (2001).
S. A. Goreinov, E. E. Tyrtyshnikov, N. L. Zamarashkin, A theory of pseudo-skeleton approximations, Linear Algebra Appl. 261 (1997), 1-21. Doklady RAS (1995).

37 GOOD INSTEAD OF BEST: PSEUDO-MAX-VOLUME
Given A of size n × r, find a row permutation that moves a good submatrix into the upper r × r block. Since the volume does not change under right-side multiplications, assume that the first r rows form the identity:
A = [ I ;  a_{r+1,1} ... a_{r+1,r} ;  ... ;  a_{n,1} ... a_{n,r} ]
NECESSARY FOR MAX-VOL: |a_ij| ≤ 1 for r + 1 ≤ i ≤ n, 1 ≤ j ≤ r. Let this define a good submatrix. Then the algorithm is: if |a_ij| ≥ 1 + δ, swap rows i and j; restore the identity in the first r rows by a right-side multiplication; check the new a_ij; quit if all are less than 1 + δ, otherwise repeat.
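A minimal sketch of this swap rule (the function name, the tolerance, and the assumption that the initial top block is nonsingular are all illustrative choices, not the reference code):

```python
import numpy as np

def maxvol(A, delta=1e-2, max_iter=200):
    """Pseudo-max-volume row selection for an n x r matrix A (n >= r):
    repeatedly swap a large below-block entry into the top block until all
    entries outside the block are below 1 + delta in absolute value."""
    n, r = A.shape
    idx = np.arange(n)
    B = A @ np.linalg.inv(A[:r])            # top block becomes the identity
    for _ in range(max_iter):
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= 1.0 + delta:     # all entries small: block is good
            break
        idx[[i, j]], B[[i, j]] = idx[[j, i]], B[[j, i]]   # swap rows i and j
        B = B @ np.linalg.inv(B[:r])        # restore identity in the top block
    return np.sort(idx[:r])                 # row indices of a good submatrix
```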

38 MATRIX CROSS ALGORITHM
Assume we are given some initial column indices j_1, ..., j_r. Find maximal-volume row indices i_1, ..., i_r in these columns. Then find maximal-volume column indices in the rows i_1, ..., i_r. Proceed choosing columns and rows until the skeleton cross approximations stabilize.
E. E. Tyrtyshnikov, Incomplete cross approximation in the mosaic-skeleton method, Computing 64, no. 4 (2000).
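A sketch of this alternating scheme, built on the maxvol routine above; the interface (an entry-sampling callback), the fixed sweep count, and the Hilbert-matrix test are assumptions made for illustration.

```python
import numpy as np

def matrix_cross(get, n, m, r, n_iter=5):
    """Alternating cross (skeleton) approximation A ~ C G^{-1} R built from
    sampled rows and columns only; `get(I, J)` must return the submatrix
    A[I, :][:, J]."""
    J = np.arange(r)                          # initial column indices
    for _ in range(n_iter):
        C = get(np.arange(n), J)              # n x r block of chosen columns
        I = maxvol(C)                         # good rows inside these columns
        Rb = get(I, np.arange(m))             # r x m block of chosen rows
        J = maxvol(Rb.T)                      # good columns inside these rows
    C, Rb, G = get(np.arange(n), J), get(I, np.arange(m)), get(I, J)
    return C @ np.linalg.solve(G, Rb)         # the skeleton C G^{-1} R

# Example: approximate a Hilbert-type matrix from O((n + m) r) sampled entries.
n, m, r = 300, 200, 8
get = lambda I, J: 1.0 / (np.add.outer(np.asarray(I), np.asarray(J)) + 1.0)
A = get(np.arange(n), np.arange(m))
print(np.linalg.norm(A - matrix_cross(get, n, m, r)) / np.linalg.norm(A))
```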

39 TENSOR-TRAIN CROSS INTERPOLATION
Given a(i_1, i_2, i_3, i_4), consider the unfoldings and r-column sets:
A_1 = [a(i_1; i_2, i_3, i_4)],   J_1 = { i_2^{(β_1)} i_3^{(β_1)} i_4^{(β_1)} }
A_2 = [a(i_1, i_2; i_3, i_4)],   J_2 = { i_3^{(β_2)} i_4^{(β_2)} }
A_3 = [a(i_1, i_2, i_3; i_4)],   J_3 = { i_4^{(β_3)} }
Successively choose good rows:
I_1 = { i_1^{(α_1)} } in a(i_1; i_2, i_3, i_4):   a = Σ_{α_1} g_1(i_1; α_1) a_2(α_1; i_2, i_3, i_4)
I_2 = { i_1^{(α_2)} i_2^{(α_2)} } in a_2(α_1, i_2; i_3, i_4):   a_2 = Σ_{α_2} g_2(α_1, i_2; α_2) a_3(α_2, i_3; i_4)
I_3 = { i_1^{(α_3)} i_2^{(α_3)} i_3^{(α_3)} } in a_3(α_2, i_3; i_4):   a_3 = Σ_{α_3} g_3(α_2, i_3; α_3) g_4(α_3; i_4)
Finally,
a = Σ_{α_1, α_2, α_3} g_1(i_1, α_1) g_2(α_1, i_2, α_2) g_3(α_2, i_3, α_3) g_4(α_3, i_4).

40 TT-CROSS INTERPOLATION OF A TENSOR
A tensor A of size n_1 × n_2 × ... × n_d with compression ranks r_k = rank A_k, A_k = A(i_1 i_2 ... i_k; i_{k+1} ... i_d), is recovered from the elements of the TT-cross
C_k(α_{k−1}, i_k, β_k) = A(i_1^{(α_{k−1})}, i_2^{(α_{k−1})}, ..., i_{k−1}^{(α_{k−1})}, i_k, j_{k+1}^{(β_k)}, ..., j_d^{(β_k)}).
The TT-cross is defined by the index sets
I_k = { i_1^{(α_k)} ... i_k^{(α_k)} },  1 ≤ α_k ≤ r_k,     J_k = { j_{k+1}^{(β_k)} ... j_d^{(β_k)} },  1 ≤ β_k ≤ r_k,
with a nestedness property for the α-sets. We require nonsingularity of the r_k × r_k matrices
Â_k(α_k, β_k) = A(i_1^{(α_k)}, i_2^{(α_k)}, ..., i_k^{(α_k)}; j_{k+1}^{(β_k)}, ..., j_d^{(β_k)}),   α_k, β_k = 1, ..., r_k.

41 FORMULA FOR TT-INTERPOLATION
A(i_1, i_2, ..., i_d) = Σ_{α_1, ..., α_{d−1}} Ĉ_1(α_0, i_1, α_1) Ĉ_2(α_1, i_2, α_2) ... Ĉ_d(α_{d−1}, i_d, α_d)
Ĉ_k(α_{k−1}, i_k, α_k) = Σ_{α'_k} C_k(α_{k−1}, i_k, α'_k) Â_k^{−1}(α'_k, α_k),   k = 1, ..., d,   Â_d = I.

42 TENSOR-TRAIN CROSS ALGORITHM
Assume we are given r_k initial column indices j_{k+1}^{(β_k)}, ..., j_d^{(β_k)} in the unfolding matrices A_k. Find r_k maximal-volume rows in the submatrices of A_k of the form a(i_1^{(α_{k−1})}, ..., i_{k−1}^{(α_{k−1})}, i_k; j_{k+1}^{(β_k)}, ..., j_d^{(β_k)}). Use the row indices obtained and do the same from right to left to find new column indices. Proceed with these sweeps from left to right and from right to left. Stop when the tensor trains stabilize.

43 EXAMPLE OF TT-CROSS APPROXIMATION: HILBERT TENSOR
a(i_1, i_2, ..., i_d) = 1 / (i_1 + i_2 + ... + i_d),   d = 60, n = 32
[Table: for several values of r_max, the computation time, the number of iterations, and the relative accuracy (reaching the order of 1e-09); the numerical values were lost in extraction.]

44 COMPUTATION OF d-DIMENSIONAL INTEGRALS: example 1
I(d) = ∫_{[0,1]^d} sin(x_1 + x_2 + ... + x_d) dx_1 dx_2 ... dx_d = Im ∫_{[0,1]^d} e^{i(x_1 + x_2 + ... + x_d)} dx_1 dx_2 ... dx_d = Im( ((e^i − 1)/i)^d )
Use the Chebyshev (Clenshaw-Curtis) quadrature with n = 11 nodes. All n^d values are NEVER COMPUTED! Instead, we find a TT cross and construct a TT approximation for this tensor.
[Table: for several values of d, the computed I(d), the relative accuracy, and the time; the numerical values were lost in extraction.]
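A sanity check of the closed form used as the reference value above (a sketch, not the TT-cross computation itself): each 1D factor integrates to (e^i − 1)/i, so the d-dimensional integral is the imaginary part of the d-th power.

```python
import numpy as np

def sin_integral(d):
    """Exact value of the integral of sin(x_1 + ... + x_d) over [0,1]^d."""
    return (((np.exp(1j) - 1.0) / 1j) ** d).imag

for d in (10, 100, 500):
    print(d, sin_integral(d))
```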

45 COMPUTATION OF d-DIMENSIONAL INTEGRALS: example 2
I(d) = ∫_{[0,1]^d} √(x_1^2 + x_2^2 + ... + x_d^2) dx_1 dx_2 ... dx_d,   d = 100
Chebyshev quadrature with n = 41 nodes plus a TT-cross of size r_max = 32 gives a reference solution. For comparison, take n = 11 nodes:
[Table: relative accuracy and time for several values of r_max; the numerical values were lost in extraction.]

46 INCREASE DIMENSIONALITY (TENSORS INSTEAD OF MATRICES)
A matrix is a 2-way array. A d-level matrix is naturally viewed as a 2d-way array:
A(i, j) = A(i_1, i_2, ..., i_d; j_1, j_2, ..., j_d),   i ≡ (i_1 ... i_d),   j ≡ (j_1 ... j_d).
It is important to consider the related reshaped array
B(i_1 j_1, ..., i_d j_d) = A(i_1, i_2, ..., i_d; j_1, j_2, ..., j_d).
The matrix A is represented by the tensor B.

47 MINIMAL TENSOR TRAINS
a(i_1 ... i_d; j_1 ... j_d) = Σ_{1 ≤ α_k ≤ r_k} g_1(i_1 j_1, α_1) g_2(α_1, i_2 j_2, α_2) ... g_{d−1}(α_{d−2}, i_{d−1} j_{d−1}, α_{d−1}) g_d(α_{d−1}, i_d j_d)
The minimal possible values of the compression ranks r_k equal the ranks of specific unfolding matrices:
r_k = rank A_k,   A_k = [ A(i_1 j_1, ..., i_k j_k; i_{k+1} j_{k+1}, ..., i_d j_d) ].
If all r_k = 1, then A = G_1 ⊗ ... ⊗ G_d. In general,
A = Σ_{α_1, α_2, ...} G_{1,α_1} ⊗ G_{2,α_1 α_2} ⊗ G_{3,α_2 α_3} ⊗ ...

48 NO CURSE OF DIMENSIONALITY
Let 1 ≤ i_k, j_k ≤ n and r_k = r. Then the number of representation parameters is dn^2 r^2: the dependence on d is linear!
SO LET US MAKE d AS LARGE AS POSSIBLE BY ADDING FICTITIOUS AXES. Assume we had d_0 levels. If n = 2^{d_1}, then set d = d_0 d_1. Then
memory = 4dr^2,   d = log_2(size(A)),
which is LOGARITHMIC IN THE SIZE OF THE MATRIX.
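A sketch (not from the slides) illustrating the "fictitious axes" idea on a vector rather than a matrix: a smooth vector of length 2^d, viewed as a d-way 2 × 2 × ... × 2 tensor, has small unfolding ranks, so its TT representation needs O(d r^2) numbers instead of 2^d.

```python
import numpy as np

d = 12
x = np.linspace(0.0, 1.0, 2 ** d)
v = np.sin(20.0 * x)                       # a smooth function sampled on the grid

T = v.reshape([2] * d)                     # quantized (binary) reshaping
ranks = [np.linalg.matrix_rank(T.reshape(2 ** k, -1), tol=1e-10)
         for k in range(1, d)]
print(ranks)                               # small ranks, here all equal to 2
```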

49 CAUCHY-TOEPLITZ EXAMPLE
A = [ 1 / (i − j + 1/2) ]
[Table: compression ranks for A and A^{−1} at several relative accuracy levels starting from about 1e-1; the numerical values were lost in extraction.]
n = 1024, d_0 = 1, d_1 = 10

50 INVERSES TO BANDED TOEPLITZ MATRICES
Let A be a banded Toeplitz matrix: A_{ij} = [a(i − j)], a_k = 0 for |k| > s, where s is the half-bandwidth.
THEOREM. Let size(A) = 2^d × 2^d and det A ≠ 0. Then r_k(A^{−1}) ≤ 4s^2 + 1, k = 1, ..., d − 1, and the estimate is sharp.
COROLLARY. The inverse of a banded Toeplitz matrix A of size 2^d × 2^d with half-bandwidth s has a TT representation with O(s^4 log_2 n) parameters. Using a Newton iteration with approximations, we obtain an inversion algorithm of complexity O(log_2 n).

51 AVERAGE COMPRESSION RANK
r = √(memory / (4d)),   since memory = 4dr^2
INVERSION OF A d_0-DIMENSIONAL LAPLACIAN BY A MODIFIED NEWTON METHOD, d_1 = 10
[Table: physical dimensionality d_0, average compression rank of A, average compression rank of the approximation to A^{−1}, and time (sec); most numerical values, as well as the size of the largest matrix, were lost in extraction. The residuals ‖AX − I‖ / ‖I‖ were 1.e-2, 6.e-3, 2.e-3, 5.e-5, 4.e-5, 4.e-5.]

52 INVERSION OF THE 10-DIMENSIONAL LAPLACIAN VIA THE INTEGRAL REPRESENTATION BY THE STENGER FORMULA
A^{−1} = ∫_0^∞ exp(−At) dt ≈ (h/τ) Σ_{k=−M}^{M} w_k exp(−(t_k/τ) A),
h = π/√M,   w_k = t_k = exp(hk),   λ_min(A/τ) ≥ 1.

53 CONCLUSIONS AND PERSPECTIVES
Tensor-train decompositions and the corresponding algorithms provide us with excellent approximation tools for vectors and matrices. A TT-Toolbox for Matlab is available.
The memory needed depends on the matrix size logarithmically. This is a terrific advantage when the compression ranks are small, which is exactly the case in many applications.
Approximate inverses can be computed in the tensor-train format, generally with complexity logarithmic in the size of the matrix.
Applications include huge-scale matrices as well as typical large-scale and even modest-scale matrices (like images).
The key to efficient tensor-train operations is the recompression algorithm, with complexity O(dnr^6), and the reliability of the SVD.
The modified Newton method with truncations and integral representations of matrix functions are viable in the tensor-train format.

54 GOOD PERSPECTIVES
Multivariate interpolation (construction of tensor trains from a small portion of all elements; tensor cross methods using the maximal-volume concept).
Fast computation of integrals in d dimensions (no Monte Carlo).
Approximate matrix operations (e.g. inversion) with complexity O(log_2 n): linear in d = linear in log_2 n.
A new direction in data compression and image processing (movies).
Statistical interpretation of tensor trains.
Applications to quantum chemistry, multi-parametric optimization, stochastic PDEs, data mining, etc.

55 MORE DETAILS and WORK IN PROGRESS
I. V. Oseledets and E. E. Tyrtyshnikov, Breaking the curse of dimensionality, or how to use SVD in many dimensions, Research Report 09-03, ICM HKBU, Hong Kong, 2009; SIAM J. Sci. Comput.
I. Oseledets, Compact matrix form of the d-dimensional tensor decomposition, SIAM J. Sci. Comput.
I. V. Oseledets, Tensors inside matrices give logarithmic complexity, SIAM J. Matrix Anal. Appl.
I. V. Oseledets, TT-Cross approximation for multidimensional arrays, Research Report 09-11, ICM HKBU, Hong Kong, 2009; Linear Algebra Appl.
I. Oseledets, E. E. Tyrtyshnikov, On a recursive decomposition of multi-dimensional tensors, Doklady RAS, vol. 427, no. 2 (2009).
I. Oseledets, On a new tensor decomposition, Doklady RAS, vol. 427, no. 3 (2009).
I. Oseledets, On approximation of matrices with logarithmic number of parameters, Doklady RAS, vol. 427, no. 4 (2009).
N. Zamarashkin, I. Oseledets, E. Tyrtyshnikov, Tensor structure of the inverse to a banded Toeplitz matrix, Doklady RAS, vol. 427, no. 5 (2009).
In preparation: efficient ranks of tensors and stability of TT approximations; TTM for image processing; TT approximations in electronic structure calculations.
