NEW TENSOR DECOMPOSITIONS IN NUMERICAL ANALYSIS AND DATA PROCESSING


1 NEW TENSOR DECOMPOSITIONS IN NUMERICAL ANALYSIS AND DATA PROCESSING Institute of Numerical Mathematics of the Russian Academy of Sciences, 11 October 2012

2 COLLABORATION MOSCOW: I.Oseledets, D.Savostyanov S.Dolgov, V.Kazeev, O.Lebedeva, A.Setukha, S.Stavtsev, D.Zheltkov S.Goreinov, N.Zamarashkin LEIPZIG: W.Hackbusch, B.Khoromskij, R.Schneider H.-J.Flad, V.Khoromskaia, M.Espig, L.Grasedyck

3 TENSORS IN THE 20TH CENTURY Used chiefly as descriptive tools: physics, differential geometry, multiplication tables in algebras; applied data management: chemometrics, sociometrics, signal/image processing, and many others

4 WHAT IS A TENSOR Tensor = d-linear form = d-dimensional array: $A = [a_{i_1 i_2 \ldots i_d}]$. A tensor A possesses: dimensionality (order) d = the number of indices (dimensions, modes, axes, directions, ways); size $n_1 \times \ldots \times n_d$ (the number of points in each dimension)

5 EXAMPLES OF PROMINENT THEORIES FOR TENSORS IN THE 20th CENTURY Kruskal's theorem (1977) on the essential uniqueness of the canonical tensor decomposition introduced by Hitchcock (1927); canonical tensor decompositions as a basis for Strassen's method of matrix multiplication with complexity less than $n^3$ (1969); interrelations between tensors (especially symmetric ones) and polynomials as a topic in algebraic geometry.

6 BEGIN WITH 2×2 MATRICES The column-by-row rule for 2×2 matrices yields 8 mults:
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{bmatrix}$$

7 DISCOVERY BY STRASSEN Only 7 mults are enough! IMPORTANT: for block 2×2 matrices these are 7 mults of blocks:
$$\alpha_1 = (a_{11} + a_{22})(b_{11} + b_{22}), \quad \alpha_2 = (a_{21} + a_{22})\, b_{11}, \quad \alpha_3 = a_{11}(b_{12} - b_{22}), \quad \alpha_4 = a_{22}(b_{21} - b_{11}),$$
$$\alpha_5 = (a_{11} + a_{12})\, b_{22}, \quad \alpha_6 = (a_{21} - a_{11})(b_{11} + b_{12}), \quad \alpha_7 = (a_{12} - a_{22})(b_{21} + b_{22});$$
$$c_{11} = \alpha_1 + \alpha_4 - \alpha_5 + \alpha_7, \quad c_{12} = \alpha_3 + \alpha_5, \quad c_{21} = \alpha_2 + \alpha_4, \quad c_{22} = \alpha_1 - \alpha_2 + \alpha_3 + \alpha_6$$
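A quick numerical sanity check of Strassen's seven products (an illustrative NumPy sketch, not part of the original slides):

```python
# Verify that the seven products alpha_1..alpha_7 reproduce C = A B for a random 2x2 pair.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
b11, b12, b21, b22 = B[0, 0], B[0, 1], B[1, 0], B[1, 1]

p1 = (a11 + a22) * (b11 + b22)
p2 = (a21 + a22) * b11
p3 = a11 * (b12 - b22)
p4 = a22 * (b21 - b11)
p5 = (a11 + a12) * b22
p6 = (a21 - a11) * (b11 + b12)
p7 = (a12 - a22) * (b21 + b22)

C = np.array([[p1 + p4 - p5 + p7, p3 + p5],
              [p2 + p4,           p1 - p2 + p3 + p6]])
assert np.allclose(C, A @ B)   # 7 multiplications instead of 8
```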

8 HOW A TENSOR ARISES AND HELPS Write the 2×2 product entrywise,
$$\begin{bmatrix} c_1 & c_2 \\ c_3 & c_4 \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} b_1 & b_2 \\ b_3 & b_4 \end{bmatrix}, \qquad c_k = \sum_{i=1}^{n^2} \sum_{j=1}^{n^2} h_{ijk}\, a_i b_j .$$
If the tensor $h$ admits a rank-$R$ decomposition
$$h_{ijk} = \sum_{\alpha=1}^{R} u_{i\alpha}\, v_{j\alpha}\, w_{k\alpha}, \qquad \text{then} \qquad c_k = \sum_{\alpha=1}^{R} w_{k\alpha} \left( \sum_{i=1}^{n^2} u_{i\alpha}\, a_i \right) \left( \sum_{j=1}^{n^2} v_{j\alpha}\, b_j \right).$$
Now only R mults of blocks! If n = 2 then R = 7 (Strassen, 1969). Recursion gives $O(n^{\log_2 7})$ scalar mults for any n.
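As an illustration (not from the slides), the following sketch builds the matrix-multiplication tensor $h_{ijk}$ for n = 2 explicitly, assuming row-major vectorization of the blocks, and checks the contraction formula above:

```python
# Build h_{ijk} for 2x2 matrix multiplication and verify c_k = sum_ij h_ijk a_i b_j.
import numpy as np

n = 2
h = np.zeros((n * n, n * n, n * n))
for i in range(n * n):
    for j in range(n * n):
        Ei = np.zeros(n * n); Ei[i] = 1.0   # i-th unit "matrix" (row-major)
        Ej = np.zeros(n * n); Ej[j] = 1.0
        # contribution of the product a_i * b_j to every entry c_k
        h[i, j, :] = (Ei.reshape(n, n) @ Ej.reshape(n, n)).ravel()

rng = np.random.default_rng(1)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
c = np.einsum('ijk,i,j->k', h, A.ravel(), B.ravel())
assert np.allclose(c, (A @ B).ravel())
# A rank-R decomposition h_ijk = sum_a u_ia v_ja w_ka turns this contraction into
# R block multiplications; Strassen's construction achieves R = 7 for n = 2.
```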

9 GENERAL CASE BY RECURSION Two matrices of order $n = 2^d$ can be multiplied with $7^d = n^{\log_2 7}$ scalar multiplications and $7\, n^{\log_2 7}$ scalar additions/subtractions. (Diagram: the $n \times n$ matrices are partitioned into four $n/2 \times n/2$ blocks, and Strassen's formulas are applied recursively to the blocks.)
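The recursion itself is easy to sketch (illustrative NumPy code assuming $n = 2^d$, with no cut-off to the classical algorithm for small blocks):

```python
# Recursive Strassen multiplication: 7^d scalar multiplications for matrices of order 2^d.
import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B
    m = n // 2
    a11, a12, a21, a22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    b11, b12, b21, b22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    p1 = strassen(a11 + a22, b11 + b22)
    p2 = strassen(a21 + a22, b11)
    p3 = strassen(a11, b12 - b22)
    p4 = strassen(a22, b21 - b11)
    p5 = strassen(a11 + a12, b22)
    p6 = strassen(a21 - a11, b11 + b12)
    p7 = strassen(a12 - a22, b21 + b22)
    C = np.empty_like(A)
    C[:m, :m] = p1 + p4 - p5 + p7
    C[:m, m:] = p3 + p5
    C[m:, :m] = p2 + p4
    C[m:, m:] = p1 - p2 + p3 + p6
    return C

rng = np.random.default_rng(2)
A, B = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
assert np.allclose(strassen(A, B), A @ B)   # 7^3 = 343 scalar multiplications instead of 8^3 = 512
```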

10 TENSORS IN THE 21ST CENTURY: NUMERICAL METHODS WITH TENSORIZATION OF DATA We consider typical problems of numerical analysis (matrix computations, interpolation, optimization) under the assumption that the input, output and all intermediate data are represented by tensors with many dimensions (tens, hundreds, even thousands). Of course, this assumes a very special structure of the data, but such structure is in fact present in a great many problems!

11 THE CURSE OF DIMENSIONALITY The main problem is that using arrays as a means to introduce tensors in many dimensions is infeasible: if d = 300 and n = 2, then such an array contains $2^{300} \approx 2 \cdot 10^{90}$ entries.

12 NEW REPRESENTATION FORMATS Canonical polyadic and Tucker decompositions are of limited use for our purposes (for different reasons). New decompositions: TT (Tensor Train) and HT (Hierarchical Tucker).

13 REDUCTION OF DIMENSIONALITY (Diagram: the long index string $i_1 i_2 i_3 i_4 i_5 i_6$ is split step by step into shorter groups such as $i_1 i_2$ and $i_3 i_4 i_5 i_6$, each treated as a tensor of fewer dimensions.)

14 SCHEME FOR TT (Diagram: the indices $i_1 \ldots i_6$ are separated one group at a time; each splitting introduces an auxiliary index $\alpha, \beta, \gamma, \ldots$ shared between the two factors, which yields the chain structure of the tensor train.)

15 SCHEME FOR HT (Diagram: the same indices $i_1 \ldots i_6$ are split along a binary tree; every node carries its own auxiliary indices, which yields the hierarchical Tucker structure.)

16 THE BLESSING OF DIMENSIONALITY TT and HT provide new representation formats for d-tensors + algorithms with complexity linear in d. Let the amount of data be N. In numerical analysis, complexity O(N) is usually considered a dream. With ultimate tensorization we go beyond the dream: since $d \sim \log N$, we may obtain complexity $O(\log N)$.

17 BASIC TT ALGORITHMS TT rounding. Like the rounding of machine numbers. COMPLEXITY = $O(dnr^3)$, ERROR $\le \sqrt{d-1}\,\cdot$ BEST ERROR. TT interpolation. A tensor train is constructed from sufficiently few elements of the tensor; the number of them is $O(dnr^2)$. TT quantization and wavelets. Low-dimensional $\to$ high-dimensional; algebraic wavelet transforms (WTT). In matrix problems the complexity may drop from O(N) down to O(log N).

18 SUMMATION AGREEMENT Omit the summation symbol. Summation is assumed over any index that appears at least twice in a product of indexed quantities. Equations hold for all values of the remaining indices.

19 SKELETON DECOMPOSITION
$$A = UV = \sum_{\alpha=1}^{r} \begin{bmatrix} u_{1\alpha} \\ \vdots \\ u_{m\alpha} \end{bmatrix} \begin{bmatrix} v_{1\alpha} & \ldots & v_{n\alpha} \end{bmatrix}$$
According to the summation agreement, $a(i, j) = u(i, \alpha)\, v(j, \alpha)$.
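For concreteness, here is a small NumPy sketch (not from the slides) that produces a skeleton decomposition from a truncated SVD and checks the entrywise form $a(i,j) = u(i,\alpha)\, v(j,\alpha)$:

```python
# Rank-r skeleton decomposition A = U V, with the implied summation over alpha written via einsum.
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 40, 30, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # a matrix of exact rank r

U_full, s, Vt = np.linalg.svd(A, full_matrices=False)
U = U_full[:, :r] * s[:r]      # u(i, alpha), size m x r
V = Vt[:r, :].T                # v(j, alpha), size n x r

A_skel = np.einsum('ia,ja->ij', U, V)   # summation over the repeated index alpha
assert np.allclose(A_skel, A)
```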

20 CANONICAL AND TUCKER CANONICAL DECOMPOSITION: $a(i_1 \ldots i_d) = u_1(i_1 \alpha) \ldots u_d(i_d \alpha)$. TUCKER DECOMPOSITION: $a(i_1 \ldots i_d) = g(\alpha_1 \ldots \alpha_d)\, u_1(i_1 \alpha_1) \ldots u_d(i_d \alpha_d)$ (summation over the repeated indices is implied).
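The implied summations are easy to spell out with einsum; the following sketch (illustrative, d = 3, with made-up sizes) assembles a tensor from canonical and from Tucker factors:

```python
# Canonical (CP) and Tucker formats written out explicitly for d = 3.
import numpy as np

rng = np.random.default_rng(4)
n1, n2, n3, R = 6, 5, 4, 3

# Canonical: a(i1,i2,i3) = sum_a u1(i1,a) u2(i2,a) u3(i3,a)
u1, u2, u3 = (rng.standard_normal((n1, R)),
              rng.standard_normal((n2, R)),
              rng.standard_normal((n3, R)))
a_cp = np.einsum('ia,ja,ka->ijk', u1, u2, u3)

# Tucker: a(i1,i2,i3) = sum_{a,b,c} g(a,b,c) u1(i1,a) u2(i2,b) u3(i3,c)
r1, r2, r3 = 3, 2, 2
g = rng.standard_normal((r1, r2, r3))
v1, v2, v3 = (rng.standard_normal((n1, r1)),
              rng.standard_normal((n2, r2)),
              rng.standard_normal((n3, r3)))
a_tucker = np.einsum('abc,ia,jb,kc->ijk', g, v1, v2, v3)

print(a_cp.shape, a_tucker.shape)   # both (6, 5, 4)
```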

21 TENSOR TRAIN (TT) IN THREE DIMENSIONS $a(i_1;\, i_2 i_3) = g_1(i_1;\, \alpha_1)\, a_1(\alpha_1;\, i_2 i_3)$, $a_1(\alpha_1 i_2;\, i_3) = g_2(\alpha_1 i_2;\, \alpha_2)\, g_3(\alpha_2;\, i_3)$. TENSOR TRAIN (TT): $a(i_1 i_2 i_3) = g_1(i_1 \alpha_1)\, g_2(\alpha_1 i_2 \alpha_2)\, g_3(\alpha_2 i_3)$.

22 TENSOR TRAIN (TT) IN d DIMENSIONS $a(i_1 \ldots i_d) = g_1(i_1 \alpha_1)\, g_2(\alpha_1 i_2 \alpha_2) \ldots g_{d-1}(\alpha_{d-2} i_{d-1} \alpha_{d-1})\, g_d(\alpha_{d-1} i_d)$, i.e.
$$a(i_1 \ldots i_d) = \prod_{k=1}^{d} g_k(\alpha_{k-1} i_k \alpha_k)$$
with summation over the repeated auxiliary indices $\alpha_1, \ldots, \alpha_{d-1}$.
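A minimal sketch of how a TT is used in practice (assuming carriages $g_k$ stored as arrays of shape $r_{k-1} \times n_k \times r_k$ with $r_0 = r_d = 1$): any entry is a chain of small matrix-vector products.

```python
# Evaluate one entry a(i1,...,id) from TT carriages and compare with the full tensor.
import numpy as np

def tt_entry(cores, idx):
    v = np.ones((1,))
    for g, i in zip(cores, idx):
        v = v @ g[:, i, :]          # multiply by the slice g_k(:, i_k, :)
    return v[0]

rng = np.random.default_rng(5)
d, n, r = 6, 4, 3
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[k], n, ranks[k + 1])) for k in range(d)]

full = cores[0]                      # assemble the full tensor (feasible only for small d, n)
for g in cores[1:]:
    full = np.einsum('...a,aib->...ib', full, g)
full = full.reshape((n,) * d)

idx = (1, 3, 0, 2, 2, 1)
assert np.isclose(tt_entry(cores, idx), full[idx])
print('TT parameters:', sum(g.size for g in cores), 'vs full tensor:', full.size)
```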

23 KRONECKER REPRESENTATION OF TENSOR TRAINS
$$A = \sum_{\alpha_1, \ldots, \alpha_{d-1}} G_1^{\alpha_1} \otimes G_2^{\alpha_1 \alpha_2} \otimes \ldots \otimes G_{d-1}^{\alpha_{d-2} \alpha_{d-1}} \otimes G_d^{\alpha_{d-1}},$$
where A is of size $(m_1 \ldots m_d) \times (n_1 \ldots n_d)$ and $G_k^{\alpha_{k-1} \alpha_k}$ is of size $m_k \times n_k$.

24 ADVANTAGES OF TENSOR-TRAIN REPRESENTATION The tensor is determined through d tensor carriages $g_k(\alpha_{k-1} i_k \alpha_k)$, each of size $r_{k-1} \times n_k \times r_k$. If the maximal size is $r \times n \times r$, then the number of representation parameters does not exceed $d n r^2 \ll n^d$.

25 TENSOR TRAIN PROVIDES STRUCTURED SKELETON DECOMPOSITIONS OF UNFOLDING MATRICES
$$A_k = [a(i_1 \ldots i_k;\, i_{k+1} \ldots i_d)] = u_k(i_1 \ldots i_k;\, \alpha_k)\, v_k(\alpha_k;\, i_{k+1} \ldots i_d) = U_k V_k,$$
where $u_k(i_1 \ldots i_k\, \alpha_k) = g_1(i_1 \alpha_1) \ldots g_k(\alpha_{k-1} i_k \alpha_k)$ and $v_k(\alpha_k\, i_{k+1} \ldots i_d) = g_{k+1}(\alpha_k i_{k+1} \alpha_{k+1}) \ldots g_d(\alpha_{d-1} i_d)$.

26 TT RANKS ARE BOUNDED BY THE RANKS OF UNFOLDING MATRICES A TT decomposition always exists with $r_k \le \operatorname{rank} A_k$, where $A_k = [a(i_1 \ldots i_k;\, i_{k+1} \ldots i_d)]$. Equalities are always possible.

27 ORTHOGONAL TENSOR CARRIAGES A tensor carriage g(αiβ) is called row orthogonal if its first unfolding matrix g(α ; iβ) has orthonormal rows. A tensor carriage g(αiβ) is called column orthogonal if its second unfolding matrix g(αi ; β) has orthonormal columns.

28 ORTHOGONALIZATION OF TENSOR CARRIAGES For a tensor carriage $g(\alpha i \beta)$ there is a decomposition $g(\alpha i \beta) = h(\alpha \alpha')\, q(\alpha' i \beta)$ with $q(\alpha' i \beta)$ row orthogonal, and a decomposition $g(\alpha i \beta) = q(\alpha i \beta')\, h(\beta' \beta)$ with $q(\alpha i \beta')$ column orthogonal.

29 PRODUCTS OF ORTHOGONAL TENSOR CARRIAGES A product of row (column) orthogonal tensor carriages, $p(\alpha_s,\, i_{s+1} \ldots i_t,\, \alpha_t) = \prod_{k=s+1}^{t} g_k(\alpha_{k-1} i_k \alpha_k)$, is also row (column) orthogonal.

30 MAKING ALL CARRIAGES ORTHOGONAL Orthogonalize the columns of $g_1 = q_1 h_1$, then compute and orthogonalize $h_1 g_2 = q_2 h_2$. Thus $g_1 g_2 = q_1 q_2 h_2$, and after k steps $g_1 \ldots g_k = q_1 \ldots q_k h_k$. Similarly, for the row orthogonalization, $g_{k+1} \ldots g_d = h_{k+1} z_{k+1} \ldots z_d$.
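A compact sketch of this column (left-to-right) orthogonalization with QR factorizations of the carriage unfoldings (illustrative code; the function name tt_left_orthogonalize is not from the slides):

```python
# After the sweep every carriage except the last is column orthogonal:
# its unfolding g(alpha i; beta) has orthonormal columns.
import numpy as np

def tt_left_orthogonalize(cores):
    cores = [g.copy() for g in cores]
    for k in range(len(cores) - 1):
        r0, n, r1 = cores[k].shape
        q, h = np.linalg.qr(cores[k].reshape(r0 * n, r1))   # thin QR of the unfolding
        rnew = q.shape[1]
        cores[k] = q.reshape(r0, n, rnew)
        cores[k + 1] = np.einsum('ab,bic->aic', h, cores[k + 1])   # push h into the next carriage
    return cores

rng = np.random.default_rng(6)
d, n, r = 5, 3, 4
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[k], n, ranks[k + 1])) for k in range(d)]

q_cores = tt_left_orthogonalize(cores)
for g in q_cores[:-1]:
    G = g.reshape(-1, g.shape[2])
    assert np.allclose(G.T @ G, np.eye(G.shape[1]))   # orthonormal columns
```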

31 STRUCTURED ORTHOGONALIZATION Given a TT decomposition $a(i_1 \ldots i_d) = \prod_{s=1}^{d} g_s(\alpha_{s-1} i_s \alpha_s)$, one can construct column orthogonal carriages $q_s$ and row orthogonal carriages $z_s$ such that
$$a(i_1 \ldots i_k;\, i_{k+1} \ldots i_d) = \left( \prod_{s=1}^{k} q_s(\alpha_{s-1} i_s \alpha_s) \right) H_k(\alpha_k, \alpha_k') \left( \prod_{s=k+1}^{d} z_s(\alpha_{s-1}' i_s \alpha_s') \right).$$
The $q_k$ and $z_k$ can be constructed in $O(dnr^3)$ operations.

32 CONSEQUENCE: STRUCTURED SVD FOR ALL UNFOLDING MATRICES IN $O(dnr^3)$ OPERATIONS It suffices to compute the SVD of the matrices $H_k(\alpha_k, \alpha_k')$.

33 TENSOR APPROXIMATION VIA MATRIX APPROXIMATION We can approximate any fixed unfolding matrix using its structured SVD: $a(i_1 \ldots i_k;\, i_{k+1} \ldots i_d) = a_k + e_k$, where $a_k = U_k(i_1 \ldots i_k;\, \alpha_k)\, \sigma_k(\alpha_k)\, V_k(\alpha_k;\, i_{k+1} \ldots i_d)$ and $e_k = e_k(i_1 \ldots i_k;\, i_{k+1} \ldots i_d)$.

34 ERROR ORTHOGONALITY $U_k(i_1 \ldots i_k\, \alpha_k)\, e_k(i_1 \ldots i_k;\, i_{k+1} \ldots i_d) = 0$, $e_k(i_1 \ldots i_k;\, i_{k+1} \ldots i_d)\, V_k(\alpha_k\, i_{k+1} \ldots i_d) = 0$ (summation over the repeated indices).

35 COROLLARY OF ERROR ORTHOGONALITY Let $a_k$ be further approximated by a TT in such a way that $u_k$ or $v_k$ is kept. Then the further error, say $e_l$, is orthogonal to $e_k$. Hence $\|e_k + e_l\|_F^2 = \|e_k\|_F^2 + \|e_l\|_F^2$.

36 TENSOR-TRAIN ROUNDING Approximate successively $A_1, A_2, \ldots, A_{d-1}$ with the error bound $\varepsilon$. Then FINAL ERROR $\le \sqrt{d - 1}\; \varepsilon$.
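The same successive-truncation idea can be sketched for a full tensor (the TT-SVD construction; rounding of an already-compressed train works analogously after orthogonalization, in $O(dnr^3)$ operations). This is an illustrative sketch with hypothetical helper names tt_svd and tt_full, not code from the talk:

```python
# Successive truncated SVDs of the unfoldings A_1, ..., A_{d-1}; each step is
# truncated with Frobenius error <= eps_step, so the final error <= sqrt(d-1) * eps_step.
import numpy as np

def tt_svd(tensor, eps_step):
    dims, d = tensor.shape, tensor.ndim
    cores, r_prev = [], 1
    c = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        tail = np.cumsum(s[::-1] ** 2)[::-1]     # tail[i] = squared error if only i terms are kept
        r = len(s)
        for i in range(1, len(s)):
            if tail[i] <= eps_step ** 2:
                r = i
                break
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        c = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(c.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    full = cores[0]
    for g in cores[1:]:
        full = np.einsum('...a,aib->...ib', full, g)
    return full.reshape([g.shape[1] for g in cores])

rng = np.random.default_rng(7)
t = rng.standard_normal((4, 4, 4, 4, 4))
eps = 0.3 * np.linalg.norm(t)
cores = tt_svd(t, eps / np.sqrt(t.ndim - 1))
print(np.linalg.norm(tt_full(cores) - t) <= eps)   # True: overall error within eps
```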

37 TENSOR INTERPOLATION Interpolate an implicitly given tensor by a TT using only a small part of its elements, of order $dnr^2$. The cross interpolation method for tensors is constructed as a generalization of the cross method for matrices (1995) and relies on the maximal-volume principle from matrix theory.

38 MAXIMAL VOLUME PRINCIPLE THEOREM (Goreinov, Tyrtyshnikov) Let
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},$$
where $A_{11}$ is an $r \times r$ block with maximal determinant in modulus (volume) among all $r \times r$ blocks in A. Then the rank-r matrix
$$A_r = \begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix} A_{11}^{-1} \begin{bmatrix} A_{11} & A_{12} \end{bmatrix}$$
approximates A with a Chebyshev-norm error at most $(r + 1)^2$ times larger than the error of the best approximation of rank r.

39 BEST IS AN ENEMY OF GOOD Move a good submatrix M of A into the upper $r \times r$ block. Use right-side multiplications by nonsingular matrices so that
$$A = \begin{bmatrix} I \\ a_{r+1,1} \;\; \ldots \;\; a_{r+1,r} \\ \vdots \\ a_{n1} \;\; \ldots \;\; a_{nr} \end{bmatrix}$$
NECESSARY FOR MAXIMAL VOLUME: $|a_{ij}| \le 1$ for $r + 1 \le i \le n$, $1 \le j \le r$.

40 BEST IS AN ENEMY OF GOOD COROLLARY OF MAXIMAL VOLUME: $\sigma_{\min}(M) \ge 1/\sqrt{r(n - r) + 1}$. ALGORITHM: If $|a_{ij}| \ge 1 + \delta$, swap rows i and j. Make an identity matrix in the first r rows by a right-side multiplication. Quit if $|a_{ij}| < 1 + \delta$ for all i, j; otherwise repeat.
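A short sketch of this swapping iteration (maxvol) for an $n \times r$ matrix, using the equivalent form $B = A\,M^{-1}$ whose chosen rows form the identity block (illustrative code, not from the slides):

```python
# Find r rows of A whose submatrix has (locally) maximal volume.
import numpy as np

def maxvol(A, delta=1e-2, max_iter=100):
    n, r = A.shape
    rows = np.arange(r)                      # start from the first r rows
    for _ in range(max_iter):
        B = A @ np.linalg.inv(A[rows])       # B[rows] is the identity block
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) < 1 + delta:
            break                            # all entries small enough: the submatrix is dominant
        rows[j] = i                          # swap: row i replaces the j-th chosen row
    return rows

rng = np.random.default_rng(8)
A = rng.standard_normal((200, 5))
rows = maxvol(A)
print(np.abs(A @ np.linalg.inv(A[rows])).max())   # close to 1
```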

41 MATRIX CROSS ALGORITHM Given initial column indices $j_1, \ldots, j_r$, find good row indices $i_1, \ldots, i_r$ in these columns. Then find good column indices in the rows $i_1, \ldots, i_r$. Proceed choosing good columns and rows until the skeleton cross approximations stabilize. E. E. Tyrtyshnikov, Incomplete cross approximation in the mosaic-skeleton method, Computing 64, no. 4 (2000).

42 CROSS TENSOR-TRAIN INTERPOLATION Let $a_1 = a(i_1, i_2, i_3, i_4)$. Seek crosses in the unfolding matrices. On input: r initial columns in each; select good rows.
Unfolding matrices and column index sets:
$A_1 = [a(i_1;\, i_2, i_3, i_4)]$, $J_1 = \{ i_2^{(\beta_1)} i_3^{(\beta_1)} i_4^{(\beta_1)} \}$;
$A_2 = [a(i_1, i_2;\, i_3, i_4)]$, $J_2 = \{ i_3^{(\beta_2)} i_4^{(\beta_2)} \}$;
$A_3 = [a(i_1, i_2, i_3;\, i_4)]$, $J_3 = \{ i_4^{(\beta_3)} \}$.
Row index sets, matrices, and skeleton decompositions:
$I_1 = \{ i_1^{(\alpha_1)} \}$: $a_1(i_1;\, i_2, i_3, i_4)$, $a_1 = \sum_{\alpha_1} g_1(i_1;\, \alpha_1)\, a_2(\alpha_1;\, i_2, i_3, i_4)$;
$I_2 = \{ i_1^{(\alpha_2)} i_2^{(\alpha_2)} \}$: $a_2(\alpha_1, i_2;\, i_3, i_4)$, $a_2 = \sum_{\alpha_2} g_2(\alpha_1, i_2;\, \alpha_2)\, a_3(\alpha_2, i_3;\, i_4)$;
$I_3 = \{ i_1^{(\alpha_3)} i_2^{(\alpha_3)} i_3^{(\alpha_3)} \}$: $a_3(\alpha_2, i_3;\, i_4)$, $a_3 = \sum_{\alpha_3} g_3(\alpha_2, i_3;\, \alpha_3)\, g_4(\alpha_3;\, i_4)$.
Finally $a = \sum_{\alpha_1, \alpha_2, \alpha_3} g_1(i_1, \alpha_1)\, g_2(\alpha_1, i_2, \alpha_2)\, g_3(\alpha_2, i_3, \alpha_3)\, g_4(\alpha_3, i_4)$.

43 QUANTIZATION OF DIMENSIONS Increase the number of dimensions. The extreme case is the conversion of a vector of size $N = 2^d$ into a d-tensor of size $2 \times 2 \times \ldots \times 2$. Using the TT format with bounded TT-ranks may reduce the complexity from O(N) to as little as $O(\log_2 N)$.

44 EXAMPLES OF QUANTIZATION f(x) is a function on [0, 1]; $a(i_1, \ldots, i_d) = f(ih)$, $i = i_1 + 2 i_2 + \ldots + 2^{d-1} i_d$, $h = 2^{-d}$. The array of values of f is viewed as a tensor of size $2 \times \ldots \times 2$. EXAMPLE 1. $f(x) = e^{x} + e^{2x} + e^{3x}$: ttrank = 2.7, ERROR = 1.5e-14. EXAMPLE 2. $f(x) = 1 + x + x^2 + x^3$: ttrank = 3.4, ERROR = 2.4e-14. EXAMPLE 3. $f(x) = 1/(x + 0.1)$: ttrank = 10.1, ERROR = 5.4e-14.
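This is easy to verify directly for small d (illustrative sketch, not from the slides): reshape the vector of samples into a $2 \times 2 \times \ldots \times 2$ tensor and look at the numerical ranks of its unfolding matrices, which bound the TT-ranks.

```python
# Unfolding ranks of quantized (tensorized) function samples.
import numpy as np

def unfolding_ranks(values, d, tol=1e-10):
    t = values.reshape((2,) * d)
    return [np.linalg.matrix_rank(t.reshape(2 ** (k + 1), -1), tol=tol)
            for k in range(d - 1)]

d = 12
x = np.arange(2 ** d) / 2 ** d              # uniform grid on [0, 1)

f1 = np.exp(x) + np.exp(2 * x) + np.exp(3 * x)   # Example 1: sum of three exponentials
f2 = 1 + x + x ** 2 + x ** 3                     # Example 2: cubic polynomial

print(unfolding_ranks(f1, d))   # ranks <= 3: each exponential is exactly rank 1
print(unfolding_ranks(f2, d))   # ranks <= 4 = degree + 1
```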

45 THEOREMS If there is an ε-approximation with separated variables $f(x + y) \approx \sum_{k=1}^{r} u_k(x)\, v_k(y)$, $r = r(\varepsilon)$, then a TT exists with error ε and TT-ranks r. If f(x) is a sum of r exponentials, then an exact TT exists with ranks r. For a polynomial of degree m an exact TT exists with ranks r = m + 1. If $f(x) = 1/(x + \delta)$ then $r \sim \log \varepsilon^{-1} + \log \delta^{-1}$.

46 ALGEBRAIC WAVELET FILTERS $a(i_1 \ldots i_d) = u_1(i_1 \alpha_1)\, a_1(\alpha_1 i_2 \ldots i_d) + e_1$ with $u_1(i_1 \alpha_1)\, u_1(i_1 \alpha_1') = \delta(\alpha_1, \alpha_1')$. The filters are applied successively: $a \to a_1 = u_1 a$, $a_1 \to a_2 = u_2 a_1$, $a_2 \to a_3 = u_3 a_2$, ...

47 TT QUADRATURE
$$I(d) = \int_{[0,1]^d} \sin(x_1 + x_2 + \ldots + x_d)\, dx_1\, dx_2 \ldots dx_d = \mathrm{Im} \int_{[0,1]^d} e^{\,\mathrm i (x_1 + x_2 + \ldots + x_d)}\, dx_1\, dx_2 \ldots dx_d = \mathrm{Im} \left( \frac{e^{\mathrm i} - 1}{\mathrm i} \right)^{d}$$
With n nodes in each dimension, $n^d$ values would be needed! The TT interpolation method uses only a small part of them (here n = 11). (Table: I(d), relative error, and timing for several values of d.)
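The closed form on the right is cheap to evaluate; for small d it can be checked against a brute-force tensor-product rule (illustrative sketch, and exactly the kind of $n^d$ computation that TT interpolation avoids):

```python
# Exact value Im((e^i - 1)/i)^d versus a midpoint rule on the full n^d grid (small d only).
import numpy as np
from itertools import product

def I_exact(d):
    return (((np.exp(1j) - 1) / 1j) ** d).imag

def I_midpoint(d, n=20):
    nodes = (np.arange(n) + 0.5) / n
    total = sum(np.sin(sum(xs)) for xs in product(nodes, repeat=d))   # n^d points
    return total / n ** d

d = 4
print(I_exact(d), I_midpoint(d))   # agree to about three digits; the full grid is hopeless for large d
```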

48 QTT QUADRATURE $\int_0^{\infty} \frac{\sin x}{x}\, dx = \frac{\pi}{2}$. Truncate the domain and use the rectangle rule. Machine accuracy requires about $2^{77}$ values. The vector of values is treated as a tensor of size $2 \times 2 \times \ldots \times 2$ (77 dimensions); the TT-ranks are about 12 at machine precision. Less than 1 second on a notebook.

49 TT IN QUANTUM CHEMISTRY Really many dimensions are natural in quantum molecular dynamics: $H\Psi = \bigl(-\Delta + V(R_1, \ldots, R_f)\bigr)\Psi = E\Psi$, where V is a Potential Energy Surface (PES). Calculation of V requires solving the Schrödinger equation for a variety of coordinates of the atoms $R_1, \ldots, R_f$. The TT interpolation method uses only a small part of the values of V, from which it produces a suitable TT approximation of the PES.

50 TT IN QUANTUM CHEMISTRY Henon-Heiles PES:
$$V(q_1, \ldots, q_f) = \frac{1}{2} \sum_{k=1}^{f} q_k^2 + \lambda \sum_{k=1}^{f-1} \left( q_k^2\, q_{k+1} - \frac{1}{3}\, q_{k+1}^3 \right)$$
(Table: TT-ranks and timings, Oseledets-Khoromskij.)

51 SPECTRUM AS A WHOLE Use the evolution in time: $\Psi_t = \mathrm i H \Psi$, $\Psi(0) = \Psi_0$. The physical scheme reads $\Psi(t) = e^{\mathrm i H t}\, \Psi_0$; then we find the autocorrelation function $a(t) = (\Psi(t), \Psi_0)$ and its Fourier transform.

52 SPECTRUM AS A WHOLE (Figure: Henon-Heiles spectra for f = 2 and different TT-ranks.)

53 SPECTRUM AS A WHOLE (Figure: Henon-Heiles spectra for f = 4 and f = 10.)

54 TT FOR EQUATIONS WITH PARAMETERS Diffusion equation on $[0,1]^2$. The diffusion coefficients are constant in each of $p \times p$ square subdomains, i.e. $p^2$ parameters varying from 0.1 upwards, with a tensor grid in the parameters and a fine space grid. The solution for all values of the parameters is approximated by a TT with relative accuracy $10^{-5}$. (Table: storage versus number of parameters; 4 parameters require about 8 Mb.)

55 WTT FOR DATA COMPRESSION $f(x) = \sin(100x)$. A signal on a uniform grid with stepsize $1/2^d$ on $0 \le x \le 1$ converts into a tensor of size $2 \times \ldots \times 2$ with all TT-ranks = 2. The Daubechies transform gives many more nonzeros. (Table: storage for a given $\varepsilon$ with the WTT, D4 and D8 filters for $\sin(100x)$, $n = 2^d$, $d = 20$.)

56 WTT FOR COMPRESSION OF MATRICES WTT for vectorized matrices applies after reshaping: $a(i_1 \ldots i_d;\, j_1 \ldots j_d) \to \tilde a(i_1 j_1;\, \ldots;\, i_d j_d)$. WTT compression with accuracy $\varepsilon = 10^{-8}$ for the Cauchy-Hilbert matrix $a_{ij} = 1/(i - j)$ for $i \ne j$, $a_{ii} = 0$. (Table: storage for the WTT, D4, D8 and D20 filters versus $n = 2^d$.)

57 TT IN DISCRETE OPTIMIZATION Among all elements of a tensor given in TT format, find the minimum or maximum. The discrete optimization problem is solved as an eigenvalue problem for diagonal matrices. Block minimization of the Rayleigh quotient in TT format, blocks of size 5, TT-ranks 5 (O.S.Lebedeva). (Table: test functions such as $\prod_i (1 + 0.1\, x_i + \sin x_i)$ on $[1, 50]$ and $\prod_i (x_i + \sin x_i)$ on $[1, 20]$, with problem size, iteration counts, the Rayleigh quotients $(Ax, x)$ and $(Ae_i, e_i)$, the distance $\|e_i - x\|$, and the exact maxima.)

58 CONCLUSIONS AND PERSPECTIVES TT algorithms are efficient new instruments for compression of vectors and matrices. Storage and complexity depend on the matrix size logarithmically. Free access to a current version of the TT library is provided. There are some theorems with TT-rank estimates; sharper and more general estimates are to be derived. The difficulty lies in the nonlinearity of TT decompositions.

59 CONCLUSIONS AND PERSPECTIVES TT interpolation methods provide new efficient tools for the tabulation of functions of many variables, including those that are hard to evaluate. There are examples of applying TT methods to fast and accurate computation of multidimensional integrals. TT methods are successfully applied to image and signal processing and may compete with other known methods.

60 CONCLUSIONS AND PERSPECTIVES TT methods are a good basis for the numerical solution of multidimensional problems of quantum chemistry, quantum molecular dynamics, optimization in parameters, model reduction, and multiparametric and stochastic differential equations.
