1. Connecting to Matrix Computations. Charles F. Van Loan


1 Four Talks on Tensor Computations
1. Connecting to Matrix Computations
Charles F. Van Loan
Cornell University
SCAN Seminar, October 27, 2014
Four Talks on Tensor Computations 1. Connecting to Matrix Computations 1 / 56

2 It's About This

3 Overview
Preparation for the Next Big Thing...
Scalar-Level Thinking: 1960s.
Matrix-Level Thinking: 1980s. The factorization paradigm: LU, LDL^T, QR, UΣV^T, etc.
Block Matrix-Level Thinking: 2000s. Cache utilization, parallel computing, LAPACK, etc.
Tensor-Level Thinking: New applications, factorizations, data structures, nonlinear analysis, optimization strategies, etc.

4 The Matrix Factorizations Paradigm
A backdrop of the standard factorizations, repeated over and over:
    A = UΣV^T, PA = LU, A = QR, A = GG^T, PAP^T = LDL^T, Q^T AQ = D, X^{-1}AX = J, U^T AU = T, AP = QR, A = ULV^T, PAQ^T = LU, ...
Overlaid message: It's a Language.

5 Some Ways They Are Used
Conversion of a hard matrix problem into an easier one:
    Ax = b    →    Ly = Pb, Ux = y
Low-rank approximation:
    (10^7-by-10^5)  ≈  (10^7-by-10)(10-by-10^5)
Reasoning about data re-use via block factorizations:
    A_ij ← A_ij − A_ik A_kj    not    a_ij ← a_ij − a_ik a_kj
Discovery and insight:
    min_{Q,x,y} ‖ A_1 − (A_2 + xy^T)Q ‖_F

6 Tensor Factorizations and Decompositions
The same story is playing out with tensors:
    A = σ_1 w_1 ∘ v_1 ∘ u_1 + σ_2 w_2 ∘ v_2 ∘ u_2 + ···

7 The Big Data Theme
A Changing Definition of Big
In matrix computations, to say that A ∈ IR^{n_1×n_2} is big is to say that both n_1 and n_2 are big.
In tensor computations, to say that A ∈ IR^{n_1×···×n_d} is big is to say that n_1 n_2 ··· n_d is big, and this need not require big n_k. E.g., n_1 = n_2 = ··· = n_1000 = 2.
Algorithms that scale with d will induce a transition...
    Matrix-Based Scientific Computation  →  Tensor-Based Scientific Computation

8 What is a Tensor?
Definition: An order-d tensor A ∈ IR^{n_1×···×n_d} is a real d-dimensional array A(1:n_1, ..., 1:n_d) where the index range in the k-th mode is from 1 to n_k.
A tensor A ∈ IR^{n_1×···×n_d} is cubical if n_1 = ··· = n_d.
Low-Order Tensors: A scalar is an order-0 tensor. A vector is an order-1 tensor. A matrix is an order-2 tensor.
We use calligraphic font to designate tensors that have order 3 or greater, e.g., A, B, C, etc.
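As an aside not on the original slides, the definition maps directly onto a NumPy d-dimensional array; a minimal sketch:

```python
import numpy as np

# An order-3 tensor A in IR^{4 x 3 x 2}: the index range in mode k is
# 1..n_k on the slides, 0..n_k-1 in Python's 0-based indexing.
A = np.zeros((4, 3, 2))

d = A.ndim      # order of the tensor
n = A.shape     # modal dimensions (n_1, ..., n_d)

# A cubical tensor has n_1 = ... = n_d:
C = np.zeros((2, 2, 2))
is_cubical = len(set(C.shape)) == 1
```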

9 Where Might They Come From?
Discretization: A(i, j, k, l) might house the value of f(w, x, y, z) at (w, x, y, z) = (w_i, x_j, y_k, z_l).
Multiway Analysis: A(i, j, k, l) is a value that captures an interaction between four variables/factors.

10 You Have Seen Them Before: Images
A color picture is an m-by-n-by-3 tensor:
    A(:, :, 1) = red pixel values
    A(:, :, 2) = green pixel values
    A(:, :, 3) = blue pixel values
Obvious extension of the colon notation. More later.

11 You Have Seen Them Before: Block Matrices
        [ a_11 a_12 a_13 a_14 a_15 a_16 ]
        [ a_21 a_22 a_23 a_24 a_25 a_26 ]
    A = [ a_31 a_32 a_33 a_34 a_35 a_36 ]
        [ a_41 a_42 a_43 a_44 a_45 a_46 ]
        [ a_51 a_52 a_53 a_54 a_55 a_56 ]
        [ a_61 a_62 a_63 a_64 a_65 a_66 ]
Regarded as a 3-by-3 grid of 2-by-2 blocks, matrix entry a_45 is the (2,1) entry of the (2,3) block:
    a_45 ≡ A(2, 3, 2, 1)
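The block-entry identification can be checked with a quick NumPy reshape. This sketch (mine, not from the slides) assumes the 6-by-6 matrix is partitioned into a 3-by-3 grid of 2-by-2 blocks and codes each entry so that a_45 has the value 45:

```python
import numpy as np

# 6-by-6 matrix whose (i,j) entry (1-based) is coded as 10*i + j,
# so the entry a_45 has value 45.
A = np.fromfunction(lambda i, j: 10*(i + 1) + (j + 1), (6, 6))

# View A as a 3x3 grid of 2x2 blocks; axis order is
# (block row, row-in-block, block col, col-in-block).
T = A.reshape(3, 2, 3, 2)

# a_45 is the (2,1) entry of the (2,3) block (all indices 1-based):
a45 = T[2-1, 2-1, 3-1, 1-1]
```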

12 You Have Seen Them Before: Kronecker Products
The m_1 m_2 m_3-by-n_1 n_2 n_3 matrix
    A = A_1(1:m_1, 1:n_1) ⊗ A_2(1:m_2, 1:n_2) ⊗ A_3(1:m_3, 1:n_3)
is a reshaping of the m_1-by-m_2-by-m_3-by-n_1-by-n_2-by-n_3 tensor
    A(i_1, i_2, i_3, j_1, j_2, j_3) = A_1(i_1, j_1) A_2(i_2, j_2) A_3(i_3, j_3)

13 The Kronecker Product
Definition: B ⊗ C is a block matrix whose ij-th block is b_ij C.
An example:
    [ b_11 b_12 ]         [ b_11 C   b_12 C ]
    [ b_21 b_22 ] ⊗ C  =  [ b_21 C   b_22 C ]
Kronecker products have a replicated block structure.
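The replicated block structure is easy to see with NumPy's `kron` (a sketch, not part of the slides):

```python
import numpy as np

B = np.array([[1, 2],
              [3, 4]])
C = np.array([[0, 5],
              [6, 7]])

# B kron C is the block matrix whose (i,j) block is b_ij * C.
A = np.kron(B, C)

# The (2,1) block (1-based) should equal b_21 * C = 3*C.
block_21 = A[2:4, 0:2]
```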

14 The Kronecker Product
There are three ways to regard A = B ⊗ C. If B = (b_ij) ∈ IR^{m×n} and C = (c_ij) ∈ IR^{p×q}, then:
(i) A is an m-by-n block matrix with p-by-q blocks.
(ii) A is an mp-by-nq matrix of scalars.
(iii) A is an unfolded 4th-order tensor A ∈ IR^{p×q×m×n}:
    A(i_1, i_2, i_3, i_4) = B(i_3, i_4) C(i_1, i_2)

15 Kronecker Product: All Possible Entry-Entry Products
4 · 9 = 36 products:
    [ b_11 b_12 ]   [ c_11 c_12 c_13 ]     [ b_11c_11 b_11c_12 b_11c_13 | b_12c_11 b_12c_12 b_12c_13 ]
    [ b_21 b_22 ] ⊗ [ c_21 c_22 c_23 ]  =  [ b_11c_21 b_11c_22 b_11c_23 | b_12c_21 b_12c_22 b_12c_23 ]
                    [ c_31 c_32 c_33 ]     [ b_11c_31 b_11c_32 b_11c_33 | b_12c_31 b_12c_32 b_12c_33 ]
                                           [ b_21c_11 b_21c_12 b_21c_13 | b_22c_11 b_22c_12 b_22c_13 ]
                                           [ b_21c_21 b_21c_22 b_21c_23 | b_22c_21 b_22c_22 b_22c_23 ]
                                           [ b_21c_31 b_21c_32 b_21c_33 | b_22c_31 b_22c_32 b_22c_33 ]

16 Kronecker Products of Kronecker Products Are Hierarchical
A = B ⊗ C ⊗ D, with B 2-by-2, C 4-by-4, and D 3-by-3, is a 2-by-2 block matrix whose entries are 4-by-4 block matrices whose entries are 3-by-3 matrices.

17 Bridging the Gap to Matrix Computations
Tensor computations are typically disguised matrix computations, and that is because of...
The Kronecker Product: A = A_1 ⊗ A_2 ⊗ A_3 is an order-6 tensor.
Tensor Unfoldings: Rubik's Cube → 3-by-9 matrix.
Alternating Least Squares: multilinear optimization via component-wise optimization.

18 The Decomposition Paradigm in Matrix Computations
Typical: Convert the given problem into an equivalent easy-to-solve problem by using the right matrix decomposition.
    PA = LU, Ly = Pb, Ux = y    ⇒    Ax = b
Also typical: Uncover hidden relationships by computing the right decomposition of the data matrix.
    A = UΣV^T    ⇒    A = Σ_{i=1}^{r} σ_i u_i v_i^T
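The second use is a few lines of NumPy (an illustrative sketch of mine, not from the slides): the SVD rebuilds A as a sum of rank-1 matrices, and truncating that sum gives a best low-rank approximation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A as a sum of rank-1 matrices sigma_i * u_i * v_i^T.
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))

# Truncating the sum after one term gives the best rank-1 approximation;
# its error is sqrt(sigma_2^2 + sigma_3^2) in the Frobenius norm.
A_1 = s[0] * np.outer(U[:, 0], Vt[0, :])
```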

19 Two Natural Questions
Question 1: Can we solve tensor problems by converting them to equivalent, easy-to-solve problems using a tensor decomposition?
Question 2: Can we uncover hidden patterns in tensor data by computing an appropriate tensor decomposition?
Let's explore the decomposition issue for 2-by-2-by-2 tensors...

20 The Singular Value Decomposition
The 2-by-2 case:
    [a_11 a_12; a_21 a_22] = [u_11 u_12; u_21 u_22] [σ_1 0; 0 σ_2] [v_11 v_12; v_21 v_22]^T
                           = σ_1 [u_11; u_21][v_11; v_21]^T + σ_2 [u_12; u_22][v_12; v_22]^T
A reshaped presentation of the same thing:
    [a_11; a_21; a_12; a_22] = σ_1 [v_11u_11; v_11u_21; v_21u_11; v_21u_21] + σ_2 [v_12u_12; v_12u_22; v_22u_12; v_22u_22]
The reshaped rank-1 matrices have special structure...

21 The Kronecker Product (KP)
The KP of a pair of 2-vectors:
    [x_1; x_2] ⊗ [y_1; y_2] = [x_1y_1; x_1y_2; x_2y_1; x_2y_2]
The 2-by-2 SVD in KP terms:
    [a_11; a_21; a_12; a_22] = σ_1 [v_11; v_21] ⊗ [u_11; u_21] + σ_2 [v_12; v_22] ⊗ [u_12; u_22]
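The identity vec(A) = σ_1 (v_1 ⊗ u_1) + σ_2 (v_2 ⊗ u_2) checks out numerically (a sketch, not from the slides; `flatten(order='F')` is NumPy's column-stacking vec):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))

U, s, Vt = np.linalg.svd(A)
u1, u2 = U[:, 0], U[:, 1]
v1, v2 = Vt[0, :], Vt[1, :]   # rows of Vt are the right singular vectors

# vec(A) stacks the columns of A: [a11, a21, a12, a22]^T.
vecA = A.flatten(order='F')

# The 2-by-2 SVD in Kronecker-product terms:
kp_sum = s[0]*np.kron(v1, u1) + s[1]*np.kron(v2, u2)
```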

22 Reshaping
Appreciate the duality:
    [a_11 a_12; a_21 a_22] = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T
    [a_11; a_21; a_12; a_22] = σ_1 (v_1 ⊗ u_1) + σ_2 (v_2 ⊗ u_2)
where U = [u_1 u_2] and V = [v_1 v_2].

23 A ∈ IR^{2×2×2} as a minimal sum of rank-1 tensors
What is a rank-1 tensor? If R ∈ IR^{2×2×2} has unit rank, then there exist f, g, h ∈ IR^2 such that
    [r_111; r_211; r_121; r_221; r_112; r_212; r_122; r_222] = [h_1; h_2] ⊗ [g_1; g_2] ⊗ [f_1; f_2]
This means that if R = (r_ijk), then
    r_ijk = h_k g_j f_i,   i = 1:2, j = 1:2, k = 1:2

24 A ∈ IR^{2×2×2} as a minimal sum of rank-1 tensors
Challenge: Find the thinnest possible X, Y, Z ∈ IR^{2×r} so that
    [a_111; a_211; a_121; a_221; a_112; a_212; a_122; a_222] = Σ_{k=1}^{r} z_k ⊗ y_k ⊗ x_k
where X = [x_1 ... x_r], Y = [y_1 ... y_r], and Z = [z_1 ... z_r] are column partitionings. The minimizing r is the tensor rank.

25 A ∈ IR^{2×2×2} as a minimal sum of rank-1 tensors
A surprising fact: if
    [a_111; a_211; a_121; a_221; a_112; a_212; a_122; a_222] = randn(8,1)
then rank = 2 with probability 79% and rank = 3 with probability 21%.
This is different from the matrix case: if A = randn(n,n), then rank(A) = n with probability 100%.
What are the full-rank 2-by-2-by-2 tensors?

26 The 79/21 Property
The 2-by-2 generalized eigenvalue problem (GEV): the generalized eigenvalues of F − λG are the zeros of
    det( [f_11 f_12; f_21 f_22] − λ [g_11 g_12; g_21 g_22] ) = 0
Coincidence? If the a_ijk are randn, then
    det( [a_111 a_121; a_211 a_221] − λ [a_112 a_122; a_212 a_222] ) = 0
has real distinct roots with probability 79% and complex conjugate roots with probability 21%.
What is the connection between the 2-by-2 GEV and the 2-by-2-by-2 tensor rank problem?
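The claimed probability is easy to probe with a Monte Carlo sketch (my own check, not part of the slides): sample F and G with randn entries, expand det(F − λG) as a quadratic in λ, and test the sign of its discriminant:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000
F = rng.standard_normal((trials, 2, 2))
G = rng.standard_normal((trials, 2, 2))

# det(F - lambda*G) = a*lambda^2 + b*lambda + c where:
a = G[:, 0, 0]*G[:, 1, 1] - G[:, 0, 1]*G[:, 1, 0]          # det(G)
b = -(F[:, 0, 0]*G[:, 1, 1] + F[:, 1, 1]*G[:, 0, 0]
      - F[:, 0, 1]*G[:, 1, 0] - F[:, 1, 0]*G[:, 0, 1])
c = F[:, 0, 0]*F[:, 1, 1] - F[:, 0, 1]*F[:, 1, 0]          # det(F)

# Real (almost surely distinct) roots iff the discriminant is nonnegative.
frac_real = np.mean(b*b - 4*a*c >= 0)   # lands near 0.79
```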

27 Nearness Problems
The nearest rank-1 matrix problem: if A ∈ IR^{n_1×n_2} has SVD A = UΣV^T, then B_opt = σ_1 u_1 v_1^T minimizes φ(B) = ‖A − B‖_F subject to rank(B) = 1.
The nearest rank-1 tensor problem for A ∈ IR^{2×2×2}: find unit 2-norm vectors u, v, w ∈ IR^2 and a scalar σ so that ‖a − σ·w ⊗ v ⊗ u‖_2 is minimized, where
    a = [a_111; a_211; a_121; a_221; a_112; a_212; a_122; a_222]

28 A Highly Structured Nonlinear Optimization Problem
It depends on four parameters:
    φ(σ, θ_1, θ_2, θ_3) = ‖ a − σ [cos(θ_3); sin(θ_3)] ⊗ [cos(θ_2); sin(θ_2)] ⊗ [cos(θ_1); sin(θ_1)] ‖_2
                        = ‖ a − σ [c_3c_2c_1; c_3c_2s_1; c_3s_2c_1; c_3s_2s_1; s_3c_2c_1; s_3c_2s_1; s_3s_2c_1; s_3s_2s_1] ‖_2
where c_i = cos(θ_i), s_i = sin(θ_i), i = 1:3.

29 A Highly Structured Nonlinear Optimization Problem
And it can be reshaped:
    φ = ‖ a − σ [c_3c_2c_1; c_3c_2s_1; c_3s_2c_1; c_3s_2s_1; s_3c_2c_1; s_3c_2s_1; s_3s_2c_1; s_3s_2s_1] ‖_2
      = ‖ a − M_1 [x_1; y_1] ‖_2
where M_1 is the 8-by-2 matrix with rows
    [c_3c_2  0], [0  c_3c_2], [c_3s_2  0], [0  c_3s_2], [s_3c_2  0], [0  s_3c_2], [s_3s_2  0], [0  s_3s_2]
and
    [x_1; y_1] = σ [c_1; s_1] = σ [cos(θ_1); sin(θ_1)]
Idea: Improve σ and θ_1 by minimizing with respect to x_1 and y_1, holding θ_2 and θ_3 fixed.

30 A Highly Structured Nonlinear Optimization Problem
And it can be reshaped:
    φ = ‖ a − σ [c_3c_2c_1; c_3c_2s_1; c_3s_2c_1; c_3s_2s_1; s_3c_2c_1; s_3c_2s_1; s_3s_2c_1; s_3s_2s_1] ‖_2
      = ‖ a − M_2 [x_2; y_2] ‖_2
where M_2 is the 8-by-2 matrix with rows
    [c_3c_1  0], [c_3s_1  0], [0  c_3c_1], [0  c_3s_1], [s_3c_1  0], [s_3s_1  0], [0  s_3c_1], [0  s_3s_1]
and
    [x_2; y_2] = σ [c_2; s_2] = σ [cos(θ_2); sin(θ_2)]
Idea: Improve σ and θ_2 by minimizing with respect to x_2 and y_2, holding θ_1 and θ_3 fixed.

31 A Highly Structured Nonlinear Optimization Problem
And it can be reshaped:
    φ = ‖ a − σ [c_3c_2c_1; c_3c_2s_1; c_3s_2c_1; c_3s_2s_1; s_3c_2c_1; s_3c_2s_1; s_3s_2c_1; s_3s_2s_1] ‖_2
      = ‖ a − M_3 [x_3; y_3] ‖_2
where M_3 is the 8-by-2 matrix with rows
    [c_2c_1  0], [c_2s_1  0], [s_2c_1  0], [s_2s_1  0], [0  c_2c_1], [0  c_2s_1], [0  s_2c_1], [0  s_2s_1]
and
    [x_3; y_3] = σ [c_3; s_3] = σ [cos(θ_3); sin(θ_3)]
Idea: Improve σ and θ_3 by minimizing with respect to x_3 and y_3, holding θ_1 and θ_2 fixed.

32 Componentwise Optimization
A common framework for tensor-related optimization:
Choose a subset of the unknowns such that, if they are (temporarily) fixed, we are presented with some standard matrix problem in the remaining unknowns. By choosing different subsets, cycle through all the unknowns. Repeat until converged.
In tensor computations, the standard matrix problem that we end up solving is usually the linear least squares problem. In that case, the overall solution process is referred to as alternating least squares.
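A minimal sketch of the componentwise idea, applied to the nearest rank-1 tensor problem of the preceding slides. The update formulas below are the standard alternating scheme (a higher-order power iteration) in my own NumPy rendering, not code from the slides:

```python
import numpy as np

def nearest_rank1(A, iters=100):
    """Cycle through u, v, w: with two of the vectors fixed, the best
    third vector solves a tiny least squares problem in closed form."""
    u = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    v = np.ones(A.shape[1]) / np.sqrt(A.shape[1])
    w = np.ones(A.shape[2]) / np.sqrt(A.shape[2])
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', A, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', A, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', A, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum('ijk,i,j,k->', A, u, v, w)
    return sigma, u, v, w

# On an exactly rank-1 tensor the iteration recovers it.
rng = np.random.default_rng(0)
u0, v0, w0 = (x / np.linalg.norm(x) for x in rng.standard_normal((3, 2)))
A = 3.0 * np.einsum('i,j,k->ijk', u0, v0, w0)
sigma, u, v, w = nearest_rank1(A)
B = sigma * np.einsum('i,j,k->ijk', u, v, w)
```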

33 Bridging the Gap via Tensor Unfoldings
A common framework for tensor computations:
1. Turn tensor A into a matrix A.
2. Through matrix computations, discover things about A.
3. Draw conclusions about tensor A based on what is learned about matrix A.

34 Tensor Unfoldings
Some tensor unfoldings of A ∈ IR^{4×3×2}:
    A_(1) = [ a_111 a_121 a_131 a_112 a_122 a_132 ]
            [ a_211 a_221 a_231 a_212 a_222 a_232 ]
            [ a_311 a_321 a_331 a_312 a_322 a_332 ]
            [ a_411 a_421 a_431 a_412 a_422 a_432 ]
    A_(2) = [ a_111 a_211 a_311 a_411 a_112 a_212 a_312 a_412 ]
            [ a_121 a_221 a_321 a_421 a_122 a_222 a_322 a_422 ]
            [ a_131 a_231 a_331 a_431 a_132 a_232 a_332 a_432 ]
    A_(3) = [ a_111 a_211 a_311 a_411 a_121 a_221 a_321 a_421 a_131 a_231 a_331 a_431 ]
            [ a_112 a_212 a_312 a_412 a_122 a_222 a_322 a_422 a_132 a_232 a_332 a_432 ]
A tensor unfolding is sometimes referred to as a tensor flattening or a tensor matricization.
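In NumPy, this modal unfolding can be written with `moveaxis` plus a Fortran-order reshape (a sketch of mine under the convention above; `unfold` is a hypothetical helper name). The self-labeling entries make it easy to compare with the matrices shown:

```python
import numpy as np

def unfold(A, k):
    """Mode-k unfolding (0-based k): rows are indexed by mode k; columns
    enumerate the remaining modes in order, first remaining index fastest."""
    return np.reshape(np.moveaxis(A, k, 0), (A.shape[k], -1), order='F')

# Tensor in IR^{4 x 3 x 2} whose entries encode their own 1-based
# subscripts: A(i1,i2,i3) = 100*i1 + 10*i2 + i3.
i1, i2, i3 = np.indices((4, 3, 2)) + 1
A = 100*i1 + 10*i2 + i3

A1, A2, A3 = unfold(A, 0), unfold(A, 1), unfold(A, 2)
```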

35 Tensor Unfoldings: How They Might Show Up
Gradients of multilinear forms: if f : IR^{n_1} × IR^{n_2} × IR^{n_3} → IR is defined by
    f(u, v, w) = Σ_{i_1=1}^{n_1} Σ_{i_2=1}^{n_2} Σ_{i_3=1}^{n_3} A(i_1, i_2, i_3) u(i_1) v(i_2) w(i_3)
with A ∈ IR^{n_1×n_2×n_3}, then its gradient is given by
                 [ A_(1) (w ⊗ v) ]
    ∇f(u, v, w) = [ A_(2) (w ⊗ u) ]
                 [ A_(3) (v ⊗ u) ]
where A_(1), A_(2), and A_(3) are the modal unfoldings of A.
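The gradient formula can be verified numerically (a sketch of mine; `unfold` is the same hypothetical helper as before, repeated so the block is self-contained):

```python
import numpy as np

def unfold(A, k):
    # Mode-k unfolding: mode-k fibers become columns, first index fastest.
    return np.reshape(np.moveaxis(A, k, 0), (A.shape[k], -1), order='F')

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3, 2))
u, v, w = rng.standard_normal(4), rng.standard_normal(3), rng.standard_normal(2)

# Partial gradients of f(u,v,w) = sum_ijk A(i,j,k) u_i v_j w_k, directly...
gu = np.einsum('ijk,j,k->i', A, v, w)
gv = np.einsum('ijk,i,k->j', A, u, w)
gw = np.einsum('ijk,i,j->k', A, u, v)

# ...and via modal unfoldings and Kronecker products, as on the slide.
gu_kp = unfold(A, 0) @ np.kron(w, v)
gv_kp = unfold(A, 1) @ np.kron(w, u)
gw_kp = unfold(A, 2) @ np.kron(v, u)
```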

36 More General Unfoldings
Example: A = A(1:2, 1:3, 1:2, 1:2, 1:3) unfolded into a 6-by-12 matrix B, with rows indexed by the index pairs
    (1,1), (2,1), (1,2), (2,2), (1,3), (2,3)
and columns indexed by the index triples
    (1,1,1), (2,1,1), (1,2,1), (2,2,1), (1,3,1), (2,3,1), (1,1,2), (2,1,2), (1,2,2), (2,2,2), (1,3,2), (2,3,2)
Each entry of B is an entry of A: two of A's modes define the rows and the other three define the columns.

37 A Block Matrix is an Unfolded Order-4 Tensor
Some natural identifications: a block matrix with uniformly sized blocks
        [ A_11      ...  A_{1,c_2}   ]
    A = [  :               :        ],    A_{i_2,j_2} ∈ IR^{r_1×c_1}
        [ A_{r_2,1} ...  A_{r_2,c_2} ]
has several natural reshapings:
    A_{i_2,j_2}(i_1, j_1) ≡ A(i_1, i_2, j_1, j_2),    A ∈ IR^{r_1×r_2×c_1×c_2}
    A_{i_2,j_2}(i_1, j_1) ≡ A(i_1, j_1, i_2, j_2),    A ∈ IR^{r_1×c_1×r_2×c_2}

38 Kronecker Products are Unfolded Tensors
The m_1 m_2 m_3-by-n_1 n_2 n_3 matrix
    A = A_1(1:m_1, 1:n_1) ⊗ A_2(1:m_2, 1:n_2) ⊗ A_3(1:m_3, 1:n_3)
is a particular unfolding of the m_1-by-m_2-by-m_3-by-n_1-by-n_2-by-n_3 tensor
    A(i_1, i_2, i_3, j_1, j_2, j_3) = A_1(i_1, j_1) A_2(i_2, j_2) A_3(i_3, j_3)

39 Parts of a Tensor: Fibers
A fiber of a tensor A is a vector obtained by fixing all but one of A's indices. For example, if A = A(1:3, 1:5, 1:4, 1:7), then
    A(2, :, 4, 6) = A(2, 1:5, 4, 6) = [A(2, 1, 4, 6); A(2, 2, 4, 6); A(2, 3, 4, 6); A(2, 4, 4, 6); A(2, 5, 4, 6)]
is a fiber. We adopt the convention that a fiber is a column vector.

40 The vec Operation
Turns matrices into vectors by stacking columns: if X = [x_1 ... x_n] is a column partitioning, then
    vec(X) = [x_1; x_2; ...; x_n]

41 The vec Operation
Turns tensors into vectors by stacking mode-1 fibers. For A ∈ IR^{2×3×2}:
    vec(A) = [A(:, 1, 1); A(:, 2, 1); A(:, 3, 1); A(:, 1, 2); A(:, 2, 2); A(:, 3, 2)]
           = [a_111; a_211; a_121; a_221; a_131; a_231; a_112; a_212; a_122; a_222; a_132; a_232]
We need to specify the order of the stacking...
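In NumPy this vec convention is exactly a Fortran-order flatten (a sketch of mine, with entries that encode their own subscripts so the stacking order is visible):

```python
import numpy as np

# A tensor in IR^{2 x 3 x 2} whose entries encode their own 1-based
# subscripts: A(i1,i2,i3) = 100*i1 + 10*i2 + i3.
i1, i2, i3 = np.indices((2, 3, 2)) + 1
A = 100*i1 + 10*i2 + i3

# Fortran-order flattening stacks the mode-1 fibers in exactly the
# order shown on the slide: first index fastest, last index slowest.
v = A.flatten(order='F')
```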

42 Parts of a Tensor: Slices
A slice of a tensor A is a matrix obtained by fixing all but two of A's indices. For example, if A = A(1:3, 1:5, 1:4, 1:7), then
    A(:, 3, :, 6) = [ A(1, 3, 1, 6)  A(1, 3, 2, 6)  A(1, 3, 3, 6)  A(1, 3, 4, 6) ]
                    [ A(2, 3, 1, 6)  A(2, 3, 2, 6)  A(2, 3, 3, 6)  A(2, 3, 4, 6) ]
                    [ A(3, 3, 1, 6)  A(3, 3, 2, 6)  A(3, 3, 3, 6)  A(3, 3, 4, 6) ]
is a slice. We adopt the convention that the first unfixed index in the tensor is the row index of the slice and the second unfixed index is the column index of the slice.

43 Tensor Notation: Subscript Vectors
Referencing a tensor entry: if A ∈ IR^{n_1×···×n_d} and i = (i_1, ..., i_d) with 1 ≤ i_k ≤ n_k for k = 1:d, then
    A(i) ≡ A(i_1, ..., i_d)
We say that i is a subscript vector. Bold font will be used to designate subscript vectors.
Bounding subscript vectors: if L and R are subscript vectors having the same dimension, then L ≤ R means that L_k ≤ R_k for all k.
Special subscript vectors: the subscript vector of all ones is denoted by 1 (dimension clear from context). If N is an integer, then N = N·1.

44 Subtensors
Specification of subtensors: suppose A ∈ IR^{n_1×···×n_d} with n = [n_1, ..., n_d]. If 1 ≤ L ≤ R ≤ n, then A(L:R) denotes the subtensor
    B = A(L_1:R_1, ..., L_d:R_d)

45 The Index-Mapping Function col
Definition: if i and n are length-d subscript vectors, then the integer-valued function col(i, n) is defined by
    col(i, n) = i_1 + (i_2 − 1)n_1 + (i_3 − 1)n_1 n_2 + ··· + (i_d − 1)n_1 ··· n_{d−1}
The formal specification of vec: if A ∈ IR^{n_1×···×n_d} and v = vec(A), then
    v(col(i, n)) = A(i),    1 ≤ i ≤ n
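A direct Python rendering of col, checked against the vec specification (a sketch; the helper name and the self-labeling test tensor are mine):

```python
import numpy as np

def col(i, n):
    """col(i,n) = i1 + (i2-1)n1 + (i3-1)n1n2 + ...  (all indices 1-based)."""
    c, stride = 1, 1
    for ik, nk in zip(i, n):
        c += (ik - 1) * stride
        stride *= nk
    return c

# Consistency with vec: v(col(i,n)) = A(i).
n = (2, 3, 2)
i1, i2, i3 = np.indices(n) + 1
A = 100*i1 + 10*i2 + i3          # entry encodes its own subscripts
v = A.flatten(order='F')         # vec(A)
```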

46 The Importance of Notation
Contractions:
    A(i_1, i_2, j_1, j_2, j_3) = Σ_{k_1} Σ_{k_2} B(i_1, i_2, k_1, k_2) C(k_1, k_2, j_1, j_2, j_3)
is better written as
    A(i, j) = Σ_k B(i, k) C(k, j)
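The multi-index contraction really is a matrix multiply of unfoldings; a NumPy sketch with `einsum` (mine, not from the slides; grouping the multi-indices in Fortran order matches the vec convention used here):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((2, 3, 2, 2))     # B(i1,i2,k1,k2)
C = rng.standard_normal((2, 2, 3, 2, 2))  # C(k1,k2,j1,j2,j3)

# The contraction A(i,j) = sum_k B(i,k) C(k,j) over multi-indices:
A = np.einsum('abkl,klcde->abcde', B, C)

# It is "just" a matrix product of unfoldings: group (i1,i2) as rows,
# (k1,k2) as the contracted index, (j1,j2,j3) as columns, Fortran order.
Bmat = B.reshape(6, 4, order='F')
Cmat = C.reshape(4, 12, order='F')
Amat = A.reshape(6, 12, order='F')
```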

47 Software: The Matlab Tensor Toolbox
Why? It provides an excellent environment for learning about tensor computations and for building a facility with multi-index reasoning.
Who? Brett W. Bader and Tamara G. Kolda, Sandia National Laboratories.
Where? tkolda/tensortoolbox/

48 Matlab Tensor Toolbox: A Tensor is a Structure
>> n = [ ];
>> A_array = randn(n);
>> A = tensor(A_array);
>> fieldnames(A)
ans =
    data
    size
The .data field is a multi-dimensional array. The .size field is an integer vector that specifies the modal dimensions. A.data and A_array have the same value. A.size and n have the same value.

49 Matlab Tensor Toolbox: Order and Dimension
>> n = [ ];
>> A_array = randn(n);
>> A = tensor(A_array)
>> d = ndims(A)
d =
    4
>> n = size(A)
n =

50 Matlab Tensor Toolbox: Tensor Operations
n = [ ];
% Familiar initializations...
X = tenzeros(n); Y = tenones(n);
A = tenrand(n); B = tenrand(n);
% Extracting fibers and slices...
a_fiber = A(2,:,3,6);
a_slice = A(3,:,2,:);
% These operations are legal...
C = 3*A; C = -A; C = A+1; C = A.^2;
C = A + B; C = A./B; C = A.*B; C = A.^B;
% Frobenius norm...
err = norm(A);
% Applying a function to each entry...
F = tenfun(@sin, A);
G = tenfun(@(x) sin(x)./exp(x), A);

51 The Matlab Tensor Toolbox
Problem 1. Suppose A = A(1:n_1, 1:n_2, 1:n_3). We say A(i) is an interior entry if 1 < i < n. Otherwise, A(i) is an edge entry. If A(i_1, i_2, i_3) is an interior entry, then it has six neighbors: A(i_1 ± 1, i_2, i_3), A(i_1, i_2 ± 1, i_3), and A(i_1, i_2, i_3 ± 1). Implement the following function so that it performs as specified:
function B = Smooth(A)
% A is a third-order tensor.
% B is a third-order tensor with the property that each interior
% entry B(i) is the average of A(i)'s six neighbors. If B(i) is an
% edge entry, then B(i) = A(i).
Strive for an implementation that does not have any loops.
Problem 2. Formulate and solve an order-d version of Problem 1.
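One possible loop-free rendering of Problem 1, in NumPy rather than the Tensor Toolbox (a solution sketch of mine, not part of the slides): shifted array slices supply the six neighbors of every interior entry at once.

```python
import numpy as np

def smooth(A):
    """B matches A on edge entries; each interior entry of B is the
    average of the six neighbors of the corresponding entry of A."""
    B = A.copy()
    B[1:-1, 1:-1, 1:-1] = (A[:-2, 1:-1, 1:-1] + A[2:, 1:-1, 1:-1] +
                           A[1:-1, :-2, 1:-1] + A[1:-1, 2:, 1:-1] +
                           A[1:-1, 1:-1, :-2] + A[1:-1, 1:-1, 2:]) / 6.0
    return B

A = np.arange(27, dtype=float).reshape(3, 3, 3)
B = smooth(A)
```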

52 Modal Unfoldings Using tenmat
Example: a mode-2 unfolding of A ∈ IR^{4×3×2}. tenmat(A,2) sets up A_(2):
            (1,1)  (2,1)  (3,1)  (4,1)  (1,2)  (2,2)  (3,2)  (4,2)
    A_(2) = [ a_111 a_211 a_311 a_411 a_112 a_212 a_312 a_412 ]
            [ a_121 a_221 a_321 a_421 a_122 a_222 a_322 a_422 ]
            [ a_131 a_231 a_331 a_431 a_132 a_232 a_332 a_432 ]
Notice how the fibers are ordered.

53 Modal Unfoldings Using tenmat
Precisely how are the fibers ordered? If A ∈ IR^{n_1×···×n_d}, N = n_1 ··· n_d, and B = tenmat(A,k), then B is the matrix A_(k) ∈ IR^{n_k×(N/n_k)} with
    A_(k)(i_k, col(ĩ, ñ)) = A(i)
where
    ĩ = [i_1, ..., i_{k−1}, i_{k+1}, ..., i_d]
    ñ = [n_1, ..., n_{k−1}, n_{k+1}, ..., n_d]
Recall that the col function maps multi-indices to integers. For example, if n = [2 3 2], then
    i:          (1,1,1) (2,1,1) (1,2,1) (2,2,1) (1,3,1) (2,3,1) (1,1,2) (2,1,2) ...
    col(i, n):     1       2       3       4       5       6       7       8    ...

54 Matlab Tensor Toolbox: A Tenmat is a Structure
>> n = [ ];
>> A = tenrand(n);
>> A2 = tenmat(A,2);
>> fieldnames(A2)
ans =
    tsize
    rindices
    cindices
    data
A2.tsize = size of the unfolded tensor = [ ]
A2.rindices = mode indices that define A2's rows = [2]
A2.cindices = mode indices that define A2's columns = [1 3 4]
A2.data = the matrix that is the unfolded tensor.

55 Other Modal Unfoldings Using tenmat
Mode-k unfoldings of A ∈ IR^{n_1×···×n_d}: a particular mode-k unfolding is defined by a permutation v of [1:k−1 k+1:d]. Tensor entries get mapped to matrix entries as follows:
    A(i) → B(i_k, col(i(v), n(v)))
    Name                  v                        How to get it
    A_(k) (default)       [1:k−1 k+1:d]            tenmat(A,k)
    Forward cyclic        [k+1:d 1:k−1]            tenmat(A,k,'fc')
    Backward cyclic       [k−1:−1:1 d:−1:k+1]      tenmat(A,k,'bc')
    Arbitrary             v                        tenmat(A,k,v)

56 Using tenmat
Problem 3. Write a Matlab function alpha = MaxFnorm(A) that returns the maximum value of ‖B‖_F where B is a slice of the input tensor A ∈ IR^{n_1×···×n_d}.

57 Summary: Key Words
Reshaping is about turning a matrix into a differently sized matrix, or into a tensor, or into a vector.
The Kronecker product is an operation between two matrices that produces a highly structured block matrix.
A rank-1 tensor can be reshaped into a multiple Kronecker product of vectors.
A tensor decomposition expresses a given tensor in terms of simpler tensors.
The alternating least squares approach to multilinear LS is based on solving a sequence of linear LS problems, each obtained by freezing all but a subset of the unknowns.

58 Summary: Key Words
A block matrix is a matrix whose entries are matrices.
A fiber of a tensor is a column vector defined by fixing all but one index and varying what's left.
A slice of a tensor is a matrix defined by fixing all but two indices and varying what's left.
A mode-k unfolding of a tensor is obtained by assembling all the mode-k fibers into a matrix.
tensor is a Tensor Toolbox command used to construct tensors.
tenmat is a Tensor Toolbox command used to construct unfoldings of a tensor.

59 Summary: Recurring Themes
Given a tensor A ∈ IR^{n_1×···×n_d}, there are many ways to unfold it into a matrix A ∈ IR^{N_1×N_2} where N_1 N_2 = n_1 ··· n_d. For example, A could be a block matrix whose entries are A-slices.
A facility with block matrices and tensor indexing is required to understand the layout possibilities.
Computations with the unfolded tensor frequently involve the Kronecker product.

60 References
L. De Lathauwer and B. De Moor (1998). From Matrix to Tensor: Multilinear Algebra and Signal Processing, in Mathematics in Signal Processing IV, J. McWhirter and I. Proudler, eds., Clarendon Press, Oxford.
G.H. Golub and C. Van Loan (2013). Matrix Computations, 4th Edition, Johns Hopkins University Press, Baltimore, MD.
T.G. Kolda and B.W. Bader (2009). Tensor Decompositions and Applications, SIAM Review 51.
P.M. Kroonenberg (2008). Applied Multiway Data Analysis, Wiley.
A. Smilde, R. Bro, and P. Geladi (2004). Multi-Way Analysis: Applications in the Chemical Sciences, Wiley, West Sussex, England.
B.W. Bader and T.G. Kolda (2006). Algorithm 862: MATLAB Tensor Classes for Fast Algorithm Prototyping, ACM Transactions on Mathematical Software, 32.


More information

Math 671: Tensor Train decomposition methods

Math 671: Tensor Train decomposition methods Math 671: Eduardo Corona 1 1 University of Michigan at Ann Arbor December 8, 2016 Table of Contents 1 Preliminaries and goal 2 Unfolding matrices for tensorized arrays The Tensor Train decomposition 3

More information

Math 515 Fall, 2008 Homework 2, due Friday, September 26.

Math 515 Fall, 2008 Homework 2, due Friday, September 26. Math 515 Fall, 2008 Homework 2, due Friday, September 26 In this assignment you will write efficient MATLAB codes to solve least squares problems involving block structured matrices known as Kronecker

More information

c 2008 Society for Industrial and Applied Mathematics

c 2008 Society for Industrial and Applied Mathematics SIAM J MATRIX ANAL APPL Vol 30, No 3, pp 1219 1232 c 2008 Society for Industrial and Applied Mathematics A JACOBI-TYPE METHOD FOR COMPUTING ORTHOGONAL TENSOR DECOMPOSITIONS CARLA D MORAVITZ MARTIN AND

More information

Linear Algebra and Matrices

Linear Algebra and Matrices Linear Algebra and Matrices 4 Overview In this chapter we studying true matrix operations, not element operations as was done in earlier chapters. Working with MAT- LAB functions should now be fairly routine.

More information

Parallel Tensor Compression for Large-Scale Scientific Data

Parallel Tensor Compression for Large-Scale Scientific Data Parallel Tensor Compression for Large-Scale Scientific Data Woody Austin, Grey Ballard, Tamara G. Kolda April 14, 2016 SIAM Conference on Parallel Processing for Scientific Computing MS 44/52: Parallel

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

More information

Applied Numerical Linear Algebra. Lecture 8

Applied Numerical Linear Algebra. Lecture 8 Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ

More information

Matrix-Product-States/ Tensor-Trains

Matrix-Product-States/ Tensor-Trains / Tensor-Trains November 22, 2016 / Tensor-Trains 1 Matrices What Can We Do With Matrices? Tensors What Can We Do With Tensors? Diagrammatic Notation 2 Singular-Value-Decomposition 3 Curse of Dimensionality

More information

Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning. Sargur N. Srihari Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

CVPR A New Tensor Algebra - Tutorial. July 26, 2017

CVPR A New Tensor Algebra - Tutorial. July 26, 2017 CVPR 2017 A New Tensor Algebra - Tutorial Lior Horesh lhoresh@us.ibm.com Misha Kilmer misha.kilmer@tufts.edu July 26, 2017 Outline Motivation Background and notation New t-product and associated algebraic

More information

MATRICES. a m,1 a m,n A =

MATRICES. a m,1 a m,n A = MATRICES Matrices are rectangular arrays of real or complex numbers With them, we define arithmetic operations that are generalizations of those for real and complex numbers The general form a matrix of

More information

Eigenvalue Problems and Singular Value Decomposition

Eigenvalue Problems and Singular Value Decomposition Eigenvalue Problems and Singular Value Decomposition Sanzheng Qiao Department of Computing and Software McMaster University August, 2012 Outline 1 Eigenvalue Problems 2 Singular Value Decomposition 3 Software

More information

Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning

Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning Anton Rodomanov Higher School of Economics, Russia Bayesian methods research group (http://bayesgroup.ru) 14 March

More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

1 Matrices and matrix algebra

1 Matrices and matrix algebra 1 Matrices and matrix algebra 1.1 Examples of matrices A matrix is a rectangular array of numbers and/or variables. For instance 4 2 0 3 1 A = 5 1.2 0.7 x 3 π 3 4 6 27 is a matrix with 3 rows and 5 columns

More information

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA)

The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) Chapter 5 The Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) 5.1 Basics of SVD 5.1.1 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Tutorial on MATLAB for tensors and the Tucker decomposition

Tutorial on MATLAB for tensors and the Tucker decomposition Tutorial on MATLAB for tensors and the Tucker decomposition Tamara G. Kolda and Brett W. Bader Workshop on Tensor Decomposition and Applications CIRM, Luminy, Marseille, France August 29, 2005 Sandia is

More information

Tensor Decompositions Workshop Discussion Notes American Institute of Mathematics (AIM) Palo Alto, CA

Tensor Decompositions Workshop Discussion Notes American Institute of Mathematics (AIM) Palo Alto, CA Tensor Decompositions Workshop Discussion Notes American Institute of Mathematics (AIM) Palo Alto, CA Carla D. Moravitz Martin July 9-23, 24 This work was supported by AIM. Center for Applied Mathematics,

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 2. Basic Linear Algebra continued Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki

More information

Introduction to Tensors. 8 May 2014

Introduction to Tensors. 8 May 2014 Introduction to Tensors 8 May 2014 Introduction to Tensors What is a tensor? Basic Operations CP Decompositions and Tensor Rank Matricization and Computing the CP Dear Tullio,! I admire the elegance of

More information

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4 Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math Week # 1 Saturday, February 1, 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x

More information

The multiple-vector tensor-vector product

The multiple-vector tensor-vector product I TD MTVP C KU Leuven August 29, 2013 In collaboration with: N Vanbaelen, K Meerbergen, and R Vandebril Overview I TD MTVP C 1 Introduction Inspiring example Notation 2 Tensor decompositions The CP decomposition

More information

Matrix decompositions

Matrix decompositions Matrix decompositions Zdeněk Dvořák May 19, 2015 Lemma 1 (Schur decomposition). If A is a symmetric real matrix, then there exists an orthogonal matrix Q and a diagonal matrix D such that A = QDQ T. The

More information

CP DECOMPOSITION AND ITS APPLICATION IN NOISE REDUCTION AND MULTIPLE SOURCES IDENTIFICATION

CP DECOMPOSITION AND ITS APPLICATION IN NOISE REDUCTION AND MULTIPLE SOURCES IDENTIFICATION International Conference on Computer Science and Intelligent Communication (CSIC ) CP DECOMPOSITION AND ITS APPLICATION IN NOISE REDUCTION AND MULTIPLE SOURCES IDENTIFICATION Xuefeng LIU, Yuping FENG,

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Multilinear Singular Value Decomposition for Two Qubits

Multilinear Singular Value Decomposition for Two Qubits Malaysian Journal of Mathematical Sciences 10(S) August: 69 83 (2016) Special Issue: The 7 th International Conference on Research and Education in Mathematics (ICREM7) MALAYSIAN JOURNAL OF MATHEMATICAL

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Matrices and Vectors

Matrices and Vectors Matrices and Vectors James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 11, 2013 Outline 1 Matrices and Vectors 2 Vector Details 3 Matrix

More information

TENLAB A MATLAB Ripoff for Tensors

TENLAB A MATLAB Ripoff for Tensors TENLAB A MATLAB Ripoff for Tensors Y. Cem Sübakan, ys2939 Mehmet K. Turkcan, mkt2126 Dallas Randal Jones, drj2115 February 9, 2016 Introduction MATLAB is a great language for manipulating arrays. However,

More information

18.06 Problem Set 3 - Solutions Due Wednesday, 26 September 2007 at 4 pm in

18.06 Problem Set 3 - Solutions Due Wednesday, 26 September 2007 at 4 pm in 8.6 Problem Set 3 - s Due Wednesday, 26 September 27 at 4 pm in 2-6. Problem : (=2+2+2+2+2) A vector space is by definition a nonempty set V (whose elements are called vectors) together with rules of addition

More information

CHAPTER 6. Direct Methods for Solving Linear Systems

CHAPTER 6. Direct Methods for Solving Linear Systems CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to

More information

Large Scale Data Analysis Using Deep Learning

Large Scale Data Analysis Using Deep Learning Large Scale Data Analysis Using Deep Learning Linear Algebra U Kang Seoul National University U Kang 1 In This Lecture Overview of linear algebra (but, not a comprehensive survey) Focused on the subset

More information

Direct Methods for Solving Linear Systems. Matrix Factorization

Direct Methods for Solving Linear Systems. Matrix Factorization Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

arxiv: v1 [math.na] 26 Nov 2018

arxiv: v1 [math.na] 26 Nov 2018 arxiv:1811.10378v1 [math.na] 26 Nov 2018 Gradient-based iterative algorithms for solving Sylvester tensor equations and the associated tensor nearness problems Maolin Liang, Bing Zheng School of Mathematics

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

Chapter 2. Matrix Arithmetic. Chapter 2

Chapter 2. Matrix Arithmetic. Chapter 2 Matrix Arithmetic Matrix Addition and Subtraction Addition and subtraction act element-wise on matrices. In order for the addition/subtraction (A B) to be possible, the two matrices A and B must have the

More information

Basic Concepts in Linear Algebra

Basic Concepts in Linear Algebra Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University February 2, 2015 Grady B Wright Linear Algebra Basics February 2, 2015 1 / 39 Numerical Linear Algebra Linear

More information

Dense LU factorization and its error analysis

Dense LU factorization and its error analysis Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,

More information

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124 Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents

More information

Singular Value Decomposition

Singular Value Decomposition Chapter 5 Singular Value Decomposition We now reach an important Chapter in this course concerned with the Singular Value Decomposition of a matrix A. SVD, as it is commonly referred to, is one of the

More information

Tensor Decompositions for Machine Learning. G. Roeder 1. UBC Machine Learning Reading Group, June University of British Columbia

Tensor Decompositions for Machine Learning. G. Roeder 1. UBC Machine Learning Reading Group, June University of British Columbia Network Feature s Decompositions for Machine Learning 1 1 Department of Computer Science University of British Columbia UBC Machine Learning Group, June 15 2016 1/30 Contact information Network Feature

More information

Computational Multilinear Algebra

Computational Multilinear Algebra Computational Multilinear Algebra Charles F. Van Loan Supported in part by the NSF contract CCR-9901988. Involves Working With Kronecker Products b 11 b 12 b 21 b 22 c 11 c 12 c 13 c 21 c 22 c 23 c 31

More information

Math 1553 Introduction to Linear Algebra

Math 1553 Introduction to Linear Algebra Math 1553 Introduction to Linear Algebra Lecture Notes Chapter 2 Matrix Algebra School of Mathematics The Georgia Institute of Technology Math 1553 Lecture Notes for Chapter 2 Introduction, Slide 1 Section

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Matrices: 2.1 Operations with Matrices

Matrices: 2.1 Operations with Matrices Goals In this chapter and section we study matrix operations: Define matrix addition Define multiplication of matrix by a scalar, to be called scalar multiplication. Define multiplication of two matrices,

More information

CSL361 Problem set 4: Basic linear algebra

CSL361 Problem set 4: Basic linear algebra CSL361 Problem set 4: Basic linear algebra February 21, 2017 [Note:] If the numerical matrix computations turn out to be tedious, you may use the function rref in Matlab. 1 Row-reduced echelon matrices

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Review of similarity transformation and Singular Value Decomposition

Review of similarity transformation and Singular Value Decomposition Review of similarity transformation and Singular Value Decomposition Nasser M Abbasi Applied Mathematics Department, California State University, Fullerton July 8 7 page compiled on June 9, 5 at 9:5pm

More information

Linear Algebra Review. Fei-Fei Li

Linear Algebra Review. Fei-Fei Li Linear Algebra Review Fei-Fei Li 1 / 51 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

MTH 102A - Linear Algebra II Semester

MTH 102A - Linear Algebra II Semester MTH 0A - Linear Algebra - 05-6-II Semester Arbind Kumar Lal P Field A field F is a set from which we choose our coefficients and scalars Expected properties are ) a+b and a b should be defined in it )

More information

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4 Linear Algebra Section. : LU Decomposition Section. : Permutations and transposes Wednesday, February 1th Math 01 Week # 1 The LU Decomposition We learned last time that we can factor a invertible matrix

More information

A Simpler Approach to Low-Rank Tensor Canonical Polyadic Decomposition

A Simpler Approach to Low-Rank Tensor Canonical Polyadic Decomposition A Simpler Approach to Low-ank Tensor Canonical Polyadic Decomposition Daniel L. Pimentel-Alarcón University of Wisconsin-Madison Abstract In this paper we present a simple and efficient method to compute

More information

A Divide-and-Conquer Algorithm for Functions of Triangular Matrices

A Divide-and-Conquer Algorithm for Functions of Triangular Matrices A Divide-and-Conquer Algorithm for Functions of Triangular Matrices Ç. K. Koç Electrical & Computer Engineering Oregon State University Corvallis, Oregon 97331 Technical Report, June 1996 Abstract We propose

More information

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg Linear Algebra, part 3 Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2010 Going back to least squares (Sections 1.7 and 2.3 from Strang). We know from before: The vector

More information

MATH 612 Computational methods for equation solving and function minimization Week # 2

MATH 612 Computational methods for equation solving and function minimization Week # 2 MATH 612 Computational methods for equation solving and function minimization Week # 2 Instructor: Francisco-Javier Pancho Sayas Spring 2014 University of Delaware FJS MATH 612 1 / 38 Plan for this week

More information

Introduction to Numerical Linear Algebra II

Introduction to Numerical Linear Algebra II Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in

More information

Deep Learning Book Notes Chapter 2: Linear Algebra

Deep Learning Book Notes Chapter 2: Linear Algebra Deep Learning Book Notes Chapter 2: Linear Algebra Compiled By: Abhinaba Bala, Dakshit Agrawal, Mohit Jain Section 2.1: Scalars, Vectors, Matrices and Tensors Scalar Single Number Lowercase names in italic

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

Diagonalization of Tensors with Circulant Structure

Diagonalization of Tensors with Circulant Structure Diagonalization of Tensors with Circulant Structure Mansoor Rezgi and Lars Eldén Linköping University Post Print N.B.: When citing this work, cite the original article. Original Publication: Mansoor Rezgi

More information

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models

Bare minimum on matrix algebra. Psychology 588: Covariance structure and factor models Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1

More information

BLAS: Basic Linear Algebra Subroutines Analysis of the Matrix-Vector-Product Analysis of Matrix-Matrix Product

BLAS: Basic Linear Algebra Subroutines Analysis of the Matrix-Vector-Product Analysis of Matrix-Matrix Product Level-1 BLAS: SAXPY BLAS-Notation: S single precision (D for double, C for complex) A α scalar X vector P plus operation Y vector SAXPY: y = αx + y Vectorization of SAXPY (αx + y) by pipelining: page 8

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Logistics Notes for 2016-08-26 1. Our enrollment is at 50, and there are still a few students who want to get in. We only have 50 seats in the room, and I cannot increase the cap further. So if you are

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information