Lecture 2. Tensor Unfoldings. Charles F. Van Loan

1 From Matrix to Tensor: The Transition to Numerical Multilinear Algebra. Lecture 2. Tensor Unfoldings. Charles F. Van Loan, Cornell University. The Gene Golub SIAM Summer School 2010, Selva di Fasano, Brindisi, Italy.

2 What is this Lecture About? A Common Framework for Tensor Computations:
1. Turn tensor A into a matrix A.
2. Through matrix computations, discover things about the matrix A.
3. Draw conclusions about tensor A based on what is learned about matrix A.
Let us begin to get ready for this...

3 What is this Lecture About? Recurring themes... Given a tensor A ∈ IR^(n1 × ··· × nd), there are many ways to assemble its entries into a matrix A ∈ IR^(N1 × N2) where N1·N2 = n1···nd. For example, A could be a block matrix whose entries are A-slices. A facility with block matrices and tensor indexing is required to understand the layout possibilities. Computations with the unfolded tensor frequently involve the Kronecker product. A portion of Lecture 3 is devoted to this important "bridging the gap" matrix operation.

4 What is this Lecture About? Example: Gradients of Multilinear Forms. If f : IR^n1 × IR^n2 × IR^n3 --> IR is defined by
f(u, v, w) = sum over i1 = 1:n1, i2 = 1:n2, i3 = 1:n3 of A(i1, i2, i3) u(i1) v(i2) w(i3)
and A ∈ IR^(n1 × n2 × n3), then its gradient is given by
∇f(u, v, w) = [ A_(1) (w ⊗ v)
                A_(2) (w ⊗ u)
                A_(3) (v ⊗ u) ]
where A_(1), A_(2), and A_(3) are matrix unfoldings of A.
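The stacked gradient above can be checked in plain Matlab with reshape, permute, and kron; this is a minimal sketch (not from the slides) whose unfoldings use the default column ordering described later in this lecture.

% Gradient of the multilinear form f(u,v,w) = sum A(i1,i2,i3)u(i1)v(i2)w(i3).
n1 = 4; n2 = 3; n3 = 2;
A = randn(n1,n2,n3); u = randn(n1,1); v = randn(n2,1); w = randn(n3,1);
A1 = reshape(A, n1, n2*n3);                   % mode-1 unfolding
A2 = reshape(permute(A,[2 1 3]), n2, n1*n3);  % mode-2 unfolding
A3 = reshape(permute(A,[3 1 2]), n3, n1*n2);  % mode-3 unfolding
g = [ A1*kron(w,v) ; A2*kron(w,u) ; A3*kron(v,u) ];   % = gradient of f at (u,v,w)

The Kronecker products supply the weights v(i2)w(i3), u(i1)w(i3), and u(i1)v(i2) in exactly the order in which the corresponding fibers appear as columns of A_(1), A_(2), and A_(3).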

5 What is this Lecture About? Tensor Unfoldings of A ∈ IR^(4 × 3 × 2)...
A_(1) = [ a111 a121 a131 a112 a122 a132
          a211 a221 a231 a212 a222 a232
          a311 a321 a331 a312 a322 a332
          a411 a421 a431 a412 a422 a432 ]
A_(2) = [ a111 a211 a311 a411 a112 a212 a312 a412
          a121 a221 a321 a421 a122 a222 a322 a422
          a131 a231 a331 a431 a132 a232 a332 a432 ]
A_(3) = [ a111 a211 a311 a411 a121 a221 a321 a421 a131 a231 a331 a431
          a112 a212 a312 a412 a122 a222 a322 a422 a132 a232 a332 a432 ]
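These three unfoldings can be reproduced in plain Matlab with permute and reshape; the snippet below is a sketch (the entry encoding 100*i + 10*j + k is just a device that makes the subscripts visible in the output).

% Build the 4 x 3 x 2 example and form its three modal unfoldings.
n = [4 3 2];
A = zeros(n);
for i = 1:4, for j = 1:3, for k = 1:2, A(i,j,k) = 100*i + 10*j + k; end, end, end
A1 = reshape(A, n(1), n(2)*n(3))                    % 4-by-6,  matches A_(1) above
A2 = reshape(permute(A,[2 1 3]), n(2), n(1)*n(3))   % 3-by-8,  matches A_(2) above
A3 = reshape(permute(A,[3 1 2]), n(3), n(1)*n(2))   % 2-by-12, matches A_(3) above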

6 Where We Are
Lecture 1. Introduction to Tensor Computations
Lecture 2. Tensor Unfoldings
Lecture 3. Transpositions, Kronecker Products, Contractions
Lecture 4. Tensor-Related Singular Value Decompositions
Lecture 5. The CP Representation and Rank
Lecture 6. The Tucker Representation
Lecture 7. Other Decompositions and Nearness Problems
Lecture 8. Multilinear Rayleigh Quotients
Lecture 9. The Curse of Dimensionality
Lecture 10. Special Topics

7 Block Matrices. Definition: A block matrix is a matrix whose entries are matrices. A 3-by-2 Block Matrix...
A = [ A11 A12
      A21 A22
      A31 A32 ],   Aij ∈ IR^(50 × 70).
Regarded as a matrix of scalars, A is 150-by-140.

8 Block Matrices. The blocks in a block matrix do not have to have uniform size. In
A = [ A11 A12
      A21 A22
      A31 A32 ]
each block can have its own dimensions, as long as blocks in the same block row have the same number of rows and blocks in the same block column have the same number of columns.
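In Matlab, such a non-uniform partitioning is conveniently held in a cell array; the block sizes below are made-up values for illustration.

% Split a 150-by-140 matrix into a 3-by-2 cell array of non-uniform blocks.
A  = randn(150,140);
Ab = mat2cell(A, [50 60 40], [80 60]);   % Ab{i,j} is block (i,j); row sizes sum to 150, column sizes to 140
size(Ab{2,1})                            % returns [60 80]
% cell2mat(Ab) reassembles the original matrix.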

9 Block Matrices: Special Cases.
A Block Column Vector...
A = [ A1
      A2
      A3 ],   each Ai a matrix with the same number of columns.
A Block Row Vector...
A = [ A1 A2 A3 ],   each Ai a matrix with the same number of rows.

10 Block Matrices: Special Cases.
A Column-Partitioned Matrix...
A = [ a1 a2 a3 ],   ai ∈ IR^100.
A Row-Partitioned Matrix...
A = [ a1^T
      a2^T
      a3^T ],   ai ∈ IR^100.

11 Block Matrices: Addition. A 3-by-2 Example...
[ B11 B12 ]   [ C11 C12 ]   [ B11+C11 B12+C12 ]
[ B21 B22 ] + [ C21 C22 ] = [ B21+C21 B22+C22 ]
[ B31 B32 ]   [ C31 C32 ]   [ B31+C31 B32+C32 ]
The matrices must be partitioned conformably.

12 Block Matrices: Multiplication. An Example...
[ B11 B12 ]                 [ B11 C11 + B12 C21   B11 C12 + B12 C22 ]
[ B21 B22 ] [ C11 C12 ]  =  [ B21 C11 + B22 C21   B21 C12 + B22 C22 ]
[ B31 B32 ] [ C21 C22 ]     [ B31 C11 + B32 C21   B31 C12 + B32 C22 ]
The matrices must be partitioned conformably.
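A cell-array sketch of conformal block multiplication (the block sizes are made up; only the conformability pattern matters):

B = {randn(3,2) randn(3,4); randn(5,2) randn(5,4); randn(1,2) randn(1,4)};  % 3-by-2 block matrix
C = {randn(2,6) randn(2,3); randn(4,6) randn(4,3)};                         % 2-by-2 block matrix
A = cell(3,2);
for i = 1:3
    for j = 1:2
        A{i,j} = B{i,1}*C{1,j} + B{i,2}*C{2,j};   % block-level "inner product"
    end
end
% cell2mat(A) agrees with cell2mat(B)*cell2mat(C) up to roundoff.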

13 Block Matrices: Transposition. An Example...
[ B11 B12 ]^T
[ B21 B22 ]     =  [ B11^T B21^T B31^T
[ B31 B32 ]          B12^T B22^T B32^T ]
Different from Block Transposition...
[ B11 B12 ]^[T]
[ B21 B22 ]     =  [ B11 B21 B31
[ B31 B32 ]          B12 B22 B32 ]
(More on this later.)

14 Recursive Block Structure. A block matrix can have block matrix entries.
A = [ A11 A12
      A21 A22 ]
with
A11 = [ A1111 A1112      A12 = [ A1211 A1212
        A1121 A1122 ]            A1221 A1222 ]
A21 = [ A2111 A2112      A22 = [ A2211 A2212
        A2121 A2122 ]            A2221 A2222 ]
Example: The Discrete Fourier Transform Matrix F_n:
F_2m P = [ F_m   Ω F_m
           F_m  -Ω F_m ],    P = permutation, Ω = diagonal.
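A quick numerical check of the radix-2 factorization (a sketch assuming the DFT matrix with entries ω^(jk), ω = exp(-2πi/n), and P the even-odd column permutation):

% Verify F_2m * P = [F_m, Omega*F_m; F_m, -Omega*F_m] for a small m.
m = 4;
DFT   = @(n) exp(-2i*pi*(0:n-1)'*(0:n-1)/n);   % n-by-n DFT matrix
Omega = diag(exp(-2i*pi*(0:m-1)/(2*m)));       % diag(1, w, ..., w^(m-1)), w = exp(-2i*pi/(2m))
F2m   = DFT(2*m);
FP    = F2m(:, [1:2:2*m, 2:2:2*m]);            % even-indexed columns first, then odd
RHS   = [DFT(m), Omega*DFT(m); DFT(m), -Omega*DFT(m)];
norm(FP - RHS)                                 % roundoff-level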

15 The vec Operation. Turns matrices into vectors by stacking columns...
If X = [ x1 x2 ··· xn ] is a column partitioning, then
vec(X) = [ x1
           x2
           ...
           xn ]

16 Matlab: Vec of a matrix using the reshape function.
% If A is a matrix, then the following
% assigns vec(A) to a...
[n1,n2] = size(A);
a = reshape(A, n1*n2, 1);
It can be shown that a(i1 + (i2-1)n1) = A(i1, i2).

17 The vec Operation. Turns tensors into vectors by stacking mode-1 fibers... For A ∈ IR^(2 × 3 × 2),
vec(A) = [ A(:,1,1) ; A(:,2,1) ; A(:,3,1) ; A(:,1,2) ; A(:,2,2) ; A(:,3,2) ]
       = [ a111 ; a211 ; a121 ; a221 ; a131 ; a231 ; a112 ; a212 ; a122 ; a222 ; a132 ; a232 ]
We need to specify the order of the stacking...

18 Tensor Notation: Subscript Vectors.
Reference: If A ∈ IR^(n1 × ··· × nd) and i = (i1,...,id) with 1 ≤ ik ≤ nk for k = 1:d, then A(i) ≡ A(i1,...,id). We say that i is a subscript vector. Bold font will be used to designate subscript vectors.
Bounds: If L and R are subscript vectors having the same dimension, then L ≤ R means that Lk ≤ Rk for all k.
Special Cases: A subscript vector of all ones is denoted by 1 (dimension clear from context). If N is an integer, then N denotes the subscript vector N·1.

19 The Index-Mapping Function col.
Definition: If i and n are length-d subscript vectors, then the integer-valued function col(i,n) is defined by
col(i,n) = i1 + (i2-1)n1 + (i3-1)n1n2 + ··· + (id-1)n1n2···n(d-1)
The Formal Specification of Vec: If A ∈ IR^(n1 × ··· × nd) and a = vec(A), then
a(col(i,n)) = A(i),    1 ≤ i ≤ n.
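A direct Matlab implementation of col (a sketch; the slides do not provide one) follows the definition term by term:

function c = col(i, n)
% c = i(1) + (i(2)-1)*n(1) + (i(3)-1)*n(1)*n(2) + ... + (i(d)-1)*n(1)*...*n(d-1)
c = i(1);
p = 1;
for k = 2:length(i)
    p = p*n(k-1);
    c = c + (i(k)-1)*p;
end

For example, col([2 3 1],[2 3 2]) returns 6.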

20 Matlab: Vec of a tensor using the reshape function.
% If A is an n(1) x ... x n(d) tensor, then the
% following assigns vec(A) to a...
n = size(A);
a = reshape(A, prod(n), 1);
If A ∈ IR^(n1 × ··· × nd) and a = vec(A), then
a(i1 + (i2-1)n1 + (i3-1)n1n2 + ··· + (id-1)n1···n(d-1)) = A(i1, i2, ..., id).
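A small consistency check (made-up data) that the reshape-based vec agrees with the col map, using the col sketch from the previous slide:

% A(i1,i2,i3) equals its own position in vec(A), which makes the check easy to read.
n = [2 3 2];
A = reshape(1:prod(n), n);       % A(i1,i2,i3) = col([i1 i2 i3], n) by construction
a = reshape(A, prod(n), 1);      % vec(A)
a(col([2 3 1], n)) == A(2,3,1)   % returns 1 (true)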

21 Parts of a Tensor: Fibers. A fiber of a tensor A is a vector obtained by fixing all but one of A's indices. For example, if A = A(1:3, 1:5, 1:4, 1:7), then
A(2, :, 4, 6) = A(2, 1:5, 4, 6) = [ A(2,1,4,6)
                                    A(2,2,4,6)
                                    A(2,3,4,6)
                                    A(2,4,4,6)
                                    A(2,5,4,6) ]
is a fiber. We adopt the convention that a fiber is a column vector.
Problem 2.1. How many fibers are there in the tensor A ∈ IR^(n1 × ··· × nd)?

22 Matlab: Fiber Extraction.
n = [3 5 4 7]; A = randn(n);
v0 = A(2,:,4,6);      % v0(1,j,1,1) = A(2,j,4,6)
v1 = squeeze(v0);     % v1(j) = A(2,j,4,6)
[r,c] = size(v1);
if r == 1
   v = v1';
else
   v = v1;
end
The function squeeze removes singleton dimensions. In the above, v0 is a fourth-order tensor that has dimension 1 in modes 1, 3, and 4. Sometimes squeeze produces a row vector. The if-else is necessary because (by convention) we insist that fibers are column vectors.

23 Problem 2.2. Write a recursive Matlab function alpha = Tensor1norm(A) that returns the maximum value of ‖f‖1 where f is a fiber of the input tensor A ∈ IR^(n1 × ··· × nd).

24 Parts of a Tensor: Slices. A slice of a tensor A is a matrix obtained by fixing all but two of A's indices. For example, if A = A(1:3, 1:5, 1:4, 1:7), then
A(:, 3, :, 6) = [ A(1,3,1,6) A(1,3,2,6) A(1,3,3,6) A(1,3,4,6)
                  A(2,3,1,6) A(2,3,2,6) A(2,3,3,6) A(2,3,4,6)
                  A(3,3,1,6) A(3,3,2,6) A(3,3,3,6) A(3,3,4,6) ]
is a slice. We adopt the convention that the first unfixed index in the tensor is the row index of the slice and the second unfixed index in the tensor is the column index of the slice.
Problem 2.3. How many slices are there in the tensor A ∈ IR^(n1 × ··· × nd) if n1 = ··· = nd = N?

25 Matlab: Slice Extraction.
n = [3 5 4 7]; A = randn(n);
C0 = A(2,:,4,:);    % C0(1,i,1,j) = A(2,i,4,j)
C = squeeze(C0);    % C(i,j) = A(2,i,4,j)
In the above, C0 is a fourth-order tensor that has dimension 1 in modes 1 and 3. C is a 5-by-7 matrix, a.k.a. a 2nd-order tensor.
Problem 2.4. Write a recursive Matlab function alpha = MaxFnorm(A) that returns the maximum value of ‖B‖F where B is a slice of the input tensor A ∈ IR^(n1 × ··· × nd).

26 Subtensors.
Subscript Vectors: If i = [i1,...,id] and j = [j1,...,jd] are integer vectors, then i ≤ j means that ik ≤ jk for k = 1:d. Suppose A ∈ IR^(n1 × ··· × nd) with n = [n1,...,nd]. If 1 ≤ i ≤ n, then A(i) = A(i1,...,id).
Specification of Subtensors: Suppose A ∈ IR^(n1 × ··· × nd) with n = [n1,...,nd]. If 1 ≤ L ≤ R ≤ n, then A(L:R) denotes the subtensor B = A(L1:R1, ..., Ld:Rd).

27 Matlab: Assignment to Fibers, Slices, and Subtensors.
n = [3 5 4 7]; A = zeros(n);
A(2,5,:,6) = ones(4,1);
A(1,:,3,:) = rand(5,7);
A(2:3,4:5,1:2,5:7) = zeros(2,2,2,3);
Problem 2.5. Write a Matlab function A = Sym4(N) that returns a random 4th order tensor A ∈ IR^(N × N × N × N) with the property that A(i) = A(j) whenever j(1:2) is a permutation of i(1:2) and j(3:4) is a permutation of i(3:4).

28 The Matlab Tensor Toolbox.
Why? It provides an excellent environment for learning about tensor computations and for building a facility with multi-index reasoning.
Who? Brett W. Bader and Tamara G. Kolda, Sandia National Laboratories.
Where? www.sandia.gov/~tgkolda/TensorToolbox/

29 Matlab Tensor Toolbox: A Tensor is a Structure.
>> n = [3 5 4 7];
>> A_array = randn(n);
>> A = tensor(A_array);
>> fieldnames(A)
ans =
    data
    size
The .data field is a multi-dimensional array. The .size field is an integer vector that specifies the modal dimensions. In the above, A.data and A_array have the same value, and A.size and n have the same value.

30 A First Example: The Frobenius norm of a 3-tensor.
function sigma = NormFro3(A)
% A is a third-order tensor (Tensor Toolbox object).
n = A.size;
s = 0;
for i1 = 1:n(1)
  for i2 = 1:n(2)
    for i3 = 1:n(3)
      s = s + A(i1,i2,i3)^2;
    end
  end
end
sigma = sqrt(s);

31 Matlab Tensor Toolbox: Order and Dimension.
>> n = [3 5 4 7];
>> A_array = randn(n);
>> A = tensor(A_array)
>> d = ndims(A)
d =
    4
>> n = size(A)
n =
    3  5  4  7

32 Problem 2.6. If A is a d-dimensional array in Matlab, then what is sum(A)? Explain why the following function returns the Frobenius norm of the input tensor.
function sigma = NormFro(A)
% A is a tensor.
% sigma is the square root of the sum of the
% squares of its entries.
B = A.data.^2;
for k = 1:ndims(A)
    B = sum(B);
end
sigma = sqrt(B);

33 Matlab Tensor Toolbox: Tensor Operations.
n = [3 5 4 7];
% Extracting Fibers and Slices
a_fiber = A(2,:,3,6);
a_slice = A(3,:,2,:);
% Familiar initializations...
X = tenzeros(n); Y = tenones(n);
A = tenrandn(n); B = tenrandn(n);
% These operations are legal...
C = 3*A; C = -A; C = A+1; C = A.^2;
C = A + B; C = A./B; C = A.*B; C = A.^B;
% Applying a Function to Each Entry...
F = tenfun(@sin, A);
G = tenfun(@(x) sin(x)./exp(x), A);

34 Problem 2.7. Suppose A = A(1:n1, 1:n2, 1:n3). We say A(i) is an interior entry if 1 < i < n. Otherwise, we say A(i) is an edge entry. If A(i1, i2, i3) is an interior entry, then it has six neighbors: A(i1±1, i2, i3), A(i1, i2±1, i3), and A(i1, i2, i3±1). Implement the following function so that it performs as specified:
function B = Smooth(A)
% A is a third order tensor.
% B is a third order tensor with the property that each
% interior entry B(i) is the average of A(i)'s six
% neighbors. If B(i) is an edge entry, then B(i) = A(i).
Strive for an implementation that does not have any loops.
Problem 2.8. Formulate and solve an order-d version of Problem 2.7.

35 All Tensors Secretly Wish that They Were Matrices!
Example 1. A = A(1:n1, 1:n2, 1:n3). Each of the following is a matrix built from slices of A:
[ A(1,:,:) | ··· | A(n1,:,:) ]
[ A(:,1,:) | ··· | A(:,n2,:) ]
[ A(:,:,1) | ··· | A(:,:,n3) ]

36 All Tensors Secretly Wish that They Were Matrices!
Example 2. A = A(1:n1, 1:n2, 1:2, 1:2, 1:2). Stacking the eight n1-by-n2 slices gives a matrix:
[ A(:,:,1,1,1)
  A(:,:,2,1,1)
  A(:,:,1,2,1)
  A(:,:,2,2,1)
  A(:,:,1,1,2)
  A(:,:,2,1,2)
  A(:,:,1,2,2)
  A(:,:,2,2,2) ]

37 All Tensors Secretly Wish that They Were Matrices!
Example 3. A = A(1:n1, 1:n2, 1:n3, 1:n4). Arranging the slices as blocks gives a matrix:
[ A(:,:,1,1)   A(:,:,1,2)   ···  A(:,:,1,n4)
  A(:,:,2,1)   A(:,:,2,2)   ···  A(:,:,2,n4)
  ...
  A(:,:,n3,1)  A(:,:,n3,2)  ···  A(:,:,n3,n4) ]
A(i,j,k,l) is the (i,j) entry of block (k,l).

38 Tensor Unfoldings. What are they? A tensor unfolding of A ∈ IR^(n1 × ··· × nd) is obtained by assembling A's entries into a matrix A ∈ IR^(N1 × N2) where N1·N2 = n1···nd. Obviously, there are many ways to unfold a tensor. An important family of tensor unfoldings are the mode-k unfoldings. In a mode-k unfolding, the mode-k fibers are assembled to produce an nk-by-(N/nk) matrix where N = n1···nd. The tensor toolbox function tenmat can be used to produce modal unfoldings and other, more general unfoldings.

39 Modal Unfoldings Using tenmat.
Example: A Mode-1 Unfolding of A ∈ IR^(4 × 3 × 2). tenmat(A,1) sets up
          (1,1)  (2,1)  (3,1)  (1,2)  (2,2)  (3,2)
        [ a111   a121   a131   a112   a122   a132
          a211   a221   a231   a212   a222   a232
          a311   a321   a331   a312   a322   a332
          a411   a421   a431   a412   a422   a432 ]
Notice how the fibers are ordered: column (i2,i3) is the mode-1 fiber A(:,i2,i3).

40 Modal Unfoldings Using tenmat.
Example: A Mode-2 Unfolding of A ∈ IR^(4 × 3 × 2). tenmat(A,2) sets up
          (1,1)  (2,1)  (3,1)  (4,1)  (1,2)  (2,2)  (3,2)  (4,2)
        [ a111   a211   a311   a411   a112   a212   a312   a412
          a121   a221   a321   a421   a122   a222   a322   a422
          a131   a231   a331   a431   a132   a232   a332   a432 ]
Notice how the fibers are ordered: column (i1,i3) is the mode-2 fiber A(i1,:,i3).

41 Modal Unfoldings Using tenmat.
Example: A Mode-3 Unfolding of A ∈ IR^(4 × 3 × 2). tenmat(A,3) sets up
          (1,1) (2,1) (3,1) (4,1) (1,2) (2,2) (3,2) (4,2) (1,3) (2,3) (3,3) (4,3)
        [ a111  a211  a311  a411  a121  a221  a321  a421  a131  a231  a331  a431
          a112  a212  a312  a412  a122  a222  a322  a422  a132  a232  a332  a432 ]
Notice how the fibers are ordered: column (i1,i2) is the mode-3 fiber A(i1,i2,:).

42 Modal Unfoldings Using tenmat. Precisely how are the fibers ordered? If A ∈ IR^(n1 × ··· × nd), N = n1···nd, and B = tenmat(A,k), then B is the matrix A_(k) ∈ IR^(nk × (N/nk)) with
A_(k)(ik, col(ĩ, ñ)) = A(i)
where
ĩ = [i1, ..., i(k-1), i(k+1), ..., id]
ñ = [n1, ..., n(k-1), n(k+1), ..., nd]
Recall that the col function maps multi-indices to integers. For example, if n = [2 3 2], then
i:          (1,1,1)  (2,1,1)  (1,2,1)  (2,2,1)  (1,3,1)  (2,3,1)  (1,1,2)  (2,1,2)  ...
col(i,n):      1        2        3        4        5        6        7        8     ...
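For raw Matlab arrays, the same default ordering can be produced without the toolbox; this is a sketch (the function name ModeKUnfold is made up) that mimics tenmat(A,k):

function Ak = ModeKUnfold(A, k)
% Mode-k unfolding of a multidimensional array A; column j holds the mode-k fiber
% whose remaining subscripts have col value j, exactly as in the definition above.
n = size(A);
d = ndims(A);
Ak = reshape(permute(A, [k, 1:k-1, k+1:d]), n(k), prod(n)/n(k));

Applied to the 4-by-3-by-2 example, ModeKUnfold(A,2) reproduces the mode-2 unfolding displayed earlier.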

43 Matlab Tensor Toolbox: A Tenmat Is a Structure.
>> n = [3 5 4 7];
>> A = tenrand(n);
>> A2 = tenmat(A,2);
>> fieldnames(A2)
ans =
    tsize
    rindices
    cindices
    data
A2.tsize = size of the unfolded tensor = [3 5 4 7]
A2.rindices = mode indices that define A2's rows = [2]
A2.cindices = mode indices that define A2's columns = [1 3 4]
A2.data = the matrix that is the unfolded tensor.

44 Other Modal Unfoldings Using tenmat. Mode-k Unfoldings of A ∈ IR^(n1 × ··· × nd). A particular mode-k unfolding is defined by a permutation v of [1:k-1, k+1:d]. Tensor entries get mapped to matrix entries as follows:
A(i) --> A_(k)(ik, col(i(v), n(v)))
Name               v                        How to get it...
A_(k) (Default)    [1:k-1, k+1:d]           tenmat(A,k)
Forward Cyclic     [k+1:d, 1:k-1]           tenmat(A,k,'fc')
Backward Cyclic    [k-1:-1:1, d:-1:k+1]     tenmat(A,k,'bc')
Arbitrary          v                        tenmat(A,k,v)
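In plain Matlab, any of these orderings is obtained by handing the permutation v to permute; again a sketch with a made-up function name, not a toolbox routine:

function Ak = ModeKUnfoldPerm(A, k, v)
% Mode-k unfolding whose columns are ordered by col(i(v), n(v)); for example,
% v = [k+1:d, 1:k-1] gives the forward cyclic ordering.
n = size(A);
Ak = reshape(permute(A, [k, v]), n(k), prod(n(v)));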

45 Problem 2.9. Given that A = A(1:n1, 1:n2, 1:n3), show how tenmat can be used to compute the following unfoldings:
A1 = [ A(1,:,:) | ··· | A(n1,:,:) ]
A2 = [ A(:,1,:) | ··· | A(:,n2,:) ]
A3 = [ A(:,:,1) | ··· | A(:,:,n3) ]
Problem 2.10. How can tenmat be used to compute the vec of a tensor?

46 A Block Matrix is an Unfolded 4th Order Tensor. Some Natural Identifications. A block matrix with uniformly sized blocks
A = [ A_11    ···  A_1,c2
      ...
      A_r2,1  ···  A_r2,c2 ],    A_i2,j2 ∈ IR^(r1 × c1),
has several natural reshapings:
A_i2,j2(i1,j1) <-> A(i1,i2,j1,j2),   A ∈ IR^(r1 × r2 × c1 × c2)
A_i2,j2(i1,j1) <-> A(i1,j1,i2,j2),   A ∈ IR^(r1 × c1 × r2 × c2)
The function tenmat can be used to set up these unfoldings...
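For raw arrays, both identifications are a single reshape (plus a permute for the second); a sketch with made-up dimensions:

% A is an (r1*r2)-by-(c1*c2) matrix whose (i2,j2) block is r1-by-c1.
r1 = 2; r2 = 3; c1 = 4; c2 = 5;
A  = randn(r1*r2, c1*c2);
T1 = reshape(A, [r1 r2 c1 c2]);   % T1(i1,i2,j1,j2) = A((i2-1)*r1+i1, (j2-1)*c1+j1)
T2 = permute(T1, [1 3 2 4]);      % T2(i1,j1,i2,j2), the second identification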

47 A 4th Order Tensor is a Block Matrix.
Example 1. If A ∈ IR^(n1 × n2 × n3 × n4), then tenmat(A,[1 2],[3 4]) sets up
[ A(1,1,:,:)   ···  A(1,n2,:,:)
  ...
  A(n1,1,:,:)  ···  A(n1,n2,:,:) ]
Example 2. If A ∈ IR^(n1 × n2 × n3 × n4), then tenmat(A,[1 3],[2 4]) sets up
[ A(1,:,1,:)   ···  A(1,:,n3,:)
  ...
  A(n1,:,1,:)  ···  A(n1,:,n3,:) ]

48 Even More General Unfoldings.
Example: A = A(1:2, 1:3, 1:2, 1:2, 1:3). Then B = tenmat(A,[1 2 3],[4 5]) sets up the 12-by-6 matrix B whose rows are labeled by (i1,i2,i3) in the order
(1,1,1), (2,1,1), (1,2,1), (2,2,1), (1,3,1), (2,3,1), (1,1,2), (2,1,2), (1,2,2), (2,2,2), (1,3,2), (2,3,2)
and whose columns are labeled by (i4,i5) in the order
(1,1), (2,1), (1,2), (2,2), (1,3), (2,3),
so that B(col((i1,i2,i3),[2 3 2]), col((i4,i5),[2 3])) = A(i1,i2,i3,i4,i5).

49 Matlab Tensor Toolbox: General Unfoldings Using tenmat.
function A_unfolded = Unfold(A, ridx, cidx)
% A = A(1:n(1),...,1:n(d)) is a tensor.
% ridx and cidx are integer vectors with the
% property that [ridx cidx] is a
% permutation of 1:d.
%
% A_unfolded is an nrows-by-ncols matrix where
%    nrows = prod(n(ridx))
%    ncols = prod(n(cidx))
% If 1 <= i <= n, then
%    A(i) = A_unfolded(r,c)
% with
%    r = col(i(ridx), n(ridx))
%    c = col(i(cidx), n(cidx))
A_unfolded = tenmat(A, ridx, cidx);

50 Summary of Lecture 2. Key Words.
A block matrix is a matrix whose entries are matrices.
A fiber of a tensor is a column vector defined by fixing all but one index and varying what's left.
A slice of a tensor is a matrix defined by fixing all but two indices and varying what's left.
A mode-k unfolding of a tensor is obtained by assembling all the mode-k fibers into a matrix.
tensor is a Tensor Toolbox command used to construct tensors.
tenmat is a Tensor Toolbox command that is used to construct unfoldings of a tensor.
