Lecture 4. Tensor-Related Singular Value Decompositions. Charles F. Van Loan


1 From Matrix to Tensor: The Transition to Numerical Multilinear Algebra. Lecture 4. Tensor-Related Singular Value Decompositions. Charles F. Van Loan, Cornell University. The Gene Golub SIAM Summer School 2010, Selva di Fasano, Brindisi, Italy.

2 Where We Are
Lecture 1. Introduction to Tensor Computations
Lecture 2. Tensor Unfoldings
Lecture 3. Transpositions, Kronecker Products, Contractions
Lecture 4. Tensor-Related Singular Value Decompositions
Lecture 5. The CP Representation and Tensor Rank
Lecture 6. The Tucker Representation
Lecture 7. Other Decompositions and Nearness Problems
Lecture 8. Multilinear Rayleigh Quotients
Lecture 9. The Curse of Dimensionality
Lecture 10. Special Topics

3 What is this Lecture About? Pushing the SVD Envelope. The SVD is a powerful tool for exposing the structure of a matrix and for capturing its essence through optimal, data-sparse representations:

$$A = \sum_{k=1}^{\mathrm{rank}(A)} \sigma_k\, u_k v_k^T \;\approx\; \sum_{k=1}^{\hat r} \sigma_k\, u_k v_k^T$$

For a tensor A, let's try for something similar... A = whatever!

4 What is this Lecture About? The Higher-Order Singular Value Decomposition (HOSVD). The HOSVD of a tensor $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ involves computing the matrix SVDs of its modal unfoldings $A_{(1)}, \ldots, A_{(d)}$. This results in a representation of A as a sum of rank-1 tensors. Before we present the HOSVD, we need to understand the mode-k matrix product, which can be thought of as a tensor-times-matrix product.

5 What is this Lecture About? The Kronecker Product Singular Value Decomposition (KPSVD). The KPSVD of a matrix A represents A as an SVD-like sum of Kronecker products. Applied to an unfolding of a tensor A, it results in a representation of A as a sum of low-rank tensors. Before we present the KPSVD, we need to understand the nearest Kronecker product problem.

6 The Mode-k Matrix Product. Main Idea: a mode-k matrix product is a special contraction that involves a matrix and a tensor. A new tensor of the same order is obtained by applying the matrix to each mode-k fiber of the tensor.

7 The Mode-k Matrix Product. A mode-1 example when the tensor A is second order:

$$\begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$$

(The mode-1 fibers of A are its columns.)

A mode-2 example when the tensor A is second order:

$$\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \\ b_{41} & b_{42} \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \\ m_{41} & m_{42} & m_{43} \end{bmatrix} \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \\ a_{13} & a_{23} \end{bmatrix}$$

(The mode-2 fibers of A are its rows.) The matrix doesn't have to be square.

8 The Mode-k Matrix Product. A Mode-2 Example when $A \in \mathbb{R}^{4 \times 3 \times 2}$ and $M \in \mathbb{R}^{5 \times 3}$:

$$\begin{bmatrix}
b_{111} & b_{211} & b_{311} & b_{411} & b_{112} & b_{212} & b_{312} & b_{412} \\
b_{121} & b_{221} & b_{321} & b_{421} & b_{122} & b_{222} & b_{322} & b_{422} \\
b_{131} & b_{231} & b_{331} & b_{431} & b_{132} & b_{232} & b_{332} & b_{432} \\
b_{141} & b_{241} & b_{341} & b_{441} & b_{142} & b_{242} & b_{342} & b_{442} \\
b_{151} & b_{251} & b_{351} & b_{451} & b_{152} & b_{252} & b_{352} & b_{452}
\end{bmatrix}
=
\begin{bmatrix}
m_{11} & m_{12} & m_{13} \\
m_{21} & m_{22} & m_{23} \\
m_{31} & m_{32} & m_{33} \\
m_{41} & m_{42} & m_{43} \\
m_{51} & m_{52} & m_{53}
\end{bmatrix}
\begin{bmatrix}
a_{111} & a_{211} & a_{311} & a_{411} & a_{112} & a_{212} & a_{312} & a_{412} \\
a_{121} & a_{221} & a_{321} & a_{421} & a_{122} & a_{222} & a_{322} & a_{422} \\
a_{131} & a_{231} & a_{331} & a_{431} & a_{132} & a_{232} & a_{332} & a_{432}
\end{bmatrix}$$

Note that (1) $B \in \mathbb{R}^{4 \times 5 \times 2}$ and (2) $B_{(2)} = M A_{(2)}$.

9 The Mode-k Matrix Product. In General: if $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $M \in \mathbb{R}^{m_k \times n_k}$, then

$$B(\alpha_1,\ldots,\alpha_{k-1},\,i,\,\alpha_{k+1},\ldots,\alpha_d) = \sum_{j=1}^{n_k} M(i,j)\, A(\alpha_1,\ldots,\alpha_{k-1},\,j,\,\alpha_{k+1},\ldots,\alpha_d)$$

is the mode-k matrix product of M and A. In other words, multiply all the mode-k fibers by M:

$$B_{(k)} = M A_{(k)}$$
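As a concrete illustration, here is a minimal sketch (not from the slides) of how $B_{(k)} = M A_{(k)}$ can be realized with plain MATLAB permute/reshape calls and no toolbox; the dimensions are illustrative.

% Minimal mode-k matrix product sketch (illustrative sizes; no toolbox).
n = [4 3 2];  d = length(n);  k = 2;
A = randn(n);                      % an order-3 tensor stored as a multidim array
M = randn(5, n(k));                % M need not be square
perm = [k 1:k-1 k+1:d];
Ak = reshape(permute(A, perm), n(k), []);      % mode-k unfolding A_(k)
nB = n;  nB(k) = size(M, 1);                   % B has m_k in place of n_k
B  = ipermute(reshape(M*Ak, nB(perm)), perm);  % fold M*A_(k) back into a tensor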

10 The Mode-k Matrix Product. Notation: the mode-k matrix product of M and A is denoted by

$$B = A \times_k M$$

Thus, if $B = A \times_k M$, then B is defined by $B_{(k)} = M A_{(k)}$.

Order and Dimension: if $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, $M \in \mathbb{R}^{m_k \times n_k}$, and $B = A \times_k M$, then $B = B(1{:}n_1,\ldots,1{:}n_{k-1},\; 1{:}m_k,\; 1{:}n_{k+1},\ldots,1{:}n_d)$. Thus, B has the same order as A.

11 Matlab Tensor Toolbox: Mode-k Matrix Product Using ttm

n = [4 3 2 5];     % example dimensions for an order-4 tensor
A = tenrand(n);
for k = 1:4
    M = randn(n(k), n(k));
    % Compute the mode-k product of tensor A
    % and matrix M to obtain tensor B...
    B = ttm(A, M, k);
end

Problem 4.1. Write a function B = Myttm(A,M,k) that does the same thing as ttm. Make effective use of tenmat.

12 Problem 4.2. If $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $v \in \mathbb{R}^{n_k}$, then the mode-k vector product is defined by

$$B(\alpha_1,\ldots,\alpha_{k-1},\alpha_{k+1},\ldots,\alpha_d) = \sum_{j=1}^{n_k} v(j)\, A(\alpha_1,\ldots,\alpha_{k-1},\,j,\,\alpha_{k+1},\ldots,\alpha_d)$$

The Tensor Toolbox function ttv (for tensor-times-vector) can be used to compute this contraction; see the usage sketch below. Note that $B = B(1{:}n_1,\ldots,1{:}n_{k-1},\; 1{:}n_{k+1},\ldots,1{:}n_d)$. Thus, the order of B is one less than the order of A. Write a Matlab function B = Myttv(A,v,k) that carries out the mode-k vector product. Use the fact that B is a reshaping of $v^T A_{(k)}$.
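A hedged usage sketch of ttv (sizes illustrative; Tensor Toolbox assumed to be on the path):

% ttv contracts tensor A with vector v in mode k.
A = tenrand([4 3 2]);
v = randn(3, 1);
B = ttv(A, v, 2);      % order-2 result of size 4-by-2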

13 The Mode-k Matrix Product. Property 1 (Reshaping as a Matrix-Vector Product). If $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, $M \in \mathbb{R}^{m_k \times n_k}$, and $B = A \times_k M$, then

$$\mathrm{vec}(B) = \left( I_{n_d} \otimes \cdots \otimes I_{n_{k+1}} \otimes M \otimes I_{n_{k-1}} \otimes \cdots \otimes I_{n_1} \right) \mathrm{vec}(A) = \left( I_{n_{k+1} \cdots n_d} \otimes M \otimes I_{n_1 \cdots n_{k-1}} \right) \mathrm{vec}(A)$$

Problem 4.3. Prove this. Hint: show that if $p = [\,k,\ 1{:}k{-}1,\ k{+}1{:}d\,]$, then $(A \times_k M)^{[p]} = A^{[p]} \times_1 M$.
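Property 1 is easy to check numerically. The sketch below (not from the slides; illustrative sizes with d = 3 and k = 2) builds the mode-k product as in the earlier permute/reshape sketch and compares against the Kronecker form, with MATLAB's column-major A(:) playing the role of vec.

% Numerical check of Property 1 for d = 3, k = 2 (illustrative sizes).
n = [3 4 2];  d = 3;  k = 2;
A = randn(n);  M = randn(5, n(k));
perm = [k 1:k-1 k+1:d];
Ak = reshape(permute(A, perm), n(k), []);
nB = n;  nB(k) = size(M, 1);
B  = ipermute(reshape(M*Ak, nB(perm)), perm);   % B = A x2 M
E  = kron(eye(n(3)), kron(M, eye(n(1))));       % I_{n3} (x) M (x) I_{n1}
norm(B(:) - E*A(:))                             % ~ machine precision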

14 The Mode-k Matrix Product. Property 2 (Successive Products in the Same Mode). If $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $M_1, M_2 \in \mathbb{R}^{n_k \times n_k}$, then

$$(A \times_k M_1) \times_k M_2 = A \times_k (M_2 M_1).$$

Problem 4.4. Prove this using Property 1 and the Kronecker product fact that $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$.

15 The Mode-k Matrix Product. Property 3 (Successive Products in Different Modes). If $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, $M_k \in \mathbb{R}^{m_k \times n_k}$, $M_j \in \mathbb{R}^{m_j \times n_j}$, and $k \neq j$, then

$$(A \times_k M_k) \times_j M_j = (A \times_j M_j) \times_k M_k$$

Problem 4.5. Prove this using Property 1 and the Kronecker product fact that $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$.

16 The Tucker Product. Definition: if $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $M_k \in \mathbb{R}^{m_k \times n_k}$ for k = 1:d, then

$$B = A \times_1 M_1 \times_2 M_2 \cdots \times_d M_d$$

is a Tucker product of A and $M_1, \ldots, M_d$.

17 Matlab Tensor Toolbox: Tucker Products

function B = TuckerProd(A, M)
% A is an n(1)-by-...-by-n(d) tensor.
% M is a length-d cell array with
% M{k} an m(k)-by-n(k) matrix.
% B is an m(1)-by-...-by-m(d) tensor given by
%    B = A x1 M{1} x2 M{2} ... xd M{d}
% where "xk" denotes the mode-k matrix product.
B = A;
for k = 1:ndims(A)
    B = ttm(B, M{k}, k);
end
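A possible usage sketch (sizes illustrative; Tensor Toolbox assumed):

n = [4 3 2];  m = [2 2 2];
A = tenrand(n);
M = {randn(m(1),n(1)), randn(m(2),n(2)), randn(m(3),n(3))};
B = TuckerProd(A, M);    % B is a 2-by-2-by-2 tensor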

18 The Tucker Product Representation. Before: given $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $M_k \in \mathbb{R}^{m_k \times n_k}$ for k = 1:d, compute the Tucker product

$$B = A \times_1 M_1 \times_2 M_2 \cdots \times_d M_d$$

Now: given $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, find matrices $U_k \in \mathbb{R}^{n_k \times r_k}$ for k = 1:d and a tensor $S \in \mathbb{R}^{r_1 \times \cdots \times r_d}$ such that

$$A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$$

is an illuminating Tucker product representation of A.

19 The Tucker Product Representation. Multilinear Sum Formulation: suppose $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $U_k \in \mathbb{R}^{n_k \times r_k}$. If

$$A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$$

where $S \in \mathbb{R}^{r_1 \times \cdots \times r_d}$, then

$$A(i) = \sum_{j_1=1}^{r_1} \cdots \sum_{j_d=1}^{r_d} S(j)\, U_1(i_1,j_1) \cdots U_d(i_d,j_d)$$

where $i = (i_1,\ldots,i_d)$ and $j = (j_1,\ldots,j_d)$.

20 The Tucker Product Representation. Matrix-Vector Product Formulation: suppose $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $U_k \in \mathbb{R}^{n_k \times r_k}$. If

$$A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$$

where $S \in \mathbb{R}^{r_1 \times \cdots \times r_d}$, then

$$\mathrm{vec}(A) = (U_d \otimes \cdots \otimes U_1)\,\mathrm{vec}(S)$$
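The vec formulation can also be checked numerically; here is a hedged sketch for d = 3 (illustrative sizes; Tensor Toolbox ttm assumed for building the Tucker product):

% Check vec(A) = (U3 (x) U2 (x) U1) vec(S) for d = 3.
S  = randn(2, 2, 2);
U1 = randn(4, 2);  U2 = randn(3, 2);  U3 = randn(2, 2);
A  = ttm(tensor(S), {U1, U2, U3}, 1:3);   % A = S x1 U1 x2 U2 x3 U3
lhs = reshape(double(A), [], 1);
rhs = kron(U3, kron(U2, U1)) * S(:);
norm(lhs - rhs)                           % ~ machine precision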

21 The Tucker Product Representation. Important Existence Situation: if $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $U_k \in \mathbb{R}^{n_k \times n_k}$ is nonsingular for k = 1:d, then

$$A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d \quad\text{with}\quad S = A \times_1 U_1^{-1} \times_2 U_2^{-1} \cdots \times_d U_d^{-1}.$$

We will refer to the $U_k$ as the inverse factors and S as the core tensor. Proof (d = 3 case):

$$S \times_1 U_1 \times_2 U_2 \times_3 U_3 = \left( A \times_1 U_1^{-1} \times_2 U_2^{-1} \times_3 U_3^{-1} \right) \times_1 U_1 \times_2 U_2 \times_3 U_3 = A \times_1 (U_1 U_1^{-1}) \times_2 (U_2 U_2^{-1}) \times_3 (U_3 U_3^{-1}) = A$$

22 The Tucker Product Representation. Core Tensor Unfoldings: suppose $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $U_k \in \mathbb{R}^{n_k \times n_k}$ is nonsingular for k = 1:d. If

$$A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$$

then the mode-k unfoldings of S satisfy

$$S_{(k)} = U_k^{-1} A_{(k)} \left( U_d \otimes \cdots \otimes U_{k+1} \otimes U_{k-1} \otimes \cdots \otimes U_1 \right)^{-T}$$

23 The Higher-Order SVD (HOSVD). Definition: suppose $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and that its mode-k unfolding has SVD $A_{(k)} = U_k \Sigma_k V_k^T$ for k = 1:d. Its higher-order SVD is given by

$$A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d \quad\text{where}\quad S = A \times_1 U_1^T \times_2 U_2^T \cdots \times_d U_d^T$$

This is a Tucker product representation in which the inverse factor matrices are the left singular vector matrices of the unfoldings $A_{(1)}, \ldots, A_{(d)}$.

24 Matlab Tensor Toolbox: Computing the HOSVD

function [S, U] = HOSVD(A)
% A is an n(1)-by-...-by-n(d) tensor.
% U is a length-d cell array with the
% property that U{k} is the left singular
% vector matrix of A's mode-k unfolding.
% S is an n(1)-by-...-by-n(d) tensor given by
%    S = A x1 U{1}' x2 U{2}' ... xd U{d}'
S = A;
for k = 1:ndims(A)
    C = tenmat(A, k);
    [U{k}, Sigma, V] = svd(C.data);
    S = ttm(S, U{k}', k);     % multiply mode k by U{k} transposed
end

25 The Higher-Order SVD (HOSVD). The HOSVD of a Matrix: if d = 2, then A is a matrix and the HOSVD is the SVD. Indeed, if

$$A = A_{(1)} = U_1 \Sigma_1 V_1^T, \qquad A^T = A_{(2)} = U_2 \Sigma_2 V_2^T,$$

then we can set $U = U_1 = V_2$ and $V = U_2 = V_1$. Note that

$$S = (A \times_1 U_1^T) \times_2 U_2^T = (U_1^T A) \times_2 U_2^T = U_1^T A\, U_2 = U_1^T A\, V_1 = \Sigma_1.$$
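A quick sketch that checks this in MATLAB (not from the slides; it assumes distinct singular values, and since svd's sign conventions for A and A^T may differ we compare absolute values):

A = randn(4, 3);
[U1, Sig1, ~] = svd(A);
[U2, ~, ~]    = svd(A');
S = U1' * A * U2;              % S = (A x1 U1^T) x2 U2^T
norm(abs(S) - Sig1, 'fro')     % ~ 0, up to sign conventions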

26 The Higher-Order SVD (HOSVD). The Modal Unfoldings of the Core Tensor S: if $A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$ is the HOSVD of $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, then for k = 1:d

$$S_{(k)} = U_k^T A_{(k)} \left( U_d \otimes \cdots \otimes U_{k+1} \otimes U_{k-1} \otimes \cdots \otimes U_1 \right)$$

The Core Tensor S Encodes Singular Value Information: since $U_k^T A_{(k)} V_k = \Sigma_k$ is the SVD of $A_{(k)}$, we have

$$S_{(k)} = \Sigma_k V_k^T \left( U_d \otimes \cdots \otimes U_{k+1} \otimes U_{k-1} \otimes \cdots \otimes U_1 \right).$$

It follows that the rows of $S_{(k)}$ are mutually orthogonal and that the singular values of $A_{(k)}$ are the 2-norms of these rows.
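This property can be verified with the HOSVD function from the previous slide; a hedged sketch (Tensor Toolbox assumed, sizes illustrative):

% Row norms of S_(k) reproduce the singular values of A_(k).
A = tenrand([4 3 2]);
[S, U] = HOSVD(A);                 % function from the previous slide
k = 1;
Sk = double(tenmat(S, k));
sqrt(sum(Sk.^2, 2))                % 2-norms of the rows of S_(k) ...
svd(double(tenmat(A, k)))          % ... match the singular values of A_(k)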

27 The Higher-Order SVD (HOSVD). The HOSVD as a Multilinear Sum: if $A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$ is the HOSVD of $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, then

$$A(i) = \sum_{j_1=1}^{n_1} \cdots \sum_{j_d=1}^{n_d} S(j)\, U_1(i_1,j_1) \cdots U_d(i_d,j_d)$$

The HOSVD as a Matrix-Vector Product: if $A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$ is the HOSVD of $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, then

$$\mathrm{vec}(A) = (U_d \otimes \cdots \otimes U_1)\,\mathrm{vec}(S)$$

Note that $U_d \otimes \cdots \otimes U_1$ is orthogonal.

28 Problem 4.6. Suppose $A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$ is the HOSVD of $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and that $r \le \min\{n_1,\ldots,n_d\}$. Through Matlab experimentation, what can you say about the approximation

$$A \approx S_r \times_1 U_1(:,1{:}r) \times_2 U_2(:,1{:}r) \cdots \times_d U_d(:,1{:}r)$$

where $S_r = S(1{:}r, 1{:}r, \ldots, 1{:}r)$?

Problem 4.7. Formulate an HOQRP factorization for a tensor $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ that is based on the QR-with-column-pivoting factorizations $A_{(k)} P_k = Q_k R_k$ for k = 1:d. Does the core tensor have any special properties?

29 The Nearest Kronecker Product Problem. The Problem: given the block matrix

$$A = \begin{bmatrix} A_{11} & \cdots & A_{1,c_2} \\ \vdots & \ddots & \vdots \\ A_{r_2,1} & \cdots & A_{r_2,c_2} \end{bmatrix}, \qquad A_{i_2,j_2} \in \mathbb{R}^{r_1 \times c_1},$$

find $B \in \mathbb{R}^{r_2 \times c_2}$ and $C \in \mathbb{R}^{r_1 \times c_1}$ so that

$$\phi_A(B,C) = \| A - B \otimes C \|_F$$

is minimized. This looks like the nearest rank-1 matrix problem, which has an SVD solution.

30 The Nearest Kronecker Product Problem. The Tensor Connection: the block matrix

$$A = \begin{bmatrix} A_{11} & \cdots & A_{1,c_2} \\ \vdots & \ddots & \vdots \\ A_{r_2,1} & \cdots & A_{r_2,c_2} \end{bmatrix}, \qquad A_{i_2,j_2} \in \mathbb{R}^{r_1 \times c_1},$$

corresponds to an order-4 tensor,

$$A_{i_2,j_2}(i_1,j_1) \;\equiv\; A(i_1,j_1,i_2,j_2),$$

and so does $M = (M_{ij}) = B \otimes C$:

$$M_{i_2,j_2}(i_1,j_1) \;\equiv\; B(i_2,j_2)\,C(i_1,j_1)$$

31 The Nearest Kronecker Product Problem. The Objective Function in Tensor Terms:

$$\phi_A(B,C)^2 = \| A - B \otimes C \|_F^2 = \sum_{i_1=1}^{r_1} \sum_{j_1=1}^{c_1} \sum_{i_2=1}^{r_2} \sum_{j_2=1}^{c_2} \big( A(i_1,j_1,i_2,j_2) - B(i_2,j_2)\,C(i_1,j_1) \big)^2$$

We are trying to approximate an order-4 tensor with a pair of order-2 tensors. This is analogous to approximating a matrix (an order-2 tensor) with a rank-1 matrix (a pair of order-1 tensors).

32 The Nearest Kronecker Product Problem. Reshaping the Objective Function (3-by-2 case):

$$\left\| \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44} \\
a_{51} & a_{52} & a_{53} & a_{54} \\
a_{61} & a_{62} & a_{63} & a_{64}
\end{bmatrix} - \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix} \otimes \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix} \right\|_F
=
\left\| \begin{bmatrix}
a_{11} & a_{21} & a_{12} & a_{22} \\
a_{31} & a_{41} & a_{32} & a_{42} \\
a_{51} & a_{61} & a_{52} & a_{62} \\
a_{13} & a_{23} & a_{14} & a_{24} \\
a_{33} & a_{43} & a_{34} & a_{44} \\
a_{53} & a_{63} & a_{54} & a_{64}
\end{bmatrix} - \begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ b_{12} \\ b_{22} \\ b_{32} \end{bmatrix} \begin{bmatrix} c_{11} & c_{21} & c_{12} & c_{22} \end{bmatrix} \right\|_F$$

33 The Nearest Kronecker Product Problem. Minimizing the Objective Function (3-by-2 case): it is a nearest rank-1 problem,

$$\phi_A(B,C) = \left\| \tilde A - \mathrm{vec}(B)\,\mathrm{vec}(C)^T \right\|_F, \qquad \tilde A = \begin{bmatrix}
a_{11} & a_{21} & a_{12} & a_{22} \\
a_{31} & a_{41} & a_{32} & a_{42} \\
a_{51} & a_{61} & a_{52} & a_{62} \\
a_{13} & a_{23} & a_{14} & a_{24} \\
a_{33} & a_{43} & a_{34} & a_{44} \\
a_{53} & a_{63} & a_{54} & a_{64}
\end{bmatrix},$$

with SVD solution: if $\tilde A = U \Sigma V^T$, then

$$\mathrm{vec}(B) = \sqrt{\sigma_1}\; U(:,1), \qquad \mathrm{vec}(C) = \sqrt{\sigma_1}\; V(:,1).$$

34 The Nearest Kronecker Product Problem. The Tilde Matrix:

$$A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44} \\
a_{51} & a_{52} & a_{53} & a_{54} \\
a_{61} & a_{62} & a_{63} & a_{64}
\end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \end{bmatrix}
\quad\text{implies}\quad
\tilde A = \begin{bmatrix} \mathrm{vec}(A_{11})^T \\ \mathrm{vec}(A_{21})^T \\ \mathrm{vec}(A_{31})^T \\ \mathrm{vec}(A_{12})^T \\ \mathrm{vec}(A_{22})^T \\ \mathrm{vec}(A_{32})^T \end{bmatrix} = \begin{bmatrix}
a_{11} & a_{21} & a_{12} & a_{22} \\
a_{31} & a_{41} & a_{32} & a_{42} \\
a_{51} & a_{61} & a_{52} & a_{62} \\
a_{13} & a_{23} & a_{14} & a_{24} \\
a_{33} & a_{43} & a_{34} & a_{44} \\
a_{53} & a_{63} & a_{54} & a_{64}
\end{bmatrix}.$$

35 The Nearest Kronecker Product Problem. Solution via Lanczos SVD: since only the largest singular triple $(\sigma_1, u_1, v_1)$ of $\tilde A$'s SVD is required, we can use the Lanczos SVD. Note that if A is sparse, then $\tilde A$ is sparse.
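A minimal sketch (not from the slides; block sizes are illustrative): the tilde matrix is formed with one reshape/permute, and the dominant singular triple comes from MATLAB's svds, which uses a Lanczos-type iteration.

% Nearest Kronecker product via the dominant singular triple.
r1 = 2; c1 = 2; r2 = 3; c2 = 2;                      % illustrative blocking
A  = randn(r1*r2, c1*c2);
T  = reshape(A, [r1 r2 c1 c2]);                      % T(i1,i2,j1,j2) = A_{i2,j2}(i1,j1)
At = reshape(permute(T, [2 4 1 3]), r2*c2, r1*c1);   % the tilde matrix
[u, s, v] = svds(At, 1);                             % largest singular triple
B = reshape(sqrt(s)*u, r2, c2);
C = reshape(sqrt(s)*v, r1, c1);
norm(A - kron(B, C), 'fro')                          % minimized phi_A(B,C)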

36 Problem 4.8. Write a Matlab function [B,C] = NearestKP(A,r1,c1,r2,c2) that solves the nearest Kronecker product problem where $A \in \mathbb{R}^{m \times n}$ with $m = r_1 r_2$, $n = c_1 c_2$, and A is regarded as an $r_2$-by-$c_2$ block matrix with $r_1$-by-$c_1$ blocks. Make effective use of Matlab's sparse SVD solver svds.

Problem 4.9. Write a Matlab function [B,C] = NearestTensor(A), where A is an order-4 tensor and B and C are order-2 tensors, such that

$$\phi_A(B,C) = \sqrt{ \sum_i \big( A(i) - B(i_3,i_4)\,C(i_1,i_2) \big)^2 }$$

is minimized. Use tenmat to set up $\tilde A$ and use svd.

37 The Kronecker Product SVD (KPSVD). Theorem: if

$$A = \begin{bmatrix} A_{11} & \cdots & A_{1,c_2} \\ \vdots & \ddots & \vdots \\ A_{r_2,1} & \cdots & A_{r_2,c_2} \end{bmatrix}, \qquad A_{i_2,j_2} \in \mathbb{R}^{r_1 \times c_1},$$

then there exist $U_1, \ldots, U_{r_{KP}} \in \mathbb{R}^{r_2 \times c_2}$, $V_1, \ldots, V_{r_{KP}} \in \mathbb{R}^{r_1 \times c_1}$, and scalars $\sigma_1 \ge \cdots \ge \sigma_{r_{KP}} > 0$ such that

$$A = \sum_{k=1}^{r_{KP}} \sigma_k\, U_k \otimes V_k.$$

The sets $\{\mathrm{vec}(U_k)\}$ and $\{\mathrm{vec}(V_k)\}$ are orthonormal, and $r_{KP}$ is the Kronecker rank of A with respect to the chosen blocking.

38 The Kronecker Product SVD (KPSVD). Constructive Proof: compute the SVD of $\tilde A$,

$$\tilde A = U \Sigma V^T = \sum_{k=1}^{r_{KP}} \sigma_k\, u_k v_k^T,$$

and define the $U_k$ and $V_k$ by

$$\mathrm{vec}(U_k) = u_k, \quad \mathrm{vec}(V_k) = v_k, \quad\text{i.e.,}\quad U_k = \mathrm{reshape}(u_k, r_2, c_2), \quad V_k = \mathrm{reshape}(v_k, r_1, c_1),$$

for $k = 1{:}r_{KP}$.
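The constructive proof translates directly into MATLAB; a hedged sketch with illustrative block sizes:

% Full KPSVD of a small blocked matrix, then exact reconstruction.
r1 = 2; c1 = 2; r2 = 3; c2 = 2;
A  = randn(r1*r2, c1*c2);
T  = reshape(A, [r1 r2 c1 c2]);
At = reshape(permute(T, [2 4 1 3]), r2*c2, r1*c1);   % the tilde matrix
[U, S, V] = svd(At);
Arec = zeros(size(A));
for k = 1:rank(At)                                   % r_KP terms
    Uk = reshape(U(:,k), r2, c2);
    Vk = reshape(V(:,k), r1, c1);
    Arec = Arec + S(k,k) * kron(Uk, Vk);
end
norm(A - Arec, 'fro')                                % ~ 0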

39 Problem 4.10. Suppose

$$H = \begin{bmatrix} F_{11} \otimes G_{11} & F_{12} \otimes G_{12} \\ F_{21} \otimes G_{21} & F_{22} \otimes G_{22} \end{bmatrix}$$

where $F_{ij} \in \mathbb{R}^{m \times m}$ and $G_{ij} \in \mathbb{R}^{m \times m}$. Regarded as a 2m-by-2m block matrix, what is the Kronecker rank of H?

Problem 4.11. Suppose

$$H = \begin{bmatrix} F_1 \\ \vdots \\ F_p \end{bmatrix} \begin{bmatrix} G_1 & \cdots & G_q \end{bmatrix}$$

where the $F_i$ and $G_j$ are all m-by-m. Regarded as a p-by-q block matrix, what is the Kronecker rank of H?

40 The Kronecker Product SVD (KPSVD). Nearest Kronecker Rank-r: if $r \le r_{KP}$, then

$$A_r = \sum_{k=1}^{r} \sigma_k\, U_k \otimes V_k$$

is the nearest matrix to A (in the Frobenius norm) that has Kronecker rank r.

41 The Tensor Outer Product Operation. Applied to order-1 tensors:

$$A = b \circ c, \qquad b \in \mathbb{R}^{n_1},\; c \in \mathbb{R}^{n_2}$$

means $A(i_1,i_2) = b(i_1)\,c(i_2)$. Note that tenmat(A, [1], [2]) $= b\,c^T$.

42 The Tensor Outer Product Operation. Applied to order-2 tensors:

$$A = B \circ C, \qquad B \in \mathbb{R}^{n_1 \times r_1},\; C \in \mathbb{R}^{n_2 \times r_2}$$

means $A(i_1,i_2,i_3,i_4) = B(i_1,i_2)\,C(i_3,i_4)$. Note that tenmat(A, [3 1], [4 2]) $= B \otimes C$.
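A hedged check of the tenmat identity (not from the slides; Tensor Toolbox assumed, and it assumes tenmat orders the listed modes with the first one varying fastest, which is the toolbox convention as I understand it; implicit expansion builds the order-4 outer product):

B = randn(3, 2);  C = randn(2, 2);
A = reshape(B, [3 2 1 1]) .* reshape(C, [1 1 2 2]);  % A(i1,i2,i3,i4) = B(i1,i2)C(i3,i4)
M = double(tenmat(tensor(A), [3 1], [4 2]));
norm(M - kron(B, C), 'fro')                          % ~ 0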

43 The Tensor Outer Product Operation. More generally:

$$A = B \circ C \circ D, \qquad B \in \mathbb{R}^{n_1 \times r_1},\; C \in \mathbb{R}^{n_2 \times r_2},\; D \in \mathbb{R}^{n_3 \times r_3}$$

means $A(i_1,i_2,i_3,i_4,i_5,i_6) = B(i_1,i_2)\,C(i_3,i_4)\,D(i_5,i_6)$. Note that tenmat(A, [5 3 1], [6 4 2]) $= B \otimes C \otimes D$.

44 The Tensor Outer Product Operation. The HOSVD as a Sum of Rank-1 Tensors: if $A = S \times_1 U_1 \times_2 U_2 \cdots \times_d U_d$ is the HOSVD of $A \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, then

$$A(i) = \sum_{j_1=1}^{n_1} \cdots \sum_{j_d=1}^{n_d} S(j)\, U_1(i_1,j_1) \cdots U_d(i_d,j_d)$$

implies

$$A = \sum_{j_1=1}^{n_1} \cdots \sum_{j_d=1}^{n_d} S(j)\; U_1(:,j_1) \circ \cdots \circ U_d(:,j_d)$$

45 The Tensor Outer Product Operation. The KPSVD: if

$$\mathrm{tenmat}(A, [3\ 1], [4\ 2]) = \sum_i \sigma_i\, B_i \otimes C_i$$

then

$$A = \sum_i \sigma_i\, B_i \circ C_i$$

46 Summary of Lecture 4. Key Words.
- The Mode-k Matrix Product is a contraction between a tensor and a matrix that produces another tensor.
- The Higher-Order SVD (HOSVD) of a tensor A assembles the SVDs of A's modal unfoldings.
- The Nearest Kronecker Product problem involves computing the first singular triple of a permuted version of the given matrix.
- The Kronecker Product SVD (KPSVD) characterizes a block matrix as a sum of Kronecker products. By applying it to an unfolding of a tensor A, an outer product expansion for A is obtained.
- The outer product operation is a way of combining an order-$d_1$ tensor and an order-$d_2$ tensor to obtain an order-$(d_1 + d_2)$ tensor.
