Lecture 4. CP and KSVD Representations. Charles F. Van Loan
Slide 1. Structured Matrix Computations from Structured Tensors
Lecture 4. CP and KSVD Representations
Charles F. Van Loan, Cornell University
CIME-EMS Summer School, June 22-26, 2015, Cetraro, Italy
Slide 2. What is this Lecture About? Two More Tensor SVDs

The CP representation has a diagonal aspect like the SVD, but there is no orthogonality. The Kronecker product SVD can be used to write a given matrix as an optimal sum of Kronecker products. If the matrix is obtained via a tensor unfolding, then we obtain yet another SVD-like representation.
Slide 3. The CP Representation
Slide 4. The CP Representation

Definition. The CP representation of an $n_1 \times n_2 \times n_3$ tensor $A$ has the form
$$A = \sum_{k=1}^{r} \lambda_k \, F(:,k) \circ G(:,k) \circ H(:,k)$$
where the $\lambda_k$ are real scalars and $F \in \mathbb{R}^{n_1 \times r}$, $G \in \mathbb{R}^{n_2 \times r}$, and $H \in \mathbb{R}^{n_3 \times r}$.

Equivalently,
$$A(i_1, i_2, i_3) = \sum_{j=1}^{r} \lambda_j \, F(i_1,j)\, G(i_2,j)\, H(i_3,j), \qquad
\mathrm{vec}(A) = \sum_{j=1}^{r} \lambda_j \, H(:,j) \otimes G(:,j) \otimes F(:,j).$$
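As a quick numerical illustration (not part of the slides), here is a minimal numpy sketch that forms a tensor from its CP factors and checks the vec identity; the helper name cp_to_tensor and the column-major (Fortran-order) vec convention are my own assumptions.

```python
import numpy as np

def cp_to_tensor(lam, F, G, H):
    # A(i1,i2,i3) = sum_j lam[j] * F[i1,j] * G[i2,j] * H[i3,j]
    return np.einsum('j,aj,bj,cj->abc', lam, F, G, H)

# Check the vec identity: with column-major vec, the first factor F varies fastest.
rng = np.random.default_rng(0)
n1, n2, n3, r = 3, 4, 5, 2
lam = rng.standard_normal(r)
F, G, H = (rng.standard_normal((n, r)) for n in (n1, n2, n3))
A = cp_to_tensor(lam, F, G, H)
v = sum(lam[j] * np.kron(H[:, j], np.kron(G[:, j], F[:, j])) for j in range(r))
assert np.allclose(A.flatten(order='F'), v)
```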
Slide 5. Tucker vs. CP

The Tucker representation:
$$A = \sum_{j_1=1}^{r_1} \sum_{j_2=1}^{r_2} \sum_{j_3=1}^{r_3} S(j_1, j_2, j_3)\, U_1(:,j_1) \circ U_2(:,j_2) \circ U_3(:,j_3)$$

The CP representation:
$$A = \sum_{j=1}^{r} \lambda_j \, F(:,j) \circ G(:,j) \circ H(:,j)$$

In Tucker the $U_i$ have orthonormal columns; in CP, the matrices $F$, $G$, and $H$ do not. In CP the core tensor is diagonal, while in Tucker it is not.
Slide 6. A Note on Terminology

The CP decomposition also goes by the name of the CANDECOMP/PARAFAC decomposition: CANDECOMP = Canonical Decomposition, PARAFAC = Parallel Factors Decomposition.
Slide 7. A Little More About Tensor Rank
Slide 8. The CP Representation and Rank

Definition. If
$$A = \sum_{j=1}^{r} \lambda_j \, F(:,j) \circ G(:,j) \circ H(:,j)$$
is the shortest possible CP representation of $A$, then $\mathrm{rank}(A) = r$.
Slide 9. Tensor Rank Anomaly 1

The largest rank attainable for an $n_1$-by-$\cdots$-by-$n_d$ tensor is called the maximum rank. There is no simple formula for it in terms of the dimensions $n_1, \ldots, n_d$; indeed, its precise value is known only for small examples. Maximum rank does not equal $\min\{n_1, \ldots, n_d\}$ unless $d \le 2$.
Slide 10. Tensor Rank Anomaly 2

If the set of rank-$k$ tensors in $\mathbb{R}^{n_1 \times \cdots \times n_d}$ has positive Lebesgue measure, then $k$ is a typical rank. (The slide tabulates typical ranks for several small sizes; the table entries did not survive transcription.) For $n_1$-by-$n_2$ matrices, typical rank and maximal rank are both equal to $\min\{n_1, n_2\}$.
Slide 11. Tensor Rank Anomalies 3 and 4

Anomaly 3. The rank of a particular tensor over the real field may differ from its rank over the complex field.

Anomaly 4. A tensor with a given rank may be approximated with arbitrary precision by a tensor of lower rank. Such a tensor is said to be degenerate.
Slide 12. The Nearest CP Problem
Slide 13. The CP Approximation Problem

Definition. Given $A \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $r$, determine $\lambda \in \mathbb{R}^r$ and $F \in \mathbb{R}^{n_1 \times r}$, $G \in \mathbb{R}^{n_2 \times r}$, $H \in \mathbb{R}^{n_3 \times r}$ (with unit 2-norm columns) so that if
$$X = \sum_{j=1}^{r} \lambda_j \, F(:,j) \circ G(:,j) \circ H(:,j)$$
then $\|A - X\|_F$ is minimized. This is a multilinear optimization problem.
Slide 14. The CP Approximation Problem: Equivalent Formulations
$$\Big\|A - \sum_{j=1}^{r} \lambda_j \, F(:,j) \circ G(:,j) \circ H(:,j)\Big\|_F
= \Big\|A_{(1)} - \sum_{j=1}^{r} \lambda_j \, F(:,j)\big(H(:,j) \otimes G(:,j)\big)^T\Big\|_F$$
$$= \Big\|A_{(2)} - \sum_{j=1}^{r} \lambda_j \, G(:,j)\big(H(:,j) \otimes F(:,j)\big)^T\Big\|_F
= \Big\|A_{(3)} - \sum_{j=1}^{r} \lambda_j \, H(:,j)\big(G(:,j) \otimes F(:,j)\big)^T\Big\|_F$$
Slide 15. Introducing the Khatri-Rao Product

Definition. If $B = [\,b_1 \cdots b_r\,] \in \mathbb{R}^{n_1 \times r}$ and $C = [\,c_1 \cdots c_r\,] \in \mathbb{R}^{n_2 \times r}$, then the Khatri-Rao product of $B$ and $C$ is given by
$$B \odot C = [\, b_1 \otimes c_1 \;\cdots\; b_r \otimes c_r \,].$$
Column-wise Kronecker products. Note that $B \odot C \in \mathbb{R}^{n_1 n_2 \times r}$.
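A short numpy sketch of the Khatri-Rao product; the helper name khatri_rao is my own, and the later snippets reuse it.

```python
import numpy as np

def khatri_rao(B, C):
    # Column j of the result is kron(B[:, j], C[:, j]).
    (n1, r), (n2, r2) = B.shape, C.shape
    assert r == r2
    return (B[:, None, :] * C[None, :, :]).reshape(n1 * n2, r)
```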
Slide 16. The CP Approximation Problem: Equivalent Formulations
$$\Big\|A - \sum_{j=1}^{r} \lambda_j \, F(:,j) \circ G(:,j) \circ H(:,j)\Big\|_F
= \big\|A_{(1)} - F\,\mathrm{diag}(\lambda_j)\,(H \odot G)^T\big\|_F$$
$$= \big\|A_{(2)} - G\,\mathrm{diag}(\lambda_j)\,(H \odot F)^T\big\|_F
= \big\|A_{(3)} - H\,\mathrm{diag}(\lambda_j)\,(G \odot F)^T\big\|_F$$
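To make these identities concrete, here is a small check continuing the running example (cp_to_tensor and khatri_rao from the sketches above). The unfolding convention below, with the earlier mode varying fastest in the column index, is an assumption chosen to match the Kronecker ordering on the slides.

```python
def unfold(A, mode):
    # Mode-k unfolding of an n1 x n2 x n3 tensor, columns ordered so that
    # the identities A_(1) = F diag(lam) (H kr G)^T etc. hold exactly.
    n1, n2, n3 = A.shape
    if mode == 1:
        return A.transpose(0, 2, 1).reshape(n1, n3 * n2)
    if mode == 2:
        return A.transpose(1, 2, 0).reshape(n2, n3 * n1)
    return A.transpose(2, 1, 0).reshape(n3, n2 * n1)

A = cp_to_tensor(lam, F, G, H)   # factors from the first sketch
assert np.allclose(unfold(A, 1), F @ np.diag(lam) @ khatri_rao(H, G).T)
assert np.allclose(unfold(A, 2), G @ np.diag(lam) @ khatri_rao(H, F).T)
assert np.allclose(unfold(A, 3), H @ np.diag(lam) @ khatri_rao(G, F).T)
```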
Slide 17. The CP Approximation Problem: The Alternating LS Solution Framework
$$\|A - X\|_F = \big\|A_{(1)} - F\,\mathrm{diag}(\lambda_j)\,(H \odot G)^T\big\|_F
= \big\|A_{(2)} - G\,\mathrm{diag}(\lambda_j)\,(H \odot F)^T\big\|_F
= \big\|A_{(3)} - H\,\mathrm{diag}(\lambda_j)\,(G \odot F)^T\big\|_F$$
1. Fix $G$ and $H$ and improve $\lambda$ and $F$.
2. Fix $F$ and $H$ and improve $\lambda$ and $G$.
3. Fix $F$ and $G$ and improve $\lambda$ and $H$.
Slide 18. The CP Approximation Problem: The Alternating LS Solution Framework

Repeat:
1. Let $\tilde F$ minimize $\|A_{(1)} - \tilde F (H \odot G)^T\|_F$ and for $j = 1{:}r$ set $\lambda_j = \|\tilde F(:,j)\|_2$ and $F(:,j) = \tilde F(:,j)/\lambda_j$.
2. Let $\tilde G$ minimize $\|A_{(2)} - \tilde G (H \odot F)^T\|_F$ and for $j = 1{:}r$ set $\lambda_j = \|\tilde G(:,j)\|_2$ and $G(:,j) = \tilde G(:,j)/\lambda_j$.
3. Let $\tilde H$ minimize $\|A_{(3)} - \tilde H (G \odot F)^T\|_F$ and for $j = 1{:}r$ set $\lambda_j = \|\tilde H(:,j)\|_2$ and $H(:,j) = \tilde H(:,j)/\lambda_j$.

These are linear least squares problems. The columns of $F$, $G$, and $H$ are normalized.
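A minimal sketch of one ALS sweep, reusing khatri_rao and unfold from above. This is my own illustration of the framework, not Van Loan's code, and it uses a generic dense least squares solve; the fast Khatri-Rao solve comes two slides later.

```python
def cp_als_sweep(A, F, G, H):
    # One pass of steps 1-3: solve an LS problem for each factor in turn,
    # then pull the column norms into lam.
    def solve(unfolding, K):
        # minimize || unfolding - X K^T ||_F over X
        return np.linalg.lstsq(K, unfolding.T, rcond=None)[0].T

    def normalize(M):
        lam = np.linalg.norm(M, axis=0)
        return M / lam, lam

    F, lam = normalize(solve(unfold(A, 1), khatri_rao(H, G)))
    G, lam = normalize(solve(unfold(A, 2), khatri_rao(H, F)))
    H, lam = normalize(solve(unfold(A, 3), khatri_rao(G, F)))
    return lam, F, G, H
```

Iterating cp_als_sweep until the fit stops improving is the usual stopping rule.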
Slide 19. The CP Approximation Problem: Solving the LS Problems

The solution to
$$\min_{F}\ \big\|A_{(1)} - F (H \odot G)^T\big\|_F = \min_{F}\ \big\|A_{(1)}^T - (H \odot G)\, F^T\big\|_F$$
can be obtained by solving the normal equation system
$$(H \odot G)^T (H \odot G)\, F^T = (H \odot G)^T A_{(1)}^T.$$
This can be solved efficiently by exploiting two properties of the Khatri-Rao product.
Slide 20. The Khatri-Rao Product: Fast Property 1

If $B \in \mathbb{R}^{n_1 \times r}$ and $C \in \mathbb{R}^{n_2 \times r}$, then
$$(B \odot C)^T (B \odot C) = (B^T B) \;.\!*\; (C^T C)$$
where $.\!*$ denotes pointwise multiplication.
Slide 21. The Khatri-Rao Product: Fast Property 2

If $B = [\,b_1 \cdots b_r\,] \in \mathbb{R}^{n_1 \times r}$, $C = [\,c_1 \cdots c_r\,] \in \mathbb{R}^{n_2 \times r}$, $z \in \mathbb{R}^{n_1 n_2}$, and $y = (B \odot C)^T z$, then
$$y = \begin{bmatrix} c_1^T Z b_1 \\ \vdots \\ c_r^T Z b_r \end{bmatrix}, \qquad Z = \mathrm{reshape}(z, n_2, n_1).$$
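Both fast properties are easy to verify numerically (khatri_rao from the sketch above; note that numpy's row-major reshape needs a transpose to mimic MATLAB's column-major reshape).

```python
rng = np.random.default_rng(1)
n1, n2, r = 6, 5, 3
B, C = rng.standard_normal((n1, r)), rng.standard_normal((n2, r))
K = khatri_rao(B, C)

# Fast Property 1: the r x r Gram matrix from two small Gram matrices.
assert np.allclose(K.T @ K, (B.T @ B) * (C.T @ C))

# Fast Property 2: y = (B kr C)^T z without forming K.
z = rng.standard_normal(n1 * n2)
Z = z.reshape(n1, n2).T   # equals MATLAB's reshape(z, n2, n1)
y = np.array([C[:, k] @ Z @ B[:, k] for k in range(r)])
assert np.allclose(y, K.T @ z)
```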
Slide 22. Overall: The Khatri-Rao LS Problem

Structure. Given $B \in \mathbb{R}^{n_1 \times r}$, $C \in \mathbb{R}^{n_2 \times r}$, and $z \in \mathbb{R}^{n_1 n_2}$, minimize $\|(B \odot C)x - z\|_2$. The problem is data sparse: an $n_1 n_2$-by-$r$ LS problem defined by $O((n_1 + n_2)r)$ data.

Solution procedure:
1. Form $M = (B^T B) \,.\!*\, (C^T C)$. $O((n_1 + n_2) r^2)$.
2. Cholesky: $M = LL^T$. $O(r^3)$.
3. Form $y = (B \odot C)^T z$ using Property 2. $O(n_1 n_2 r)$.
4. Solve $Mx = y$. $O(r^2)$.

Overall: $O(n_1 n_2 r)$ vs. $O(n_1 n_2 r^2)$.
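A sketch of the four-step procedure, continuing with B, C, z from the previous check and using scipy's Cholesky helpers for step 2; it assumes $M$ is positive definite, i.e., that $B \odot C$ has full column rank.

```python
from scipy.linalg import cho_factor, cho_solve

def kr_lstsq(B, C, z):
    # Solve min_x ||(B kr C) x - z||_2 without forming the n1*n2-by-r matrix.
    (n1, r), (n2, _) = B.shape, C.shape
    M = (B.T @ B) * (C.T @ C)                                 # step 1
    Z = z.reshape(n1, n2).T                                   # Z = reshape(z, n2, n1)
    y = np.array([C[:, k] @ Z @ B[:, k] for k in range(r)])   # step 3
    return cho_solve(cho_factor(M), y)                        # steps 2 and 4

# Agrees with the dense solve (khatri_rao from the sketch above).
x_ref = np.linalg.lstsq(khatri_rao(B, C), z, rcond=None)[0]
assert np.allclose(kr_lstsq(B, C, z), x_ref)
```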
Slide 23. The Kronecker Product SVD
Slide 24. The Nearest Kronecker Product Problem

Find $B$ and $C$ so that $\|A - B \otimes C\|_F = \min$. For a $6 \times 4$ matrix $A$ blocked into $2 \times 2$ blocks, with $B \in \mathbb{R}^{3 \times 2}$ and $C \in \mathbb{R}^{2 \times 2}$:
$$\left\|
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}\\
a_{31} & a_{32} & a_{33} & a_{34}\\
a_{41} & a_{42} & a_{43} & a_{44}\\
a_{51} & a_{52} & a_{53} & a_{54}\\
a_{61} & a_{62} & a_{63} & a_{64}
\end{bmatrix}
-
\begin{bmatrix}
b_{11} & b_{12}\\
b_{21} & b_{22}\\
b_{31} & b_{32}
\end{bmatrix}
\otimes
\begin{bmatrix}
c_{11} & c_{12}\\
c_{21} & c_{22}
\end{bmatrix}
\right\|_F
=
\left\|
\begin{bmatrix}
a_{11} & a_{21} & a_{12} & a_{22}\\
a_{31} & a_{41} & a_{32} & a_{42}\\
a_{51} & a_{61} & a_{52} & a_{62}\\
a_{13} & a_{23} & a_{14} & a_{24}\\
a_{33} & a_{43} & a_{34} & a_{44}\\
a_{53} & a_{63} & a_{54} & a_{64}
\end{bmatrix}
-
\begin{bmatrix}
b_{11}\\ b_{21}\\ b_{31}\\ b_{12}\\ b_{22}\\ b_{32}
\end{bmatrix}
\begin{bmatrix}
c_{11} & c_{21} & c_{12} & c_{22}
\end{bmatrix}
\right\|_F$$
Slide 25. The Nearest Kronecker Product Problem

Find $B$ and $C$ so that $\|A - B \otimes C\|_F = \min$. It is a nearest rank-1 problem:
$$\phi_A(B, C) = \big\|\tilde A - \mathrm{vec}(B)\,\mathrm{vec}(C)^T\big\|_F$$
with SVD solution: if $\tilde A = U \Sigma V^T$, then
$$\mathrm{vec}(B) = \sqrt{\sigma_1}\, U(:,1), \qquad \mathrm{vec}(C) = \sqrt{\sigma_1}\, V(:,1).$$
Slide 26. The Nearest Kronecker Product Problem: The Tilde Matrix
$$A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}\\
a_{31} & a_{32} & a_{33} & a_{34}\\
a_{41} & a_{42} & a_{43} & a_{44}\\
a_{51} & a_{52} & a_{53} & a_{54}\\
a_{61} & a_{62} & a_{63} & a_{64}
\end{bmatrix}
= \begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}\\
A_{31} & A_{32}
\end{bmatrix}
\quad\text{implies}\quad
\tilde A = \begin{bmatrix}
\mathrm{vec}(A_{11})^T\\
\mathrm{vec}(A_{21})^T\\
\mathrm{vec}(A_{31})^T\\
\mathrm{vec}(A_{12})^T\\
\mathrm{vec}(A_{22})^T\\
\mathrm{vec}(A_{32})^T
\end{bmatrix}.$$
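Here is a minimal sketch of the rearrangement and the resulting SVD solution; the helper names tilde and nearest_kron are my own, and the blocking arguments give $B$'s shape $p \times q$ and $C$'s shape $m \times n$.

```python
def tilde(A, p, q, m, n):
    # View A as a p x q grid of m x n blocks; one row per block, taken
    # down the columns of the grid, each row = vec(block)^T (column-major vec).
    blocks = [A[i*m:(i+1)*m, j*n:(j+1)*n] for j in range(q) for i in range(p)]
    return np.array([blk.flatten(order='F') for blk in blocks])

def nearest_kron(A, p, q, m, n):
    # Nearest rank-1 approximation of tilde(A) gives the optimal B and C.
    U, s, Vt = np.linalg.svd(tilde(A, p, q, m, n))
    B = np.sqrt(s[0]) * U[:, 0].reshape(p, q, order='F')
    C = np.sqrt(s[0]) * Vt[0, :].reshape(m, n, order='F')
    return B, C

# An exact Kronecker product is recovered exactly.
rng = np.random.default_rng(3)
B0, C0 = rng.standard_normal((3, 2)), rng.standard_normal((2, 2))
B, C = nearest_kron(np.kron(B0, C0), 3, 2, 2, 2)
assert np.allclose(np.kron(B, C), np.kron(B0, C0))
```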
Slide 27. The Kronecker Product SVD (KPSVD)

Theorem. If
$$A = \begin{bmatrix} A_{11} & \cdots & A_{1,c_2}\\ \vdots & & \vdots\\ A_{r_2,1} & \cdots & A_{r_2,c_2} \end{bmatrix},
\qquad A_{i_2,j_2} \in \mathbb{R}^{r_1 \times c_1},$$
then there exist $U_1, \ldots, U_{r_{KP}} \in \mathbb{R}^{r_2 \times c_2}$, $V_1, \ldots, V_{r_{KP}} \in \mathbb{R}^{r_1 \times c_1}$, and scalars $\sigma_1 \ge \cdots \ge \sigma_{r_{KP}} > 0$ such that
$$A = \sum_{k=1}^{r_{KP}} \sigma_k\, U_k \otimes V_k.$$
The sets $\{\mathrm{vec}(U_k)\}$ and $\{\mathrm{vec}(V_k)\}$ are orthonormal, and $r_{KP}$ is the Kronecker rank of $A$ with respect to the chosen blocking.
Slide 28. The Kronecker Product SVD (KPSVD): Constructive Proof

Compute the SVD of $\tilde A$:
$$\tilde A = U \Sigma V^T = \sum_{k=1}^{r_{KP}} \sigma_k\, u_k v_k^T$$
and define the $U_k$ and $V_k$ by
$$\mathrm{vec}(U_k) = u_k, \quad \mathrm{vec}(V_k) = v_k, \qquad\text{i.e.,}\qquad
U_k = \mathrm{reshape}(u_k, r_2, c_2), \quad V_k = \mathrm{reshape}(v_k, r_1, c_1)$$
for $k = 1{:}r_{KP}$.
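The full KPSVD is the same construction with all the SVD terms kept; a sketch reusing tilde from the previous snippet (here the grid-shaped factor has shape $p \times q$ and the block-shaped factor $m \times n$, neutral names in place of the theorem's $r_2, c_2, r_1, c_1$).

```python
def kpsvd(A, p, q, m, n):
    # A = sum_k s[k] * kron(Us[k], Vs[k]) for the p x q blocking of A
    # into m x n blocks; vec(Us[k]) and vec(Vs[k]) are orthonormal.
    W, s, Zt = np.linalg.svd(tilde(A, p, q, m, n), full_matrices=False)
    Us = [W[:, k].reshape(p, q, order='F') for k in range(len(s))]
    Vs = [Zt[k, :].reshape(m, n, order='F') for k in range(len(s))]
    return s, Us, Vs

# Reconstruction check; truncating the sum after r terms gives the A_r
# of the next slide.
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))
s, Us, Vs = kpsvd(A, 3, 2, 2, 2)
assert np.allclose(A, sum(sk * np.kron(Uk, Vk) for sk, Uk, Vk in zip(s, Us, Vs)))
```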
Slide 29. The Kronecker Product SVD (KPSVD): Nearest Rank-$r$

If $r \le r_{KP}$, then
$$A_r = \sum_{k=1}^{r} \sigma_k\, U_k \otimes V_k$$
is the nearest matrix to $A$ (in the Frobenius norm) that has Kronecker rank $r$.
Slide 30. Structured Kronecker Product Approximation
$$\min_{B,C}\ \|A - B \otimes C\|_F$$
- If $A$ is symmetric and positive definite, then so are $B$ and $C$.
- If $A$ is block Toeplitz with Toeplitz blocks, then $B$ and $C$ are Toeplitz.
- If $A$ is a block band matrix with banded blocks, then $B$ and $C$ are banded.
- Can use the Lanczos SVD if $A$ is large and sparse.
Slide 31. A Tensor Approximation Idea

Motivation. Unfold $\mathcal{A} \in \mathbb{R}^{n \times n \times n \times n}$ into an $n^2$-by-$n^2$ matrix $A$. Express $A$ as a sum of Kronecker products:
$$A = \sum_{k=1}^{r} \sigma_k\, B_k \otimes C_k, \qquad B_k, C_k \in \mathbb{R}^{n \times n}.$$
Back to the tensor:
$$\mathcal{A} = \sum_{k=1}^{r} \sigma_k\, C_k \circ B_k, \qquad\text{i.e.,}\qquad
\mathcal{A}(i_1, i_2, j_1, j_2) = \sum_{k=1}^{r} \sigma_k\, C_k(i_1, i_2)\, B_k(j_1, j_2).$$
Sums of tensor products of matrices instead of vectors.
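A sketch of the idea for a small random tensor, reusing kpsvd from above. The unfolding convention, rows indexed by $(i_1, j_1)$ with $i_1$ fastest and columns by $(i_2, j_2)$ with $i_2$ fastest, is an assumption chosen so that kron(B_k, C_k) in the matrix corresponds to the tensor term $C_k(i_1,i_2)\,B_k(j_1,j_2)$ on the slide.

```python
def unfold4(T):
    # M[j1*n + i1, j2*n + i2] = T[i1, i2, j1, j2]  (0-based indices)
    n = T.shape[0]
    return T.transpose(2, 0, 3, 1).reshape(n * n, n * n)

rng = np.random.default_rng(5)
n = 3
T = rng.standard_normal((n, n, n, n))
s, Bs, Cs = kpsvd(unfold4(T), n, n, n, n)
T_rec = sum(sk * np.einsum('ab,cd->abcd', Ck, Bk)
            for sk, Bk, Ck in zip(s, Bs, Cs))
assert np.allclose(T, T_rec)
```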
Slide 32. The Nearest Kronecker Product Problem: Harder
$$\phi_A(B, C, D) = \|A - B \otimes C \otimes D\|_F,$$
$$\phi_A(B, C, D)^2 = \sum_{i_1=1}^{r_1} \sum_{j_1=1}^{c_1} \sum_{i_2=1}^{r_2} \sum_{j_2=1}^{c_2} \sum_{i_3=1}^{r_3} \sum_{j_3=1}^{c_3}
\big(A(i_1, j_1, i_2, j_2, i_3, j_3) - B(i_3, j_3)\, C(i_2, j_2)\, D(i_1, j_1)\big)^2.$$
Trying to approximate an order-6 tensor with a triplet of order-2 tensors. Would have to apply componentwise optimization.
Slide 33. Concluding Remarks
Slide 34. Optional Fun Problems

Problem E4. Suppose
$$A = \begin{bmatrix} B_{11} \otimes C_{11} & B_{12} \otimes C_{12}\\ B_{21} \otimes C_{21} & B_{22} \otimes C_{22} \end{bmatrix}$$
and that the $B_{ij}$ and $C_{ij}$ are each $m$-by-$m$.
(a) Assuming that structure is fully exploited, how many flops are required to compute $y = Ax$ where $x \in \mathbb{R}^{2m^2}$?
(b) How many flops are required to explicitly form $A$?
(c) How many flops are required to compute $y = Ax$ assuming that $A$ has been explicitly formed?

Problem A4. Suppose $A$ is $n^2$-by-$n^2$. How would you compute $X \in \mathbb{R}^{n \times n}$ so that $\|A - X \otimes X\|_F$ is minimized?