Lecture 2. Tensor Iterations, Symmetries, and Rank. Charles F. Van Loan
Structured Matrix Computations from Structured Tensors
Lecture 2. Tensor Iterations, Symmetries, and Rank
Charles F. Van Loan, Cornell University
CIME-EMS Summer School, June 22-26, 2015, Cetraro, Italy
Structured Matrix Computations from Structured Tensors Lecture 2. Tensor Symmetries and Rank 1 / 42
The Setting

We looked at the nearest rank-1 tensor problem in Lecture 1. For matrices, the nearest rank-1 problem is an SVD problem. Is the nearest rank-1 problem related to some tensor SVD?

In Lectures 3 and 4 we will look at four different tensor SVD ideas. Structured SVDs, Schur decompositions, and QR factorizations will be part of the scene.

We set the stage for this in Lecture 2 by developing various power methods. It will be an occasion to refine what we know about tensor rank, tensor symmetry, and tensor unfoldings.
Rayleigh Quotient Ideas
Rayleigh Quotient Ideas: The Matrix Case

The Variational Approach to Singular Values and Vectors

The singular values and singular vectors of a general matrix A ∈ IR^{m×n} are the stationary values and vectors of the multilinear form

    ψ_A(x, y) = x^T A y = Σ_{i=1}^{m} Σ_{j=1}^{n} a_{ij} x_i y_j

subject to the constraints ‖x‖ = ‖y‖ = 1.

Let us understand this connection and then push the same idea for tensors.
Rayleigh Quotient Ideas: The Matrix Case

Gradient Calculations

We seek unit vectors x and y that zero the gradient of

    ψ̃_A(x, y) = ψ_A(x, y) - (λ/2)(x^T x - 1) - (µ/2)(y^T y - 1).

Since

    ψ_A(x, y) = Σ_{i=1}^{m} x_i ( Σ_{j=1}^{n} a_{ij} y_j ) = Σ_{j=1}^{n} y_j ( Σ_{i=1}^{m} a_{ij} x_i ),

it follows that we want

    ∇ψ̃_A(x, y) = [ A y - λ x ; A^T x - µ y ] = [ 0 ; 0 ].

Thus, λ = µ = x^T A y = ψ_A(x, y).
Rayleigh Quotient Ideas: The Matrix Case

Gradient Calculations

    ∇ψ_A(x, y) = [ A y - (x^T A y) x ; A^T x - (x^T A y) y ] = [ 0 ; 0 ]

Singular Values and Vectors

If U^T A V = Σ = diag(σ_i) is the SVD of A with U = [u_1 ... u_m] and V = [v_1 ... v_n], then from A V = U Σ and A^T U = V Σ^T we have

    A v_i = σ_i u_i,    A^T u_i = σ_i v_i.

Since σ_i = u_i^T A v_i, it follows that

    A v_i - (u_i^T A v_i) u_i = 0,    A^T u_i - (u_i^T A v_i) v_i = 0.
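This stationarity characterization is easy to confirm numerically: every singular triplet delivered by an SVD routine zeros both gradient residuals. A minimal sketch (the 5-by-3 test matrix and variable names are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

for i in range(3):
    u, v, sigma = U[:, i], Vt[i, :], s[i]
    # sigma is the value of psi_A at the stationary point (u, v) ...
    assert np.isclose(sigma, u @ A @ v)
    # ... and both gradient residuals vanish there
    assert np.allclose(A @ v - (u @ A @ v) * u, 0.0)
    assert np.allclose(A.T @ u - (u @ A @ v) * v, 0.0)
```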
Rayleigh Quotient Ideas: The Tensor Case

The Singular Values and Vectors of a Tensor: A Definition

If A ∈ IR^{m×n×p}, x ∈ IR^m, y ∈ IR^n, and z ∈ IR^p, then the singular values and vectors of A are the stationary values and vectors of

    ψ_A(x, y, z) = Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=1}^{p} a_{ijk} x_i y_j z_k

subject to the constraints ‖x‖ = ‖y‖ = ‖z‖ = 1.

Order-3 tensors are plenty good enough to show the main ideas. Generalizations to order-d tensors are generally pretty obvious.
Rayleigh Quotient Ideas: The Tensor Case

Take a look at ψ_A(x, y, z). Some handy rearrangements of ψ_A:

    ψ_A(x, y, z) = Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=1}^{p} a_{ijk} x_i y_j z_k

                 = Σ_{i=1}^{m} x_i ( Σ_{j=1}^{n} Σ_{k=1}^{p} a_{ijk} y_j z_k )

                 = Σ_{j=1}^{n} y_j ( Σ_{i=1}^{m} Σ_{k=1}^{p} a_{ijk} x_i z_k )

                 = Σ_{k=1}^{p} z_k ( Σ_{i=1}^{m} Σ_{j=1}^{n} a_{ijk} x_i y_j )

Before we go after the gradient, let's frame the double summations in linear algebra terms...
Rayleigh Quotient Ideas: The Tensor Case

Suppose m = 4, n = 3, and p = 2. Then

    ψ_A(x, y, z) = Σ_{i=1}^{4} x_i ( Σ_{j=1}^{3} Σ_{k=1}^{2} a_{ijk} y_j z_k )

                 = Σ_{i=1}^{4} x_i [ a_{i11}  a_{i21}  a_{i31}  a_{i12}  a_{i22}  a_{i32} ] (z ⊗ y)

                 = x^T [ a_{111}  a_{121}  a_{131}  a_{112}  a_{122}  a_{132} ]
                       [ a_{211}  a_{221}  a_{231}  a_{212}  a_{222}  a_{232} ]  (z ⊗ y)
                       [ a_{311}  a_{321}  a_{331}  a_{312}  a_{322}  a_{332} ]
                       [ a_{411}  a_{421}  a_{431}  a_{412}  a_{422}  a_{432} ]

where z ⊗ y = ( z_1 y_1, z_1 y_2, z_1 y_3, z_2 y_1, z_2 y_2, z_2 y_3 )^T.
Rayleigh Quotient Ideas: The Tensor Case

Suppose m = 4, n = 3, and p = 2. Then

    ψ_A(x, y, z) = Σ_{i=1}^{4} x_i ( Σ_{j=1}^{3} Σ_{k=1}^{2} a_{ijk} y_j z_k )

                 = x^T [ a_{111}  a_{121}  a_{131}  a_{112}  a_{122}  a_{132} ]
                       [ a_{211}  a_{221}  a_{231}  a_{212}  a_{222}  a_{232} ]  (z ⊗ y)
                       [ a_{311}  a_{321}  a_{331}  a_{312}  a_{322}  a_{332} ]
                       [ a_{411}  a_{421}  a_{431}  a_{412}  a_{422}  a_{432} ]

The matrix you see here is the mode-1 unfolding of A ∈ IR^{4×3×2}.
Rayleigh Quotient Ideas: The Tensor Case

ψ_A(x, y, z) in terms of the Mode-1 Unfolding

    ψ_A(x, y, z) = Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=1}^{p} a_{ijk} x_i y_j z_k = x^T A_{(1)} (z ⊗ y)

    A_{(1)} = [ a_{111}  a_{121}  a_{131}  a_{112}  a_{122}  a_{132} ]
              [ a_{211}  a_{221}  a_{231}  a_{212}  a_{222}  a_{232} ]
              [ a_{311}  a_{321}  a_{331}  a_{312}  a_{322}  a_{332} ]
              [ a_{411}  a_{421}  a_{431}  a_{412}  a_{422}  a_{432} ]

    columns indexed by (j,k):  (1,1)  (2,1)  (3,1)  (1,2)  (2,2)  (3,2)
Rayleigh Quotient Ideas: The Tensor Case

ψ_A(x, y, z) in terms of the Mode-2 Unfolding

    ψ_A(x, y, z) = Σ_{j=1}^{n} y_j ( Σ_{i=1}^{m} Σ_{k=1}^{p} a_{ijk} x_i z_k ) = y^T A_{(2)} (z ⊗ x)

    A_{(2)} = [ a_{111}  a_{211}  a_{311}  a_{411}  a_{112}  a_{212}  a_{312}  a_{412} ]
              [ a_{121}  a_{221}  a_{321}  a_{421}  a_{122}  a_{222}  a_{322}  a_{422} ]
              [ a_{131}  a_{231}  a_{331}  a_{431}  a_{132}  a_{232}  a_{332}  a_{432} ]

    columns indexed by (i,k):  (1,1)  (2,1)  (3,1)  (4,1)  (1,2)  (2,2)  (3,2)  (4,2)
Rayleigh Quotient Ideas: The Tensor Case

ψ_A(x, y, z) in terms of the Mode-3 Unfolding

    ψ_A(x, y, z) = Σ_{k=1}^{p} z_k ( Σ_{i=1}^{m} Σ_{j=1}^{n} a_{ijk} x_i y_j ) = z^T A_{(3)} (y ⊗ x)

    A_{(3)} = [ a_{111}  a_{211}  a_{311}  a_{411}  a_{121}  a_{221}  a_{321}  a_{421}  a_{131}  a_{231}  a_{331}  a_{431} ]
              [ a_{112}  a_{212}  a_{312}  a_{412}  a_{122}  a_{222}  a_{322}  a_{422}  a_{132}  a_{232}  a_{332}  a_{432} ]

    columns indexed by (i,j):  (1,1)  (2,1)  (3,1)  (4,1)  (1,2)  (2,2)  (3,2)  (4,2)  (1,3)  (2,3)  (3,3)  (4,3)
Rayleigh Quotient Ideas: The Tensor Case

Important Skill: Framing a Tensor Computation in Matrix Terms

    ψ_A(x, y, z) = Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=1}^{p} a_{ijk} x_i y_j z_k
                 = x^T A_{(1)} (z ⊗ y)
                 = y^T A_{(2)} (z ⊗ x)
                 = z^T A_{(3)} (y ⊗ x)

With these characterizations we can readily compute the stationary vectors and values of this function subject to the constraint that ‖x‖ = ‖y‖ = ‖z‖ = 1.
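The three identities above are easy to check numerically. The sketch below uses a small helper `unfold` (a name of my choosing, not from the slides) whose column ordering matches the unfoldings shown: the earlier of the two flattened indices varies fastest, so the mode-1 form pairs with z ⊗ y:

```python
import numpy as np

def unfold(A, mode):
    """Unfolding along `mode`: rows indexed by that mode, the remaining
    two indices flattened with the earlier one varying fastest."""
    return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1, order='F')

rng = np.random.default_rng(0)
m, n, p = 4, 3, 2
A = rng.standard_normal((m, n, p))
x = rng.standard_normal(m)
y = rng.standard_normal(n)
z = rng.standard_normal(p)

psi  = np.einsum('ijk,i,j,k->', A, x, y, z)   # the triple sum directly
psi1 = x @ unfold(A, 0) @ np.kron(z, y)       # mode-1 form
psi2 = y @ unfold(A, 1) @ np.kron(z, x)       # mode-2 form
psi3 = z @ unfold(A, 2) @ np.kron(y, x)       # mode-3 form
```

All four numbers agree to machine precision, which confirms that the chosen column ordering and the Kronecker arguments are consistent.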
Rayleigh Quotient Ideas: The Tensor Case

The Gradient Computations

Set the gradient of

    ψ̃_A(x, y, z) = ψ_A(x, y, z) - (λ/2)(x^T x - 1) - (µ/2)(y^T y - 1) - (τ/2)(z^T z - 1)

to zero. Conclude that λ = µ = τ = ψ_A(x, y, z). If unit vectors x, y, and z satisfy

    ∇ψ_A = [ A_{(1)}(z ⊗ y) - ψ_A(x, y, z) x ]   [ 0 ]
           [ A_{(2)}(z ⊗ x) - ψ_A(x, y, z) y ] = [ 0 ]
           [ A_{(3)}(y ⊗ x) - ψ_A(x, y, z) z ]   [ 0 ]

then σ = ψ_A(x, y, z) is a singular value of A and x, y, and z are the associated singular vectors.

How can we solve this (highly structured) system of nonlinear equations?
A Higher-Order Power Method
The Power Method for Matrices

The Gradient Equations

    A y = σ x,    A^T x = σ y,    where σ = ψ_A(x, y) = x^T A y.

Iterate...

    y a given unit vector
    Repeat until happy:
        x̃ = A y,      x = x̃ / ‖x̃‖
        ỹ = A^T x,    y = ỹ / ‖ỹ‖
        σ = ψ_A(x, y)

This is the same as the power method applied to A^T A. In the limit, σ u v^T converges to the closest rank-1 matrix to A.
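The iteration above can be sketched in a few lines. The "repeat until happy" test is replaced here by a fixed sweep count, and the test matrix is built with known singular values {3, 2, 1, 0.5} so convergence is easy to see; all names are illustrative:

```python
import numpy as np

def svd_power(A, iters=200, seed=0):
    """Alternating power method for the dominant singular triplet of A.
    Equivalent to the power method applied to A^T A; no stopping test,
    just a fixed number of sweeps."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(A.shape[1])
    y /= np.linalg.norm(y)
    for _ in range(iters):
        x = A @ y
        x /= np.linalg.norm(x)
        y = A.T @ x
        y /= np.linalg.norm(y)
    return x @ A @ y, x, y

# Test matrix with prescribed singular values via random orthogonal factors.
rng = np.random.default_rng(1)
Q1, _ = np.linalg.qr(rng.standard_normal((6, 6)))
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
S = np.zeros((6, 4))
S[:4, :4] = np.diag([3.0, 2.0, 1.0, 0.5])
A = Q1 @ S @ Q2.T

sigma, x, y = svd_power(A)
```

With a gap of 2/3 between the top two singular values, 200 sweeps are far more than enough for `sigma` to settle at 3.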
The Higher-Order Power Method for Tensors

The Gradient Equations

    A_{(1)}(z ⊗ y) = σ x
    A_{(2)}(z ⊗ x) = σ y
    A_{(3)}(y ⊗ x) = σ z

where σ = ψ_A(x, y, z) = x^T A_{(1)}(z ⊗ y) = y^T A_{(2)}(z ⊗ x) = z^T A_{(3)}(y ⊗ x).

Iterate...

    y and z given unit vectors
    Repeat until happy:
        x̃ = A_{(1)}(z ⊗ y),    x = x̃ / ‖x̃‖
        ỹ = A_{(2)}(z ⊗ x),    y = ỹ / ‖ỹ‖
        z̃ = A_{(3)}(y ⊗ x),    z = z̃ / ‖z̃‖
        σ = ψ_A(x, y, z)
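A corresponding sketch of the higher-order power method. Function names and the fixed sweep count are mine; the `unfold` helper follows the same column ordering as the unfoldings on the earlier slides (earlier flattened index fastest). As a sanity check, on an exactly rank-1 tensor the iteration recovers the scale σ up to sign:

```python
import numpy as np

def unfold(A, mode):
    """Unfolding with the earlier flattened index varying fastest."""
    return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1, order='F')

def hopm(A, iters=200, seed=0):
    """Higher-order power method for an order-3 tensor: alternately
    update x, y, z from the three gradient equations, normalizing each."""
    rng = np.random.default_rng(seed)
    m, n, p = A.shape
    y = rng.standard_normal(n); y /= np.linalg.norm(y)
    z = rng.standard_normal(p); z /= np.linalg.norm(z)
    for _ in range(iters):
        x = unfold(A, 0) @ np.kron(z, y); x /= np.linalg.norm(x)
        y = unfold(A, 1) @ np.kron(z, x); y /= np.linalg.norm(y)
        z = unfold(A, 2) @ np.kron(y, x); z /= np.linalg.norm(z)
    sigma = x @ unfold(A, 0) @ np.kron(z, y)
    return sigma, x, y, z

# Demo: a rank-1 tensor 2.5 * u o v o w with unit u, v, w.
u = np.array([0.6, 0.8, 0.0, 0.0])
v = np.array([1.0, 2.0, 2.0]) / 3.0
w = np.array([0.6, 0.8])
T = 2.5 * np.einsum('i,j,k->ijk', u, v, w)
sigma, x, y, z = hopm(T)
```

On a general tensor the iterates converge to a stationary point of ψ_A, which need not be the global best rank-1 approximation.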
The Higher-Order Power Method for Tensors

A Tempting Repetition

    Step 1. Compute the closest σ_1 u_1 ∘ v_1 ∘ w_1 to A.
    Step 2. Compute the closest σ_2 u_2 ∘ v_2 ∘ w_2 to A - σ_1 u_1 ∘ v_1 ∘ w_1.
    Step 3. Compute the closest σ_3 u_3 ∘ v_3 ∘ w_3 to A - σ_1 u_1 ∘ v_1 ∘ w_1 - σ_2 u_2 ∘ v_2 ∘ w_2.
    ...
    Step r. Compute the closest σ_r u_r ∘ v_r ∘ w_r to A - Σ_{k=1}^{r-1} σ_k u_k ∘ v_k ∘ w_k.

This does not render a best approximation to A as it does for matrices. So maybe we had better look more closely at sums of rank-1 tensors and rank.
The Idea of Tensor Rank
A ∈ IR^{2×2×2} as a Minimal Sum of Rank-1 Tensors

Challenge: Find the thinnest possible X, Y, Z ∈ IR^{2×r} so that

    vec(A) = ( a_{111}, a_{211}, a_{121}, a_{221}, a_{112}, a_{212}, a_{122}, a_{222} )^T = Σ_{k=1}^{r} z_k ⊗ y_k ⊗ x_k

where X = [x_1 ... x_r], Y = [y_1 ... y_r], and Z = [z_1 ... z_r] are column partitionings. The minimizing r is the tensor rank.
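The Kronecker form of the sum can be checked against the entrywise definition a_{ijk} = Σ_k x_i y_j z_k. A small sketch with an arbitrary r = 2 (variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(4)
r = 2
X = rng.standard_normal((2, r))
Y = rng.standard_normal((2, r))
Z = rng.standard_normal((2, r))

# vec(A) = sum_k z_k kron y_k kron x_k  (the x-index varies fastest)
vecA = sum(np.kron(Z[:, k], np.kron(Y[:, k], X[:, k])) for k in range(r))

# The same tensor built entrywise: a_{ijk} = sum_l X[i,l] Y[j,l] Z[k,l]
A = np.einsum('il,jl,kl->ijk', X, Y, Z)
```

Flattening `A` with its first index fastest reproduces `vecA` exactly, confirming the ordering in the challenge above.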
A ∈ IR^{2×2×2} as a Minimal Sum of Rank-1 Tensors

A Surprising Fact

If vec(A) = ( a_{111}, a_{211}, a_{121}, a_{221}, a_{112}, a_{212}, a_{122}, a_{222} )^T = randn(8,1), then

    rank(A) = 2 with probability 79%
    rank(A) = 3 with probability 21%

This is very different from the matrix case, where A = randn(n,n) implies rank(A) = n with probability 100%. A strong hint that tensor rank is decidedly more complicated than matrix rank. What are the full-rank 2-by-2-by-2 tensors?
The 79/21 Property

Connection to a Generalized Eigenvalue Problem

If the a_{ijk} are randn, then

    det( [ a_{111}  a_{121} ] - λ [ a_{112}  a_{122} ] ) = 0
         [ a_{211}  a_{221} ]     [ a_{212}  a_{222} ]

has real distinct roots with probability 79% and complex conjugate roots with probability 21%. The sum-of-rank-ones expansion for A involves the generalized eigenvectors of this problem. Yet another example of turning a tensor problem into a matrix problem.
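The quoted probabilities are easy to estimate by Monte Carlo: sample Gaussian 2-by-2-by-2 tensors and test whether the quadratic det(A₁ - λA₂) = 0 has a nonnegative discriminant. A sketch (the exact probability of real roots is known to be π/4 ≈ 0.785, which the slide rounds to 79%):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 40000
real = 0
for _ in range(trials):
    a111, a211, a121, a221, a112, a212, a122, a222 = rng.standard_normal(8)
    # det(A1 - lam*A2) = c2*lam^2 + c1*lam + c0 with
    # A1 = [[a111, a121], [a211, a221]], A2 = [[a112, a122], [a212, a222]]
    c2 = a112 * a222 - a122 * a212
    c0 = a111 * a221 - a121 * a211
    c1 = -(a111 * a222 + a112 * a221 - a121 * a212 - a122 * a211)
    real += (c1 * c1 - 4.0 * c2 * c0) >= 0.0   # real roots <=> rank 2
frac = real / trials
```

With 40000 samples the estimate lands within a percent or so of 0.785.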
Symmetry
What is a Symmetric Tensor?

The Order-3 Definition: C ∈ IR^{n×n×n} is symmetric if

    c_{ijk} = c_{ikj} = c_{jik} = c_{jki} = c_{kij} = c_{kji}

for all i, j, k. It just means that permuting the indices does not change the value.
Symmetric Tensors: C ∈ IR^{3×3×3}

There are 10 values to specify, one per equivalence class of index triples:

    (1,1,1)
    (1,1,2), (1,2,1), (2,1,1)
    (1,1,3), (1,3,1), (3,1,1)
    (2,2,2)
    (2,2,1), (2,1,2), (1,2,2)
    (2,2,3), (2,3,2), (3,2,2)
    (3,3,3)
    (3,3,1), (3,1,3), (1,3,3)
    (3,3,2), (3,2,3), (2,3,3)
    (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), (3,2,1)
Symmetric Tensors

The Modal Unfoldings are all the Same

    C_{(1)} = [ c_{111}  c_{121}  c_{131}  c_{112}  c_{122}  c_{132}  c_{113}  c_{123}  c_{133} ]
              [ c_{211}  c_{221}  c_{231}  c_{212}  c_{222}  c_{232}  c_{213}  c_{223}  c_{233} ]
              [ c_{311}  c_{321}  c_{331}  c_{312}  c_{322}  c_{332}  c_{313}  c_{323}  c_{333} ]

    C_{(2)} = [ c_{111}  c_{211}  c_{311}  c_{112}  c_{212}  c_{312}  c_{113}  c_{213}  c_{313} ]
              [ c_{121}  c_{221}  c_{321}  c_{122}  c_{222}  c_{322}  c_{123}  c_{223}  c_{323} ]
              [ c_{131}  c_{231}  c_{331}  c_{132}  c_{232}  c_{332}  c_{133}  c_{233}  c_{333} ]

    C_{(3)} = [ c_{111}  c_{211}  c_{311}  c_{121}  c_{221}  c_{321}  c_{131}  c_{231}  c_{331} ]
              [ c_{112}  c_{212}  c_{312}  c_{122}  c_{222}  c_{322}  c_{132}  c_{232}  c_{332} ]
              [ c_{113}  c_{213}  c_{313}  c_{123}  c_{223}  c_{323}  c_{133}  c_{233}  c_{333} ]

By symmetry, C_{(1)} = C_{(2)} = C_{(3)}.
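A quick numerical confirmation: symmetrize a random cube by averaging over all six index permutations, then compare the three modal unfoldings. (`unfold` is the same helper used earlier, with the earlier flattened index varying fastest; the names are mine.)

```python
import numpy as np
from itertools import permutations

def unfold(A, mode):
    """Unfolding with the earlier flattened index varying fastest."""
    return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1, order='F')

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3, 3))
# Symmetric part: average B over all 3! index permutations.
C = sum(np.transpose(B, p) for p in permutations(range(3))) / 6.0

same12 = np.allclose(unfold(C, 0), unfold(C, 1))
same13 = np.allclose(unfold(C, 0), unfold(C, 2))
```

Both comparisons come back true, exactly as the slide claims.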
Symmetric Tensors

Rank-1 Symmetric Tensors

If x ∈ IR^n, then C = x ∘ x ∘ x is a symmetric rank-1 tensor. This is obvious since c_{ijk} = x_i x_j x_k. Note that vec(x ∘ x ∘ x) = x ⊗ x ⊗ x.
Symmetric Tensors

Symmetric Rank

An order-3 symmetric tensor C has symmetric rank r if there exist x_1, ..., x_r ∈ IR^n and σ ∈ IR^r such that

    C = Σ_{k=1}^{r} σ_k x_k ∘ x_k ∘ x_k

and no shorter sum of symmetric rank-1 tensors exists. Symmetric rank is denoted by rank_S(C). Note, there may be a shorter sum of general rank-1 tensors, so that

    C = Σ_{k=1}^{r̃} σ̃_k x̃_k ∘ ỹ_k ∘ z̃_k,    r̃ < rank_S(C).
Symmetric Tensors: Interesting Aside about Rank

Fact. If C ∈ C^{n×n×...×n} is an order-d symmetric tensor, then with probability 1

    rank_S(C) = f(d,n) + 1   if (d,n) = (3,5), (4,3), (4,4), or (4,5)
    rank_S(C) = f(d,n)       otherwise

where

    f(d,n) = ceil( binom(n+d-1, d) / n ).

Symmetric tensor rank is more tractable than general tensor rank.
Rayleigh Quotient Ideas

Symmetric Matrix Eigenvalues

If C is a symmetric matrix, then the stationary values of φ_C(x) = x^T C x subject to the constraint ‖x‖_2 = 1 are the eigenvalues of C. The associated stationary vectors are eigenvectors.

Symmetric Tensor Eigenvalues

If C is a symmetric tensor, then the stationary values of

    φ_C(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{k=1}^{n} c_{ijk} x_i x_j x_k = x^T C_{(1)} (x ⊗ x)

subject to the constraint ‖x‖_2 = 1 are the eigenvalues of C. The associated stationary vectors are eigenvectors.
Rayleigh Quotients: Symmetric Tensor Case

Symmetric Higher-Order Power Method

    Initialize unit vector x.
    Repeat until happy:
        x̃ = C_{(1)}(x ⊗ x)
        x = x̃ / ‖x̃‖

Sample Convergence Result

If the order of C is even and M is a square unfolding, then the iteration converges if M is positive definite.
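A sketch of the symmetric iteration (for odd order the convergence theory is more delicate; this is just the bare recurrence, with names of my choosing). On a rank-1 symmetric tensor C = 2 v ∘ v ∘ v it locks onto v after one step:

```python
import numpy as np

def unfold(C, mode):
    """Unfolding with the earlier flattened index varying fastest."""
    return np.moveaxis(C, mode, 0).reshape(C.shape[mode], -1, order='F')

def shopm(C, iters=100, seed=0):
    """Symmetric higher-order power method for an order-3 symmetric tensor."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(C.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = unfold(C, 0) @ np.kron(x, x)
        x /= np.linalg.norm(x)
    lam = x @ unfold(C, 0) @ np.kron(x, x)
    return lam, x

# Demo: C = 2 * v o v o v with unit v, so lambda = 2 is an eigenvalue.
v = np.array([2.0, 1.0, 2.0]) / 3.0
C = 2.0 * np.einsum('i,j,k->ijk', v, v, v)
lam, x = shopm(C)
```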
The SVD - SymEig Connection

The Sym of a Matrix

    sym(A) = [ 0    A ]  ∈ IR^{(n_1+n_2)×(n_1+n_2)}
             [ A^T  0 ]

The SVD of A Relates to the EVD of sym(A)

If A = U diag(σ_i) V^T is the SVD of A ∈ IR^{n_1×n_2}, then for k = 1:rank(A)

    [ 0    A ] [  u_k ]        [  u_k ]
    [ A^T  0 ] [ ±v_k ] = ±σ_k [ ±v_k ]

where u_k = U(:,k) and v_k = V(:,k). The SVD-related power method above was essentially the traditional power method applied to finding the dominant eigenvector of sym(A).
Let us look at the analog of this for tensors. We need transposition.
Tensor Transposition: The Order-3 Case

Six possibilities... If C ∈ IR^{n_1×n_2×n_3}, then there are 3! = 6 possible transpositions, identified by the notation C^{<[i j k]>} where [i j k] is a permutation of [1 2 3]:

    B = C^{<[1 2 3]>}   means   b_{ijk} = c_{ijk}
    B = C^{<[1 3 2]>}   means   b_{ikj} = c_{ijk}
    B = C^{<[2 1 3]>}   means   b_{jik} = c_{ijk}
    B = C^{<[2 3 1]>}   means   b_{jki} = c_{ijk}
    B = C^{<[3 1 2]>}   means   b_{kij} = c_{ijk}
    B = C^{<[3 2 1]>}   means   b_{kji} = c_{ijk}

for i = 1:n_1, j = 1:n_2, k = 1:n_3.
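In array terms each transposition is just an axes permutation. A sketch with NumPy (the spot checks pick arbitrary entries):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((2, 3, 4))

# B = C^<[2 1 3]>  means  b_{jik} = c_{ijk}: swap the first two axes.
B213 = np.transpose(C, (1, 0, 2))
check1 = (B213.shape == (3, 2, 4)) and (B213[1, 0, 3] == C[0, 1, 3])

# B = C^<[3 1 2]>  means  b_{kij} = c_{ijk}.
B312 = np.transpose(C, (2, 0, 1))
check2 = (B312.shape == (4, 2, 3)) and (B312[3, 1, 2] == C[1, 2, 3])
```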
Symmetric Embedding of a Tensor: C = sym(A)

An Order-3 Example... [figure: the three slices C(:,:,1), C(:,:,2), C(:,:,3) of the block tensor sym(A), whose off-diagonal blocks hold the six transposes A^{<[123]>}, A^{<[132]>}, A^{<[213]>}, A^{<[231]>}, A^{<[312]>}, A^{<[321]>}]

Note the careful placement of A's six transposes.
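A sketch of the embedding, assuming the natural blocking with mode ranges of sizes n_1, n_2, n_3 along each axis: every entry a_{ijk} is copied into the six blocks indexed by the permutations of the three ranges, which makes C symmetric by construction. (Function names are mine, and any block scaling that a particular sym(A) convention might use is omitted.)

```python
import numpy as np
from itertools import permutations

def sym_embed(A):
    """Order-3 symmetric embedding: place all six transposes of A in the
    off-diagonal blocks of an N x N x N tensor, N = n1 + n2 + n3."""
    n1, n2, n3 = A.shape
    off = (0, n1, n1 + n2)     # start of each mode's index range
    N = n1 + n2 + n3
    C = np.zeros((N, N, N))
    for idx in np.ndindex(*A.shape):
        for p in permutations(range(3)):
            C[off[p[0]] + idx[p[0]],
              off[p[1]] + idx[p[1]],
              off[p[2]] + idx[p[2]]] = A[idx]
    return C

A = np.random.default_rng(0).standard_normal((2, 3, 4))
C = sym_embed(A)
```

Because all six block copies hold the same value, any axes permutation leaves C unchanged.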
Connecting A and sym(A) Algorithms

There are interesting connections between power methods with A and power methods with sym(A).

Analysis. Since ψ_A is trilinear, flipping the signs of an even number of factors leaves its value unchanged: if (σ, u ∘ v ∘ z) is a stationary pair for sym(A), then so are (σ, u ∘ (-v) ∘ (-z)), (σ, (-u) ∘ v ∘ (-z)), and (σ, (-u) ∘ (-v) ∘ z).
Symmetric Tensors: Interesting Aside about Rank

Interesting Possible Connection

Easy:    rank_S(sym(A)) ≤ d! · rank(A).

Equality is hard, or perhaps not true. But if it could be established, then we would have new insight into the tensor rank problem.
Optional Fun Problems

Problem E2. Suppose A ∈ IR^{m×n²} with m > n². Develop an alternating least squares solution framework for minimizing ‖A(x ⊗ y) - b‖_2 where b ∈ IR^m and x, y ∈ IR^n.

Problem A2. Same notation as E2. What is the gradient of

    φ(x, y) = (1/2) ‖A(x ⊗ y) - b‖_2² ?
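One possible shape of a solution to Problem E2, sketched under the identities x ⊗ y = (I ⊗ y)x = (x ⊗ I)y, which turn each half-step into an ordinary linear least squares problem. All names are mine, and the fixed sweep count stands in for a real stopping test:

```python
import numpy as np

def als_rank1(A, b, iters=50, seed=0):
    """Alternating least squares sketch for min ||A (x kron y) - b||_2.
    Fix y and solve for x via A (I kron y); then fix x and solve for y
    via A (x kron I). Returns iterates and the residual history, which
    is nonincreasing by construction."""
    m, n2 = A.shape
    n = int(round(np.sqrt(n2)))
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n)
    hist = []
    for _ in range(iters):
        x, *_ = np.linalg.lstsq(A @ np.kron(np.eye(n), y.reshape(-1, 1)),
                                b, rcond=None)
        y, *_ = np.linalg.lstsq(A @ np.kron(x.reshape(-1, 1), np.eye(n)),
                                b, rcond=None)
        hist.append(np.linalg.norm(A @ np.kron(x, y) - b))
    return x, y, hist

rng = np.random.default_rng(1)
n, m = 3, 12
A = rng.standard_normal((m, n * n))
b = A @ np.kron(rng.standard_normal(n), rng.standard_normal(n))  # consistent b
x, y, hist = als_rank1(A, b)
```

Each half-step solves its subproblem exactly, so the residual can never increase from sweep to sweep, though ALS is only guaranteed to find a stationary point, not the global minimizer.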
Closing Remarks
Bridging the Gap via Tensor Unfoldings

A Common Framework for Tensor Computations

    1. Turn tensor A into a matrix A.
    2. Through matrix computations, discover things about A.
    3. Draw conclusions about tensor A based on what is learned about matrix A.
More informationUNCORRECTED PROOF. Lecture Notes in Mathematics Editors-in-Chief: J.-M. Morel, Cachan. B. Teissier, Paris 4
Lecture Notes in Mathematics 13 1 Editors-in-Chief: J.-M. Morel, Cachan 3 B. Teissier, Paris 4 Advisory Board: Camillo De Lellis, Zurich 5 Mario di Bernardo, Bristol Michel Brion, Grenoble 8 Alessio Figalli,
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Rayleigh Quotient Iteration Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 10 Solving Eigenvalue Problems All
More informationDEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular
form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix
More informationMa/CS 6b Class 20: Spectral Graph Theory
Ma/CS 6b Class 20: Spectral Graph Theory By Adam Sheffer Recall: Parity of a Permutation S n the set of permutations of 1,2,, n. A permutation σ S n is even if it can be written as a composition of an
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationLecture 6. Numerical methods. Approximation of functions
Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation
More informationB553 Lecture 5: Matrix Algebra Review
B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations
More informationLinear Least Squares. Using SVD Decomposition.
Linear Least Squares. Using SVD Decomposition. Dmitriy Leykekhman Spring 2011 Goals SVD-decomposition. Solving LLS with SVD-decomposition. D. Leykekhman Linear Least Squares 1 SVD Decomposition. For any
More informationReview of some mathematical tools
MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview
More informationTensors, and differential forms - Lecture 2
Tensors, and differential forms - Lecture 2 1 Introduction The concept of a tensor is derived from considering the properties of a function under a transformation of the coordinate system. A description
More informationLECTURE 5: SURFACES IN PROJECTIVE SPACE. 1. Projective space
LECTURE 5: SURFACES IN PROJECTIVE SPACE. Projective space Definition: The n-dimensional projective space P n is the set of lines through the origin in the vector space R n+. P n may be thought of as the
More informationLatent Semantic Analysis. Hongning Wang
Latent Semantic Analysis Hongning Wang CS@UVa Recap: vector space model Represent both doc and query by concept vectors Each concept defines one dimension K concepts define a high-dimensional space Element
More information1. In this problem, if the statement is always true, circle T; otherwise, circle F.
Math 1553, Extra Practice for Midterm 3 (sections 45-65) Solutions 1 In this problem, if the statement is always true, circle T; otherwise, circle F a) T F If A is a square matrix and the homogeneous equation
More informationLINEAR ALGEBRA KNOWLEDGE SURVEY
LINEAR ALGEBRA KNOWLEDGE SURVEY Instructions: This is a Knowledge Survey. For this assignment, I am only interested in your level of confidence about your ability to do the tasks on the following pages.
More informationTensor Network Computations in Quantum Chemistry. Charles F. Van Loan Department of Computer Science Cornell University
Tensor Network Computations in Quantum Chemistry Charles F. Van Loan Department of Computer Science Cornell University Joint work with Garnet Chan, Department of Chemistry and Chemical Biology, Cornell
More informationMa/CS 6b Class 23: Eigenvalues in Regular Graphs
Ma/CS 6b Class 3: Eigenvalues in Regular Graphs By Adam Sheffer Recall: The Spectrum of a Graph Consider a graph G = V, E and let A be the adjacency matrix of G. The eigenvalues of G are the eigenvalues
More informationAlgebraic models for higher-order correlations
Algebraic models for higher-order correlations Lek-Heng Lim and Jason Morton U.C. Berkeley and Stanford Univ. December 15, 2008 L.-H. Lim & J. Morton (MSRI Workshop) Algebraic models for higher-order correlations
More informationThe QR Decomposition
The QR Decomposition We have seen one major decomposition of a matrix which is A = LU (and its variants) or more generally PA = LU for a permutation matrix P. This was valid for a square matrix and aided
More informationNIELINIOWA OPTYKA MOLEKULARNA
NIELINIOWA OPTYKA MOLEKULARNA chapter 1 by Stanisław Kielich translated by:tadeusz Bancewicz http://zon8.physd.amu.edu.pl/~tbancewi Poznan,luty 2008 ELEMENTS OF THE VECTOR AND TENSOR ANALYSIS Reference
More informationVector Spaces, Orthogonality, and Linear Least Squares
Week Vector Spaces, Orthogonality, and Linear Least Squares. Opening Remarks.. Visualizing Planes, Lines, and Solutions Consider the following system of linear equations from the opener for Week 9: χ χ
More informationMAA507, Power method, QR-method and sparse matrix representation.
,, and representation. February 11, 2014 Lecture 7: Overview, Today we will look at:.. If time: A look at representation and fill in. Why do we need numerical s? I think everyone have seen how time consuming
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More informationLecture 4 Eigenvalue problems
Lecture 4 Eigenvalue problems Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn
More informationMost tensor problems are NP hard
Most tensor problems are NP hard (preliminary report) Lek-Heng Lim University of California, Berkeley August 25, 2009 (Joint work with: Chris Hillar, MSRI) L.H. Lim (Berkeley) Most tensor problems are
More informationJim Lambers MAT 610 Summer Session Lecture 2 Notes
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the
More informationNotes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.
Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where
More informationEIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4
EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue
More information1 Number Systems and Errors 1
Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........
More informationlinearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice
3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationChapter f Linear Algebra
1. Scope of the Chapter Chapter f Linear Algebra This chapter is concerned with: (i) Matrix factorizations and transformations (ii) Solving matrix eigenvalue problems (iii) Finding determinants (iv) Solving
More informationMath 291-2: Lecture Notes Northwestern University, Winter 2016
Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,
More informationEigenvalue Problems and Singular Value Decomposition
Eigenvalue Problems and Singular Value Decomposition Sanzheng Qiao Department of Computing and Software McMaster University August, 2012 Outline 1 Eigenvalue Problems 2 Singular Value Decomposition 3 Software
More informationLinear Algebra Methods for Data Mining
Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 The Singular Value Decomposition (SVD) continued Linear Algebra Methods for Data Mining, Spring 2007, University
More informationLinear Algebra & Geometry why is linear algebra useful in computer vision?
Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia
More informationIntroduction to Applied Linear Algebra with MATLAB
Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation
More informationSingular Value Decomposition and Digital Image Compression
Singular Value Decomposition and Digital Image Compression Chris Bingham December 1, 016 Page 1 of Abstract The purpose of this document is to be a very basic introduction to the singular value decomposition
More informationStructure in Data. A major objective in data analysis is to identify interesting features or structure in the data.
Structure in Data A major objective in data analysis is to identify interesting features or structure in the data. The graphical methods are very useful in discovering structure. There are basically two
More information