Computational Multilinear Algebra


1 Computational Multilinear Algebra Charles F. Van Loan Supported in part by the NSF contract CCR

2 Involves Working With Kronecker Products

[b_11 b_12; b_21 b_22] ⊗ [c_11 c_12 c_13; c_21 c_22 c_23; c_31 c_32 c_33] =

[ b_11c_11 b_11c_12 b_11c_13 b_12c_11 b_12c_12 b_12c_13
  b_11c_21 b_11c_22 b_11c_23 b_12c_21 b_12c_22 b_12c_23
  b_11c_31 b_11c_32 b_11c_33 b_12c_31 b_12c_32 b_12c_33
  b_21c_11 b_21c_12 b_21c_13 b_22c_11 b_22c_12 b_22c_13
  b_21c_21 b_21c_22 b_21c_23 b_22c_21 b_22c_22 b_22c_23
  b_21c_31 b_21c_32 b_21c_33 b_22c_31 b_22c_32 b_22c_33 ]

= [ b_11 C  b_12 C
    b_21 C  b_22 C ]

Replicated Block Structure

3 Properties

Quite predictable:

(B ⊗ C)^T = B^T ⊗ C^T
(B ⊗ C)^(-1) = B^(-1) ⊗ C^(-1)
(B ⊗ C)(D ⊗ F) = BD ⊗ CF
B ⊗ (C ⊗ D) = (B ⊗ C) ⊗ D

Think twice:

B ⊗ C ≠ C ⊗ B
B ⊗ C = (Perfect Shuffle)(C ⊗ B)(Perfect Shuffle)^T
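These identities are easy to sanity-check numerically. A minimal MATLAB sketch (the sizes and random test matrices are my own choices, not from the talk):

% Verify the basic Kronecker product identities on random matrices.
B = rand(2,3); C = rand(4,2); D = rand(3,2); F = rand(2,5);
assert(norm(kron(B,C)' - kron(B',C')) < 1e-12)            % transpose rule
assert(norm(kron(B,C)*kron(D,F) - kron(B*D,C*F)) < 1e-12) % mixed product rule
Bs = rand(3); Cs = rand(3);                               % square, generically nonsingular
assert(norm(inv(kron(Bs,Cs)) - kron(inv(Bs),inv(Cs))) < 1e-8) % inverse rule
assert(norm(kron(B,kron(C,D)) - kron(kron(B,C),D)) < 1e-12)   % associativity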

4 The Perfect Shuffle

S_{p,q} is the pq-by-pq permutation that takes the length-pq card deck x, splits it into p piles of length q each, and then takes one card from each pile in turn until the deck is reassembled. E.g.,

S_{3,2} x = [x_1; x_3; x_5; x_2; x_4; x_6]
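In MATLAB, S_{p,q} can be realized as a row permutation of the identity. A minimal sketch (the reshape-based index construction is my own):

% Build the perfect shuffle S_{p,q}: split 1:p*q into p piles of length q,
% then take one entry from each pile in turn.
p = 3; q = 2;
idx = reshape(reshape(1:p*q, q, p)', [], 1);  % card order after the shuffle
S = eye(p*q);
S = S(idx, :);                                % S_{p,q}*x = x(idx)
x = (1:p*q)';
disp((S*x)')                                  % prints 1 3 5 2 4 6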

5 C ⊗ B = S_{m1,m2} (B ⊗ C) S_{n1,n2}^T,  B ∈ R^{m1×n1}, C ∈ R^{m2×n2}

E.g., with B 2-by-2 and C 2-by-3,

A = B ⊗ C =
[ b_11c_11 b_11c_12 b_11c_13 b_12c_11 b_12c_12 b_12c_13
  b_11c_21 b_11c_22 b_11c_23 b_12c_21 b_12c_22 b_12c_23
  b_21c_11 b_21c_12 b_21c_13 b_22c_11 b_22c_12 b_22c_13
  b_21c_21 b_21c_22 b_21c_23 b_22c_21 b_22c_22 b_22c_23 ]

S_{2,2} A S_{2,3}^T = C ⊗ B =
[ c_11b_11 c_11b_12 c_12b_11 c_12b_12 c_13b_11 c_13b_12
  c_11b_21 c_11b_22 c_12b_21 c_12b_22 c_13b_21 c_13b_22
  c_21b_11 c_21b_12 c_22b_11 c_22b_12 c_23b_11 c_23b_12
  c_21b_21 c_21b_22 c_22b_21 c_22b_22 c_23b_21 c_23b_22 ]

6 Inheriting Structure

If B and C are...          then B ⊗ C is...
nonsingular                nonsingular
lower (upper) triangular   lower (upper) triangular
banded                     block banded
symmetric                  symmetric
positive definite          positive definite
stochastic                 stochastic
Toeplitz                   block Toeplitz
permutations               a permutation
orthogonal                 orthogonal

7 Factorizations of B ⊗ C

Obvious...

B ⊗ C = (P_B^T L_B U_B) ⊗ (P_C^T L_C U_C) = (P_B ⊗ P_C)^T (L_B ⊗ L_C)(U_B ⊗ U_C)
B ⊗ C = (G_B G_B^T) ⊗ (G_C G_C^T) = (G_B ⊗ G_C)(G_B ⊗ G_C)^T
B ⊗ C = (Q_B R_B) ⊗ (Q_C R_C) = (Q_B ⊗ Q_C)(R_B ⊗ R_C)

Sort of...

B ⊗ C = (Q_B Λ_B Q_B^T) ⊗ (Q_C Λ_C Q_C^T) = (Q_B ⊗ Q_C)(Λ_B ⊗ Λ_C)(Q_B ⊗ Q_C)^T
B ⊗ C = (U_B Σ_B V_B^T) ⊗ (U_C Σ_C V_C^T) = (U_B ⊗ U_C)(Σ_B ⊗ Σ_C)(V_B ⊗ V_C)^T

Problematic...

B ⊗ C = (Q_B R_B P_B^T) ⊗ (Q_C R_C P_C^T) = (Q_B ⊗ Q_C)(R_B ⊗ R_C)(P_B ⊗ P_C)^T

8 Sort of... (the slide's matrix display did not survive transcription)

9 P = (the slide's permutation matrix display did not survive transcription)

10 Fast Factoring

Suppose B ∈ R^{m×m} and C ∈ R^{n×n} are positive definite. A = B ⊗ C is an mn-by-mn symmetric positive definite matrix. Ordinarily, a Cholesky factorization GG^T of this size would cost O(m^3 n^3) flops. But G_A = G_B ⊗ G_C, and G_B and G_C require only O(m^3 + n^3) flops.
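A minimal MATLAB check (recall that MATLAB's chol returns the upper triangular factor, so the lower triangular G is chol(A)'; the sizes and random SPD test matrices are my own):

% Cholesky of a Kronecker product is the Kronecker product of the Choleskys.
m = 4; n = 3;
B = rand(m); B = B*B' + m*eye(m);   % random symmetric positive definite
C = rand(n); C = C*C' + n*eye(n);
GB = chol(B)'; GC = chol(C)';       % lower triangular factors, O(m^3 + n^3)
GA = chol(kron(B,C))';              % the O(m^3 n^3) way
assert(norm(GA - kron(GB,GC)) < 1e-10)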

11 Fast Solving

If B, C ∈ R^{m×m}, then the m^2-by-m^2 system

(B ⊗ C) x = f,  i.e.,  C X B^T = F with x = vec(X), f = vec(F),

can be solved in O(m^3) flops:

C Y = F,  X B^T = Y

via factorizations of B and C.
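A minimal MATLAB sketch of the O(m^3) solve (random test data; backslash and slash perform the two small solves):

% Solve (B kron C)*x = f without ever forming the m^2-by-m^2 matrix.
m = 5;
B = rand(m); C = rand(m); f = rand(m^2,1);
F = reshape(f, m, m);    % f = vec(F)
Y = C \ F;               % solve C*Y = F
X = Y / B';              % solve X*B' = Y
x = X(:);                % x = vec(X)
assert(norm(kron(B,C)*x - f) < 1e-8)  % check against the big system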

12 The vec Operation

X ∈ R^{m×n}  ⇒  vec(X) = [X(:,1); X(:,2); ... ; X(:,n)]

Y = C X B^T  ⇔  vec(Y) = (B ⊗ C) vec(X)

13 Matrix Equations

Sylvester: F X + X G^T = C  ⇔  (I_n ⊗ F + G ⊗ I_m) vec(X) = vec(C)

Lyapunov: F X + X F^T = C  ⇔  (I_n ⊗ F + F ⊗ I_n) vec(X) = vec(C)

Generalized Sylvester: F_1 X G_1^T + F_2 X G_2^T = C  ⇔  (G_1 ⊗ F_1 + G_2 ⊗ F_2) vec(X) = vec(C)

These can be solved in O(n^3) time via the (generalized) Schur decomposition. See Bartels and Stewart; Golub, Nash, and Van Loan; and Gardiner, Wette, Laub, Amato, and Moler.
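The vec identity behind these equivalences can be checked directly. The sketch below solves a small Sylvester equation by the dense Kronecker route purely as an illustration; it is not the O(n^3) Schur-based method the slide cites:

% Solve F*X + X*G' = C via the Kronecker/vec formulation (small sizes only).
m = 4; n = 3;
F = rand(m); G = rand(n); C = rand(m,n);
K = kron(eye(n), F) + kron(G, eye(m));   % mn-by-mn coefficient matrix
x = K \ C(:);
X = reshape(x, m, n);
assert(norm(F*X + X*G' - C) < 1e-10)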

14 Talk Outline

How do we solve

(1) min ‖W^{1/2}((B ⊗ C)x − d)‖   (weighted least squares)
(2) U^T [B ⊗ C  d] V = Σ          (total least squares)
(3) (A_p ⊗ ... ⊗ A_1 − λI) x = b  (shifted linear systems)

given that these problems

(1') min ‖(B ⊗ C)x − d‖
(2') U^T (B ⊗ C) V = Σ
(3') (A_p ⊗ ... ⊗ A_1) x = b

are easy?

15 The Rise of the Kronecker Product

Thriving Application Areas
Sparse Factorizations for fast linear transforms
Tensoring Low-Dimension Techniques

16 Thriving application areas such as Semidefinite Programming...

Some sample problems:

(X ⊗ X + A^T D A) u = f

[ 0      A^T  I     ] [x]   [r_d]
[ A      0    0     ] [y] = [r_p]
[ Z ⊗ I  0    X ⊗ I ] [z]   [r_c]

See Alizadeh, Haeberly, and Overton (1998).

17 Symmetric Kronecker Products

For symmetric X ∈ R^{n×n} and arbitrary B, C ∈ R^{n×n} this operation is defined by

(B ⊛ C) svec(X) = (1/2) svec(C X B^T + B X C^T)

where the svec operation is a normalized stacking of the subdiagonal portions of X's columns, e.g.,

X = [x_11 x_12 x_13; x_21 x_22 x_23; x_31 x_32 x_33]

svec(X) = [x_11, √2 x_21, √2 x_31, x_22, √2 x_32, x_33]^T
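A minimal MATLAB sketch of svec and of assembling the symmetric Kronecker operator column by column (the helper construction is my own; only the defining identity comes from the slide):

% svec: normalized stacking of the subdiagonal portion of X's columns.
n = 3;
mask = tril(true(n));
w = sqrt(2)*ones(n) - (sqrt(2)-1)*eye(n);    % weight 1 on diagonal, sqrt(2) below
svec = @(X) w(mask).*X(mask);
% Assemble B (sym-kron) C by applying the identity to basis matrices Ek
% chosen so that svec(Ek) = e_k:
B = rand(n); C = rand(n);
N = n*(n+1)/2; S = zeros(N);
[I,J] = find(mask);
for k = 1:N
    Ek = zeros(n);
    if I(k) == J(k)
        Ek(I(k),I(k)) = 1;
    else
        Ek(I(k),J(k)) = 1/sqrt(2); Ek(J(k),I(k)) = 1/sqrt(2);
    end
    S(:,k) = 0.5*svec(C*Ek*B' + B*Ek*C');
end
X = rand(n); X = (X + X')/2;                 % random symmetric test matrix
assert(norm(S*svec(X) - 0.5*svec(C*X*B' + B*X*C')) < 1e-12)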

18 The Rise of the Kronecker Product (Cont'd)

Sparse Factorizations: Kronecker products are proving to be a very effective way to look at fast linear transforms.

19 A Sparse Factorization of the DFT Matrix

n = 2^t:  F_n = A_t ... A_1 P_n

P_n = S_{2,n/2} (I_2 ⊗ S_{2,n/4}) ... (I_{n/4} ⊗ S_{2,2})

A_q = I_r ⊗ [ I_{L/2}  Ω_{L/2} ; I_{L/2}  −Ω_{L/2} ],  L = 2^q,  r = n/L

Ω_{L/2} = diag(1, ω_L, ..., ω_L^{L/2−1}),  ω_L = exp(−2πi/L)
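The factorization can be verified numerically for a small n. A minimal MATLAB sketch for n = 8 (the shuffle constructor and the explicit DFT matrix are my own scaffolding around the slide's formulas):

% Verify F_n = A_t*...*A_1*P_n for n = 8 (t = 3).
n = 8; t = 3;
S = @(p,q) sparse(1:p*q, reshape(reshape(1:p*q, q, p)', 1, []), 1); % S_{p,q}
P = speye(n);                       % P_n = S_{2,n/2}(I_2 x S_{2,n/4})...
for k = 1:t-1
    P = P * kron(speye(2^(k-1)), S(2, n/2^k));
end
M = eye(n);                         % accumulate A_t*...*A_1
for q = 1:t
    L = 2^q; r = n/L;
    Om = diag(exp(-2i*pi/L).^(0:L/2-1));            % Omega_{L/2}
    M = kron(eye(r), [eye(L/2) Om; eye(L/2) -Om]) * M;
end
Fn = exp(-2i*pi/n).^((0:n-1)'*(0:n-1));             % explicit DFT matrix
assert(norm(M*P - Fn) < 1e-10)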

20 Different FFTs Correspond to Different Factorizations of F_n

The Cooley-Tukey FFT is based on y = F_n x = A_t ... A_1 P_n x:

x ← P_n x
for q = 1:t
  x ← A_q x
end
y ← x

The Gentleman-Sande FFT is based on y = F_n x = F_n^T x = P_n^T A_1^T ... A_t^T x:

for q = t:−1:1
  x ← A_q^T x
end
y ← P_n^T x

21 Matrix Transpose

If A ∈ R^{m×n} and B = A^T, then vec(B) = S_{n,m} vec(A). E.g., with

A = [a_11 a_12 a_13; a_21 a_22 a_23]

S_{3,2} vec(A) = S_{3,2} [a_11; a_21; a_12; a_22; a_13; a_23] = [a_11; a_12; a_13; a_21; a_22; a_23] = vec(A^T)

22 Multiple Pass Transpose

To compute B = A^T where A ∈ R^{m×n}, factor

S_{n,m} = Γ_t ... Γ_1

and then execute

a ← vec(A)
for k = 1:t
  a ← Γ_k a
end

Define B ∈ R^{n×m} by vec(B) = a. Different transpose algorithms correspond to different factorizations of S_{n,m}.

23 An Example

If m = pn, then

S_{n,m} = Γ_2 Γ_1 = (I_p ⊗ S_{n,n})(S_{n,p} ⊗ I_n)

The first pass b^(1) = Γ_1 vec(A) corresponds to a block transposition:

A = [A_1; A_2; A_3; A_4]  →  B^(1) = [A_1  A_2  A_3  A_4]

The second pass b^(2) = Γ_2 b^(1) carries out the transposition of the blocks:

B^(1) = [A_1  A_2  A_3  A_4]  →  B^(2) = [A_1^T  A_2^T  A_3^T  A_4^T]

Note that B^(2) = A^T.
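A minimal MATLAB check of the two-pass factorization (the shuffle constructor and sizes are my own):

% Two-pass transpose: S_{n,m} = (I_p kron S_{n,n})(S_{n,p} kron I_n), m = p*n.
S = @(p,q) sparse(1:p*q, reshape(reshape(1:p*q, q, p)', 1, []), 1);
p = 4; n = 3; m = p*n;
G1 = kron(S(n,p), speye(n));        % pass 1: block transposition
G2 = kron(speye(p), S(n,n));        % pass 2: transpose each block
assert(nnz(S(n,m) - G2*G1) == 0)    % the factorization holds exactly
A = rand(m,n);
a = G2*(G1*A(:));                   % two passes over vec(A)
assert(isequal(reshape(a,n,m), A')) % vec(B) = a gives B = A'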

24 The Rise of the Kronecker Product (Cont'd)

Tensoring Low-Dimension Techniques: greater willingness to entertain problems of high dimension.

25 Tensoring Low Dimensional Ideas

Quadrature in one and three dimensions:

∫_a^b f(x) dx ≈ Σ_{i=1}^n w_i f(x_i) = w^T f(x)

∫_{a_1}^{b_1} ∫_{a_2}^{b_2} ∫_{a_3}^{b_3} f(x,y,z) dx dy dz ≈ Σ_{i=1}^{n_x} Σ_{j=1}^{n_y} Σ_{k=1}^{n_z} w_i^(x) w_j^(y) w_k^(z) f(x_i, y_j, z_k) = (w^(x) ⊗ w^(y) ⊗ w^(z))^T f(x ⊗ y ⊗ z)
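A minimal MATLAB sketch using composite trapezoid rules in each coordinate (the integrand exp(x+y+z) over the unit cube, with exact value (e−1)^3, is my own test choice):

% 3-D quadrature as a Kronecker product of 1-D rules.
n = 41; h = 1/(n-1);
x = linspace(0,1,n); w = h*[0.5, ones(1,n-2), 0.5];  % trapezoid nodes/weights
% Grid the integrand so that z varies fastest and x slowest, matching the
% ordering of kron(wx, kron(wy, wz)):
[Z,Y,X] = ndgrid(x, x, x);          % first ndgrid index (z) is fastest in (:)
Fv = exp(X + Y + Z);
Q = kron(w, kron(w, w)) * Fv(:);
exact = (exp(1) - 1)^3;
fprintf('error = %.2e\n', abs(Q - exact))   % O(h^2) for the trapezoid rule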

26 Higher Dimensional KP Problems

1. Andre, Nowak, and Van Veen (1997). In higher-order statistics the calculations involve the cumulants x ⊗ x ⊗ x. (Note that vec(xx^T) = x ⊗ x.)

2. de Lathauwer, de Moor, and Vandewalle (2000): multilinear SVD, low rank approximation of tensors.

27 Least Squares Problems that Involve Kronecker Products

28 Some Least Squares Problems

Least squares problems of the form

min ‖(B ⊗ C)x − b‖

can be efficiently solved by computing the QR factorizations (or SVDs) of B and C. See Fausett and Fulton (1994) and Fausett, Fulton, and Hashish (1997).

Barrlund (1998) on surface fitting with splines:

min ‖(A_1 ⊗ A_2)x − f‖ such that (B_1 ⊗ B_2)x = g
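A minimal MATLAB sketch of the QR route for min ‖(B ⊗ C)x − b‖ (the reshape bookkeeping follows the vec identity from the earlier slides; sizes and data are my own):

% Solve min ||(B kron C)x - b|| via QR of the small factors.
mB = 6; nB = 4; mC = 5; nC = 3;
B = rand(mB,nB); C = rand(mC,nC); b = rand(mB*mC,1);
[QB,RB] = qr(B,0); [QC,RC] = qr(C,0);    % economy-size QR factorizations
Fmat = reshape(b, mC, mB);               % b = vec(Fmat)
G = QC' * Fmat * QB;                     % (QB kron QC)'*b = vec(G)
X = RC \ G / RB';                        % solve (RB kron RC)*vec(X) = vec(G)
x = X(:);
assert(norm(x - kron(B,C)\b) < 1e-8)     % agrees with the dense LS solution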

29 Weighted Least Squares

min ‖W^{1/2}((B ⊗ C)x − b)‖_2

Augmented system:

[ W          B ⊗ C ] [r]   [b]
[ (B ⊗ C)^T  0     ] [x] = [0]

Compute the (full) QR factorizations

B = Q_B [R_B; 0],  C = Q_C [R_C; 0]

30 The augmented system transforms to

[ E_11             E_12  R_B ⊗ R_C ] [r_1]   [b_1]
[ E_21             E_22  0         ] [r_2] = [b_2]
[ R_B^T ⊗ R_C^T    0     0         ] [x]     [0]

where

[ E_11  E_12 ]
[ E_21  E_22 ]

is a simple permutation of (Q_B ⊗ Q_C)^T W (Q_B ⊗ Q_C). Solve the E_22 system via conjugate gradients, exploiting structure.

31 When the blocks are Kronecker products...

min ‖ [B_1 ⊗ C_1; B_2 ⊗ C_2] x − b ‖_2

Compute a pair of GSVDs:

B_1 = U_1B D_1B X_B^T,  B_2 = U_2B D_2B X_B^T
C_1 = U_1C D_1C X_C^T,  C_2 = U_2C D_2C X_C^T

Then

[ B_1 ⊗ C_1 ]   [ U_1B ⊗ U_1C  0           ] [ D_1B ⊗ D_1C ]
[ B_2 ⊗ C_2 ] = [ 0            U_2B ⊗ U_2C ] [ D_2B ⊗ D_2C ] (X_B ⊗ X_C)^T

32 Total Least Squares

LS:  min_{b+r ∈ ran(A)} ‖r‖_2^2,  A x_LS = b + r_opt

TLS: min_{b+r ∈ ran(A+E)} ‖E‖_F^2 + ‖r‖_2^2,  (A + E_opt) x_TLS = b + r_opt

"Errors in Variables"

33 Total Least Squares Solution

To solve

min_{b+r ∈ ran(A+E)} ‖E‖_F^2 + ‖r‖_2^2,  (A + E_opt) x_TLS = b + r_opt

compute the SVD of [A  b] ∈ R^{m×(n+1)}:

U^T [A  b] V = Σ

and set

x_TLS = −V(1:n, n+1) / V(n+1, n+1)
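A minimal MATLAB sketch of generic TLS via the SVD (no Kronecker structure yet; the noisy test problem is my own):

% Basic total least squares: x = -V(1:n,end)/V(end,end) from the SVD of [A b].
m = 50; n = 4;
A = randn(m,n); x_true = randn(n,1);
b = A*x_true + 0.01*randn(m,1);
[~,~,V] = svd([A b], 0);
x_tls = -V(1:n, end) / V(end, end);
fprintf('||x_tls - x_true|| = %.2e\n', norm(x_tls - x_true))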

34 If A is a Kronecker Product B ⊗ C...

We need the last column of V in U^T F V = Σ where

F = [B ⊗ C  b]

First compute the SVDs of B and C:

U_B^T B V_B = Σ_B,  U_C^T C V_C = Σ_C

If Ũ = U_B ⊗ U_C and Ṽ = [V_B ⊗ V_C  0; 0  1], then

Ũ^T F Ṽ = [Σ_B ⊗ Σ_C  g] = F̃,  where g = Ũ^T b.

We need the smallest right singular vector of F̃.

35 Think Eigenvector

The right singular vectors of F̃ are the eigenvectors of

C = F̃^T F̃ = [Σ_B ⊗ Σ_C  g]^T [Σ_B ⊗ Σ_C  g] = [ D   z ; z^T  α ]

where

D = Σ_B^T Σ_B ⊗ Σ_C^T Σ_C (square and diagonal),  z = (Σ_B ⊗ Σ_C)^T g,  α = g^T g

36 Eigenvectors/Eigenvalues of Bordered Matrices

    [ d_1  0    0    0    z_1 ]
    [ 0    d_2  0    0    z_2 ]
C = [ 0    0    d_3  0    z_3 ]
    [ 0    0    0    d_4  z_4 ]
    [ z_1  z_2  z_3  z_4  α   ]

The eigenvalues of C are the zeros of

f(µ) = µ − α + z^T (D − µI)^{-1} z = µ − α + Σ_{i=1}^N z_i^2 / (d_i − µ)

We can assume that the d_i are distinct and that the z_i are nonzero. The zeros of f are separated by the d_i. It can be shown that the smallest root of f (i.e., the square of the smallest singular value of F̃) lies between 0 and the smallest d_i.

37 From the eigenvalue equation

[ D   z ] [x_1]          [x_1]
[ z^T α ] [x_2]  = µ_min [x_2]

it follows that x_1 = −(D − µ_min I)^{-1} z x_2. Thus the sought-after singular vector of F̃ is just a unit vector in the direction of

[ (D − µ_min I)^{-1} z ; −1 ]
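A minimal MATLAB sketch of this recovery: bracket the smallest root of f by bisection on (0, min d_i) and compare with a direct eigensolve (the way the test data are shaped to mimic F̃^T F̃ is my own):

% Smallest eigenpair of C = [D z; z' alpha] via the secular equation f(mu) = 0.
% Data shaped as in the TLS slides: D = S'S, z = S'g, alpha = g'g, S tall diagonal.
n = 5; m = 8;
s = 0.5 + rand(n,1);                    % diagonal of the m-by-n Sigma_B kron Sigma_C
g = randn(m,1);
d = s.^2; z = s.*g(1:n); alpha = g'*g;
f = @(mu) mu - alpha + sum(z.^2 ./ (d - mu));
lo = 0; hi = min(d)*(1 - 1e-12);        % smallest root lies in (0, min d_i)
while hi - lo > 1e-14*min(d)            % bisection; f is increasing there
    mid = (lo + hi)/2;
    if f(mid) < 0, lo = mid; else, hi = mid; end
end
mu = (lo + hi)/2;
v = [(d - mu).\z; -1]; v = v/norm(v);   % eigenvector direction from the slide
Cmat = [diag(d) z; z' alpha];           % check against a direct eigensolve
[W, L] = eig(Cmat); [lmin, i] = min(diag(L));
fprintf('mu err %.1e, vec err %.1e\n', abs(mu - lmin), norm(abs(v) - abs(W(:,i))))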

38 Overall Solution Procedure

1. Compute the SVDs of B and C: U_B^T B V_B = Σ_B, U_C^T C V_C = Σ_C.
2. Find the smallest root µ_min of f(µ) = µ − α + z^T (D − µI)^{-1} z.
3. Get the smallest singular vector w of [Σ_B ⊗ Σ_C  g].
4. Get the TLS solution from v = (V_B ⊗ V_C) w.

39 A Shifted Kronecker Product System

40 Frequency Response

Suppose we wish to evaluate

φ(µ) = c^T (A − µI)^{-1} d

for many different values of µ. It pays to precompute the real Schur decomposition

Q^T A Q = T (upper quasi-triangular)

for then

φ(µ) = c^T Q (T − µI)^{-1} Q^T d

costs only O(n^2) per evaluation.

41 Solve

(A_p ⊗ ... ⊗ A_1 − λI) x = b

First compute the real Schur decompositions

A_k = Q_k T_k Q_k^T,  k = 1:p

Set Q = Q_p ⊗ ... ⊗ Q_1. It follows that

Q^T (A_p ⊗ ... ⊗ A_1) Q = T_p ⊗ ... ⊗ T_1

and we get

(T_p ⊗ ... ⊗ T_1 − λ I_N) y = c

where y ∈ R^N and c ∈ R^N (N = n_1 ⋯ n_p) are defined by

x = (Q_p ⊗ ... ⊗ Q_1) y,  c = (Q_p ⊗ ... ⊗ Q_1)^T b.

42 Unshifted Case

The standard approach for solving

(T_p ⊗ ... ⊗ T_1) y = c

is to recognize that

T_p ⊗ ... ⊗ T_1 = Π_{i=1}^p (I_{µ_i} ⊗ T_i ⊗ I_{ρ_i}),  T_i ∈ R^{n_i×n_i}

where ρ_i = n_1 ⋯ n_{i−1} and µ_i = n_{i+1} ⋯ n_p for i = 1:p. We then obtain:

y ← c
for i = 1:p
  y ← (I_{µ_i} ⊗ T_i ⊗ I_{ρ_i})^{-1} y
end

43 The p = 2 case: (F ⊗ G − λI) x = b

[ f_11 G − λI_m  f_12 G          f_13 G          f_14 G         ] [y_1]   [c_1]
[ 0              f_22 G − λI_m   f_23 G          f_24 G         ] [y_2] = [c_2]
[ 0              0               f_33 G − λI_m   f_34 G         ] [y_3]   [c_3]
[ 0              0               0               f_44 G − λI_m  ] [y_4]   [c_4]

Solve the triangular system (f_44 G − λI_m) y_4 = c_4 for y_4. Substituting this into the above system reduces it to

[ f_11 G − λI_m  f_12 G          f_13 G         ] [y_1]   [c̃_1]
[ 0              f_22 G − λI_m   f_23 G         ] [y_2] = [c̃_2]
[ 0              0               f_33 G − λI_m  ] [y_3]   [c̃_3]

where c̃_i = c_i − f_i4 G y_4, i = 1:3.

44 It is recursive... The General Triangular Case

function y = KPShiftSolve(T, c, λ, n, α)
% n is a p-by-1 integer vector.
% T is a p-by-1 cell array where T{i} is an n_i-by-n_i upper triangular
% matrix for i = 1:p.
% Assume c ∈ R^N where N = n_1 ⋯ n_p, and that λ and α are scalars such
% that A = α(T{p} ⊗ ... ⊗ T{1}) − λ I_N is nonsingular.
% y ∈ R^N solves Ay = c.

45

p = length(n);
if p == 1
    y = (alpha*T{1} - lambda*eye(n(1))) \ c;
else
    m = prod(n(1:p-1));
    y = zeros(m*n(p), 1);
    Tsub = T{1};                      % T{p-1} kron ... kron T{1}
    for k = 2:p-1
        Tsub = kron(T{k}, Tsub);
    end
    for i = n(p):-1:1
        idx = 1+(i-1)*m : i*m;
        % diagonal block is alpha*T{p}(i,i)*Tsub - lambda*I, itself a
        % shifted Kronecker system, hence the recursion:
        y(idx) = KPShiftSolve(T, c(idx), lambda, n(1:p-1), alpha*T{p}(i,i));
        z = Tsub * y(idx);
        for j = 1:i-1
            jdx = 1+(j-1)*m : j*m;
            c(jdx) = c(jdx) - alpha*T{p}(j,i)*z;
        end
    end
end

46 Two-by-Two Bumps

[ f_11 G − λI_m  f_12 G          f_13 G          f_14 G         ] [y_1]   [c_1]
[ 0              f_22 G − λI_m   f_23 G          f_24 G         ] [y_2] = [c_2]
[ 0              0               f_33 G − λI_m   f_34 G         ] [y_3]   [c_3]
[ 0              0               f_43 G          f_44 G − λI_m  ] [y_4]   [c_4]

Solve for y_3 and y_4 at the same time. Revert to localized complex arithmetic:

Z^H [f_33 f_34; f_43 f_44] Z = [t_33 t_34; 0 t_44]

47 High Order Kronecker Products

y = (F_p ⊗ ... ⊗ F_1) x,  F_i ∈ R^{n_i×n_i}

It is convenient to make use of the factorization

F_p ⊗ ... ⊗ F_1 = M_p ... M_1  where  M_i = Π_{n_i, N/n_i}^T (I_{N/n_i} ⊗ F_i),  N = n_1 ⋯ n_p

(Π is the perfect shuffle, written S_{p,q} earlier.)

Matlab:

Z = x;
for i = 1:p
    Z = (F{i} * reshape(Z, n(i), N/n(i))).';   % apply F_i along mode i, then shuffle
end
y = reshape(Z, N, 1);
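Wrapped as a function and checked against the explicit Kronecker product, this might look as follows (a minimal sketch; the file name kpmatvec.m and the cell-array calling convention are my own):

% kpmatvec.m: y = (F{p} kron ... kron F{1})*x without forming the big matrix.
function y = kpmatvec(F, x)
    p = numel(F);
    n = cellfun(@(M) size(M,1), F);   % F{i} is n(i)-by-n(i)
    N = prod(n);
    Z = x;
    for i = 1:p
        Z = (F{i} * reshape(Z, n(i), N/n(i))).';
    end
    y = reshape(Z, N, 1);
end
% Usage:
%   F = {rand(2), rand(3), rand(4)}; x = rand(24,1);
%   norm(kpmatvec(F,x) - kron(F{3},kron(F{2},F{1}))*x)   % ~ 1e-15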

48 Conclusion

Our goal is to heighten the profile of the Kronecker product and to develop an infrastructure of methods, thereby making it easier for the numerical linear algebra community to spot Kronecker opportunities.

With precedent... The development of effective algorithms for the QR and SVD factorizations turned many A^T A problems into least squares/singular value calculations. The development of the QZ algorithm (Moler and Stewart (1973)) for the problem Ax = λBx prompted a rethink of standard eigenproblems that have the form B^{-1}Ax = λx.
