
ALGEBRAIC AND MULTILINEAR-ALGEBRAIC TECHNIQUES FOR FAST MATRIX MULTIPLICATION

by

GUY MATHIAS GOUAYA

submitted in accordance with the requirements for the degree of

MASTER OF SCIENCES

in the subject Applied Mathematics

at the

UNIVERSITY OF SOUTH AFRICA

SUPERVISOR: Prof. Y. HARDY

2015

Key terms: Matrix multiplication; multilinear algebra; discrete Fourier transform; higher order singular value decomposition; tensor rank; triple product property; Strassen algorithm; uniquely solvable puzzles; computer algebra.

Summary

This dissertation reviews the theory of fast matrix multiplication from a multilinear-algebraic point of view, as well as recent fast matrix multiplication algorithms based on discrete Fourier transforms over finite groups. To this end, the algebraic approach is described in terms of group algebras over groups satisfying the triple product property, and the construction of such groups via uniquely solvable puzzles. The higher order singular value decomposition is an important decomposition of tensors that retains some of the properties of the singular value decomposition of matrices. However, we have proven a novel negative result which demonstrates that the higher order singular value decomposition yields a matrix multiplication algorithm that is no better than the standard algorithm.

Acknowledgments

First of all, I am deeply indebted to my supervisor Professor Y. Hardy, whose stimulating motivation and valuable ideas helped me to complete this master's dissertation. Thanks for your understanding and support during this process. I would like to thank the UNISA School of Science, and particularly the Mathematical Sciences department, for offering me this platform of learning and for their support. I would also like to thank my wife Isabelle Tassi for her support and motivation during this period. Thanks for being there when it was difficult. Thanks for blessing me with my two lovely boys, who are a source of inspiration to me. Most importantly, none of this could have happened without my family's understanding and support, which they provided to me all along: particularly my mother, my aunt and my elder brother Joseph Pangop. I would like to thank my teacher colleagues Mr Johson, Mr Mngomezulu and Mrs Mahacha for their motivation, as well as all my friends and my mother-in-law for their motivation and support. Of course everything is achieved by the grace of the mighty GOD.

Contents

1 Introduction
  1.1 Introduction
  1.2 Standard algorithm
  1.3 Matrix multiplication and inner product
  1.4 Strassen's algorithm
    Winograd's algorithm
    Strassen's algorithm and large matrices
    Strassen's algorithm optimization for large matrices
    Winograd, Strassen and cutoff
  1.5 Strassen and odd sized matrices

2 Multilinear algebraic techniques
  2.1 Introduction
  2.2 Bilinear maps
    Tensor products of bilinear maps
  2.3 Rank of Matrix Multiplication
    Tensor rank and matrix multiplication
    Tensor rank and Strassen algorithm
  2.4 Higher Order Singular Value Decomposition (HOSVD)
    SVD and HOSVD
    HOSVD and Matrix multiplication
    HOSVD and naive method

3 Algebraic techniques
  3.1 Introduction
  3.2 The triple product property
    Triple product property and related properties
    Simultaneous TPP
    Uniquely solvable puzzles (USP)
  3.3 Fast group algebra multiplication
    Group algebra multiplication
    Representation theory
    Discrete Fourier transform on finite group algebras

A The higher order singular value decomposition
  A.1 HOSVD
    A.1.1 Background
    A.1.2 Multilinear Singular Value Decomposition
  A.2 VEC FORM
  A.3 Commutativity
  A.4 HOSVD and naive method

B Maxima program

List of symbols

A: matrix
det A: determinant of the matrix A
a_{i,j}: row i and column j entry of a matrix
A_{i,j}: block (i, j) of the block matrix A
A^T: transpose of the matrix A
inf X: smallest element of the set X
[1; n]: set of integers from 1 to n
log_n: logarithm of base n
A^*: complex conjugate transpose of the matrix A
A ⊗ B: Kronecker product of the matrices A and B
max(x, y): the maximum of x and y
dim(U): dimension of the vector space U
U × V: Cartesian product of the two vector spaces U and V
vec(A): vec operation applied to the matrix A
e_i: standard basis vector with 1 at row i and 0 elsewhere
E_{i,j}: standard basis matrix with 1 at row i and column j and 0 elsewhere
|S|: order of the set S
‖X‖: norm of the vector X
δ_{i,j}: Kronecker delta, which gives one if i = j and zero otherwise
A: tensor

Chapter 1

Introduction

1.1 Introduction

One of the most fundamental operations in linear algebra is the multiplication of two matrices. This problem is of great importance, since any improvement in matrix multiplication also leads to more efficient algorithms for solving a plethora of other algebra problems, such as finding the determinant of a matrix using Leverrier's method (see [7], chapter 2) or the formula

det(A) = det(A_{11}) det(A_{22} - A_{21} A_{11}^{-1} A_{12}),

finding the inverse of a matrix (see [16] for more explanation), solving a system of linear equations (the impact of matrix multiplication is clear if one uses Cramer's method), and also for some problems in graph theory [5]. Hart and Hedtke give a short history of fast matrix multiplication in [8].

Given two n × n matrices A = (a_{ij}) and B = (b_{ij}) over a field F with n ∈ N, one can find the product C = AB = (c_{ij}) defined by

c_{ij} = Σ_{k=1}^{n} a_{ik} b_{kj}.

In this dissertation, we are interested in the efficiency of matrix multiplication, i.e. how many steps or operations are required to perform matrix multiplication.

1.2 Standard algorithm

Until 1968, mathematicians were using the standard (naive) algorithm to multiply two matrices A (n × p) and B (p × m) over the field F. It is defined by

(AB)_{ij} = Σ_{k=1}^{p} a_{i,k} b_{k,j}.

This algorithm gives a total count of npm scalar multiplications and nm(p - 1) scalar additions and subtractions. Setting n = p = m, one then needs n^3 scalar multiplications and n^3 - n^2 scalar additions and subtractions [10, 9], for a total count of 2n^3 - n^2 arithmetic operations in the field F.
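For reference, a minimal Python sketch of the standard algorithm with an explicit count of field multiplications (our own illustration; the dissertation's own programs are written in Maxima, see Appendix B):

    def naive_multiply(A, B):
        """Standard matrix product, counting scalar multiplications."""
        n, p = len(A), len(A[0])
        p2, m = len(B), len(B[0])
        assert p == p2, "inner dimensions must agree"
        mults = 0
        C = [[0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                s = 0
                for k in range(p):
                    s += A[i][k] * B[k][j]
                    mults += 1
                C[i][j] = s
        return C, mults   # mults == n * p * m
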

Since multiplication can be viewed as repeated addition, one may hope to achieve efficient matrix multiplication by reducing the number of multiplications required to perform matrix multiplication. In this dissertation, we primarily consider the number of multiplications in the underlying field, since a block structured approach (see section 1.4.2) benefits most by reducing the number of multiplications.

1.3 Matrix multiplication and inner product

Let a_1, ..., a_n be the rows of A and b_1, ..., b_n the columns of B. Then (AB)_{ij} = a_i · b_j, where a_i · b_j is the usual Euclidean inner product on R^n. This expression involves n multiplications and n - 1 additions in the underlying field. In 1968, Winograd [4] showed that one can take the inner product of two vectors using fewer multiplications but more additions. Given two vectors x = (x_1, ..., x_n) and y = (y_1, ..., y_n), we define

ξ = Σ_{j=1}^{⌊n/2⌋} x_{2j-1} x_{2j},  η = Σ_{j=1}^{⌊n/2⌋} y_{2j-1} y_{2j}.

Then the inner product for even n can be found by

x · y = Σ_{j=1}^{n/2} (x_{2j-1} + y_{2j})(x_{2j} + y_{2j-1}) - ξ - η,

and for odd n by

x · y = Σ_{j=1}^{⌊n/2⌋} (x_{2j-1} + y_{2j})(x_{2j} + y_{2j-1}) - ξ - η + x_n y_n.

If ξ and η are precomputed for each vector, the total number of multiplications per inner product reduces to ⌊(n+1)/2⌋, and the total number of additions and subtractions per inner product is approximately 3n/2 + 1. Notice that, for the matrix product AB, we compute one ξ for each row of A and one η for each column of B. Hence, to perform the multiplication of two n × n matrices using this inner product formulation, the total number of multiplications is

⌊(n+1)/2⌋ n^2 + 2n ⌊n/2⌋ ≈ n^3/2 + n^2,

and the total number of additions required is approximately

n^2 (3n/2 + 1) + 2n (⌊n/2⌋ - 1) ≈ 3n^3/2.

This gives us a slight improvement on the standard algorithm: roughly half the multiplications, at the cost of about 50% more additions.
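A minimal Python sketch of Winograd's inner-product trick (our own illustration; in a full matrix product, ξ and η would be precomputed once per row and per column):

    def winograd_inner(x, y):
        """Winograd's inner-product trick: about n/2 multiplications
        once xi and eta are precomputed per vector."""
        n = len(x)
        half = n // 2
        xi  = sum(x[2*j] * x[2*j + 1] for j in range(half))
        eta = sum(y[2*j] * y[2*j + 1] for j in range(half))
        s = sum((x[2*j] + y[2*j + 1]) * (x[2*j + 1] + y[2*j]) for j in range(half))
        s -= xi + eta
        if n % 2:            # odd length: one leftover product
            s += x[-1] * y[-1]
        return s
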

We denote by M_F(n) the minimum number of multiplications required to multiply two n × n matrices over the field F.

1.4 Strassen's algorithm

Definition 1. We define the exponent of matrix multiplication over the field F, denoted ω(F), as

ω(F) = inf{ τ ∈ R : M_F(n) = O(n^τ) }.

From the standard algorithm we have that ω(F) ≤ 3, and from the same algorithm it is also clear that there is a total number of n^2 output entries, hence we cannot have fewer than n^2 multiplications. This leads us to the following theorem.

Theorem 1. The exponent of matrix multiplication is bounded by 2 ≤ ω(F) ≤ 3.

Strassen introduced in 1969 an algorithm (stated there for square matrices) for multiplying matrices, which is based on a clever way of multiplying two 2 × 2 matrices using 7 multiplications and 18 additions and subtractions. Let A = (a_{jk}) and B = (b_{jk}) be two 2 × 2 matrices over a field F and let their product be AB = (c_{jk}), where a_{jk}, b_{jk}, c_{jk} ∈ F with j, k ∈ {1, 2}. Strassen's algorithm is implemented as follows [16]:

I   = (a_{11} + a_{22})(b_{11} + b_{22}),
II  = (a_{21} + a_{22}) b_{11},
III = a_{11} (b_{12} - b_{22}),
IV  = a_{22} (-b_{11} + b_{21}),
V   = (a_{11} + a_{12}) b_{22},
VI  = (-a_{11} + a_{21})(b_{11} + b_{12}),
VII = (a_{12} - a_{22})(b_{21} + b_{22}),

c_{11} = I + IV - V + VII,
c_{21} = II + IV,
c_{12} = III + V,
c_{22} = I + III - II + VI.

Winograd's algorithm

Although the asymptotic complexity does not depend on the number of additions and subtractions, it is of practical significance if one can reduce the number of additions and subtractions for matrix multiplication. Winograd improved Strassen's algorithm by using only 15 additions/subtractions instead of 18, while retaining 7 multiplications. The implementation of the algorithm is as follows. Let A and B be two 2 × 2 matrices over a field F as above. We first compute:

A_1 = a_{11} - a_{21},  B_1 = b_{22} - b_{12},
A_2 = a_{22} - A_1,     B_2 = b_{11} + B_1.

Secondly we compute:

P_1 = a_{11} b_{11},
P_2 = a_{12} b_{21},
P_3 = A_2 B_2,
P_4 = (a_{21} + a_{22})(b_{12} - b_{11}),
P_5 = A_1 B_1,
P_6 = (a_{12} - A_2) b_{22},
P_7 = a_{22} (b_{21} - B_2).

Setting U_1 = P_1 + P_3 and U_2 = U_1 + P_5, we finally have:

c_{11} = P_1 + P_2,
c_{12} = U_1 + P_4 + P_6,
c_{21} = U_2 + P_7,
c_{22} = U_2 + P_4.

This algorithm is often used in practice, and it is called the Strassen-Winograd algorithm.

Strassen's algorithm and large matrices

Since Strassen's construction does not depend on the commutativity of the component multiplications, one can apply it to block matrices to recursively implement an algorithm for n × n matrices with O(n^{log_2 7}) ≈ O(n^{2.81}) multiplications. Given two m × m matrices A and B with m even, we can divide each matrix into blocks as follows:

A = ( A_{11} A_{12} ; A_{21} A_{22} ),  B = ( B_{11} B_{12} ; B_{21} B_{22} ),

where each A_{ij} and B_{ij} (i, j = 1, 2) is an (m/2) × (m/2) matrix. Thus, to multiply A and B, one can perform one level of Strassen's algorithm on the 2 × 2 block matrices. This gives us a total operation count of

7 [ 2(m/2)^3 - (m/2)^2 ] + 18 (m/2)^2 = (7m^3 + 11m^2)/4.

Hence the ratio of this operation count to that required by the standard algorithm alone is

(7m^3 + 11m^2) / (8m^3 - 4m^2),

which approaches 7/8 for large m. This gives us a 12.5% improvement over the regular matrix multiplication for sufficiently large matrices. One should notice that, although the Strassen-Winograd algorithm does improve on the number of additions, it does not significantly change this ratio for large m.

Strassen's algorithm optimization for large matrices

The question is: is Strassen's algorithm always optimal? The answer is no. For n > 12 one should use the Strassen-Winograd algorithm, and use the standard algorithm for n ≤ 12. First notice that one level of recursion of Strassen's algorithm can easily be applied to rectangular matrices, provided that all the matrix dimensions are even. Secondly, we should bear in mind that we do not have to carry the recursion to the scalar level. Let G(m, n) be the cost of adding or subtracting two m × n matrices, and M(m, k, n) the cost of multiplying an m × k matrix by a k × n matrix using the standard matrix multiplication algorithm. Then the cost W(m, k, n) of the Strassen-Winograd algorithm to multiply an m × k and a k × n matrix, provided that m, k, n are all even, is:

W(m, k, n) = M(m, k, n)  if (m, k, n) satisfies the cutoff criterion,
W(m, k, n) = 7 W(m/2, k/2, n/2) + 4 G(m/2, k/2) + 4 G(k/2, n/2) + 7 G(m/2, n/2)  otherwise.
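For illustration, a compact recursive Python (NumPy) sketch of this recurrence (our own code; the cutoff value 12 is justified in the analysis that follows, and odd dimensions simply fall back to the standard product here):

    import numpy as np

    CUTOFF = 12   # below this size the standard algorithm wins (see below)

    def strassen_winograd(A, B):
        """Strassen-Winograd recursion for even-dimensioned matrices,
        falling back to the standard product at the cutoff."""
        m, k = A.shape
        _, n = B.shape
        if min(m, k, n) <= CUTOFF or m % 2 or k % 2 or n % 2:
            return A @ B
        A11, A12 = A[:m//2, :k//2], A[:m//2, k//2:]
        A21, A22 = A[m//2:, :k//2], A[m//2:, k//2:]
        B11, B12 = B[:k//2, :n//2], B[:k//2, n//2:]
        B21, B22 = B[k//2:, :n//2], B[k//2:, n//2:]
        S1, T1 = A11 - A21, B22 - B12    # A_1, B_1 in the text
        S2, T2 = A22 - S1, B11 + T1      # A_2, B_2 in the text
        P1 = strassen_winograd(A11, B11)
        P2 = strassen_winograd(A12, B21)
        P3 = strassen_winograd(S2, T2)
        P4 = strassen_winograd(A21 + A22, B12 - B11)
        P5 = strassen_winograd(S1, T1)
        P6 = strassen_winograd(A12 - S2, B22)
        P7 = strassen_winograd(A22, B21 - T2)
        U1 = P1 + P3
        U2 = U1 + P5
        return np.block([[P1 + P2, U1 + P4 + P6],
                         [U2 + P7, U2 + P4]])
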

We now write M(m, k, n) = 2mkn - mn and G(m, n) = mn. If we apply one level of recursion of the Strassen-Winograd algorithm to a pair of m × k and k × n matrices, assuming that this recursion is at the cutoff criterion, then one has

7 M(m/2, k/2, n/2) + 4 G(m/2, k/2) + 4 G(k/2, n/2) + 7 G(m/2, n/2)
  = 7 (mkn/4 - mn/4) + mk + kn + 7mn/4
  = (7/4) mkn + mk + kn.

This is at most M(m, k, n) = 2mkn - mn precisely when mk + kn + mn ≤ mkn/4, i.e. when

4 (1/m + 1/n + 1/k) ≤ 1.

If the matrices are square with m = n = k, this becomes 12/n ≤ 1. Clearly, for n ≤ 12 one should use the standard algorithm; thus the cutoff criterion is whether the matrix size is less than or equal to 12.

Winograd, Strassen and cutoff

Let m = 2^p m', n = 2^p n', k = 2^p k', and let w_p = W(2^p m', 2^p k', 2^p n'). Then, using p recursive steps of the Strassen-Winograd algorithm and using the standard algorithm to multiply the remaining m' × k' and k' × n' matrices, one has

w_p = 7 w_{p-1} + 4^{p-1} (4 m'k' + 4 k'n' + 7 m'n').

Since w_0 = 2m'k'n' - m'n' and

Σ_{j=1}^{p} 7^{p-j} 4^{j-1} = (7^p - 4^p)/3,

we find

W(2^p m', 2^p k', 2^p n') = 7^p (2m'k'n' - m'n') + (7^p - 4^p)(4m'k' + 4k'n' + 7m'n')/3.

Thus, if we are dealing with square matrices (m' = k' = n'), this equation reduces to

W(2^p n', 2^p n', 2^p n') = 7^p (2n'^3 - n'^2) + 5n'^2 (7^p - 4^p).   (1.1)

If we are using Strassen's original algorithm (18 additions instead of 15) we find that

7^p (2n'^3 - n'^2) + 6n'^2 (7^p - 4^p)   (1.2)

operations are required. One can see that the Strassen-Winograd algorithm involves fewer operations than Strassen's algorithm, the difference being n'^2 (7^p - 4^p). The ratio of equation (1.2) to equation (1.1) is

(2n' + 5 - 6(4/7)^p) / (2n' + 4 - 5(4/7)^p).

If p is extremely large, the ratio converges to

(2n' + 5) / (2n' + 4).

For large square matrices, the improvement of Strassen-Winograd over Strassen's original algorithm is 14.3% when full recursion is used (n' = 1), and between 5.26% and 3.45% as n' ranges between 7 and 12 [10]. Computing the ratio of the operation counts for the Strassen-Winograd equation on square matrices without cutoff (n = 256, hence p = 8, n' = 1) to that with cutoff 12 (p = 5, n' = 8), we obtain a 38.2% improvement using cutoffs.

1.5 Strassen and odd sized matrices

Here we explore how one can use Strassen's algorithm for square matrices with an odd number of rows and/or columns. Since Strassen's algorithm was originally designed for 2 × 2 matrices, and it can be used recursively for large matrices provided that their dimensions are even, we need to consider the case of matrices with an odd number of rows and/or columns. One must apply some method to make the dimensions even, then apply Strassen's algorithm to the modified matrices, and then use the reverse method to recover the final result. Originally, Strassen suggested padding the input matrices with extra rows and columns of zeros, so that the dimensions of all the matrices encountered during the recursive calls are even; afterwards one deletes the extra rows and columns to obtain the required matrix. This method is called static padding; the name comes from the fact that the padding occurs before any recursive use of Strassen's algorithm. We illustrate this for the case n = 3. Given two 3 × 3 matrices A = (a_{ij}) and B = (b_{ij}), padding each matrix with one row and one column of zeros gives

Ā = [ a_{11} a_{12} a_{13} 0
      a_{21} a_{22} a_{23} 0
      a_{31} a_{32} a_{33} 0
      0      0      0      0 ],

B̄ = [ b_{11} b_{12} b_{13} 0
      b_{21} b_{22} b_{23} 0
      b_{31} b_{32} b_{33} 0
      0      0      0      0 ].

We divide Ā and B̄ into four 2 × 2 blocks:

Ā_1 = ( a_{11} a_{12} ; a_{21} a_{22} ),  Ā_2 = ( a_{13} 0 ; a_{23} 0 ),
Ā_3 = ( a_{31} a_{32} ; 0 0 ),            Ā_4 = ( a_{33} 0 ; 0 0 ),

and similarly B̄_1, B̄_2, B̄_3, B̄_4. Now define the product Ā B̄ = C̄, computed blockwise (e.g. with one level of Strassen's algorithm). One has

C̄_1 = Ā_1 B̄_1 + Ā_2 B̄_3
    = ( a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31}   a_{11}b_{12} + a_{12}b_{22} + a_{13}b_{32} ;
        a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31}   a_{21}b_{12} + a_{22}b_{22} + a_{23}b_{32} ),

C̄_2 = Ā_1 B̄_2 + Ā_2 B̄_4
    = ( a_{11}b_{13} + a_{12}b_{23} + a_{13}b_{33}   0 ;
        a_{21}b_{13} + a_{22}b_{23} + a_{23}b_{33}   0 ),

C̄_3 = Ā_3 B̄_1 + Ā_4 B̄_3
    = ( a_{31}b_{11} + a_{32}b_{21} + a_{33}b_{31}   a_{31}b_{12} + a_{32}b_{22} + a_{33}b_{32} ;
        0   0 ).

Thus we have

C̄_4 = Ā_3 B̄_2 + Ā_4 B̄_4 = ( a_{31}b_{13} + a_{32}b_{23} + a_{33}b_{33}   0 ; 0   0 ),

so that

Ā B̄ = [ a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31}   a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32}   a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33}   0
        a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31}   a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32}   a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}   0
        a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31}   a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32}   a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33}   0
        0   0   0   0 ].

One can see that by deleting the fourth row and the fourth column of Ā B̄ we obtain the 3 × 3 matrix C = AB.

The other method is called dynamic padding. Here we do not add the zero rows and columns at the beginning, but at each level of the recursion where the number of rows or columns is odd. The advantage is that this does not require a lot of memory; the price is that we have to delete these rows or columns at the end of each such recursion to avoid an incorrect result at the end of the computation.

Another method is called dynamic peeling, described as follows. Given an m × k matrix A and a k × n matrix B, with m, k, n all odd, we divide A and B into block matrices

A = ( A_{11} A_{12} ; A_{21} a_{22} ),  B = ( B_{11} B_{12} ; B_{21} b_{22} ),

where A_{11} is an (m-1) × (k-1) matrix, A_{12} is (m-1) × 1, A_{21} is 1 × (k-1), a_{22} is 1 × 1, and B_{11} is a (k-1) × (n-1) matrix, B_{12} is (k-1) × 1, B_{21} is 1 × (n-1), b_{22} is 1 × 1. Hence the product

C = AB = ( c_{11} c_{12} ; c_{21} c_{22} )

is computed by

c_{11} = A_{11} B_{11} + A_{12} B_{21},
c_{12} = A_{11} B_{12} + A_{12} b_{22},
c_{21} = A_{21} B_{11} + a_{22} B_{21},
c_{22} = A_{21} B_{12} + a_{22} b_{22},

where A_{11} B_{11} is computed using Strassen's algorithm and the other computations are done with the standard algorithm.
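A Python sketch of dynamic peeling (our own illustration; `strassen_winograd` refers to the sketch given in section 1.4.3, and this version peels whenever any dimension is odd):

    import numpy as np

    def multiply_dynamic_peeling(A, B):
        """Peel off the last row/column when a dimension is odd, so the
        Strassen recursion only ever sees even-dimensioned blocks."""
        m, k = A.shape
        _, n = B.shape
        if m % 2 == 0 and k % 2 == 0 and n % 2 == 0:
            return strassen_winograd(A, B)
        A11, A12 = A[:m-1, :k-1], A[:m-1, k-1:]
        A21, a22 = A[m-1:, :k-1], A[m-1:, k-1:]
        B11, B12 = B[:k-1, :n-1], B[:k-1, n-1:]
        B21, b22 = B[k-1:, :n-1], B[k-1:, n-1:]
        C = np.empty((m, n))
        C[:m-1, :n-1] = multiply_dynamic_peeling(A11, B11) + A12 @ B21
        C[:m-1, n-1:] = A11 @ B12 + A12 @ b22
        C[m-1:, :n-1] = A21 @ B11 + a22 * B21
        C[m-1:, n-1:] = A21 @ B12 + a22 * b22
        return C
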

Chapter 2

Multilinear algebraic techniques

2.1 Introduction

In this chapter we investigate some properties of the algebraic complexity of matrix multiplication, since the search for fast matrix multiplication algorithms is closely related to problems in algebraic complexity theory. The singular value decomposition plays a central role in the low rank representation of matrices. The higher order singular value decomposition retains many of the properties of the singular value decomposition for higher order tensors. Surprisingly, we show that the higher order singular value decomposition yields an algorithm which is no more efficient than the standard algorithm.

2.2 Bilinear maps

Definition 2. Let U, V and W be three vector spaces over a field K. A map f : U × V → W is said to be bilinear iff for any u, u_1, u_2 ∈ U, v, v_1, v_2 ∈ V and λ ∈ K:

f(u_1 + λu_2, v) = f(u_1, v) + λ f(u_2, v),
f(u, v_1 + λv_2) = f(u, v_1) + λ f(u, v_2).

Definition 3. Denote by Bil(U, V; W) the vector space of all bilinear maps f : U × V → W.

Definition 4. Given any vector space V over a field K, the dual space V* is the set of all linear maps ϕ : V → K. Note that the dual space V* is itself a vector space over K when equipped with the following addition and scalar multiplication:

(ϕ + ψ)(x) := ϕ(x) + ψ(x),  (kϕ)(x) := k ϕ(x).

The map K^{n×p} × K^{p×m} → K^{n×m} describing multiplication of n × p by p × m matrices over K is such a bilinear map, which we denote by ⟨n, p, m⟩_K. Here the integers n, p, m are the components of the map.

Definition 5. [6] The rank of a bilinear map f : U × V → W is the smallest integer r such that

f(u, v) = Σ_{i=1}^{r} x_i(u) y_i(v) z_i

with x_i ∈ U*, y_i ∈ V* and z_i ∈ W for i = 1, ..., r. We write R(f) = r. In terms of tensor products we write

F = Σ_{i=1}^{r} x_i ⊗ y_i ⊗ z_i ∈ U* ⊗ V* ⊗ W,

so that F(u ⊗ v ⊗ I) = f(u, v), where I is the identity of W.
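A classical example illustrating Definition 5 (added here for concreteness; it is not from the dissertation): complex multiplication, viewed as the bilinear map f : R^2 × R^2 → R^2, f((a,b),(c,d)) = (ac - bd, ad + bc), has rank at most 3 by the Gauss/Karatsuba identity:

    % Three products suffice instead of the naive four:
    %   m_1 = ac,  m_2 = bd,  m_3 = (a+b)(c+d),
    %   ac - bd = m_1 - m_2,   ad + bc = m_3 - m_1 - m_2,
    \[
      f\bigl((a,b),(c,d)\bigr)
        = \underbrace{(m_1 - m_2)}_{ac-bd}\, e_1
        + \underbrace{(m_3 - m_1 - m_2)}_{ad+bc}\, e_2 ,
    \]
    % so R(f) <= 3.
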

Tensor products of bilinear maps

Consider two bilinear maps f_i : U_i × V_i → W_i, i = 1, 2, where the U_i, V_i, W_i are finite dimensional K-vector spaces. Because of the canonical isomorphisms

Bil(U_1, V_1; W_1) ⊗ Bil(U_2, V_2; W_2)
  ≅ (U_1* ⊗ V_1* ⊗ W_1) ⊗ (U_2* ⊗ V_2* ⊗ W_2)
  ≅ (U_1* ⊗ U_2*) ⊗ (V_1* ⊗ V_2*) ⊗ (W_1 ⊗ W_2)
  ≅ (U_1 ⊗ U_2)* ⊗ (V_1 ⊗ V_2)* ⊗ (W_1 ⊗ W_2)
  ≅ Bil(U_1 ⊗ U_2, V_1 ⊗ V_2; W_1 ⊗ W_2),

one has f_1 ⊗ f_2 ∈ Bil(U_1 ⊗ U_2, V_1 ⊗ V_2; W_1 ⊗ W_2) as a bilinear mapping

f_1 ⊗ f_2 : (U_1 ⊗ U_2) × (V_1 ⊗ V_2) → W_1 ⊗ W_2.

We uniquely determine this bilinear mapping by

(f_1 ⊗ f_2)(x_1 ⊗ x_2, y_1 ⊗ y_2) = f_1(x_1, y_1) ⊗ f_2(x_2, y_2).

f_1 ⊗ f_2 is called the tensor product of f_1 and f_2 [6, p. 41]. Recalling the definition of the tensor of a bilinear mapping, if

f_i(x_i, y_i) = Σ_{ρ=1}^{R(f_i)} u_ρ^{(i)}(x_i) v_ρ^{(i)}(y_i) w_ρ^{(i)},  i = 1, 2,

then

Σ_{ρ=1}^{R(f_1)} Σ_{σ=1}^{R(f_2)} u_ρ^{(1)}(x_1) u_σ^{(2)}(x_2) v_ρ^{(1)}(y_1) v_σ^{(2)}(y_2) (w_ρ^{(1)} ⊗ w_σ^{(2)}) = f_1(x_1, y_1) ⊗ f_2(x_2, y_2) = (f_1 ⊗ f_2)(x_1 ⊗ x_2, y_1 ⊗ y_2).

This leads us to the following proposition.

Proposition 1. R(f_1 ⊗ f_2) ≤ R(f_1) R(f_2).

Definition 6. A bilinear map f : U × V → W is concise if the following three conditions are satisfied:

1. the left kernel {u ∈ U : f(u, v) = 0 for all v ∈ V} = {0};
2. the right kernel {v ∈ V : f(u, v) = 0 for all u ∈ U} = {0};
3. span{f(u, v) : u ∈ U, v ∈ V} = W.

Lemma 1. Matrix multiplication defines a concise bilinear map.

2.3 Rank of Matrix Multiplication

Recall that the rank of a bilinear map f : U × V → W which implements matrix multiplication, denoted R(f), is the minimum number of field multiplications required to perform matrix multiplication (see the definition in the previous section). In this section, we will investigate some properties of the rank of a bilinear map and apply them to matrix multiplication.

Proposition 2. If a bilinear map f : U × V → W is concise, then R(f) ≥ max(dim(U), dim(V), dim(W)).

Proof. By contraposition: suppose f(u, v) = Σ_{i=1}^{R(f)} x_i(u) y_i(v) z_i, where x_i ∈ U*, y_i ∈ V* and z_i ∈ W. If R(f) < dim(U), then {x_1, x_2, ..., x_{R(f)}} does not span U*. Hence there exists u ≠ 0 such that x_i(u) = 0 for all x_i. Hence the left kernel of f is non-zero. This contradicts the first condition of conciseness. Using a similar argument with V, one can prove R(f) ≥ dim(V).

Suppose R(f) < dim(W). Then the dimension of the span of the image of f is less than the dimension of the space W. This contradicts the third condition of the definition of conciseness. □

Using this proposition and Lemma 1, R(⟨n, p, m⟩) ≥ max(np, pm, nm). If n = p = m, we then have R(f) ≥ n^2.

Definition 7. Given two bilinear maps f ∈ Bil(U, V; W) and f' ∈ Bil(U', V'; W') over the field K, we say f is a restriction of f', and we write f ≤_K f', if there exist linear maps α : U → U', β : V → V' and θ : W' → W such that f(u, v) = θ(f'(α(u), β(v))) for all (u, v) ∈ U × V. This leads us to the following proposition.

Proposition 3. Given f ∈ Bil(U, V; W) and f' ∈ Bil(U', V'; W'): f ≤_K f' implies R(f) ≤ R(f').

Proof. Let R(f) = r and R(f') = r'. Suppose f ≤_K f'. By definition there are linear maps α : U → U', β : V → V' and θ : W' → W such that f(u, v) = θ(f'(α(u), β(v))) for all (u, v) ∈ U × V. Let

f'(u', v') = Σ_{j=1}^{r'} f'_j(u') g'_j(v') w'_j

be a minimal bilinear computation for f', where f'_j ∈ U'*, g'_j ∈ V'*, w'_j ∈ W' (1 ≤ j ≤ r'). Then

f(u, v) = θ(f'(α(u), β(v))) = Σ_{j=1}^{r'} f'_j(α(u)) g'_j(β(v)) θ(w'_j),

which expresses f as a bilinear computation with r' terms, with functionals f'_j ∘ α ∈ U* and g'_j ∘ β ∈ V*. Since r is the minimum number of terms over all such expressions for f, we conclude r ≤ r'. □

Definition 8. [1, page 355] Two bilinear maps f ∈ Bil(U, V; W) and f' ∈ Bil(U', V'; W') are isomorphic if there exist isomorphisms α : U → U', β : V → V' and θ : W → W' such that θ ∘ f = f' ∘ (α × β).

Finally we have the following proposition.

Proposition 4. For any isomorphic bilinear maps f ∈ Bil(U, V; W) and f' ∈ Bil(U', V'; W'): f ≅ f' implies f ≤ f' and f' ≤ f, i.e. R(f) = R(f').
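A worked consequence of Propositions 1 and 3 (our own illustration, using the standard fact ⟨n, p, m⟩ ⊗ ⟨n', p', m'⟩ ≅ ⟨nn', pp', mm'⟩): Strassen's algorithm of chapter 1 gives R(⟨2, 2, 2⟩) ≤ 7, and this bound propagates to all powers:

    % Proposition 1 applied k times to <2,2,2>:
    \[
      R(\langle 2^k, 2^k, 2^k \rangle)
        = R(\langle 2,2,2\rangle^{\otimes k})
        \le R(\langle 2,2,2\rangle)^{k}
        \le 7^{k},
    \]
    % so M_F(2^k) = O(7^k) = O((2^k)^{\log_2 7}), and hence
    \[
      \omega(F) \le \log_2 7 \approx 2.81 .
    \]
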

Tensor rank and matrix multiplication

Let n ∈ N. Consider the vector space M_n(F) of n × n matrices over the field F. Let {B_1, ..., B_{n^2}} be a basis for this space. Now let

A = Σ_{j=1}^{n^2} a_j B_j ∈ M_n(F),  a_1, ..., a_{n^2} ∈ F,

and

B = Σ_{j=1}^{n^2} b_j B_j ∈ M_n(F),  b_1, ..., b_{n^2} ∈ F.

Then

AB = Σ_{j,k=1}^{n^2} a_j b_k B_j B_k.

Now let

B_j B_k = Σ_{l=1}^{n^2} c_{jkl} B_l,

where c_{jk1}, ..., c_{jkn^2} ∈ F are fixed constants (the structure constants of the algebra). It follows that

AB = Σ_{j,k,l=1}^{n^2} a_j b_k c_{jkl} B_l,

which can be written in the form

AB = Σ_{l=1}^{n^2} ([A]^T C_l [B]) B_l = Σ_{l=1}^{n^2} (([A]^T ⊗ [B]^T) vec(C_l)) B_l,

where [A] = (a_1, ..., a_{n^2})^T and [B] = (b_1, ..., b_{n^2})^T are the coordinate representations of A and B respectively, and (C_l)_{jk} := c_{jkl} are n^2 × n^2 matrices. Now write

C := Σ_{l=1}^{n^2} e_{l,n^2} ⊗ vec(C_l),  B̃ := Σ_{l=1}^{n^2} e_{l,n^2} ⊗ B_l,

and decompose

C = Σ_{j=1}^{r} x_j ⊗ C_{A,j} ⊗ C_{B,j},

where r is minimal, C_{A,j} and C_{B,j} are n^2 × 1 matrices and the x_j are 1 × n^2 matrices (so that C is viewed as a third order tensor). Thus r is the minimum number of products in F required to perform the matrix multiplication in terms of a_1, ..., a_{n^2} and b_1, ..., b_{n^2}. Note that r is independent of the choice of the basis. One can see that r is the minimum number of products required in F to perform the matrix multiplication; thus the complexity (counting products in F) of matrix multiplication is closely tied to tensor rank and related problems in multilinear algebra. For the standard basis we find

C_l = Σ_{k=1}^{n} E_{i,k} ⊗ E_{k,j},  where l = (i - 1)n + j.
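A small NumPy sketch of this construction for the standard basis (our own illustration; `mm_tensor` is a name we introduce, with the convention T[a, b, c] = 1 when the a-th coordinate of A times the b-th coordinate of B contributes to the c-th coordinate of AB, all in row-major order). It also verifies, anticipating the next subsection, that the seven Strassen products of section 1.4 form a rank-7 decomposition for n = 2:

    import numpy as np

    def mm_tensor(n):
        """Third-order matrix multiplication tensor <n,n,n> in the
        standard basis E_{i,j}, flattened row-major: index(i,j) = i*n + j."""
        T = np.zeros((n*n, n*n, n*n))
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    # a_{ik} * b_{kj} contributes to c_{ij}
                    T[i*n + k, k*n + j, i*n + j] = 1
        return T

    # sanity check: contracting T with vec(A), vec(B) gives vec(AB)
    n = 3
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    vecAB = np.einsum('abc,a,b->c', mm_tensor(n), A.ravel(), B.ravel())
    assert np.allclose(vecAB.reshape(n, n), A @ B)

    # Strassen's 7 products as a rank-7 decomposition of mm_tensor(2);
    # rows: coefficients on (a11,a12,a21,a22), (b11,b12,b21,b22), (c11,c12,c21,c22)
    X = np.array([[1,0,0,1],[0,0,1,1],[1,0,0,0],[0,0,0,1],
                  [1,1,0,0],[-1,0,1,0],[0,1,0,-1]])
    Y = np.array([[1,0,0,1],[1,0,0,0],[0,1,0,-1],[-1,0,1,0],
                  [0,0,0,1],[1,1,0,0],[0,0,1,1]])
    Z = np.array([[1,0,0,1],[0,0,1,-1],[0,1,0,1],[1,0,1,0],
                  [-1,1,0,0],[0,0,0,1],[1,0,0,0]])
    T7 = sum(np.einsum('a,b,c->abc', x, y, z) for x, y, z in zip(X, Y, Z))
    assert np.array_equal(T7, mm_tensor(2))   # rank <= 7 for <2,2,2>
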

Tensor rank and Strassen algorithm

For n = 2 with the standard basis, we find for Strassen's algorithm:

p_1([A], [B]) = (a_{11} + a_{22})(b_{11} + b_{22}),
p_2([A], [B]) = (a_{21} + a_{22}) b_{11},
p_3([A], [B]) = a_{11} (b_{12} - b_{22}),
p_4([A], [B]) = a_{22} (-b_{11} + b_{21}),
p_5([A], [B]) = (a_{11} + a_{12}) b_{22},
p_6([A], [B]) = (-a_{11} + a_{21})(b_{11} + b_{12}),
p_7([A], [B]) = (a_{12} - a_{22})(b_{21} + b_{22}),

which provides the 7 multiplications in Strassen's algorithm.

2.4 Higher Order Singular Value Decomposition (HOSVD)

Since the fastest matrix multiplication algorithm corresponds to a tensor decomposition of C with lowest rank, we now consider low rank decompositions of tensors. Let A be an m × n matrix over R. The singular value decomposition (SVD) is

A = U Σ V^T,

where U is m × m and orthogonal, V is n × n and orthogonal, and Σ = diag(σ_1, σ_2, ..., σ_r, 0, ...) is a diagonal matrix. A second order tensor decomposition of A is given by

A = Σ_{j=1}^{r} σ_j (U e_{j,m}) ⊗ (V e_{j,n}),

where r is the number of non-zero singular values of A, i.e. the rank of A. This is a minimum rank tensor decomposition of A. [12] defines a higher order singular value decomposition

A = S ×_1 U^(1) ×_2 U^(2) ··· ×_n U^(n),

where A is an n-th order tensor and the U^(j) (j = 1, ..., n) are orthogonal matrices. This definition recaptures many properties of the singular value decomposition; however, the higher order singular value decomposition does not necessarily provide a decomposition with minimal rank. Appendix A elaborates on the above notation and the properties of the higher order singular value decomposition. We apply the higher order singular value decomposition to the matrix multiplication problem and show that this decomposition never yields an algorithm which is faster than the naive method. To compute the HOSVD one has to compute the matrix U^(j) via the singular value decomposition of the j-th unfolding A_(j) of A for all j ∈ [1; n], and then find S by computing

S = A ×_1 U^(1)H ×_2 U^(2)H ··· ×_n U^(n)H.

This can be well understood if one uses the vec notation. See section 1 of appendix A for basic definitions and properties of the HOSVD.

SVD and HOSVD

Some properties of the singular values σ_j of a matrix have analogues for the core tensor S of A.

Proposition 5. For any α ≠ β and all possible values of k ∈ [1; n], the subtensors S_{i_k=α} and S_{i_k=β} are orthogonal. Here orthogonality is in the usual Euclidean sense.

Proof. Let A_(k) = U^(k) Σ^(k) V^(k)T be the singular value decomposition of the k-th unfolding of A. Unfolding the defining identity S = A ×_1 U^(1)H ··· ×_N U^(N)H along mode k gives

S_(k) = U^(k)H A_(k) (U^(k+1) ⊗ ··· ⊗ U^(N) ⊗ U^(1) ⊗ ··· ⊗ U^(k-1))
      = Σ^(k) V^(k)T (U^(k+1) ⊗ ··· ⊗ U^(N) ⊗ U^(1) ⊗ ··· ⊗ U^(k-1))
      = Σ^(k) W^T,

where

W = (U^(k+1) ⊗ ··· ⊗ U^(N) ⊗ U^(1) ⊗ ··· ⊗ U^(k-1))^T V^(k).

Since the U^(n) and V^(n) are orthogonal matrices for all n, it follows that W is an orthogonal matrix. Denote by W_α the α-th column of W. Then the α-th row of S_(k) is

e_{α}^T S_(k) = e_{α}^T Σ^(k) W^T = σ_α^{(k)} W_α^T,

and this row is precisely the subtensor S_{i_k=α} written as a vector. Hence

⟨S_{i_k=α}, S_{i_k=β}⟩ = σ_α^{(k)} σ_β^{(k)} W_α^T W_β = 0 for all α ≠ β. □

One has the same proposition for the SVD, since any two distinct rows of Σ V^T are orthogonal.

Proposition 6. Similar to the decreasing order of the singular values in the SVD, we have the decreasing ordering

‖S_{i_k=1}‖ ≥ ‖S_{i_k=2}‖ ≥ ··· ≥ 0

for all k ∈ [1; n] for the HOSVD; indeed, by the computation above, ‖S_{i_k=α}‖ = σ_α^{(k)}.
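These two propositions are easy to check numerically; a small NumPy sketch (our own illustration, for a generic real third-order tensor):

    import numpy as np

    def hosvd(A):
        """HOSVD of a 3rd-order tensor: factor matrices from the SVDs of
        the three unfoldings, core S = A x_1 U1^T x_2 U2^T x_3 U3^T."""
        Us = []
        for mode in range(3):
            unfold = np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)
            U, _, _ = np.linalg.svd(unfold)
            Us.append(U)
        S = np.einsum('abc,ai,bj,ck->ijk', A, Us[0], Us[1], Us[2])
        return S, Us

    A = np.random.rand(3, 4, 5)
    S, _ = hosvd(A)
    S1 = S.reshape(S.shape[0], -1)    # mode-1 unfolding of the core
    G = S1 @ S1.T                     # Gram matrix of the mode-1 slices
    assert np.allclose(G, np.diag(np.diag(G)))     # Proposition 5
    assert np.all(np.diff(np.diag(G)) <= 1e-12)    # Proposition 6
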

HOSVD and Matrix multiplication

It was shown in the section on tensor rank and matrix multiplication that for two given matrices A and B over the field F we can construct a matrix C such that the minimum number of multiplications required to multiply A and B is determined by the rank of C when viewed as a third order tensor. Here we describe our construction of C by the use of the HOSVD. Note that C can be decomposed as follows. The higher order singular value decomposition [12] allows us to write C in the form

C = Σ_{i_1,i_2,i_3} S_{i_1,i_2,i_3} U^(1)_{i_1} ⊗ U^(2)_{i_2} ⊗ U^(3)_{i_3},

where U^(1)_{i_1}, U^(2)_{i_2}, U^(3)_{i_3} denote the (1 × n^2) rows of the orthogonal matrices U^(1), U^(2), U^(3). The method to find S, the U^(j) and C is as follows. Let E_{i,j} be the matrix with a one at row i and column j, and zero elsewhere. For a given n, we use the following algorithm (writing l = (i - 1)n + j):

1. For l = 1 to n^2: B_l = B_{(i-1)n+j} = E_{i,j}.
2. For l = 1 to n^2: C_l = C_{(i-1)n+j} = Σ_{k=1}^{n} E_{i,k} ⊗ E_{k,j}.
3. For l = 1 to n^2: construct the standard basis vectors e_{l,n^2} with a one at the l-th row and zero elsewhere.
4. C = Σ_{l=1}^{n^2} e_{l,n^2} ⊗ vec(C_l).
5. Find the third-order tensor A_{i,j,k} = C_{(j-1)n^2+k, i}.
6. Find the three unfoldings A_(1), A_(2) and A_(3).
7. Use the SVDs A_(1) = U^(1) Σ^(1) V^(1)T, A_(2) = U^(2) Σ^(2) V^(2)T and A_(3) = U^(3) Σ^(3) V^(3)T to find U^(1), U^(2) and U^(3) respectively.
8. S_(1) = U^(1)T A_(1) (U^(2) ⊗ U^(3)).
9. S_(2) = U^(2)T A_(2) (U^(3) ⊗ U^(1)).
10. S_(3) = U^(3)T A_(3) (U^(1) ⊗ U^(2)).

The number of non-zero entries in the matrix S_(i) is the number of multiplications needed to perform the matrix multiplication of A and B using this algorithm. However, the HOSVD yields an S with exactly the same number of non-zero entries as the number of multiplications required by the naive algorithm. The program to compute the HOSVD for matrix multiplication is given in Appendix B for 2 × 2 matrices, and we can provide the other cases on request.
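A NumPy sketch of steps 1-10 (our own illustration; `mm_tensor` is the sketch from section 2.3.1, whose tensor agrees with A_{i,j,k} here up to a relabelling of the axes). Note that all singular values of each unfolding coincide (see the next subsection), so the factor matrices returned by a numerical SVD are not unique; the non-zero count printed below therefore depends on the basis chosen, and the analysis of section 2.4.3 shows that the best achievable count is n^3:

    import numpy as np

    def hosvd_mm(n):
        """HOSVD of the <n,n,n> matrix multiplication tensor."""
        T = mm_tensor(n)                       # steps 1-5
        U = []
        for mode in range(3):                  # steps 6-7
            unfold = np.moveaxis(T, mode, 0).reshape(n*n, -1)
            u, s, _ = np.linalg.svd(unfold)
            assert np.allclose(s, np.sqrt(n))  # all singular values = sqrt(n)
            U.append(u)
        S = np.einsum('abc,ai,bj,ck->ijk', T, U[0], U[1], U[2])  # steps 8-10
        return S, U

    S, _ = hosvd_mm(2)
    print(np.sum(~np.isclose(S, 0)))   # multiplications this decomposition yields
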

HOSVD and naive method

In the following we write l = (i - 1)n + j, where i and j are uniquely determined by l and n. First note that B_l = B_{(i-1)n+j} = E_{i,j} and C_l = C_{(i-1)n+j} = Σ_{k=1}^{n} E_{i,k} ⊗ E_{k,j}. Using E_{j,k} E_{u,v} = δ_{k,u} E_{j,v} (see appendix A), it follows that

C_l^T C_l = Σ_{k=1}^{n} Σ_{u=1}^{n} (E_{i,k} ⊗ E_{k,j})^T (E_{i,u} ⊗ E_{u,j})
          = Σ_{k=1}^{n} Σ_{u=1}^{n} (E_{k,i} E_{i,u}) ⊗ (E_{j,k} E_{u,j})
          = Σ_{k=1}^{n} Σ_{u=1}^{n} E_{k,u} ⊗ δ_{k,u} E_{j,j}
          = Σ_{k=1}^{n} E_{k,k} ⊗ E_{j,j}
          = I_n ⊗ E_{j,j}.

Since E_{j,j} is diagonal for all j, it follows that I_n ⊗ E_{j,j} is diagonal; therefore C_l^T C_l is a diagonal matrix for all l. Using a similar argument one can also prove that C_l C_l^T is diagonal for all l. Since the first unfolding C_(1) of C is given by

C_(1) = Σ_k e_{k,n^2} vec(C_k)^T,

we find that

C_(1)^T C_(1) = Σ_l vec(C_l) vec(C_l)^T.

Now we use the identity vec(A) = (I_n ⊗ A) Σ_{j=1}^{n} (e_{j,n} ⊗ e_{j,n}) for an n × n matrix A. We have

C_(1)^T C_(1) = Σ_l Σ_{j,k} (e_{j,n^2} e_{k,n^2}^T) ⊗ (C_l e_{j,n^2} e_{k,n^2}^T C_l^T),

and using the fact that

C_l = Σ_{k=1}^{n} E_{i,k} ⊗ E_{k,j},

together with e_{u,n}^T e_{v,n} = δ_{u,v} (so that all cross terms collapse), we obtain

C_(1)^T C_(1) = Σ_{p,q=1}^{n} I_n ⊗ E_{p,q} ⊗ E_{p,q} ⊗ I_n,

and since E_{i,j} = e_{i,n} e_{j,n}^T, we find

C_(1)^T C_(1) = I_n ⊗ (Σ_{p=1}^{n} Σ_{q=1}^{n} e_{p,n} e_{q,n}^T ⊗ e_{p,n} e_{q,n}^T) ⊗ I_n
             = I_n ⊗ (Σ_{p=1}^{n} e_{p,n} ⊗ e_{p,n})(Σ_{q=1}^{n} e_{q,n} ⊗ e_{q,n})^T ⊗ I_n.

Now we recall the following three properties of matrix rank:

1. The matrix I_n is a full rank matrix.
2. The matrix product of a column vector and a row vector is always a rank 1 matrix. Hence the matrix (Σ_{p=1}^{n} e_{p,n} ⊗ e_{p,n})(Σ_{q=1}^{n} e_{q,n} ⊗ e_{q,n})^T is a rank one matrix.
3. R(A ⊗ B) = R(A) R(B), where R(X) is the rank of the matrix X.

Applying these properties to our matrix

C_(1)^T C_(1) = I_n ⊗ (Σ_{p=1}^{n} e_{p,n} ⊗ e_{p,n})(Σ_{q=1}^{n} e_{q,n} ⊗ e_{q,n})^T ⊗ I_n,

we find that this matrix is a rank n^2 matrix. Similarly, the unfoldings

C_(1) = Σ_{j_1,j_2,k=1}^{n} (e_{j_1} ⊗ e_{j_2})(e_{j_1} ⊗ e_k ⊗ e_k ⊗ e_{j_2})^T,
C_(2) = Σ_{j_1,j_2,k=1}^{n} (e_{j_1} ⊗ e_k)(e_k ⊗ e_{j_2} ⊗ e_{j_1} ⊗ e_{j_2})^T,
C_(3) = Σ_{j_1,j_2,k=1}^{n} (e_k ⊗ e_{j_2})(e_{j_1} ⊗ e_{j_2} ⊗ e_{j_1} ⊗ e_k)^T,

are used to find the matrices C_(2)^T C_(2) and C_(3)^T C_(3). The matrix C_(2)^T C_(2) is given by

C_(2)^T C_(2) = Σ_{j_2,i_2=1}^{n} I_n ⊗ e_{j_2} e_{i_2}^T ⊗ I_n ⊗ e_{j_2} e_{i_2}^T,

and for C_(3)^T C_(3) we find

C_(3)^T C_(3) = Σ_{j_1,i_1=1}^{n} e_{j_1} e_{i_1}^T ⊗ I_n ⊗ e_{j_1} e_{i_1}^T ⊗ I_n.

We can find two permutation matrices P_1 and P_2 such that

C_(1)^T C_(1) = P_1 (C_(2)^T C_(2)) P_1^T,  C_(1)^T C_(1) = P_2 (C_(3)^T C_(3)) P_2^T.

Hence the three matrices C_(1)^T C_(1), C_(2)^T C_(2), C_(3)^T C_(3) all have rank n^2. We now compute C_(1) C_(1)^T, C_(2) C_(2)^T and C_(3) C_(3)^T:

C_(1) C_(1)^T = Σ_{j_1,j_2,k} Σ_{i_1,i_2,p} (e_{j_1} ⊗ e_{j_2}) [(e_{j_1} ⊗ e_k ⊗ e_k ⊗ e_{j_2})^T (e_{i_1} ⊗ e_p ⊗ e_p ⊗ e_{i_2})] (e_{i_1} ⊗ e_{i_2})^T = n I_{n^2}.

With a similar computation we find that

C_(1) C_(1)^T = C_(2) C_(2)^T = C_(3) C_(3)^T = n I_{n^2},

which also gives us rank n^2 matrices.
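These Gram identities can be confirmed numerically (our own sanity check, reusing `mm_tensor` from section 2.3.1; the mode-m unfolding is taken via `moveaxis`):

    import numpy as np

    n = 3
    T = mm_tensor(n)   # the <n,n,n> tensor
    for mode in range(3):
        M = np.moveaxis(T, mode, 0).reshape(n*n, -1)   # unfolding C_(mode)
        # C_(m) C_(m)^T = n I_{n^2}: each unfolding has full row rank n^2
        assert np.array_equal(M @ M.T, n * np.eye(n*n))
        # C_(m)^T C_(m) also has rank n^2
        assert np.linalg.matrix_rank(M.T @ M) == n*n
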

Now we find

vec(C_j) = Σ_{u=1}^{n^2} e_{u,n^2} ⊗ C_j^T e_{u,n^2} = Σ_{k=1}^{n} e_{j_1} ⊗ e_k ⊗ e_k ⊗ e_{j_2},

where j = (j_1 - 1)n + j_2. Now we will find the eigenvalues and eigenvectors of C_(1)^T C_(1). Let a = Σ_{p=1}^{n} e_{p,n} ⊗ e_{p,n}; we can write

C_(1)^T C_(1) = I_n ⊗ a a^T ⊗ I_n.

The eigenvalues of a a^T are n and 0. Thus the eigenvalues of C_(1)^T C_(1) are n, with the orthonormal basis for the corresponding eigenspace

{ (1/√n) e_p ⊗ a ⊗ e_q : p, q ∈ [1; n] },

and 0, with an appropriate orthonormal basis {V_1, V_2, ..., V_{n^4-n^2}}. Now let C_(1) = U Σ V^T be the singular value decomposition of C_(1). Thus

C_(1)^T C_(1) = V Σ^T Σ V^T = I_n ⊗ a a^T ⊗ I_n.

We deduce that the non-zero eigenvalues of C_(1)^T C_(1) are n, with algebraic multiplicity n^2. Thus

Σ = ( √n I_{n^2}  0_{n^2 × (n^4-n^2)} )

and

V = ( (1/√n)(e_1 ⊗ a ⊗ e_1), (1/√n)(e_1 ⊗ a ⊗ e_2), ..., (1/√n)(e_n ⊗ a ⊗ e_n), V_1, V_2, ..., V_{n^4-n^2} ),

where Σ is an n^2 × n^4 matrix and V is an orthogonal matrix. We find

U^(1) Σ = C_(1) V = ( √n I_{n^2}  0_{n^2 × (n^4-n^2)} ),

so that U^(1) = I_{n^2}. Similarly U^(2) = U^(3) = I_{n^2}. This leads us to

S_(1) = Σ^(1) V^(1)T (U^(2) ⊗ U^(3)) = Σ^(1) V^(1)T.

Since the number of multiplications one has to perform to compute our matrix multiplication is exactly the number of non-zero entries in S_(1), one can find this by just counting the number of non-zero entries of Σ^(1) V^(1)T, which is n^3: only the first n^2 columns of V^(1) contribute (through Σ^(1)), and each contributes a row with n non-zero entries, giving n · n^2 = n^3. We chose our basis for the eigenspace above, and hence the columns of V, to obtain the minimum number of non-zero entries in S_(1).

Chapter 3

Algebraic techniques

3.1 Introduction

In this chapter we investigate the realization of matrix multiplication using a group-theoretic approach. This technique was first introduced by Cohn and Umans in [3] in 2003. They proved that one can embed matrix multiplication in a certain group algebra, provided that the group satisfies the so-called Triple Product Property (TPP). If a group satisfies the TPP, then one can perform matrix multiplication using the fast Fourier transform, similar to the technique for fast polynomial multiplication. We will give more detail in this chapter, but before doing so we will revise some basic group representation theory, since some of the proofs in this chapter require a knowledge of representation theory. We will also investigate some groups that satisfy the TPP, one family of which was introduced in [2] in 2005. Our motivation for investigating such groups comes from the conjecture that, if one can find such a group, then we can use it to prove that the exponent of matrix multiplication ω converges to 2. We will end this chapter by investigating the relationship between the matrix multiplication tensor rank and the group-theoretic exponent of matrix multiplication.

3.2 The triple product property

Let S, T and U be three subsets of a group G, and let A = (a_{s,t})_{s∈S,t∈T} and B = (b_{t,u})_{t∈T,u∈U} be |S| × |T| and |T| × |U| matrices over C respectively. One can then define Ā, B̄ ∈ C[G] by

Ā = Σ_{s∈S,t∈T} a_{s,t} s^{-1}t,  B̄ = Σ_{t∈T,u∈U} b_{t,u} t^{-1}u,

so that

Ā B̄ = Σ_{s,t,t',u} a_{s,t} b_{t',u} s^{-1} t t'^{-1} u.

Consider s, s' ∈ S, t, t' ∈ T, u, u' ∈ U. If s^{-1} t t'^{-1} u = s'^{-1} u', then (s' s^{-1})(t t'^{-1})(u u'^{-1}) = 1; if S, T, U satisfy the triple product property (see the next subsection), this forces s' = s, t' = t, u' = u. Then the coefficient of s^{-1}u in ĀB̄ is exactly Σ_{t∈T} a_{s,t} b_{t,u} = (AB)_{s,u}. One has that, if S, T, U satisfy the triple product property, then one can read off the entries of the product matrix AB from ĀB̄ ∈ C[G]: the entry (AB)_{s,u} is simply the coefficient of the group element s^{-1}u. Since matrix multiplication can thus be seen as a special case of group algebra multiplication, efficient group algebra multiplication algorithms are likely to lead to efficient matrix multiplication algorithms.

Triple product property and related properties

We now define the so-called Triple Product Property (TPP), which was proven in [3] to be crucial when embedding matrix multiplication in a group algebra.

Definition 9. Given a subset S of a group G, we denote the right quotient set of S by Q(S), the subset of G defined by

Q(S) = { s_1 s_2^{-1} : s_1, s_2 ∈ S }.

Definition 10. Three subsets S_1, S_2, S_3 of a group G with |S_i| = n_i for i = 1, 2, 3 satisfy the Triple Product Property if, for any q_i ∈ Q(S_i),

q_1 q_2 q_3 = 1 ⟹ q_1 = q_2 = q_3 = 1.

When this condition is satisfied, we say that G realizes ⟨n_1, n_2, n_3⟩. This definition will be used in the following section when embedding matrix multiplication via group representations.

Lemma 2. If G realizes ⟨n_1, n_2, n_3⟩ then it does so for every cyclic permutation of (n_1, n_2, n_3).

Proof. Suppose G realizes ⟨n_1, n_2, n_3⟩ through S_1, S_2, S_3, and suppose q_i = s'_i s_i^{-1} ∈ Q(S_i). If

q_2 q_3 q_1 = 1,

then conjugating by q_1 gives

q_1 (q_2 q_3 q_1) q_1^{-1} = q_1 q_2 q_3 = 1,

and, using the invertibility property of the group on both sides, the triple product property for (S_1, S_2, S_3) yields q_1 = q_2 = q_3 = 1. Hence (S_2, S_3, S_1) also satisfies the triple product property, and repeating this transformation generates all cyclic permutations of (1, 2, 3). This completes the proof. □

Lemma 3. If N is a normal subgroup of G that realizes ⟨n_1, n_2, n_3⟩ and G/N realizes ⟨m_1, m_2, m_3⟩, then G realizes ⟨n_1 m_1, n_2 m_2, n_3 m_3⟩.

Proof. Suppose the normal subgroup N of G realizes ⟨n_1, n_2, n_3⟩ through S_1, S_2, S_3 ⊆ N, i.e.

s'_1 s_1^{-1} s'_2 s_2^{-1} s'_3 s_3^{-1} = 1 iff s'_1 = s_1, s'_2 = s_2, s'_3 = s_3,

and suppose G/N realizes ⟨m_1, m_2, m_3⟩ through T_1, T_2, T_3 ⊆ G/N, i.e.

(t'_1 t_1^{-1})(t'_2 t_2^{-1})(t'_3 t_3^{-1}) = N iff t'_1 = t_1, t'_2 = t_2, t'_3 = t_3.

Let K_1 ⊆ G be a set of representatives identifying the cosets in T_1 (so that k' k^{-1} ∈ N implies k' = k for k, k' ∈ K_1), and similarly define K_2 and K_3. Then G realizes ⟨n_1 m_1, n_2 m_2, n_3 m_3⟩ through the pointwise products S_1 K_1, S_2 K_2, S_3 K_3, where S_i K_i = { s_i k_i : s_i ∈ S_i, k_i ∈ K_i }. Indeed, for k_i, k'_i ∈ K_i we have

k'_1 k_1^{-1} k'_2 k_2^{-1} k'_3 k_3^{-1} ∈ N iff k'_1 = k_1, k'_2 = k_2, k'_3 = k_3.   (*)

Suppose

(s'_1 k'_1)(s_1 k_1)^{-1} (s'_2 k'_2)(s_2 k_2)^{-1} (s'_3 k'_3)(s_3 k_3)^{-1} = 1.   (**)

Reducing modulo N (using the normality of N, so that the factors s_i, s'_i ∈ N vanish), this gives

(k'_1 k_1^{-1} N)(k'_2 k_2^{-1} N)(k'_3 k_3^{-1} N) = N.

From (*) we then have k'_1 = k_1, k'_2 = k_2, k'_3 = k_3. Inserting this into (**), the k-factors cancel, and we obtain

s'_1 s_1^{-1} s'_2 s_2^{-1} s'_3 s_3^{-1} = 1,

so that, from our assumption on N, s'_1 = s_1, s'_2 = s_2, s'_3 = s_3. In other words,

s'_1 k'_1 = s_1 k_1,  s'_2 k'_2 = s_2 k_2,  s'_3 k'_3 = s_3 k_3.

This yields the desired result. □
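A brute-force check of Definition 10 for small groups (our own illustration; the example group Z_7 and the subsets below are our choice, not from the dissertation):

    from itertools import product

    def quotient_set(S, op, inv):
        """Q(S) = { s1 * s2^{-1} : s1, s2 in S }."""
        return {op(s1, inv(s2)) for s1 in S for s2 in S}

    def satisfies_tpp(S1, S2, S3, op, inv, identity):
        """TPP: q1 q2 q3 = 1 only for q1 = q2 = q3 = 1."""
        Q1, Q2, Q3 = (quotient_set(S, op, inv) for S in (S1, S2, S3))
        return all(op(op(q1, q2), q3) != identity
                   for q1, q2, q3 in product(Q1, Q2, Q3)
                   if (q1, q2, q3) != (identity, identity, identity))

    # example: the cyclic group Z_7, where {0,1}, {0,2}, {0} realizes <2,2,1>
    m = 7
    op  = lambda a, b: (a + b) % m
    inv = lambda a: (-a) % m
    print(satisfies_tpp({0, 1}, {0, 2}, {0}, op, inv, 0))   # True
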

Simultaneous TPP

Definition 11. We say that n triples of subsets (A_i, B_i, C_i) of a group G satisfy the Simultaneous Triple Product Property (STPP) if

1. for each i, the three subsets A_i, B_i, C_i satisfy the triple product property, and
2. for all i, j, k,

a_i a'_j^{-1} b_j b'_k^{-1} c_k c'_i^{-1} = 1 ⟹ i = j = k,

for a_i ∈ A_i, a'_j ∈ A_j, b_j ∈ B_j, b'_k ∈ B_k, c_k ∈ C_k, c'_i ∈ C_i.

From this property one sees that if

a_i a'_j^{-1} b_j b'_k^{-1} c_k c'_i^{-1} = 1

then i = j = k, so

a_i a'_i^{-1} b_i b'_i^{-1} c_i c'_i^{-1} = 1,

and, since the (A_i, B_i, C_i) satisfy the triple product property, we get that a_i = a'_i, b_i = b'_i, c_i = c'_i. Thus, if we multiply two elements of the group algebra of the form

Ā = Σ_i Σ_{a_i ∈ A_i, b_i ∈ B_i} (A^{(i)})_{a_i,b_i} a_i^{-1} b_i,  B̄ = Σ_i Σ_{b_i ∈ B_i, c_i ∈ C_i} (B^{(i)})_{b_i,c_i} b_i^{-1} c_i,

we get that the coefficient of a_i^{-1} c_i provides the corresponding entry of the i-th matrix product; the n matrix products are carried out simultaneously by one group algebra multiplication.

In the following, we will show that the Simultaneous Triple Product Property also applies to group products (see the lemma in [2]).

Lemma 4. [15, Lemma 32] If n triples of subsets (A_i, B_i, C_i) ⊆ H and n' triples of subsets (A'_j, B'_j, C'_j) ⊆ H' all satisfy the Simultaneous Triple Product Property, then so do the nn' triples of subsets (A_i × A'_j, B_i × B'_j, C_i × C'_j) ⊆ H × H'.

Proof. We first show that three sets A_i × A'_j, B_i × B'_j, C_i × C'_j ⊆ H × H' satisfy the Triple Product Property. Consider (a_1, a'_1), (a_2, a'_2) ∈ A_i × A'_j, (b_1, b'_1), (b_2, b'_2) ∈ B_i × B'_j and (c_1, c'_1), (c_2, c'_2) ∈ C_i × C'_j. Looking at the equation

(a_1, a'_1)(a_2, a'_2)^{-1} (b_1, b'_1)(b_2, b'_2)^{-1} (c_1, c'_1)(c_2, c'_2)^{-1} = (1, 1) ∈ H × H',

we see that

(a_1 a_2^{-1} b_1 b_2^{-1} c_1 c_2^{-1}, a'_1 a'_2^{-1} b'_1 b'_2^{-1} c'_1 c'_2^{-1}) = (1, 1) ∈ H × H'.

We have two separate equations to solve. However, since (A_i, B_i, C_i) and (A'_j, B'_j, C'_j) satisfy the Triple Product Property, it follows that

a_1 a_2^{-1} b_1 b_2^{-1} c_1 c_2^{-1} = 1 if and only if a_1 = a_2, b_1 = b_2, c_1 = c_2,

and

a'_1 a'_2^{-1} b'_1 b'_2^{-1} c'_1 c'_2^{-1} = 1 if and only if a'_1 = a'_2, b'_1 = b'_2, c'_1 = c'_2.

Hence (a_1, a'_1) = (a_2, a'_2), (b_1, b'_1) = (b_2, b'_2) and (c_1, c'_1) = (c_2, c'_2); thus the triple product property is preserved.

To prove the second part (simultaneity), consider the following subsets of H × H':

A_{ii'} = A_i × A'_{i'},  B_{jj'} = B_j × B'_{j'},  C_{kk'} = C_k × C'_{k'}.

Let a_{ii'} ∈ A_{ii'}, a_{jj'} ∈ A_{jj'}, b_{jj'} ∈ B_{jj'}, b_{kk'} ∈ B_{kk'}, c_{kk'} ∈ C_{kk'}, c_{ii'} ∈ C_{ii'}. Considering each coordinate separately, we have

a_i a_j^{-1} b_j b_k^{-1} c_k c_i^{-1} = 1 ∈ H

and

a'_{i'} a'_{j'}^{-1} b'_{j'} b'_{k'}^{-1} c'_{k'} c'_{i'}^{-1} = 1 ∈ H'.

Since all triples (A_i, B_i, C_i) and (A'_{i'}, B'_{i'}, C'_{i'}) satisfy the Simultaneous Triple Product Property, it follows that i = j = k and i' = j' = k', and so ii' = jj' = kk'; hence these subsets satisfy the Simultaneous Triple Product Property. □

The following theorem elaborates on the importance of the simultaneous triple product property.

Theorem 2. [2, Theorem 5.5] If a group H simultaneously realizes ⟨a_1, b_1, c_1⟩, ..., ⟨a_n, b_n, c_n⟩ and has character degrees {d_k}, then

Σ_{i=1}^{n} (a_i b_i c_i)^{ω/3} ≤ Σ_k d_k^ω.

One has that, if the group H is abelian, then all character degrees equal 1, so Σ_k d_k^ω = |H|; therefore |H| bounds the rank of the corresponding matrix multiplication. To prove this theorem we need some extra lemmas.

Lemma 5. [2, Lemma 1.1] If we have non-negative real numbers s_1, ..., s_n with the property

(N choose µ) Π_{i=1}^{n} s_i^{µ_i} ≤ C^N

for all N ∈ N and all µ, where µ is a vector of non-negative integers with Σ_{i=1}^{n} µ_i = N, (N choose µ) = N!/(µ_1! ··· µ_n!) is the multinomial coefficient, and C > 0, then

Σ_{i=1}^{n} s_i ≤ C.

Proof. Fix N. We have, for all µ with µ = (µ_1, ..., µ_n) and Σ_i µ_i = N, that

(N choose µ) Π_{i=1}^{n} s_i^{µ_i} ≤ C^N.

Summing over all possible values of µ with Σ_i µ_i = N gives, by the multinomial theorem,

(Σ_{i=1}^{n} s_i)^N = Σ_µ (N choose µ) Π_i s_i^{µ_i} ≤ (N+n-1 choose n-1) C^N,

since there are (N+n-1 choose n-1) such vectors µ, which yields

Σ_{i=1}^{n} s_i ≤ (N+n-1 choose n-1)^{1/N} C.

Letting N grow and taking N-th roots (the binomial factor's N-th root tends to 1), we get the desired result. □

Lemma 6. [2, Theorem 1.7] Suppose n triples of subsets (A_i, B_i, C_i) ⊆ H satisfy the simultaneous triple product property. Then the following subsets H_1, H_2, H_3 of Sym_n ⋉ H^n satisfy the triple product property:

H_1 = { hπ : π ∈ Sym_n, h_i ∈ A_i for all i },
H_2 = { hπ : π ∈ Sym_n, h_i ∈ B_i for all i },
H_3 = { hπ : π ∈ Sym_n, h_i ∈ C_i for all i }.

Proof. Let h_i π_i, h'_i π'_i ∈ H_i and consider the triple product

h_1 π_1 π'_1^{-1} h'_1^{-1} h_2 π_2 π'_2^{-1} h'_2^{-1} h_3 π_3 π'_3^{-1} h'_3^{-1} = 1.

We must have

π_1 π'_1^{-1} π_2 π'_2^{-1} π_3 π'_3^{-1} = 1.

We then set

π = π_1 π'_1^{-1},  ρ = π_1 π'_1^{-1} π_2 π'_2^{-1},

which makes the above equivalent to

h'_3{}^{-1} h_1 (h'_1{}^{-1} h_2)^π (h'_2{}^{-1} h_3)^ρ = 1,

where the superscripts indicate that the actions of π and ρ have been performed on the group elements. Thus, for each coordinate i,

[h'_3{}^{-1}]_i [h_1]_i [h'_1{}^{-1} h_2]_{π^{-1}(i)} [h'_2{}^{-1} h_3]_{ρ^{-1}(i)} = 1.

Since the factors in this equation belong to A_i, A_{π^{-1}(i)}, B_{π^{-1}(i)}, B_{ρ^{-1}(i)}, C_{ρ^{-1}(i)}, C_i, all parts of triples that satisfy the simultaneous triple product property, it must follow that π(i) = ρ(i) = i for all i, meaning π = ρ = 1 (and hence π_1 = π'_1, π_2 = π'_2, π_3 = π'_3). Since the (A_i, B_i, C_i) satisfy the triple product property, it then follows that

h_1 h'_1{}^{-1} h_2 h'_2{}^{-1} h_3 h'_3{}^{-1} = 1,

and this implies that h_1 = h'_1, h_2 = h'_2, h_3 = h'_3. Thus the three sets constructed above satisfy the triple product property. □

Lemma 7. [15, Theorem 31] If a group G with character degrees {d_i} realizes ⟨m, n, p⟩ then

(mnp)^{ω/3} ≤ Σ_i d_i^ω.

The complete proof can be found in [15].

Lemma 8. [15, Lemma 36] If {d_k} are the character degrees of a finite group H and {c_j} are the character degrees of Sym_n ⋉ H^n, then

Σ_j c_j^ω ≤ n!^{ω-1} (Σ_k d_k^ω)^n.

Proof. If H is abelian, we may use the facts that Σ_k d_k^2 = |H| and Σ_j c_j^2 = n! |H|^n. Since c_j ≤ n! and ω ≥ 2,

Σ_j c_j^ω ≤ n!^{ω-2} Σ_j c_j^2 = n!^{ω-2} n! |H|^n = n!^{ω-1} |H|^n = n!^{ω-1} (Σ_k d_k^ω)^n,

where the last step uses d_k = 1 for abelian H, and the lemma holds for abelian H. The non-abelian case is discussed in [2]; we omit it here because it is not relevant for our purpose. □

Lemma 9. [15, Lemma 37] If H is a finite group with character degrees {d_k} and n triples of subsets (A_i, B_i, C_i) ⊆ H satisfy the simultaneous triple product property, then

n (Π_{i=1}^{n} |A_i| |B_i| |C_i|)^{ω/(3n)} ≤ Σ_k d_k^ω.

Proof. In Lemma 6, we described subsets of Sym_n ⋉ H^n of sizes n! Π_i |A_i|, n! Π_i |B_i| and n! Π_i |C_i| satisfying the triple product property. By Lemma 7 we have

(n!^3 Π_i |A_i| |B_i| |C_i|)^{ω/3} ≤ Σ_j c_j^ω,

where the c_j are the character degrees of Sym_n ⋉ H^n. From the previous lemma we have Σ_j c_j^ω ≤ n!^{ω-1} (Σ_k d_k^ω)^n. Hence

(Π_i |A_i| |B_i| |C_i|)^{ω/3} ≤ (1/n!) (Σ_k d_k^ω)^n.

Replacing H by H^t and n by n^t (the n^t product triples in H^t satisfy the simultaneous triple product property by Lemma 4), we then obtain

(Π_i |A_i| |B_i| |C_i|)^{t n^{t-1} ω/3} ≤ (1/(n^t)!) (Σ_k d_k^ω)^{t n^t}.

We then take (t n^t)-th roots and let t → ∞ to get the desired statement; note that we used Stirling's formula in the form ((n^t)!)^{1/n^t} ~ n^t/e.

Now we prove Theorem 2.

Proof. [of Theorem 2] We raise H to the N-th power, and take subsets of H^N denoted A'_j, B'_j, C'_j ⊆ H^N. To create these subsets we choose a vector µ = (µ_1, ..., µ_n) ∈ Z^n, µ ≥ 0, Σ_{i=1}^{n} µ_i = N, and set

A'_j = Π_{i=1}^{n} A_i^{µ_i},  B'_j = Π_{i=1}^{n} B_i^{µ_i},  C'_j = Π_{i=1}^{n} C_i^{µ_i}.

There are (N choose µ) such triples, and |A'_j| |B'_j| |C'_j| = Π_{i=1}^{n} (a_i b_i c_i)^{µ_i}. We apply the previous lemma to these triples to obtain

(N choose µ) Π_{i=1}^{n} (a_i b_i c_i)^{µ_i ω/3} ≤ (Σ_k d_k^ω)^N,

and applying Lemma 5 (with s_i = (a_i b_i c_i)^{ω/3} and C = Σ_k d_k^ω) we obtain the desired inequality. □

Uniquely solvable puzzles (USP)

Now we consider the problem of finding groups which satisfy the triple product property, and some tools that can be used to generate this type of group and subset. We use what is called a uniquely solvable puzzle (USP) [2].

Definition 12. A uniquely solvable puzzle of width k is a subset U ⊆ {1, 2, 3}^k satisfying the following property: for all permutations π_1, π_2, π_3 ∈ Sym(U), either π_1 = π_2 = π_3, or else there exist u ∈ U and i ∈ {1, ..., k} such that at least two of (π_1(u))_i = 1, (π_2(u))_i = 2, (π_3(u))_i = 3 hold.

Definition 13. A strong uniquely solvable puzzle is a USP in which the defining property is strengthened as follows: for all permutations π_1, π_2, π_3 ∈ Sym(U), either π_1 = π_2 = π_3, or else there exist u ∈ U and i ∈ {1, ..., k} such that exactly two of (π_1(u))_i = 1, (π_2(u))_i = 2, (π_3(u))_i = 3 hold.

Proposition 7. For each k ≥ 1, there exists a strong USP of size 2^k and width 2k.

Proof. Noting that {1, 3}^k × {2, 3}^k ⊆ {1, 2, 3}^{2k}, define U to be

U = { u ∈ {1, 3}^k × {2, 3}^k : for i ∈ [k], u_i = 1 iff u_{i+k} = 2 }.

Assume π_1, π_2, π_3 ∈ Sym(U) are not all equal. If π_1 ≠ π_3, then there exist u ∈ U and i ∈ [k] such that (π_1(u))_i = 1 and (π_3(u))_i = 3. Similarly, if π_2 ≠ π_3, then there exist u ∈ U and i ∈ [2k] \ [k] such that (π_2(u))_i = 2 and (π_3(u))_i = 3. In either case, exactly two of (π_1(u))_i = 1, (π_2(u))_i = 2 and (π_3(u))_i = 3 hold, because in each coordinate only two of the three symbols 1, 2 and 3 can occur. We then have that U is a strong USP, as required. □

In what follows, we will show that a strong USP satisfies the TPP. Given a strong USP U of width k, let H be the abelian group of all functions from U × [k] to the cyclic group Cyc_m (H is a group under pointwise addition). The symmetric group Sym(U) acts on H via

(πh)(u, i) = h(π^{-1}(u), i)

for π ∈ Sym(U), h ∈ H, u ∈ U and i ∈ [k]. Let G be the semidirect product H ⋊ Sym(U), and define subsets S_1, S_2 and S_3 of G, where S_i consists of all products hπ with π ∈ Sym(U) and h ∈ H with the property

h(u, j) ≠ 0 iff u_j = i,

for all u ∈ U and j ∈ [k].

Proposition 8. If U is a strong USP, then S_1, S_2 and S_3 satisfy the Triple Product Property.

Proof. Let

h_1 π_1 (h'_1 π'_1)^{-1} h_2 π_2 (h'_2 π'_2)^{-1} h_3 π_3 (h'_3 π'_3)^{-1} = 1,

with h_i π_i, h'_i π'_i ∈ S_i. For the above equation to hold we must have

π_1 π'_1^{-1} π_2 π'_2^{-1} π_3 π'_3^{-1} = 1.

Set π = π_1 π'_1^{-1} and ρ = π_1 π'_1^{-1} π_2 π'_2^{-1}. Then the remaining condition for the equation is that, in the abelian group H with its Sym(U) action,

(h_1 - h'_3) + π(h_2 - h'_1) + ρ(h_3 - h'_2) = 0.
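A small Python sketch of the construction in Proposition 7, together with a brute-force check of the strong USP property (our own illustration; the check enumerates all permutation triples, so it is only feasible for very small puzzles):

    from itertools import permutations, product

    def strong_usp(k):
        """U in {1,3}^k x {2,3}^k with u_i = 1 iff u_{i+k} = 2 (Prop. 7)."""
        U = []
        for bits in product([0, 1], repeat=k):
            left  = tuple(1 if b else 3 for b in bits)
            right = tuple(2 if b else 3 for b in bits)
            U.append(left + right)
        return U

    def is_strong_usp(U):
        width = len(U[0])
        for p1, p2, p3 in product(permutations(range(len(U))), repeat=3):
            if p1 == p2 == p3:
                continue
            # need some row u and column i where exactly two conditions hold
            if not any(sum([U[p1[r]][i] == 1,
                            U[p2[r]][i] == 2,
                            U[p3[r]][i] == 3]) == 2
                       for r in range(len(U)) for i in range(width)):
                return False
        return True

    U = strong_usp(2)         # size 4, width 4
    print(is_strong_usp(U))   # True, per Proposition 7
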


Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Practical Linear Algebra: A Geometry Toolbox

Practical Linear Algebra: A Geometry Toolbox Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Linear algebra 2. Yoav Zemel. March 1, 2012

Linear algebra 2. Yoav Zemel. March 1, 2012 Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.

More information

Vector spaces, duals and endomorphisms

Vector spaces, duals and endomorphisms Vector spaces, duals and endomorphisms A real vector space V is a set equipped with an additive operation which is commutative and associative, has a zero element 0 and has an additive inverse v for any

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

More information

Linear Algebra & Geometry why is linear algebra useful in computer vision?

Linear Algebra & Geometry why is linear algebra useful in computer vision? Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

THE SINGULAR VALUE DECOMPOSITION AND LOW RANK APPROXIMATION

THE SINGULAR VALUE DECOMPOSITION AND LOW RANK APPROXIMATION THE SINGULAR VALUE DECOMPOSITION AND LOW RANK APPROXIMATION MANTAS MAŽEIKA Abstract. The purpose of this paper is to present a largely self-contained proof of the singular value decomposition (SVD), and

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

is an isomorphism, and V = U W. Proof. Let u 1,..., u m be a basis of U, and add linearly independent

is an isomorphism, and V = U W. Proof. Let u 1,..., u m be a basis of U, and add linearly independent Lecture 4. G-Modules PCMI Summer 2015 Undergraduate Lectures on Flag Varieties Lecture 4. The categories of G-modules, mostly for finite groups, and a recipe for finding every irreducible G-module of a

More information

Clifford Algebras and Spin Groups

Clifford Algebras and Spin Groups Clifford Algebras and Spin Groups Math G4344, Spring 2012 We ll now turn from the general theory to examine a specific class class of groups: the orthogonal groups. Recall that O(n, R) is the group of

More information

Math 52H: Multilinear algebra, differential forms and Stokes theorem. Yakov Eliashberg

Math 52H: Multilinear algebra, differential forms and Stokes theorem. Yakov Eliashberg Math 52H: Multilinear algebra, differential forms and Stokes theorem Yakov Eliashberg March 202 2 Contents I Multilinear Algebra 7 Linear and multilinear functions 9. Dual space.........................................

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

0 Sets and Induction. Sets

0 Sets and Induction. Sets 0 Sets and Induction Sets A set is an unordered collection of objects, called elements or members of the set. A set is said to contain its elements. We write a A to denote that a is an element of the set

More information

Review of linear algebra

Review of linear algebra Review of linear algebra 1 Vectors and matrices We will just touch very briefly on certain aspects of linear algebra, most of which should be familiar. Recall that we deal with vectors, i.e. elements of

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Jordan normal form notes (version date: 11/21/07)

Jordan normal form notes (version date: 11/21/07) Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

I. Approaches to bounding the exponent of matrix multiplication

I. Approaches to bounding the exponent of matrix multiplication I. Approaches to bounding the exponent of matrix multiplication Chris Umans Caltech Based on joint work with Noga Alon, Henry Cohn, Bobby Kleinberg, Amir Shpilka, Balazs Szegedy Modern Applications of

More information

Abstract Algebra II Groups ( )

Abstract Algebra II Groups ( ) Abstract Algebra II Groups ( ) Melchior Grützmann / melchiorgfreehostingcom/algebra October 15, 2012 Outline Group homomorphisms Free groups, free products, and presentations Free products ( ) Definition

More information

Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes.

Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes. Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes Matrices and linear equations A matrix is an m-by-n array of numbers A = a 11 a 12 a 13 a 1n a 21 a 22 a 23 a

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

Honors Algebra II MATH251 Course Notes by Dr. Eyal Goren McGill University Winter 2007

Honors Algebra II MATH251 Course Notes by Dr. Eyal Goren McGill University Winter 2007 Honors Algebra II MATH251 Course Notes by Dr Eyal Goren McGill University Winter 2007 Last updated: April 4, 2014 c All rights reserved to the author, Eyal Goren, Department of Mathematics and Statistics,

More information

Categories and Quantum Informatics: Hilbert spaces

Categories and Quantum Informatics: Hilbert spaces Categories and Quantum Informatics: Hilbert spaces Chris Heunen Spring 2018 We introduce our main example category Hilb by recalling in some detail the mathematical formalism that underlies quantum theory:

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

MA106 Linear Algebra lecture notes

MA106 Linear Algebra lecture notes MA106 Linear Algebra lecture notes Lecturers: Diane Maclagan and Damiano Testa 2017-18 Term 2 Contents 1 Introduction 3 2 Matrix review 3 3 Gaussian Elimination 5 3.1 Linear equations and matrices.......................

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations Page 1 Definitions Tuesday, May 8, 2018 12:23 AM Notations " " means "equals, by definition" the set of all real numbers the set of integers Denote a function from a set to a set by Denote the image of

More information

Linear Algebra (Review) Volker Tresp 2017

Linear Algebra (Review) Volker Tresp 2017 Linear Algebra (Review) Volker Tresp 2017 1 Vectors k is a scalar (a number) c is a column vector. Thus in two dimensions, c = ( c1 c 2 ) (Advanced: More precisely, a vector is defined in a vector space.

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

arxiv: v1 [math.ra] 13 Jan 2009

arxiv: v1 [math.ra] 13 Jan 2009 A CONCISE PROOF OF KRUSKAL S THEOREM ON TENSOR DECOMPOSITION arxiv:0901.1796v1 [math.ra] 13 Jan 2009 JOHN A. RHODES Abstract. A theorem of J. Kruskal from 1977, motivated by a latent-class statistical

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

(Group-theoretic) Fast Matrix Multiplication

(Group-theoretic) Fast Matrix Multiplication (Group-theoretic) Fast Matrix Multiplication Ivo Hedtke Data Structures and Efficient Algorithms Group (Prof Dr M Müller-Hannemann) Martin-Luther-University Halle-Wittenberg Institute of Computer Science

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

A Little Beyond: Linear Algebra

A Little Beyond: Linear Algebra A Little Beyond: Linear Algebra Akshay Tiwary March 6, 2016 Any suggestions, questions and remarks are welcome! 1 A little extra Linear Algebra 1. Show that any set of non-zero polynomials in [x], no two

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

How to find good starting tensors for matrix multiplication

How to find good starting tensors for matrix multiplication How to find good starting tensors for matrix multiplication Markus Bläser Saarland University Matrix multiplication z,... z,n..... z n,... z n,n = x,... x,n..... x n,... x n,n y,... y,n..... y n,... y

More information

Matrices A brief introduction

Matrices A brief introduction Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 41 Definitions Definition A matrix is a set of N real or complex

More information

Math 594, HW2 - Solutions

Math 594, HW2 - Solutions Math 594, HW2 - Solutions Gilad Pagi, Feng Zhu February 8, 2015 1 a). It suffices to check that NA is closed under the group operation, and contains identities and inverses: NA is closed under the group

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 Sathaye Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v

More information

Duality of finite-dimensional vector spaces

Duality of finite-dimensional vector spaces CHAPTER I Duality of finite-dimensional vector spaces 1 Dual space Let E be a finite-dimensional vector space over a field K The vector space of linear maps E K is denoted by E, so E = L(E, K) This vector

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Week 15-16: Combinatorial Design

Week 15-16: Combinatorial Design Week 15-16: Combinatorial Design May 8, 2017 A combinatorial design, or simply a design, is an arrangement of the objects of a set into subsets satisfying certain prescribed properties. The area of combinatorial

More information

A concise proof of Kruskal s theorem on tensor decomposition

A concise proof of Kruskal s theorem on tensor decomposition A concise proof of Kruskal s theorem on tensor decomposition John A. Rhodes 1 Department of Mathematics and Statistics University of Alaska Fairbanks PO Box 756660 Fairbanks, AK 99775 Abstract A theorem

More information

Fast Matrix Product Algorithms: From Theory To Practice

Fast Matrix Product Algorithms: From Theory To Practice Introduction and Definitions The τ-theorem Pan s aggregation tables and the τ-theorem Software Implementation Conclusion Fast Matrix Product Algorithms: From Theory To Practice Thomas Sibut-Pinote Inria,

More information

Chapter 4. Matrices and Matrix Rings

Chapter 4. Matrices and Matrix Rings Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Dr Gerhard Roth COMP 40A Winter 05 Version Linear algebra Is an important area of mathematics It is the basis of computer vision Is very widely taught, and there are many resources

More information