Powers of Tensors and Fast Matrix Multiplication


1 Powers of Tensors and Fast Matrix Multiplication. François Le Gall, Department of Computer Science, Graduate School of Information Science and Technology, The University of Tokyo. Simons Institute, 12 November 2014

2 Overview of our Results

3 Algebraic Complexity of Matrix Multiplication. Compute the product of two n x n matrices A and B over a field F. Model: algebraic circuits. Gates: +, -, x (operations on two elements of the field). Input: the a_ij and b_ij (2n^2 inputs). Output: the c_ij = Σ_{k=1}^n a_ik b_kj (n^2 outputs). C_M(n) = minimal number of algebraic operations needed to compute the product (note: may depend on the field F). Exponent of matrix multiplication: ω = inf { ρ | C_M(n) ≤ n^ρ for all large enough n } (note: may depend on the field F). Obviously, ω ≥ 2.
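As a baseline for these definitions, the schoolbook algorithm computes all c_ij directly with n^3 multiplications and n^2(n - 1) additions, so C_M(n) = O(n^3) and ω ≤ 3. A minimal sketch (the operation counters are an illustration added here, not part of the slides):

```python
def naive_matmul(A, B):
    """Multiply two n x n matrices with the schoolbook algorithm,
    counting field operations (multiplications and additions)."""
    n = len(A)
    mults = adds = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = A[i][0] * B[0][j]
            mults += 1
            for k in range(1, n):
                s += A[i][k] * B[k][j]
                mults += 1
                adds += 1
            C[i][j] = s
    return C, mults, adds

# n = 2: n^3 = 8 multiplications, n^2 (n - 1) = 4 additions.
C, mults, adds = naive_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```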

4 History of the main improvements on the exponent of square matrix multiplication. Upper bound, Year, Authors:
ω < 2.81, 1969, Strassen
ω < 2.79, 1978, Pan
ω < 2.78, 1979, Bini, Capovani, Romani and Lotti
ω < 2.55, 1981, Schönhage
ω < 2.53, 1981, Pan
ω < 2.52, 1982, Romani
ω < 2.50, 1981, Coppersmith and Winograd
ω < 2.48, 1986, Strassen
ω < 2.376, 1987, Coppersmith and Winograd
ω < 2.374, 2010, Stothers
ω < 2.3729, 2012, Vassilevska Williams
ω < 2.3728639, 2014, Le Gall (this work)

5 History of the main improvements on the exponent of square matrix multiplication (same table as slide 4). The last five bounds, ω < 2.48 (Strassen 1986), ω < 2.376 (Coppersmith and Winograd 1987), ω < 2.374 (Stothers 2010), ω < 2.3729 (Vassilevska Williams 2012) and ω < 2.3728639 (Le Gall 2014), come from the analysis of a tensor by the laser method.

6 History of the main improvements on the exponent of square matrix multiplication. ω < 2.48 (Strassen): LM-based analysis v1; ω < 2.376 (Coppersmith and Winograd): LM-based analysis v2.0; ω < 2.374 (Stothers): LM-based analysis v2.1; ω < 2.3729 (Vassilevska Williams): LM-based analysis v2.2; ω < 2.3728639 (Le Gall): LM-based analysis v2.3 (LM = laser method). The tensors considered become more difficult to analyze (technical difficulties appear and the size of the tensor increases). Previous versions (up to v2.2): analyzing the tensor required solving a complicated optimization problem (difficult when the size of the tensor increases). Our new technique (v2.3): analyzing the tensor (i.e., obtaining an upper bound on ω from it) can be done in time polynomial in the size of the tensor; the analysis is based on convex optimization.

7 Applications of our method. Laser-method-based analysis v2.3: given any tensor from which an upper bound on ω can be obtained from the laser method, it outputs, in polynomial time, the corresponding upper bound on ω. Which tensor? Powers of the basic tensor from Coppersmith and Winograd's paper. Analysis of the m-th power of the CW tensor:
m = 1: ω < 2.3872, CW (1987)
m = 2: ω < 2.3755, CW (1987)
m = 4: ω < 2.3737, Stothers (2010)
m = 8: ω < 2.3729, Vassilevska Williams (2012)
m = 16: ω < 2.3728640, Le Gall (2014)
m = 32: ω < 2.3728639, Le Gall (2014)
(The number of variables in the associated optimization problem grows quickly with m.)


11 How to Obtain Upper Bounds on ω?

12 Strassen's algorithm (for the product of two 2 x 2 matrices). Goal: compute the product of A = (a11 a12; a21 a22) by B = (b11 b12; b21 b22). 1. Compute: m1 = a11 x (b12 - b22), m2 = (a11 + a12) x b22, m3 = (a21 + a22) x b11, m4 = a22 x (b21 - b11), m5 = (a11 + a22) x (b11 + b22), m6 = (a12 - a22) x (b21 + b22), m7 = (a11 - a21) x (b11 + b12). 2. Output: c11 = -m2 + m4 + m5 + m6, c12 = m1 + m2, c21 = m3 + m4, c22 = m1 - m3 + m5 - m7. 7 multiplications, 18 additions/subtractions.

13 Strassen's algorithm (for the product of two 2^k x 2^k matrices). The same seven products m1, ..., m7 and the same output combinations, now applied blockwise to the 2^(k-1) x 2^(k-1) blocks of A and B: 7 (recursive) multiplications and 18 additions/subtractions of blocks. Recursive application gives C_M(2^k) = O(7^k) = O((2^k)^log2(7)), so ω ≤ log2(7) ≈ 2.81 [Strassen 69].

14 Strassen's algorithm (for the product of two 2^k x 2^k matrices). More generally: suppose that the product of two m x m matrices can be computed with t multiplications. Then ω ≤ log_m(t) or, equivalently, m^ω ≤ t. Strassen's algorithm is the case m = 2 and t = 7: 7 multiplications, 18 additions/subtractions, and recursive application gives C_M(2^k) = O(7^k) = O((2^k)^log2(7)), so ω ≤ log2(7) ≈ 2.81 [Strassen 69].
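The recursion behind C_M(2^k) = O(7^k) can be sketched in code; this is a straightforward, unoptimized transcription of the seven products above (with the output signs restored), not an implementation from the slides:

```python
def strassen(A, B):
    """Multiply two 2^k x 2^k matrices with Strassen's recursion:
    7 recursive products instead of 8, giving O(7^k) = O(n^log2(7))."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M, r, c):          # extract an h x h quadrant
        return [row[c:c + h] for row in M[r:r + h]]

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    a11, a12, a21, a22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    b11, b12, b21, b22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    m1 = strassen(a11, sub(b12, b22))
    m2 = strassen(add(a11, a12), b22)
    m3 = strassen(add(a21, a22), b11)
    m4 = strassen(a22, sub(b21, b11))
    m5 = strassen(add(a11, a22), add(b11, b22))
    m6 = strassen(sub(a12, a22), add(b21, b22))
    m7 = strassen(sub(a11, a21), add(b11, b12))
    c11 = add(sub(add(m5, m4), m2), m6)   # -m2 + m4 + m5 + m6
    c12 = add(m1, m2)
    c21 = add(m3, m4)
    c22 = sub(sub(add(m1, m5), m3), m7)   # m1 - m3 + m5 - m7
    return [c11[i] + c12[i] for i in range(h)] + \
           [c21[i] + c22[i] for i in range(h)]
```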

15 The tensor of matrix multiplication. Definition: the tensor corresponding to the multiplication of an m x n matrix by an n x p matrix is ⟨m, n, p⟩ = Σ_{i=1}^m Σ_{j=1}^p Σ_{k=1}^n a_ik ⊗ b_kj ⊗ c_ij. Intuitive interpretation: this is a formal sum; when the a_ik and the b_kj are replaced by the corresponding entries of matrices, the coefficient of c_ij becomes Σ_{k=1}^n a_ik b_kj.

16 General 3-tensors. Consider three vector spaces U, V and W over F. Take bases of U, V and W: U = span{x_1, ..., x_dim(U)}, V = span{y_1, ..., y_dim(V)}, W = span{z_1, ..., z_dim(W)}. A tensor over (U, V, W) is an element of U ⊗ V ⊗ W, i.e., a formal sum T = Σ_{u=1}^dim(U) Σ_{v=1}^dim(V) Σ_{w=1}^dim(W) d_uvw x_u ⊗ y_v ⊗ z_w with d_uvw ∈ F: a three-dimensional array with dim(U) x dim(V) x dim(W) entries in F.

17 General 3-tensors. A tensor over (U, V, W) is an element of U ⊗ V ⊗ W, i.e., a formal sum T = Σ_{u,v,w} d_uvw x_u ⊗ y_v ⊗ z_w. For the matrix multiplication tensor ⟨m, n, p⟩ = Σ_{i=1}^m Σ_{j=1}^p Σ_{k=1}^n a_ik ⊗ b_kj ⊗ c_ij we have dim(U) = mn, dim(V) = np and dim(W) = mp, with U = span{a_ik} (1 ≤ i ≤ m, 1 ≤ k ≤ n), V = span{b_k'j} (1 ≤ k' ≤ n, 1 ≤ j ≤ p), W = span{c_i'j'} (1 ≤ i' ≤ m, 1 ≤ j' ≤ p), and d_(ik)(k'j)(i'j') = 1 if i = i', j = j', k = k', and 0 otherwise.

18 Rank. Definition: the tensor corresponding to the multiplication of an m x n matrix by an n x p matrix is ⟨m, n, p⟩ = Σ_{i=1}^m Σ_{j=1}^p Σ_{k=1}^n a_ik ⊗ b_kj ⊗ c_ij. R(⟨m, n, p⟩) ≤ mnp; the rank is the number of multiplications of the best (bilinear) algorithm. Strassen's algorithm gives R(⟨2, 2, 2⟩) ≤ 7:
⟨2, 2, 2⟩ = a11 ⊗ (b12 - b22) ⊗ (c12 + c22)
+ (a11 + a12) ⊗ b22 ⊗ (-c11 + c12)
+ (a21 + a22) ⊗ b11 ⊗ (c21 - c22)
+ a22 ⊗ (b21 - b11) ⊗ (c11 + c21)
+ (a11 + a22) ⊗ (b11 + b22) ⊗ (c11 + c22)
+ (a12 - a22) ⊗ (b21 + b22) ⊗ c11
+ (a11 - a21) ⊗ (b11 + b12) ⊗ (-c22).
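The identity above can be checked mechanically: the seven rank-one terms u ⊗ v ⊗ w sum to the 4 x 4 x 4 coefficient array of ⟨2, 2, 2⟩. A sketch (the encoding of basis vectors by index pairs is a convention chosen here, not from the slides):

```python
from itertools import product

# Encode the x-, y-, z-variables of <2,2,2> by index pairs:
# x[(i,k)] ~ a_ik, y[(k,j)] ~ b_kj, z[(i,j)] ~ c_ij, with indices in {1,2}.
idx = [(1, 1), (1, 2), (2, 1), (2, 2)]
pos = {p: t for t, p in enumerate(idx)}

def e(*signed_pairs):
    """Vector in F^4 with the given (sign, index-pair) entries."""
    v = [0] * 4
    for sign, p in signed_pairs:
        v[pos[p]] += sign
    return v

# Strassen's seven rank-one terms u (x) v (x) w, signs restored.
terms = [
    (e((1, (1, 1))),              e((1, (1, 2)), (-1, (2, 2))), e((1, (1, 2)), (1, (2, 2)))),
    (e((1, (1, 1)), (1, (1, 2))), e((1, (2, 2))),               e((-1, (1, 1)), (1, (1, 2)))),
    (e((1, (2, 1)), (1, (2, 2))), e((1, (1, 1))),               e((1, (2, 1)), (-1, (2, 2)))),
    (e((1, (2, 2))),              e((1, (2, 1)), (-1, (1, 1))), e((1, (1, 1)), (1, (2, 1)))),
    (e((1, (1, 1)), (1, (2, 2))), e((1, (1, 1)), (1, (2, 2))),  e((1, (1, 1)), (1, (2, 2)))),
    (e((1, (1, 2)), (-1, (2, 2))), e((1, (2, 1)), (1, (2, 2))), e((1, (1, 1)),)),
    (e((1, (1, 1)), (-1, (2, 1))), e((1, (1, 1)), (1, (1, 2))), e((-1, (2, 2)),)),
]

def target(u, v, w):
    """Coefficient d of <2,2,2>: 1 iff i = i', j = j', k = k'."""
    (i, k), (k2, j), (i2, j2) = idx[u], idx[v], idx[w]
    return 1 if (k == k2 and i == i2 and j == j2) else 0

ok = all(
    sum(U[u] * V[v] * W[w] for U, V, W in terms) == target(u, v, w)
    for u, v, w in product(range(4), repeat=3)
)
```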

19 How to obtain upper bounds on ω? Remember: suppose that the product of two m x m matrices can be computed with t multiplications; then ω ≤ log_m(t) or, equivalently, m^ω ≤ t. In our terminology: Theorem. R(⟨m, m, m⟩) ≤ t implies m^ω ≤ t. First generalization: Theorem. R(⟨m, n, p⟩) ≤ t implies (mnp)^(ω/3) ≤ t. Second generalization [Bini et al. 1979]: Theorem. The border rank satisfies R̲(⟨m, n, p⟩) ≤ R(⟨m, n, p⟩), and R̲(⟨m, n, p⟩) ≤ t implies (mnp)^(ω/3) ≤ t.

20 How to obtain upper bounds on ω? Third generalization: Theorem (the asymptotic sum inequality, special case) [Schönhage 1981]. R̲(⟨m1, n1, p1⟩ ⊕ ⟨m2, n2, p2⟩) ≤ t implies (m1 n1 p1)^(ω/3) + (m2 n2 p2)^(ω/3) ≤ t. Theorem (the asymptotic sum inequality, general form) [Schönhage 1981]. R̲(⊕_{i=1}^k ⟨m_i, n_i, p_i⟩) ≤ t (direct sum) implies Σ_{i=1}^k (m_i n_i p_i)^(ω/3) ≤ t.

21 History of the main improvements on the exponent of square matrix multiplication. ω < 2.81 (1969, Strassen) and ω < 2.79 (1978, Pan): upper bound on ω from the analysis of the rank of a tensor. ω < 2.78 (1979, Bini et al.): analysis of the border rank of a tensor. ω < 2.55 (1981, Schönhage), ω < 2.53 (1981, Pan), ω < 2.52 (1982, Romani), ω < 2.50 (1981, Coppersmith and Winograd): analysis of a tensor by the asymptotic sum inequality. ω < 2.48 (1986, Strassen), ω < 2.376 (1987, Coppersmith and Winograd), ω < 2.374 (2010, Stothers), ω < 2.3729 (2012, Vassilevska Williams), ω < 2.3728639 (2014, Le Gall): analysis of a tensor by the laser method.

22 The Laser Method on a Simpler Example

23 The laser method. Why is this called the laser method? From V. Strassen, Algebra and Complexity, Proceedings of the First European Congress of Mathematics (Ref. [27]). Variants (improvements) of the laser method give the last bounds in the table: ω < 2.48 (Strassen 1986), ω < 2.376 (Coppersmith and Winograd 1987), ω < 2.374 (Stothers 2010), ω < 2.3729 (Vassilevska Williams 2012), ω < 2.3728639 (Le Gall 2014).

24 The first CW construction. Let q be a positive integer. Consider three vector spaces U, V and W of dimension q + 1 over F: U = span{x_0, ..., x_q}, V = span{y_0, ..., y_q}, W = span{z_0, ..., z_q}. Coppersmith and Winograd (1987) introduced the following tensor over (U, V, W): T_easy = T_easy^011 + T_easy^101 + T_easy^110, where
T_easy^011 = Σ_{i=1}^q x_0 ⊗ y_i ⊗ z_i ≅ ⟨1, 1, q⟩ (1 x 1 matrix by 1 x q matrix)
T_easy^101 = Σ_{i=1}^q x_i ⊗ y_0 ⊗ z_i ≅ ⟨q, 1, 1⟩ (q x 1 matrix by 1 x 1 matrix)
T_easy^110 = Σ_{i=1}^q x_i ⊗ y_i ⊗ z_0 ≅ ⟨1, q, 1⟩ (1 x q matrix by q x 1 matrix).

25 The first CW construction. U = span{x_0, ..., x_q}, V = span{y_0, ..., y_q}, W = span{z_0, ..., z_q}. Decompose U = U_0 ⊕ U_1, where U_0 = span{x_0} and U_1 = span{x_1, ..., x_q}; V = V_0 ⊕ V_1, where V_0 = span{y_0} and V_1 = span{y_1, ..., y_q}; W = W_0 ⊕ W_1, where W_0 = span{z_0} and W_1 = span{z_1, ..., z_q}. Then T_easy = T_easy^011 + T_easy^101 + T_easy^110, where T_easy^011 is a tensor over (U_0, V_1, W_1), T_easy^101 a tensor over (U_1, V_0, W_1), and T_easy^110 a tensor over (U_1, V_1, W_0). This is not a direct sum.

26 The first CW construction. T_easy = T_easy^011 + T_easy^101 + T_easy^110, with R(T_easy) ≤ q + 2 (actually R(T_easy) = q + 2). Since the sum is not direct, we cannot use the asymptotic sum inequality. Consider T_easy^⊗2 = (T_easy^011 + T_easy^101 + T_easy^110) ⊗ (T_easy^011 + T_easy^101 + T_easy^110) = T_easy^011 ⊗ T_easy^011 + T_easy^011 ⊗ T_easy^101 + ... + T_easy^110 ⊗ T_easy^110 (9 terms), and more generally T_easy^⊗N = T_easy^011 ⊗ ... ⊗ T_easy^011 + ... + T_easy^110 ⊗ ... ⊗ T_easy^110 (3^N terms). Note: R(T_easy^⊗N) = (q + 1)^(N + o(N)) would imply ω = 2. Coppersmith and Winograd showed how to select (3/2^(2/3))^((1 - o(1))N) terms that do not share variables (i.e., form a direct sum) by zeroing variables (i.e., without increasing the rank).

27 The first CW construction: Analysis. H(1/3, 2/3) = (1/3) log 3 + (2/3) log(3/2) = log(3/2^(2/3)) (entropy). Theorem [Coppersmith and Winograd 87]. The tensor T_easy^⊗N can be converted, by zeroing variables (i.e., without increasing the rank), into a direct sum of exp((H(1/3, 2/3) - o(1)) N) = (3/2^(2/3))^((1 - o(1))N) terms, each containing N/3 copies of T_easy^011, N/3 copies of T_easy^101 and N/3 copies of T_easy^110.

28 The first CW construction: Analysis. Each term in the direct sum is isomorphic to (T_easy^011)^⊗N/3 ⊗ (T_easy^101)^⊗N/3 ⊗ (T_easy^110)^⊗N/3 = ⟨q^(N/3), q^(N/3), q^(N/3)⟩. By the asymptotic sum inequality (general form) [Schönhage 1981], the consequence is (3/2^(2/3))^((1 - o(1))N) x q^(ωN/3) ≤ R(T_easy^⊗N) ≤ R(T_easy)^N = (q + 2)^N, hence (3/2^(2/3)) q^(ω/3) ≤ q + 2, which gives ω ≤ 2.404 for q = 8.
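The final inequality can be solved for ω and optimized over q numerically; a short sketch (the function name is ours):

```python
import math

def easy_bound(q):
    """Upper bound on omega from the first CW construction:
    (3 / 2**(2/3)) * q**(w/3) <= q + 2
    =>  w <= 3 * log((q + 2) * 2**(2/3) / 3) / log(q)."""
    return 3 * math.log((q + 2) * 2 ** (2 / 3) / 3) / math.log(q)

# The bound is minimized at q = 8, giving omega <= 2.404.
best_q = min(range(2, 30), key=easy_bound)
bound = easy_bound(best_q)
```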

29 Idea behind the proof. T_easy = T_easy^011 + T_easy^101 + T_easy^110 with T_easy^011 = Σ_{i=1}^q x_0 ⊗ y_i ⊗ z_i, T_easy^101 = Σ_{i=1}^q x_i ⊗ y_0 ⊗ z_i, T_easy^110 = Σ_{i=1}^q x_i ⊗ y_i ⊗ z_0. Consider N = 2: T_easy^⊗2 = (T_easy^011 + T_easy^101 + T_easy^110) ⊗ (T_easy^011 + T_easy^101 + T_easy^110) = T_easy^011 ⊗ T_easy^011 + T_easy^011 ⊗ T_easy^101 + ... + T_easy^110 ⊗ T_easy^110 (9 terms). For instance, T_easy^011 ⊗ T_easy^101 = Σ_{i,i'=1}^q (x_0 ⊗ x_i') ⊗ (y_i ⊗ y_0) ⊗ (z_i ⊗ z_i'), a tensor over (U_0 ⊗ U_1) ⊗ (V_1 ⊗ V_0) ⊗ (W_1 ⊗ W_1), with label (01, 10, 11).

30 Idea behind the proof. T_easy^011 ⊗ T_easy^011 = Σ_{i,i'=1}^q (x_0 ⊗ x_0) ⊗ (y_i ⊗ y_i') ⊗ (z_i ⊗ z_i'), a tensor over (U_0 ⊗ U_0) ⊗ (V_1 ⊗ V_1) ⊗ (W_1 ⊗ W_1). In T_easy^⊗2 (9 terms) the terms share variables. We can remove this term, e.g., by setting all variables in V_1 ⊗ V_1 to zero (note: this removes more than one term!), and remove further terms by setting all variables in suitable blocks (such as U_1 ⊗ U_0 and V_0 ⊗ V_0) to zero, until the remaining terms no longer share variables.

31 Idea behind the proof. Conclusion: we can convert T_easy^⊗2 (a sum of 9 terms) into a direct sum of 2 terms. NEXT STEP: consider T_easy^⊗N = T_easy^011 ⊗ ... ⊗ T_easy^011 + ... + T_easy^110 ⊗ ... ⊗ T_easy^110 (3^N terms); each term is indexed by a label in {011, 101, 110}^N.

32 Idea behind the proof. Theorem [Coppersmith and Winograd 87]. The tensor T_easy^⊗N can be converted into a direct sum of exp((H(1/3, 2/3) - o(1)) N) = (3/2^(2/3))^((1 - o(1))N) terms, each containing N/3 copies of T_easy^011, N/3 copies of T_easy^101 and N/3 copies of T_easy^110. We can obtain (3/2^(2/3))^((1 - o(1))N) labels in {011, 101, 110}^N such that, in each of the three coordinates (each a 0/1-string with #0 = N/3 and #1 = 2N/3), the selected labels are pairwise distinct; the number of possibilities for each coordinate string is roughly exp(H(1/3, 2/3) N). The proof of this theorem is based on a complicated construction using the existence of dense sets of integers with no three-term arithmetic progression.

33 Reinterpretation and General Formulation of the Laser Method

34 The laser method: general formulation. For any tensor T, any N ≥ 1 and any ρ ∈ [2, 3], define V_{ρ,N}(T) as the maximum of Σ_{i=1}^k (m_i n_i p_i)^(ρ/3) over all restrictions of T^⊗N isomorphic to ⊕_{i=1}^k ⟨m_i, n_i, p_i⟩, and V_ρ(T) = lim_{N→∞} V_{ρ,N}(T)^(1/N): the value of T. (This is the definition for symmetric tensors; otherwise we use V_ρ(T) = V_ρ(T ⊗ T' ⊗ T'')^(1/3), where T' and T'' denote the cyclic rotations of T.) Properties: V_ρ(⟨m, n, p⟩) = (mnp)^(ρ/3); V_ρ(T) is an increasing function of ρ; V_ρ(T ⊕ T') ≥ V_ρ(T) + V_ρ(T'); V_ρ(T ⊗ T') ≥ V_ρ(T) x V_ρ(T'). Compare with the asymptotic sum inequality (general form) [Schönhage 1981]: Σ_{i=1}^k (m_i n_i p_i)^(ω/3) ≤ R̲(⊕_{i=1}^k ⟨m_i, n_i, p_i⟩).

35 Example: The first CW construction. Recall V_{ρ,N}(T) is the maximum of Σ_i (m_i n_i p_i)^(ρ/3) over all restrictions of T^⊗N isomorphic to ⊕_i ⟨m_i, n_i, p_i⟩, and V_ρ(T) = lim_N V_{ρ,N}(T)^(1/N). By the theorem of Coppersmith and Winograd, T_easy^⊗N can be converted into a direct sum of (3/2^(2/3))^((1 - o(1))N) terms, each isomorphic to ⟨q^(N/3), q^(N/3), q^(N/3)⟩. Hence V_{ρ,N}(T_easy) ≥ (3/2^(2/3))^((1 - o(1))N) x q^(ρN/3), so V_ρ(T_easy) ≥ (3/2^(2/3)) q^(ρ/3).

36 The laser method: general formulation. For instance, V_ρ(⟨m, n, p⟩) = (mnp)^(ρ/3). Theorem (the asymptotic sum inequality, general form) [Schönhage 1981]: Σ_{i=1}^k (m_i n_i p_i)^(ω/3) ≤ R̲(⊕_{i=1}^k ⟨m_i, n_i, p_i⟩). Theorem (simple generalization of the asymptotic sum inequality): V_ω(T) ≤ R(T).

37 The laser method: general formulation. Consider three vector spaces U, V and W over F; a tensor T over (U, V, W) is an element of U ⊗ V ⊗ W. Assume that U, V and W are decomposed as U = ⊕_{i∈I} U_i, V = ⊕_{j∈J} V_j, W = ⊕_{k∈K} W_k for some I, J, K ⊆ Z. The tensor T is a partitioned tensor (with respect to this decomposition) if it can be written as T = Σ_{(i,j,k)∈IxJxK} T_ijk, where T_ijk ∈ U_i ⊗ V_j ⊗ W_k for each (i, j, k) ∈ I x J x K. Support of the tensor: supp(T) = {(i, j, k) ∈ I x J x K | T_ijk ≠ 0}; each non-zero T_ijk is called a component of T. We say that the tensor is tight if there exists some integer d such that i + j + k = d for all (i, j, k) ∈ supp(T).

38 Example: The first CW construction. U = U_0 ⊕ U_1 with U_0 = span{x_0} and U_1 = span{x_1, ..., x_q}; V = V_0 ⊕ V_1 with V_0 = span{y_0} and V_1 = span{y_1, ..., y_q}; W = W_0 ⊕ W_1 with W_0 = span{z_0} and W_1 = span{z_1, ..., z_q}; I = J = K = {0, 1}. T_easy = T_easy^011 + T_easy^101 + T_easy^110, where T_easy^011 = Σ_{i=1}^q x_0 ⊗ y_i ⊗ z_i is a tensor over (U_0, V_1, W_1), T_easy^101 = Σ_{i=1}^q x_i ⊗ y_0 ⊗ z_i a tensor over (U_1, V_0, W_1), and T_easy^110 = Σ_{i=1}^q x_i ⊗ y_i ⊗ z_0 a tensor over (U_1, V_1, W_0). supp(T_easy) = {(0, 1, 1), (1, 0, 1), (1, 1, 0)}; it is tight, since i + j + k = 2 for all (i, j, k) ∈ supp(T_easy).

39 The laser method: general formulation. Main Theorem [LG 14] (reinterpretation of prior works). For any tight partitioned tensor T, any probability distribution P over supp(T), and any ρ ∈ [2, 3], we have log(V_ρ(T)) ≥ min_{ℓ∈{1,2,3}} H(P_ℓ) + Σ_{(i,j,k)∈supp(T)} P(i, j, k) log(V_ρ(T_ijk)) - Γ(P). H: entropy; P_ℓ: projection of P along the ℓ-th coordinate (= marginal distribution); Γ(P): to be defined later (zero in the case of simple tensors). Conclusion: we can compute a lower bound on the value of T if we know a lower bound on the value of each component; we can then obtain an upper bound on ω via V_ω(T) ≤ R(T). Concretely, we use V_ω(T) ≤ R(T) and do a binary search on ρ.
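The binary search can be made concrete for T_easy: V_ρ is nondecreasing in ρ and V_ω(T) ≤ R(T), so the smallest ρ at which the lower bound on V_ρ(T) reaches R(T) is an upper bound on ω. A sketch using the lower bound log2 V_ρ(T_easy) ≥ H(1/3, 2/3) + (ρ/3) log2 q derived on the following slides (the helper names are ours):

```python
import math

def h(*p):
    """Shannon entropy (base 2) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def log_value_lower_bound(rho, q):
    """Lower bound on log2 V_rho(T_easy) from the Main Theorem
    with the uniform distribution P (Gamma(P) = 0)."""
    return h(1 / 3, 2 / 3) + (rho / 3) * math.log2(q)

def easy_omega_bound(q, tol=1e-9):
    """Binary search for the smallest rho in [2, 3] with
    V_rho(T_easy) >= R(T_easy) = q + 2; this rho upper-bounds omega."""
    lo, hi = 2.0, 3.0
    target = math.log2(q + 2)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if log_value_lower_bound(mid, q) >= target:
            hi = mid
        else:
            lo = mid
    return hi
```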

40 Example: The first CW construction. supp(T_easy) = {(0, 1, 1), (1, 0, 1), (1, 1, 0)}, with V_ρ(T_easy^011) = V_ρ(⟨1, 1, q⟩) = q^(ρ/3), V_ρ(T_easy^101) = V_ρ(⟨q, 1, 1⟩) = q^(ρ/3), V_ρ(T_easy^110) = V_ρ(⟨1, q, 1⟩) = q^(ρ/3). Take P(0, 1, 1) = P(1, 0, 1) = P(1, 1, 0) = 1/3; then P_1(0) = 1/3, P_1(1) = 2/3, P_2 = P_3 = P_1, and Γ(P) = 0. The Main Theorem gives log(V_ρ(T_easy)) ≥ H(1/3, 2/3) + (1/3) log(q^(ρ/3)) + (1/3) log(q^(ρ/3)) + (1/3) log(q^(ρ/3)).

41 Example: The first CW construction. This matches the bound obtained directly from the theorem of Coppersmith and Winograd: V_{ρ,N}(T_easy) ≥ (3/2^(2/3))^((1 - o(1))N) x q^(ρN/3), i.e., V_ρ(T_easy) ≥ (3/2^(2/3)) q^(ρ/3), which is exactly exp(H(1/3, 2/3)) x q^(ρ/3).

42 The laser method: general formulation. Main Theorem [LG 14]: for any tight partitioned tensor T, any probability distribution P over supp(T), and any ρ ∈ [2, 3], log(V_ρ(T)) ≥ min_{ℓ∈{1,2,3}} H(P_ℓ) + Σ_{(i,j,k)∈supp(T)} P(i, j, k) log(V_ρ(T_ijk)) - Γ(P). Interpretation: the laser method enables us to convert (by zeroing variables) T^⊗N into a direct sum of exp((min_ℓ H(P_ℓ) - Γ(P) - o(1)) N) terms, each isomorphic to ⊗_{(i,j,k)∈supp(T)} (T_ijk)^⊗P(i,j,k)N.

43 The second CW construction. Let q be a positive integer. Consider three vector spaces U, V and W of dimension q + 2 over F: U = span{x_0, ..., x_q, x_{q+1}}, V = span{y_0, ..., y_q, y_{q+1}}, W = span{z_0, ..., z_q, z_{q+1}}. Coppersmith and Winograd (1987) considered the following tensor, with R(T_CW) = q + 2: T_CW = T_easy + T_CW^002 + T_CW^020 + T_CW^200 = T_CW^011 + T_CW^101 + T_CW^110 + T_CW^002 + T_CW^020 + T_CW^200, where T_CW^011 = T_easy^011, T_CW^101 = T_easy^101, T_CW^110 = T_easy^110, and T_CW^002 = x_0 ⊗ y_0 ⊗ z_{q+1} ≅ ⟨1, 1, 1⟩, T_CW^020 = x_0 ⊗ y_{q+1} ⊗ z_0 ≅ ⟨1, 1, 1⟩, T_CW^200 = x_{q+1} ⊗ y_0 ⊗ z_0 ≅ ⟨1, 1, 1⟩.

44 The second CW construction. U = U_0 ⊕ U_1 ⊕ U_2, where U_0 = span{x_0}, U_1 = span{x_1, ..., x_q} and U_2 = span{x_{q+1}}; V = V_0 ⊕ V_1 ⊕ V_2, where V_0 = span{y_0}, V_1 = span{y_1, ..., y_q} and V_2 = span{y_{q+1}}; W = W_0 ⊕ W_1 ⊕ W_2, where W_0 = span{z_0}, W_1 = span{z_1, ..., z_q} and W_2 = span{z_{q+1}}. T_CW = T_CW^011 + T_CW^101 + T_CW^110 + T_CW^002 + T_CW^020 + T_CW^200; this is not a direct sum. Components: T_CW^011 over (U_0, V_1, W_1); T_CW^101 over (U_1, V_0, W_1); T_CW^110 over (U_1, V_1, W_0); T_CW^002 = x_0 ⊗ y_0 ⊗ z_{q+1} ≅ ⟨1, 1, 1⟩ over (U_0, V_0, W_2); T_CW^020 = x_0 ⊗ y_{q+1} ⊗ z_0 ≅ ⟨1, 1, 1⟩ over (U_0, V_2, W_0); T_CW^200 = x_{q+1} ⊗ y_0 ⊗ z_0 ≅ ⟨1, 1, 1⟩ over (U_2, V_0, W_0).

45 The second CW construction: laser method. supp(T_CW) = {(0, 1, 1), (1, 0, 1), (1, 1, 0), (0, 0, 2), (0, 2, 0), (2, 0, 0)}, with V_ρ(T_CW^002) = V_ρ(T_CW^020) = V_ρ(T_CW^200) = 1 and V_ρ(T_CW^011) = V_ρ(T_CW^101) = V_ρ(T_CW^110) = q^(ρ/3). Take P(0, 1, 1) = P(1, 0, 1) = P(1, 1, 0) = α and P(0, 0, 2) = P(0, 2, 0) = P(2, 0, 0) = 1/3 - α, with 0 ≤ α ≤ 1/3. Then P_1(0) = α + 2(1/3 - α) = 2/3 - α, P_1(1) = 2α, P_1(2) = 1/3 - α.

46 The second CW construction: laser method. With the distribution above (and Γ(P) = 0), the Main Theorem [LG 14] gives log(V_ρ(T_CW)) ≥ H(2/3 - α, 2α, 1/3 - α) + 3α log(q^(ρ/3)). Combined with V_ω(T_CW) ≤ R(T_CW) = q + 2, this gives ω < 2.388 for q = 6 and α ≈ 0.317.
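The optimization over α and the binary search on ρ can be reproduced numerically; a sketch in which a simple grid over α stands in for the convex optimization discussed later (helper names are ours):

```python
import math

def h(*p):
    """Shannon entropy (base 2) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def best_log_value(rho, q, steps=1000):
    """max over alpha in [0, 1/3] of the Main Theorem lower bound
    H(2/3 - a, 2a, 1/3 - a) + a * rho * log2(q)   (Gamma = 0 here)."""
    best = -1.0
    for t in range(steps + 1):
        a = t / (3 * steps)  # a ranges over [0, 1/3]
        val = h(2 / 3 - a, 2 * a, 1 / 3 - a) + a * rho * math.log2(q)
        best = max(best, val)
    return best

def cw_omega_bound(q, tol=1e-7):
    """Smallest rho with (lower bound on) V_rho(T_CW) >= R(T_CW) = q + 2."""
    lo, hi = 2.0, 3.0
    target = math.log2(q + 2)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if best_log_value(mid, q) >= target:
            hi = mid
        else:
            lo = mid
    return hi
```

For q = 6 this recovers the ω < 2.388 bound (the optimal α gives roughly 2.387).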

47 Analysis of the second construction. Analysis of the m-th power of the CW tensor: m = 1: ω < 2.3872, CW (1987); m = 2: ω < 2.3755, CW (1987); m = 4: ω < 2.3737, Stothers (2010); m = 8: ω < 2.3729, Vassilevska Williams (2012); m = 16: ω < 2.3728640, Le Gall (2014); m = 32: ω < 2.3728639, Le Gall (2014). (The number of variables in the associated optimization problem grows with m.)

48 Analysis of the second power. T_CW^⊗2 = (T_CW^011 + T_CW^101 + T_CW^110 + T_CW^002 + T_CW^020 + T_CW^200)^⊗2 (36 terms), with R(T_CW^⊗2) ≤ (q + 2)^2. Idea: rewrite it as a (non-direct) sum of 15 terms by regrouping (MERGING) terms: T_CW^⊗2 = T^400 + T^040 + T^004 + T^310 + T^301 + T^130 + T^103 + T^031 + T^013 + T^220 + T^202 + T^022 + T^211 + T^121 + T^112, where T^400 = T_CW^200 ⊗ T_CW^200, T^310 = T_CW^200 ⊗ T_CW^110 + T_CW^110 ⊗ T_CW^200, T^220 = T_CW^200 ⊗ T_CW^020 + T_CW^020 ⊗ T_CW^200 + T_CW^110 ⊗ T_CW^110, T^211 = T_CW^200 ⊗ T_CW^011 + T_CW^011 ⊗ T_CW^200 + T_CW^110 ⊗ T_CW^101 + T_CW^101 ⊗ T_CW^110, and the other 11 terms are obtained by permuting the variables (e.g., T^040 = T_CW^020 ⊗ T_CW^020).

49 Analysis of the second power. supp(T_CW^⊗2) = {(4, 0, 0), ..., (0, 0, 4), (3, 1, 0), ..., (0, 1, 3), (2, 2, 0), ..., (0, 2, 2), (2, 1, 1), ..., (1, 1, 2)}: 3 permutations of (4, 0, 0), 6 permutations of (3, 1, 0), 3 permutations of (2, 2, 0), 3 permutations of (2, 1, 1). Lower bounds on the values of each component can be computed (recursively). Choice of distribution (4 - 1 = 3 parameters, one value per permutation class): P(4, 0, 0) = ... = P(0, 0, 4), P(3, 1, 0) = ... = P(0, 1, 3), P(2, 2, 0) = ... = P(0, 2, 2), P(2, 1, 1) = ... = P(1, 1, 2).

50 Analysis of the second power. With this choice of distribution (supported on the 15 triples above and constant on each permutation class), we have Γ(P) = 0.

51 Analysis of the second power. Applying the Main Theorem [LG 14] to this distribution and combining with V_ω(T_CW^⊗2) ≤ R(T_CW^⊗2) ≤ (q + 2)^2 gives: Theorem. ω < 2.3755 for q = 6 and a suitable choice of the parameters (two of them ≈ 0.0002 and ≈ 0.0125).

52 Analysis of the second power. In the table of analyses of the m-th power of the CW tensor, this corresponds to the row m = 2: ω < 2.3755, CW (1987). What about the third power (using similar merging schemes)? This does not give any improvement.

53 Analysis of the fourth power. T_CW^⊗4 = (T_CW^011 + T_CW^101 + T_CW^110 + T_CW^002 + T_CW^020 + T_CW^200)^⊗4 (6^4 terms), with R(T_CW^⊗4) ≤ (q + 2)^4. Idea: rewrite it as a (non-direct) sum of a smaller number of terms by regrouping terms: T_CW^⊗4 = T^800 + T^710 + T^620 + T^530 + T^440 + T^611 + T^521 + T^431 + T^422 + T^332 + permutations of these terms (T^080, T^008, T^701, T^107, T^170, T^017, T^071, ...). This gives 10 - 1 = 9 parameters for the probability distribution; this time Γ(P) ≠ 0.

54 The laser method: general formulation. Main Theorem [LG 14]. For any tight partitioned tensor T, any probability distribution P over supp(T), and any ρ ∈ [2, 3], we have log(V_ρ(T)) ≥ min_{ℓ∈{1,2,3}} H(P_ℓ) + Σ_{(i,j,k)∈supp(T)} P(i, j, k) log(V_ρ(T_ijk)) - Γ(P). H: entropy; P_ℓ: projection of P along the ℓ-th coordinate (= marginal distribution); Γ(P): to be defined later (zero in the case of simple tensors).

55 The laser method: general formulation. In the Main Theorem [LG 14], Γ(P) = max[H(Q)] - H(P), where the max is over all distributions Q over supp(T) such that P_1 = Q_1, P_2 = Q_2 and P_3 = Q_3. When the structure of the support is simple, we typically have (P_1 = Q_1, P_2 = Q_2, P_3 = Q_3 implies P = Q) and thus Γ(P) = 0.
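The claim that simple supports force Γ(P) = 0 can be checked directly for supp(T_easy): the three marginal constraints already pin down Q. A small exhaustive check (the sample distribution P is a hypothetical choice, and the denominator-20 grid is just a finite search space):

```python
from fractions import Fraction
from itertools import product

support = [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
P = {(0, 1, 1): Fraction(1, 2), (1, 0, 1): Fraction(1, 4), (1, 1, 0): Fraction(1, 4)}

def marginals(D):
    """The three marginal distributions of a distribution on triples."""
    out = [{}, {}, {}]
    for trip, p in D.items():
        for c in range(3):
            out[c][trip[c]] = out[c].get(trip[c], Fraction(0)) + p
    return out

# Enumerate all distributions Q on the support with denominator 20 and
# keep those whose three marginals agree with the marginals of P.
n = 20
matching = []
for a, b in product(range(n + 1), repeat=2):
    c = n - a - b
    if c < 0:
        continue
    Q = {(0, 1, 1): Fraction(a, n), (1, 0, 1): Fraction(b, n), (1, 1, 0): Fraction(c, n)}
    if marginals(Q) == marginals(P):
        matching.append(Q)
# Only Q = P matches: the marginals determine Q, hence Gamma(P) = 0.
```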

56 The laser method: general formulation. Interpretation: the laser method enables us to convert (by zeroing variables) T^⊗N into a direct sum of exp((min_ℓ H(P_ℓ) - Γ(P) - o(1)) N) terms, each isomorphic to ⊗_{(i,j,k)∈supp(T)} (T_ijk)^⊗P(i,j,k)N (the terms of type P). We can control only the choice of the marginal distributions P_1, P_2 and P_3; what we obtain is a (non-direct) sum of all type-Q terms, over the distributions Q with these marginals. The most frequent terms are those with Q maximizing H(Q); the fact that type-P terms are not necessarily the most frequent introduces the penalty term -Γ(P), where Γ(P) = max[H(Q)] - H(P) over all distributions Q over supp(T) such that P_1 = Q_1, P_2 = Q_2 and P_3 = Q_3.

57 The laser method: computing the bound. In the Main Theorem, the entropy term is concave in P and Σ P(i, j, k) log(V_ρ(T_ijk)) is linear in P (assuming a lower bound on each V_ρ(T_ijk) is known). How to find the best distribution for a given ρ? If Γ(P) = 0 for all distributions P, finding the best distribution can be done efficiently (numerically) using convex optimization: maximization of a concave function under linear constraints.

58 The laser method: computing the bound. In general, Γ(P) = max[H(Q)] - H(P), where the max is over all distributions Q over supp(T) such that P_1 = Q_1, P_2 = Q_2 and P_3 = Q_3, so the optimization problem is no longer concave: hard to solve, but it can be done up to the 4th power of the CW tensor [Stothers 10].

59 The laser method: computing the bound. In the table of analyses of the m-th power of the CW tensor, this approach covers the rows up to m = 4: ω < 2.3737, Stothers (2010).

60 The laser method: computing the bound. Simplification: restrict the search to the set of distributions P such that Γ(P) = 0. Still hard to solve, but this can be done up to the 8th power of the CW tensor [Vassilevska Williams 12].

61 The laser method: computing the bound. This covers the rows up to m = 8: ω < 2.3729, Vassilevska Williams (2012).


63 The laser method: general formulation

Main Theorem [LG '14]: For any tight partitioned tensor T, any probability distribution P over supp(T), and any ρ ∈ [2,3], we have

  log(V_ρ(T)) ≥ H(P) + Σ_{(i,j,k) ∈ supp(T)} P(i,j,k) · log(V_ρ(T_ijk)) − Γ(P),

where Γ(P) = max_Q [H(Q)] − H(P), the max being over all distributions Q over supp(T) such that Q_1 = P_1, Q_2 = P_2 and Q_3 = P_3.

Call f(P) the Γ-free part of the right-hand side:

  f(P) = H(P) + Σ_{(i,j,k) ∈ supp(T)} P(i,j,k) · log(V_ρ(T_ijk)).

Efficient method [LG '14] to find a solution close to the optimal one:

64 The laser method: general formulation

Efficient method [LG '14] to find a solution close to the optimal one, where f(P) = H(P) + Σ_{(i,j,k) ∈ supp(T)} P(i,j,k) · log(V_ρ(T_ijk)):

1. Find a distribution P that maximizes f(P), and call it P̂ (concave objective function, linear constraints).
2. Find the distribution Q that maximizes H(Q) under the constraints Q_1 = P̂_1, Q_2 = P̂_2 and Q_3 = P̂_3, and call it Q̂ (concave objective function, linear constraints).
3. Output f(Q̂).

Since Γ(Q̂) = 0, the Main Theorem gives log(V_ρ(T)) ≥ f(Q̂).
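The three steps above can be sketched end-to-end on a toy instance. Everything below is invented for illustration (the support and the component values log V_ρ(T_ijk) are made-up numbers, not those of a CW power), and both concave maximizations are handed to scipy's SLSQP solver:

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy data: support = {0,1}^3, with made-up values log V_rho(T_ijk).
support = list(itertools.product([0, 1], repeat=3))
logV = {t: 1.0 if t in ((0, 0, 0), (1, 1, 1)) else 0.2 for t in support}
n = len(support)

def entropy(q):
    q = np.clip(q, 1e-12, 1.0)
    return float(-np.sum(q * np.log2(q)))

def f(p):
    """The Gamma-free part of the bound: H(P) + sum_t P(t) log V_rho(T_t)."""
    return entropy(p) + float(sum(pi * logV[t] for pi, t in zip(p, support)))

def marginal(q, axis, value):
    return float(sum(qi for qi, t in zip(q, support) if t[axis] == value))

simplex = [{"type": "eq", "fun": lambda q: np.sum(q) - 1.0}]
bounds = [(0.0, 1.0)] * n

# Step 1: maximize the concave objective f over probability distributions P.
res1 = minimize(lambda p: -f(p), x0=np.full(n, 1 / n), bounds=bounds,
                constraints=simplex, method="SLSQP")
P_hat = res1.x

# Step 2: maximize H(Q) subject to Q having the same three marginals as P_hat.
cons = simplex + [
    {"type": "eq",
     "fun": lambda q, a=ax, t=marginal(P_hat, ax, 0): marginal(q, a, 0) - t}
    for ax in range(3)]
res2 = minimize(lambda q: -entropy(q), x0=P_hat, bounds=bounds,
                constraints=cons, method="SLSQP")
Q_hat = res2.x

# Step 3: output f(Q_hat); since Gamma(Q_hat) = 0 by construction, the
# Main Theorem yields log V_rho(T) >= f(Q_hat).
print(round(f(Q_hat), 3))
```

Step 2 keeps the marginals of P̂ while raising the entropy as far as possible, so Γ(Q̂) = 0 and f(Q̂) is a valid lower bound on log V_ρ(T); it stays close to the unconstrained maximum f(P̂) found in step 1.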

65 Analysis of powers 16 and 32

Analyzing the 16th power of the CW tensor gives ω < 2.3728640, and the 32nd power gives ω < 2.3728639 [Le Gall (2014)]. The solutions to the corresponding optimization problems (which involve a large number of variables) were obtained numerically by convex optimization.

66 Conclusion

We constructed a time-efficient implementation of the laser method: given any tight partitioned tensor for which (lower bounds on) the values of all components are known, it outputs an upper bound on ω, via convex optimization, in time polynomial in the size of the tensor.

We applied it to study higher powers of the basic CW tensor, obtaining ω < 2.3728640 from the 16th power and ω < 2.3728639 from the 32nd power.

Recent result [Ambainis, Filmus, LG '14]: studying higher powers of the CW tensor with the same laser-method-based approach cannot give an upper bound better than 2.3725.


More information

Fast Matrix Multiplication and Symbolic Computation

Fast Matrix Multiplication and Symbolic Computation Fast Matrix Multiplication and Symbolic Computation Jean-Guillaume Dumas, Victor Pan To cite this version: Jean-Guillaume Dumas, Victor Pan. Fast Matrix Multiplication and Symbolic Computation. 2016.

More information

Copositive matrices and periodic dynamical systems

Copositive matrices and periodic dynamical systems Extreme copositive matrices and periodic dynamical systems Weierstrass Institute (WIAS), Berlin Optimization without borders Dedicated to Yuri Nesterovs 60th birthday February 11, 2016 and periodic dynamical

More information

Tensor algebras and subproduct systems arising from stochastic matrices

Tensor algebras and subproduct systems arising from stochastic matrices Tensor algebras and subproduct systems arising from stochastic matrices Daniel Markiewicz (Ben-Gurion Univ. of the Negev) Joint Work with Adam Dor-On (Univ. of Waterloo) OAOT 2014 at ISI, Bangalore Daniel

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:???

MIT Algebraic techniques and semidefinite optimization May 9, Lecture 21. Lecturer: Pablo A. Parrilo Scribe:??? MIT 6.972 Algebraic techniques and semidefinite optimization May 9, 2006 Lecture 2 Lecturer: Pablo A. Parrilo Scribe:??? In this lecture we study techniques to exploit the symmetry that can be present

More information

Rings. EE 387, Notes 7, Handout #10

Rings. EE 387, Notes 7, Handout #10 Rings EE 387, Notes 7, Handout #10 Definition: A ring is a set R with binary operations, + and, that satisfy the following axioms: 1. (R, +) is a commutative group (five axioms) 2. Associative law for

More information

A ZERO ENTROPY T SUCH THAT THE [T,ID] ENDOMORPHISM IS NONSTANDARD

A ZERO ENTROPY T SUCH THAT THE [T,ID] ENDOMORPHISM IS NONSTANDARD A ZERO ENTROPY T SUCH THAT THE [T,ID] ENDOMORPHISM IS NONSTANDARD CHRISTOPHER HOFFMAN Abstract. We present an example of an ergodic transformation T, a variant of a zero entropy non loosely Bernoulli map

More information