Fast Matrix Product Algorithms: From Theory To Practice


Thomas Sibut-Pinote (Inria, École Polytechnique, France)
Éric Schost (University of Waterloo, ON, Canada)
Specfun Seminar, February 8th

Outline: Introduction and Definitions; The τ-theorem; Pan's aggregation tables and the τ-theorem; Software Implementation; Conclusion.

Motivation
Complexity of matrix product ≈ complexity of linear algebra;
ω = inf{θ : it takes n^θ operations to multiply in M_n(K)} [2, 3];
Strassen '69: ω < 2.81 (used in practice);
Le Gall '14: ω < 2.3728639 (theoretical).
"Despite their asymptotic performance, none of the algorithms of this section seems likely to be implemented on a machine in the near future." (Abdeljaoued & Lombardi, 2003)
Can we bridge the gap a little?

Problem Statement
Let ⟨m, n, p⟩ denote the bilinear map
  M_{m,n}(K) × M_{n,p}(K) → M_{m,p}(K), (A, B) ↦ A·B.
Goal: determine the arithmetic complexity of ⟨m, n, p⟩.
Known: the naive algorithm takes mnp multiplications:
  for all i ∈ {1, …, m}, j ∈ {1, …, p}: [AB]_{i,j} = Σ_{k=1}^{n} a_{i,k} b_{k,j}.
Can we do better?
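The naive algorithm above can be transcribed directly (an illustrative sketch, not the authors' code):

```python
# Naive matrix product: mnp scalar multiplications, straight from the
# definition [AB]_{i,j} = sum_k a_{i,k} * b_{k,j}.
def matmul_naive(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]
```

For example, `matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]` at a cost of 2·2·2 = 8 multiplications.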

Strassen's Algorithm
Strassen's algorithm computes ⟨2, 2, 2⟩ in 7 multiplications (instead of 2³ = 8):
  α₁ = a_{1,2} − a_{2,2},  β₁ = b_{2,1} + b_{2,2},  p₁ = α₁β₁
  α₂ = a_{2,1} − a_{1,1},  β₂ = b_{1,1} + b_{1,2},  p₂ = α₂β₂
  α₃ = a_{1,1},            β₃ = b_{1,2} − b_{2,2},  p₃ = α₃β₃
  α₄ = a_{2,2},            β₄ = b_{2,1} − b_{1,1},  p₄ = α₄β₄
  α₅ = a_{2,1} + a_{2,2},  β₅ = b_{1,1},            p₅ = α₅β₅
  α₆ = a_{1,1} + a_{1,2},  β₆ = b_{2,2},            p₆ = α₆β₆
  α₇ = a_{1,1} + a_{2,2},  β₇ = b_{1,1} + b_{2,2},  p₇ = α₇β₇
Then C = (c_{i,j}) with
  c_{1,1} = p₁ + p₄ − p₆ + p₇,  c_{1,2} = p₃ + p₆,
  c_{2,1} = p₄ + p₅,            c_{2,2} = p₂ + p₃ − p₅ + p₇.
Observe: C = p₁γ₁ + p₂γ₂ + p₃γ₃ + p₄γ₄ + p₅γ₅ + p₆γ₆ + p₇γ₇, where (with E_{i,j} the canonical basis)
  γ₁ = E_{1,1}, γ₂ = E_{2,2}, γ₃ = E_{1,2} + E_{2,2}, γ₄ = E_{1,1} + E_{2,1},
  γ₅ = E_{2,1} − E_{2,2}, γ₆ = E_{1,2} − E_{1,1}, γ₇ = E_{1,1} + E_{2,2}.
Tensor notation: Σ_{i=1}^{7} αᵢ ⊗ βᵢ ⊗ γᵢ.
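One level of Strassen's scheme is easy to transcribe (a sketch for 2×2 scalar inputs; the entries c_{i,j} are assembled so that C = AB in the usual row-times-column convention):

```python
# One level of Strassen's algorithm on 2x2 matrices: 7 multiplications
# p1..p7 and 18 additions/subtractions.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    p1 = (a12 - a22) * (b21 + b22)
    p2 = (a21 - a11) * (b11 + b12)
    p3 = a11 * (b12 - b22)
    p4 = a22 * (b21 - b11)
    p5 = (a21 + a22) * b11
    p6 = (a11 + a12) * b22
    p7 = (a11 + a22) * (b11 + b22)
    return [[p1 + p4 - p6 + p7, p3 + p6],
            [p4 + p5,           p2 + p3 - p5 + p7]]
```

Each call trades one multiplication for extra additions, which pays off once the scalars are themselves matrix blocks.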

Tensors and algorithms
General tensor notation, identified with a bilinear map:
  ⟨m, n, p⟩ = Σ_{i=1}^{m} Σ_{j=1}^{p} Σ_{k=1}^{n} a_{i,k} ⊗ b_{k,j} ⊗ c_{i,j}.
Representing ⟨m, n, p⟩ as Σ_{i=1}^{r} αᵢ ⊗ βᵢ ⊗ γᵢ gives an algorithm.
Example: the elementary tensor (a_{1,2} + a_{3,5}) ⊗ b_{2,4} ⊗ (c_{1,4} + c_{2,4}) reads as the algorithm
  tmp ← (a_{1,2} + a_{3,5}) · b_{2,4}
  c_{1,4} ← c_{1,4} + tmp
  c_{2,4} ← c_{2,4} + tmp

Composition
t ⊗ t′ computes the composition of two tensors. To multiply A of size (mm′, nn′) by B of size (nn′, pp′), decompose A and B into blocks:
  A = (A_{i,j})_{1≤i≤m, 1≤j≤n},  B = (B_{j,k})_{1≤j≤n, 1≤k≤p},
where each A_{i,j} has size (m′, n′) and each B_{j,k} has size (n′, p′).
If t = ⟨m, n, p⟩ and t′ = ⟨m′, n′, p′⟩, then t ⊗ t′ = ⟨mm′, nn′, pp′⟩.
Also set t^{⊗k} = t ⊗ t ⊗ ⋯ ⊗ t (k times) = ⟨m^k, n^k, p^k⟩.
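Iterating the composition t^{⊗k} with t = Strassen's tensor gives the classical recursive algorithm on 2^k × 2^k matrices; a minimal sketch (helper names are mine, not from the talk's implementation):

```python
# Recursive Strassen: apply t^{⊗k} to 2^k x 2^k matrices.  The seven
# products p1..p7 at each level are taken on half-size blocks.
def add(A, B): return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]
def sub(A, B): return [[x - y for x, y in zip(r, s)] for r, s in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    blk = lambda M, i, j: [row[j*h:(j+1)*h] for row in M[i*h:(i+1)*h]]
    a11, a12, a21, a22 = blk(A, 0, 0), blk(A, 0, 1), blk(A, 1, 0), blk(A, 1, 1)
    b11, b12, b21, b22 = blk(B, 0, 0), blk(B, 0, 1), blk(B, 1, 0), blk(B, 1, 1)
    p1 = strassen(sub(a12, a22), add(b21, b22))
    p2 = strassen(sub(a21, a11), add(b11, b12))
    p3 = strassen(a11, sub(b12, b22))
    p4 = strassen(a22, sub(b21, b11))
    p5 = strassen(add(a21, a22), b11)
    p6 = strassen(add(a11, a12), b22)
    p7 = strassen(add(a11, a22), add(b11, b22))
    c11 = add(sub(add(p1, p4), p6), p7)
    c12 = add(p3, p6)
    c21 = add(p4, p5)
    c22 = add(sub(add(p2, p3), p5), p7)
    return ([r1 + r2 for r1, r2 in zip(c11, c12)] +
            [r1 + r2 for r1, r2 in zip(c21, c22)])
```

The multiplication count 7^k (against 8^k for the naive block algorithm) is exactly the rank bound R(t^{⊗k}) ≤ 7^k discussed next.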

Direct Sum of Tensors
t ⊕ t′ computes two independent matrix products in parallel. We will denote s·t for t ⊕ t ⊕ ⋯ ⊕ t (s times).

Rank and ω
Definition (Rank of a Tensor t):
  R(t) := min{r : t can be written as Σ_{i=1}^{r} xᵢ ⊗ yᵢ ⊗ zᵢ}.
R(⟨m, n, p⟩) is the minimal number of multiplications for ⟨m, n, p⟩.
Definition (Linear Algebra Exponent):
  ω := inf{τ : there exists an algorithm to multiply n×n matrices in O(n^τ) additions and multiplications} [2, 3].
Theorem: inf{τ : R(⟨n, n, n⟩) = O(n^τ)} = ω.

Back to Strassen's Algorithm
Theorem (Strassen '69): R(⟨2, 2, 2⟩) ≤ 7, hence ω ≤ log₂(7) ≈ 2.81.
Idea: R(⟨2^k, 2^k, 2^k⟩) ≤ 7^k by induction on k; cut into blocks of size 2^{k−1} and proceed recursively.
Lemma: R(⟨m, n, p⟩) ≤ r ⟹ R(⟨mnp, mnp, mnp⟩) ≤ r³.
Idea: if we can do ⟨m, n, p⟩ in r multiplications, then we can also obtain ⟨n, p, m⟩ and ⟨p, m, n⟩ in r multiplications; then we compose the three.
Theorem: R(⟨m, n, p⟩) ≤ r ⟹ ω ≤ 3 log(r) / log(mnp).
Proof idea: R(⟨mnp, mnp, mnp⟩) ≤ r³; proceed recursively for ⟨(mnp)^k, (mnp)^k, (mnp)^k⟩ just like in the ⟨2, 2, 2⟩ case.
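The bound ω ≤ 3 log(r)/log(mnp) is a one-liner worth playing with (illustrative only):

```python
import math

# From R(<m,n,p>) <= r, the theorem gives omega <= 3*log(r)/log(m*n*p).
def omega_bound(m, n, p, r):
    return 3 * math.log(r) / math.log(m * n * p)
```

Here `omega_bound(2, 2, 2, 7)` is log₂(7) ≈ 2.807, Strassen's exponent, while `omega_bound(2, 2, 2, 8)` recovers 3.0 for the naive algorithm.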

Bini's Approximate Algorithms ('79)
Idea: K → K[ε].
Definition (degenerate rank of a tensor t):
  R̲(t) := min{r : ∃ t(ε) = Σ_{i=1}^{r} uᵢ(ε) ⊗ vᵢ(ε) ⊗ wᵢ(ε) with t(ε) = ε^{q−1} t + ε^q t₁(ε) and q > 0}.
Algorithmically, one can obtain t by computing t(ε) modulo ε^q.
Theorem (Bini '79): R̲(⟨m, n, p⟩) ≤ r ⟹ ω ≤ 3 log(r) / log(mnp).
Consequence: ω < 2.7799.
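The definition is easiest to see on a small standard example rather than Bini's original tensor: the rank-3 tensor t = x₀⊗y₀⊗z₁ + x₀⊗y₁⊗z₀ + x₁⊗y₀⊗z₀ has degenerate rank ≤ 2, witnessed by t(ε) = (x₀+εx₁)⊗(y₀+εy₁)⊗(z₀+εz₁) − x₀⊗y₀⊗z₀ = ε·t + ε²(⋯), i.e. q = 2. A sketch that expands t(ε) and checks this (the dict-based representation is mine):

```python
from collections import defaultdict
from itertools import product

# A tensor over K[eps] maps a basis index (i, j, k) to a polynomial in eps,
# itself a dict {degree: coefficient}.  A linear form is a list of
# (basis_index, eps_degree, coeff) terms.
def rank_one(u, v, w):
    t = defaultdict(lambda: defaultdict(int))
    for (i, di, ci), (j, dj, cj), (k, dk, ck) in product(u, v, w):
        t[(i, j, k)][di + dj + dk] += ci * cj * ck
    return t

def accumulate(t, term, sign=1):
    for ijk, poly in term.items():
        for d, c in poly.items():
            t[ijk][d] += sign * c

# t(eps) = (x0 + eps*x1) ⊗ (y0 + eps*y1) ⊗ (z0 + eps*z1) - x0 ⊗ y0 ⊗ z0:
# only two rank-one products.
t_eps = defaultdict(lambda: defaultdict(int))
accumulate(t_eps, rank_one([(0, 0, 1), (1, 1, 1)],
                           [(0, 0, 1), (1, 1, 1)],
                           [(0, 0, 1), (1, 1, 1)]))
accumulate(t_eps, rank_one([(0, 0, 1)], [(0, 0, 1)], [(0, 0, 1)]), sign=-1)

# Nonzero coefficient of eps^d across all basis indices:
coeff = lambda d: {ijk: p[d] for ijk, p in t_eps.items() if p.get(d, 0) != 0}
```

Here `coeff(0)` is empty and `coeff(1)` is exactly t, so t(ε) ≡ ε·t modulo ε² even though t itself needs 3 products over K.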

The τ-theorem
Theorem (τ-theorem, Schönhage '81): If
  R̲(⊕_{i=1}^{s} ⟨mᵢ, nᵢ, pᵢ⟩) ≤ r  and  Σ_{i=1}^{s} (mᵢ nᵢ pᵢ)^β = r,
then ω ≤ 3β.
Consequence (Schönhage again): ω < 2.55.
Crucial for recent records (including Le Gall '14: ω < 2.3728639).
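The β in the theorem solves Σᵢ (mᵢnᵢpᵢ)^β = r; since the left-hand side is increasing in β, a quick bisection finds it (names are mine):

```python
# Solve sum_i (m_i*n_i*p_i)^beta = r for beta by bisection on [0, 3];
# the tau-theorem then yields omega <= 3*beta.  Assumes s < r < sum (mnp)^3.
def tau_bound(triples, r, iters=100):
    f = lambda b: sum((m * n * p) ** b for m, n, p in triples) - r
    lo, hi = 0.0, 3.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 3 * (lo + hi) / 2
```

With a single term it degenerates to the previous bound: `tau_bound([(2, 2, 2)], 7)` ≈ 2.807; with the sum ⟨4, 1, 4⟩ ⊕ ⟨1, 9, 1⟩ of rank at most 17, as in Pan's aggregation example below, it gives ≈ 2.55.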

Towards a Practical Use of the τ-theorem
Theoretical obstacles:
- The τ-theorem gives great bounds on ω, but it is not seen as a way to build concrete matrix product algorithms (non-effective proofs).
- Degenerate rank relies on the fact that computing with polynomials in ε is asymptotically negligible compared with computing with scalars.
Theoretical contributions:
- A more constructive proof of the τ-theorem (an algorithm);
- Getting rid of ε to use the τ-theorem constructively (for specific kinds of tensors).

Sketch of the constructive proof
  (⊕_{i=1}^{s} ⟨mᵢ, nᵢ, pᵢ⟩)^{⊗k} = ⊕_{μ=(μ₁,…,μ_s), μ₁+⋯+μ_s=k} (k choose μ₁,…,μ_s) · ⟨M, N, P⟩,
i.e. for each μ, a block of (k choose μ₁,…,μ_s) matrix products ⟨M, N, P⟩ in parallel, with
  M = Π_{i=1}^{s} mᵢ^{μᵢ},  N = Π_{i=1}^{s} nᵢ^{μᵢ},  P = Π_{i=1}^{s} pᵢ^{μᵢ}.
Suppose t(ε) is a degeneration of ⊕_{i=1}^{s} ⟨mᵢ, nᵢ, pᵢ⟩. In the same way, t(ε)^{⊗k} decomposes into blocks (k choose μ) · t_μ(ε).
1. Choose one specific t_μ(ε): we can do (k choose μ) ⟨M, N, P⟩ matrix products in parallel effectively (with ε's).
2. Compute ⟨M^l, N^l, P^l⟩ = ⟨M^{l−1}, N^{l−1}, P^{l−1}⟩ ⊗ ⟨M, N, P⟩ recursively like previously, using t_μ to gain operations at each stage.
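The multinomial bookkeeping in this expansion can be checked mechanically (a sketch; function names are mine):

```python
from math import factorial
from itertools import product

# Enumerate the terms of (⊕_i <m_i,n_i,p_i>)^{⊗k}: each mu with sum(mu) = k
# contributes multinomial(k; mu) parallel copies of <M, N, P>.
def expansion(triples, k):
    out = []
    for mu in product(range(k + 1), repeat=len(triples)):
        if sum(mu) != k:
            continue
        mult = factorial(k)
        for e in mu:
            mult //= factorial(e)
        M = N = P = 1
        for (m, n, p), e in zip(triples, mu):
            M *= m ** e
            N *= n ** e
            P *= p ** e
        out.append((mu, mult, (M, N, P)))
    return out
```

For instance `expansion([(2, 1, 2), (1, 3, 1)], 2)` lists one ⟨4, 1, 4⟩, two ⟨2, 3, 2⟩ and one ⟨1, 9, 1⟩, the identity visualized on the next slide; as a sanity check, Σ_μ multiplicity·M·N·P = (Σᵢ mᵢnᵢpᵢ)^k by the multinomial theorem.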

Visualizing the exponent in the proof
  (⟨2, 1, 2⟩ ⊕ ⟨1, 3, 1⟩)^{⊗2} = ⟨4, 1, 4⟩ ⊕ 2·⟨2, 3, 2⟩ ⊕ ⟨1, 9, 1⟩
Figure: direct sum, iterated once, of the two matrix products ⟨2, 1, 2⟩ and ⟨1, 3, 1⟩.

Pan's aggregation tables ('84)
Builds a family of tensors computing independent matrix products to improve ω.
Input: a table with various tensors. Example:
  Σ_{i=0}^{m−1} Σ_{k=0}^{p−1}  x_{i,0} ⊗ y_{0,k} ⊗ ε²z_{k,i}       computes ⟨m, 1, p⟩
  Σ_{i=0}^{m−1} Σ_{k=0}^{p−1}  εu_{0,k,i} ⊗ εv_{k,i,0} ⊗ w_{0,0}   computes ⟨1, (m−1)(p−1), 1⟩
Every row gives a matrix product (actually, some variables need to be adjusted).
Aggregate terms by summing over columns; here:
  t = Σ_{i=0}^{m−1} Σ_{k=0}^{p−1} (x_{i,0} + εu_{0,k,i}) ⊗ (y_{0,k} + εv_{k,i,0}) ⊗ (ε²z_{k,i} + w_{0,0}),
  t = ε²(⟨m, 1, p⟩ ⊕ ⟨1, (m−1)(p−1), 1⟩) + t₂.

Correction term
  t = Σ_{i=0}^{m−1} Σ_{k=0}^{p−1} (x_{i,0} + εu_{0,k,i}) ⊗ (y_{0,k} + εv_{k,i,0}) ⊗ (ε²z_{k,i} + w_{0,0})
To apply the τ-theorem we want
  t = ε²(⟨m, 1, p⟩ ⊕ ⟨1, (m−1)(p−1), 1⟩) + terms of higher degree in ε.
Let us remove the terms of degree 0 and 1, hence the corrected term:
  t₁ = t − (Σ_{i=0}^{m−1} x_{i,0}) ⊗ (Σ_{k=0}^{p−1} y_{0,k}) ⊗ w_{0,0}.
We get the output:
  t₁ = ε²(⟨m, 1, p⟩ ⊕ ⟨1, (m−1)(p−1), 1⟩) + ε³t₂.
Hence R̲(⟨m, 1, p⟩ ⊕ ⟨1, (m−1)(p−1), 1⟩) ≤ mp + 1.
Consequence: ω < 2.55 with m = 4, p = 4.
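Plugging the bound R̲ ≤ mp + 1 into the τ-theorem means solving (mp)^β + ((m−1)(p−1))^β = mp + 1; a small bisection sketch reproduces the slide's figure (code is mine, not the talk's):

```python
# For the corrected Pan tensor, R_bar(<m,1,p> ⊕ <1,(m-1)(p-1),1>) <= mp+1,
# so the tau-theorem gives omega <= 3*beta where
# (m*p)^beta + ((m-1)*(p-1))^beta = m*p + 1.  Left-hand side increasing
# in beta for m, p >= 2, so bisect on [0, 3].
def pan_bound(m, p, iters=100):
    f = lambda b: (m * p) ** b + ((m - 1) * (p - 1)) ** b - (m * p + 1)
    lo, hi = 0.0, 3.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 3 * (lo + hi) / 2
```

Numerically `pan_bound(4, 4)` ≈ 2.548 < 2.55, and scanning small m, p suggests m = p = 4 is essentially the best choice within this family.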

Combined use with the τ-theorem
Every matrix variable appears with the same degree in ε: the tensor is homogeneous.
Theorem (S, S-P '12): Let t be a homogeneous tensor. If we apply the algorithm of the constructive proof of the τ-theorem to t, then for any μ and any k > 1 the resulting tensor t_μ(ε) can be written as
  t_μ(ε) = ε^q t₁,
where t₁ does not contain any ε.
Consequence: set ε = 1 in t_μ(ε) to get an ε-free tensor computing disjoint matrix products. Even better: set ε = 1 in t(ε) before extracting t_μ from t(ε)^{⊗k}.
We can get rid of the ε while still benefiting from the τ-theorem!

Example
Example: 2·⟨4, 9, 4⟩ in 243 multiplications (instead of 2·(4·9·4) = 288), with
  t₁ = Σ_{i=0}^{m−1} Σ_{k=0}^{p−1} (x_{i,0} + εu_{0,k,i}) ⊗ (y_{0,k} + εv_{k,i,0}) ⊗ (ε²z_{k,i} + w_{0,0}) − (Σ_{i=0}^{m−1} x_{i,0}) ⊗ (Σ_{k=0}^{p−1} y_{0,k}) ⊗ w_{0,0},
taking m = p = 4, k = 2 and μ = (1, 1) in the τ-theorem. This gives an ω-equivalent of … .
Better, with the same tensor: μ = (4, 2), k = 6, m = p = 4: 15 ⟨1024, 81, 256⟩ matrix products in … multiplications, ω-equivalent … .
Even better, not built explicitly: μ = (10, 5), ω-equivalent … .

Software implementation in OCaml
- Parse degenerate tensors given as Pan-style aggregation tables;
- Compose tensors symbolically;
- Extract a given coefficient ⟨Π mᵢ^{μᵢ}, Π nᵢ^{μᵢ}, Π pᵢ^{μᵢ}⟩ following the τ-theorem;
- Test tensors by applying them to random matrices;
- Maple code generation, to compute the rank of a subterm of a power of a tensor without actually computing it;
- C++ code generation implementing a given tensor.
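The OCaml sources are not reproduced here; as a hedged sketch of the "compose tensors symbolically" and "test on random matrices" bullets, one can represent a tensor as a list of rank-one triples of linear forms and implement composition as a Kronecker product of the forms (all names and the representation are mine):

```python
# A tensor is a list of rank-one triples (alpha, beta, gamma); each linear
# form maps a matrix index (i, j) to its coefficient.
STRASSEN = [  # <2,2,2> in 7 products, 0-based indices
    ({(0, 1): 1, (1, 1): -1}, {(1, 0): 1, (1, 1): 1},  {(0, 0): 1}),
    ({(1, 0): 1, (0, 0): -1}, {(0, 0): 1, (0, 1): 1},  {(1, 1): 1}),
    ({(0, 0): 1},             {(0, 1): 1, (1, 1): -1}, {(0, 1): 1, (1, 1): 1}),
    ({(1, 1): 1},             {(1, 0): 1, (0, 0): -1}, {(0, 0): 1, (1, 0): 1}),
    ({(1, 0): 1, (1, 1): 1},  {(0, 0): 1},             {(1, 0): 1, (1, 1): -1}),
    ({(0, 0): 1, (0, 1): 1},  {(1, 1): 1},             {(0, 1): 1, (0, 0): -1}),
    ({(0, 0): 1, (1, 1): 1},  {(0, 0): 1, (1, 1): 1},  {(0, 0): 1, (1, 1): 1}),
]

def kron(f, g, s):
    # Kronecker product of linear forms: block index (i, j) of f and inner
    # index (i2, j2) of g merge into (s*i + i2, s*j + j2).
    return {(s * i + i2, s * j + j2): c * c2
            for (i, j), c in f.items() for (i2, j2), c2 in g.items()}

def compose(t, u, s):
    # t ⊗ u: one rank-one triple per pair of triples.
    return [(kron(a, a2, s), kron(b, b2, s), kron(c, c2, s))
            for (a, b, c) in t for (a2, b2, c2) in u]

def apply_tensor(t, A, B):
    # Run the algorithm the tensor encodes: one multiplication per triple.
    C = [[0] * len(B[0]) for _ in range(len(A))]
    for alpha, beta, gamma in t:
        prod = (sum(c * A[i][j] for (i, j), c in alpha.items())
                * sum(c * B[i][j] for (i, j), c in beta.items()))
        for (i, j), c in gamma.items():
            C[i][j] += c * prod
    return C
```

Here `compose(STRASSEN, STRASSEN, 2)` has 49 triples and computes a 4×4 product, which can be validated against the naive algorithm on random matrices, mirroring the testing strategy described above.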

Specifics of doing this in OCaml
- Static typing was very helpful;
- Caveat: some algebraic computations had to be recoded;
- Symbolic computations on algorithms are akin to compilation passes: AST manipulation;
- Some interaction with Maple: generating code to perform some computations;
- Parametricity: export possible to LaTeX, C++, Maple.

How to Use this Result and Implementation; Future Work
Roadmap of use:
- Try out new or modified Pan tables and extract good algorithms;
- Optimize the corresponding code as much as possible (cache behavior, other algorithms at the leaves, ...).
Future work:
- Finish trying out all the Pan tables; this work showed that improvements in ω are not purely theoretical results.
- Adapt other theoretical improvements to build concrete tensors, for instance the laser method?

Thank you for your attention! Any questions?


More information

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x =

Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1. x 2. x = Linear Algebra Review Vectors To begin, let us describe an element of the state space as a point with numerical coordinates, that is x 1 x x = 2. x n Vectors of up to three dimensions are easy to diagram.

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Professor Terje Haukaas University of British Columbia, Vancouver Notation

Professor Terje Haukaas University of British Columbia, Vancouver  Notation Notation This document establishes the notation that is employed throughout these notes. It is intended as a look-up source during the study of other documents and software on this website. As a general

More information

1.3 LECTURE 3. Vector Product

1.3 LECTURE 3. Vector Product 12 CHAPTER 1. VECTOR ALGEBRA Example. Let L be a line x x 1 a = y y 1 b = z z 1 c passing through a point P 1 and parallel to the vector N = a i +b j +c k. The equation of the plane passing through the

More information

Linear algebra. S. Richard

Linear algebra. S. Richard Linear algebra S. Richard Fall Semester 2014 and Spring Semester 2015 2 Contents Introduction 5 0.1 Motivation.................................. 5 1 Geometric setting 7 1.1 The Euclidean space R n..........................

More information

Diagonals and Walks. Alin Bostan 1 Louis Dumont 1 Bruno Salvy 2. November 4, Introduction Diagonals Walks. 1 INRIA, SpecFun.

Diagonals and Walks. Alin Bostan 1 Louis Dumont 1 Bruno Salvy 2. November 4, Introduction Diagonals Walks. 1 INRIA, SpecFun. and Alin Bostan 1 Louis Dumont 1 Bruno Salvy 2 1 INRIA, SpecFun 2 INRIA, AriC November 4, 2014 1/16 Louis Dumont and 2/16 Louis Dumont and Motivations In combinatorics: Lattice walks w i,j : number of

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

Dot Products, Transposes, and Orthogonal Projections

Dot Products, Transposes, and Orthogonal Projections Dot Products, Transposes, and Orthogonal Projections David Jekel November 13, 2015 Properties of Dot Products Recall that the dot product or standard inner product on R n is given by x y = x 1 y 1 + +

More information

Matrix Algebra & Elementary Matrices

Matrix Algebra & Elementary Matrices Matrix lgebra & Elementary Matrices To add two matrices, they must have identical dimensions. To multiply them the number of columns of the first must equal the number of rows of the second. The laws below

More information

shelat 16f-4800 sep Matrix Mult, Median, FFT

shelat 16f-4800 sep Matrix Mult, Median, FFT L5 shelat 16f-4800 sep 23 2016 Matrix Mult, Median, FFT merge-sort (A, p, r) if p

More information

Fast Matrix Multiplication and Symbolic Computation

Fast Matrix Multiplication and Symbolic Computation Fast Matrix Multiplication and Symbolic Computation Jean-Guillaume Dumas, Victor Pan To cite this version: Jean-Guillaume Dumas, Victor Pan. Fast Matrix Multiplication and Symbolic Computation. 2016.

More information

Fast reversion of power series

Fast reversion of power series Fast reversion of power series Fredrik Johansson November 2011 Overview Fast power series arithmetic Fast composition and reversion (Brent and Kung, 1978) A new algorithm for reversion Implementation results

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

An Efficient Algorithm for Deriving Summation Identities from Mutual Recurrences

An Efficient Algorithm for Deriving Summation Identities from Mutual Recurrences Discrete Mathematics, Algorithms and Applications c World Scientific Publishing Company An Efficient Algorithm for Deriving Summation Identities from Mutual Recurrences BERKELEY R CHURCHILL Department

More information

On the Splitting of MO(2) over the Steenrod Algebra. Maurice Shih

On the Splitting of MO(2) over the Steenrod Algebra. Maurice Shih On the Splitting of MO(2) over the Steenrod Algebra Maurice Shih under the direction of John Ullman Massachusetts Institute of Technology Research Science Institute On the Splitting of MO(2) over the Steenrod

More information

Lattice reduction of polynomial matrices

Lattice reduction of polynomial matrices Lattice reduction of polynomial matrices Arne Storjohann David R. Cheriton School of Computer Science University of Waterloo Presented at the SIAM conference on Applied Algebraic Geometry at the Colorado

More information

Integer multiplication with generalized Fermat primes

Integer multiplication with generalized Fermat primes Integer multiplication with generalized Fermat primes CARAMEL Team, LORIA, University of Lorraine Supervised by: Emmanuel Thomé and Jérémie Detrey Journées nationales du Calcul Formel 2015 (Cluny) November

More information

A = , A 32 = n ( 1) i +j a i j det(a i j). (1) j=1

A = , A 32 = n ( 1) i +j a i j det(a i j). (1) j=1 Lecture Notes: Determinant of a Square Matrix Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk 1 Determinant Definition Let A [a ij ] be an

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

CHAPTER 6. Direct Methods for Solving Linear Systems

CHAPTER 6. Direct Methods for Solving Linear Systems CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to

More information

Unit 13: Polynomials and Exponents

Unit 13: Polynomials and Exponents Section 13.1: Polynomials Section 13.2: Operations on Polynomials Section 13.3: Properties of Exponents Section 13.4: Multiplication of Polynomials Section 13.5: Applications from Geometry Section 13.6:

More information

CS 580: Algorithm Design and Analysis

CS 580: Algorithm Design and Analysis CS 580: Algorithm Design and Analysis Jeremiah Blocki Purdue University Spring 208 Announcement: Homework 3 due February 5 th at :59PM Final Exam (Tentative): Thursday, May 3 @ 8AM (PHYS 203) Recap: Divide

More information

Kevin James. MTHSC 3110 Section 2.1 Matrix Operations

Kevin James. MTHSC 3110 Section 2.1 Matrix Operations MTHSC 3110 Section 2.1 Matrix Operations Notation Let A be an m n matrix, that is, m rows and n columns. We ll refer to the entries of A by their row and column indices. The entry in the i th row and j

More information

Copyright 2000, Kevin Wayne 1

Copyright 2000, Kevin Wayne 1 Divide-and-Conquer Chapter 5 Divide and Conquer Divide-and-conquer. Break up problem into several parts. Solve each part recursively. Combine solutions to sub-problems into overall solution. Most common

More information

Chapter 4 Divide-and-Conquer

Chapter 4 Divide-and-Conquer Chapter 4 Divide-and-Conquer 1 About this lecture (1) Recall the divide-and-conquer paradigm, which we used for merge sort: Divide the problem into a number of subproblems that are smaller instances of

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix.

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. Basis Definition. Let V be a vector space. A linearly independent spanning set for V is called a basis.

More information

Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices

Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices Math 108B Professor: Padraic Bartlett Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices Week 6 UCSB 2014 1 Lies Fun fact: I have deceived 1 you somewhat with these last few lectures! Let me

More information

Algebra Workshops 10 and 11

Algebra Workshops 10 and 11 Algebra Workshops 1 and 11 Suggestion: For Workshop 1 please do questions 2,3 and 14. For the other questions, it s best to wait till the material is covered in lectures. Bilinear and Quadratic Forms on

More information

. a m1 a mn. a 1 a 2 a = a n

. a m1 a mn. a 1 a 2 a = a n Biostat 140655, 2008: Matrix Algebra Review 1 Definition: An m n matrix, A m n, is a rectangular array of real numbers with m rows and n columns Element in the i th row and the j th column is denoted by

More information

Algebraic Cryptanalysis of a Quantum Money Scheme The Noise-Free Case

Algebraic Cryptanalysis of a Quantum Money Scheme The Noise-Free Case 1 / 27 Algebraic Cryptanalysis of a Quantum Money Scheme The Noise-Free Case Marta Conde Pena 1 Jean-Charles Faugère 2,3,4 Ludovic Perret 3,2,4 1 Spanish National Research Council (CSIC) 2 Sorbonne Universités,

More information

A Generalized Eigenmode Algorithm for Reducible Regular Matrices over the Max-Plus Algebra

A Generalized Eigenmode Algorithm for Reducible Regular Matrices over the Max-Plus Algebra International Mathematical Forum, 4, 2009, no. 24, 1157-1171 A Generalized Eigenmode Algorithm for Reducible Regular Matrices over the Max-Plus Algebra Zvi Retchkiman Königsberg Instituto Politécnico Nacional,

More information

! Break up problem into several parts. ! Solve each part recursively. ! Combine solutions to sub-problems into overall solution.

! Break up problem into several parts. ! Solve each part recursively. ! Combine solutions to sub-problems into overall solution. Divide-and-Conquer Chapter 5 Divide and Conquer Divide-and-conquer.! Break up problem into several parts.! Solve each part recursively.! Combine solutions to sub-problems into overall solution. Most common

More information

Determinants. Samy Tindel. Purdue University. Differential equations and linear algebra - MA 262

Determinants. Samy Tindel. Purdue University. Differential equations and linear algebra - MA 262 Determinants Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra by Goode and Annin Samy T. Determinants Differential equations

More information

Fast algorithms for polynomials and matrices Part 2: polynomial multiplication

Fast algorithms for polynomials and matrices Part 2: polynomial multiplication Fast algorithms for polynomials and matrices Part 2: polynomial multiplication by Grégoire Lecerf Computer Science Laboratory & CNRS École polytechnique 91128 Palaiseau Cedex France 1 Notation In this

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA DAVID MEREDITH These notes are about the application and computation of linear algebra objects in F n, where F is a field. I think everything here works over any field including

More information

Miller Objectives Alignment Math

Miller Objectives Alignment Math Miller Objectives Alignment Math 1050 1 College Algebra Course Objectives Spring Semester 2016 1. Use algebraic methods to solve a variety of problems involving exponential, logarithmic, polynomial, and

More information

Announcements Wednesday, October 10

Announcements Wednesday, October 10 Announcements Wednesday, October 10 The second midterm is on Friday, October 19 That is one week from this Friday The exam covers 35, 36, 37, 39, 41, 42, 43, 44 (through today s material) WeBWorK 42, 43

More information

Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning. Sargur N. Srihari Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

More information