How to find good starting tensors for matrix multiplication
1 How to find good starting tensors for matrix multiplication Markus Bläser Saarland University
2 Matrix multiplication
$$\begin{pmatrix} z_{1,1} & \cdots & z_{1,n} \\ \vdots & & \vdots \\ z_{n,1} & \cdots & z_{n,n} \end{pmatrix} = \begin{pmatrix} x_{1,1} & \cdots & x_{1,n} \\ \vdots & & \vdots \\ x_{n,1} & \cdots & x_{n,n} \end{pmatrix} \begin{pmatrix} y_{1,1} & \cdots & y_{1,n} \\ \vdots & & \vdots \\ y_{n,1} & \cdots & y_{n,n} \end{pmatrix}, \qquad z_{i,j} = \sum_{k=1}^{n} x_{i,k}\, y_{k,j}, \quad 1 \le i, j \le n.$$
Entries are variables; allowed operations: addition, multiplication, scalar multiplication.
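The defining formula translates directly into code. A minimal sketch (in Python; the talk itself contains no code, and the function name is ours), using only the allowed operations:

```python
# Direct transcription of z_{i,j} = sum_k x_{i,k} * y_{k,j},
# using only addition and multiplication of entries.
def matmul_naive(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

This uses $n^3$ multiplications and $n^2(n-1)$ additions, the baseline that Strassen's algorithm improves on.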
3 Strassen's algorithm
$$\begin{pmatrix} z_{11} & z_{12} \\ z_{21} & z_{22} \end{pmatrix} = \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} \begin{pmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{pmatrix}.$$
$$\begin{aligned} p_1 &= (x_{11} + x_{22})(y_{11} + y_{22}), & p_2 &= (x_{21} + x_{22})\, y_{11}, \\ p_3 &= x_{11}(y_{12} - y_{22}), & p_4 &= x_{22}(-y_{11} + y_{21}), \\ p_5 &= (x_{11} + x_{12})\, y_{22}, & p_6 &= (-x_{11} + x_{21})(y_{11} + y_{12}), \\ p_7 &= (x_{12} - x_{22})(y_{21} + y_{22}). \end{aligned}$$
$$\begin{pmatrix} z_{11} & z_{12} \\ z_{21} & z_{22} \end{pmatrix} = \begin{pmatrix} p_1 + p_4 - p_5 + p_7 & p_3 + p_5 \\ p_2 + p_4 & p_1 + p_3 - p_2 + p_6 \end{pmatrix}.$$
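The seven products can be checked mechanically. A small Python sketch (the function name is ours):

```python
# One level of Strassen's algorithm on 2x2 matrices: 7 multiplications.
def strassen_2x2(x, y):
    (x11, x12), (x21, x22) = x
    (y11, y12), (y21, y22) = y
    p1 = (x11 + x22) * (y11 + y22)
    p2 = (x21 + x22) * y11
    p3 = x11 * (y12 - y22)
    p4 = x22 * (-y11 + y21)
    p5 = (x11 + x12) * y22
    p6 = (-x11 + x21) * (y11 + y12)
    p7 = (x12 - x22) * (y21 + y22)
    return [[p1 + p4 - p5 + p7, p3 + p5],
            [p2 + p4, p1 + p3 - p2 + p6]]
```

Since only +, −, and · on entries are used, the same code works verbatim when the entries are themselves matrices.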
4 Strassen's algorithm (2)
7 mults, 18 adds instead of 8 mults, 4 adds.
Observation: Strassen's algorithm works over any ring! Recurse on $2 \times 2$ block matrices:
$$C(n) \le 7\, C(n/2) + O(n^2), \qquad C(1) = 1.$$
Theorem (Strassen). We can multiply $n \times n$-matrices with $O(n^{\log_2 7}) = O(n^{2.81})$ arithmetic operations.
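The recursion behind $C(n) \le 7\,C(n/2) + O(n^2)$ can be sketched as follows (a toy implementation for $n$ a power of two, using NumPy only for block slicing; not tuned for practical speed):

```python
import numpy as np

# Recursive Strassen for n x n matrices, n a power of two.
# Cost satisfies C(n) <= 7 C(n/2) + O(n^2), i.e. O(n^{log2 7}) operations.
def strassen(x, y):
    n = x.shape[0]
    if n == 1:
        return x * y
    h = n // 2
    x11, x12, x21, x22 = x[:h, :h], x[:h, h:], x[h:, :h], x[h:, h:]
    y11, y12, y21, y22 = y[:h, :h], y[:h, h:], y[h:, :h], y[h:, h:]
    p1 = strassen(x11 + x22, y11 + y22)
    p2 = strassen(x21 + x22, y11)
    p3 = strassen(x11, y12 - y22)
    p4 = strassen(x22, y21 - y11)
    p5 = strassen(x11 + x12, y22)
    p6 = strassen(x21 - x11, y11 + y12)
    p7 = strassen(x12 - x22, y21 + y22)
    top = np.hstack([p1 + p4 - p5 + p7, p3 + p5])
    bottom = np.hstack([p2 + p4, p1 - p2 + p3 + p6])
    return np.vstack([top, bottom])
```

Because the block entries are only added, subtracted, and recursively multiplied, this is exactly the "works over any ring" observation in action.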
7 Tensor rank
In general: bilinear forms $b_1(X, Y), \dots, b_n(X, Y)$ in variables $X = \{x_1, \dots, x_k\}$ and $Y = \{y_1, \dots, y_m\}$. Write
$$\sum_{j=1}^{n} b_j z_j = \sum_{h=1}^{k} \sum_{i=1}^{m} \sum_{j=1}^{n} t_{h,i,j}\, x_h y_i z_j.$$
$t = (t_{h,i,j}) \in K^k \otimes K^m \otimes K^n$ is the tensor corresponding to $b_1, \dots, b_n$.
8 Tensor rank
Definition. $u \otimes v \otimes w \in U \otimes V \otimes W$ is called a triad or rank-one tensor.
Definition (Rank). $R(t)$ is the smallest $r$ such that there are rank-one tensors $t_1, \dots, t_r$ with $t = t_1 + \dots + t_r$.
Lemma. Let $t \in U \otimes V \otimes W$ and $t' \in U' \otimes V' \otimes W'$. Then $R(t \oplus t') \le R(t) + R(t')$ and $R(t \otimes t') \le R(t)\, R(t')$.
9 Sums and products
Direct sum $t \oplus t' \in (U \oplus U') \otimes (V \oplus V') \otimes (W \oplus W')$: the two tensors sit in disjoint blocks; formats add, $(k + k') \times (m + m') \times (n + n')$.
Tensor product $t \otimes t' \in (U \otimes U') \otimes (V \otimes V') \otimes (W \otimes W')$: formats multiply, $kk' \times mm' \times nn'$.
10 Matrix multiplication tensor
Example: $2 \times 2$-matrix multiplication $\langle 2,2,2 \rangle$ with variables $x_{11}, \dots, x_{22}$, $y_{11}, \dots, y_{22}$, $z_{11}, \dots, z_{22}$. In general:
$$t_{(h,h'),(i,i'),(j,j')} = \delta_{h',i}\, \delta_{i',j}\, \delta_{j',h}.$$
Lemma. $R(\langle k,m,n \rangle) = R(\langle n,k,m \rangle) = \dots = R(\langle n,m,k \rangle)$ (rank is invariant under permuting the three dimensions), and $\langle k,m,n \rangle \otimes \langle k',m',n' \rangle = \langle kk', mm', nn' \rangle$.
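One can verify directly that Strassen's algorithm is a rank-7 decomposition of $\langle 2,2,2 \rangle$. The sketch below builds the $4 \times 4 \times 4$ tensor from the formula above and sums seven triads read off from Strassen's products (the flattening convention $2\cdot\text{row} + \text{col}$ is our choice):

```python
import numpy as np

# <2,2,2> as a 4x4x4 array: t[(a,b),(b,c),(c,a)] = 1,
# with pairs flattened as 2*first + second.
T = np.zeros((4, 4, 4), dtype=int)
for a in range(2):
    for b in range(2):
        for c in range(2):
            T[2*a + b, 2*b + c, 2*c + a] = 1

# Strassen's decomposition as 7 triads u_i (x) v_i (x) w_i:
# u_i, v_i hold the x- and y-coefficients of the product p_i,
# w_i[2*c + a] is the coefficient of p_i in the output entry C[a][c].
U = [[1,0,0,1], [0,0,1,1], [1,0,0,0], [0,0,0,1], [1,1,0,0], [-1,0,1,0], [0,1,0,-1]]
V = [[1,0,0,1], [1,0,0,0], [0,1,0,-1], [-1,0,1,0], [0,0,0,1], [1,1,0,0], [0,0,1,1]]
W = [[1,0,0,1], [0,1,0,-1], [0,0,1,1], [1,1,0,0], [-1,0,1,0], [0,0,0,1], [1,0,0,0]]

S = sum(np.einsum('i,j,k->ijk', np.array(u), np.array(v), np.array(w))
        for u, v, w in zip(U, V, W))
assert np.array_equal(S, T)  # hence R(<2,2,2>) <= 7
```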
11 Strassen's algorithm and tensors
Observation: tensor product = recursion. Strassen's algorithm: $\langle 2,2,2 \rangle^{\otimes s} = \langle 2^s, 2^s, 2^s \rangle$, so $R(\langle 2^s, 2^s, 2^s \rangle) \le 7^s$.
Definition (Exponent of matrix multiplication). $\omega = \inf\{\tau : R(\langle n,n,n \rangle) = O(n^\tau)\}$.
Strassen: $\omega \le \log_2 7 \approx 2.81$.
Lemma. If $R(\langle k,m,n \rangle) \le r$, then $\omega \le 3 \log r / \log(kmn)$.
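The lemma turns a rank bound into an exponent bound by pure arithmetic; a tiny sketch:

```python
from math import log

# Lemma: R(<k,m,n>) <= r implies omega <= 3 log r / log(kmn).
def omega_bound(k, m, n, r):
    return 3 * log(r) / log(k * m * n)

strassen_bound = omega_bound(2, 2, 2, 7)   # = log2(7), about 2.807
hypothetical_6 = omega_bound(2, 2, 2, 6)   # 6 multiplications would give about 2.585
```

The second line is hypothetical: as the next slide explains, Winograd ruled out 6 multiplications for $2 \times 2$ matrices.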
12 What next?
Maybe we can multiply $2 \times 2$-matrices with 6 multiplications?
Theorem (Winograd). $R(\langle 2,2,2 \rangle) \ge 7$.
Open question (not so open anymore). Is there a small tensor $\langle n,n,n \rangle$ which gives a better bound on the exponent than Strassen?
Smirnov: $R(\langle 3,3,6 \rangle) \le 40$, so $\omega \le 2.79$.
13 Border rank (example)
Polynomial multiplication mod $X^2$:
$$(a_0 + a_1 X)(b_0 + b_1 X) = \underbrace{a_0 b_0}_{f_0} + \underbrace{(a_1 b_0 + a_0 b_1)}_{f_1}\, X + a_1 b_1 X^2.$$
Observation. $R(t) = 3$. However, $t$ can be approximated by tensors of rank 2:
$$t(\epsilon) = (1, \epsilon) \otimes (1, \epsilon) \otimes (0, \epsilon^{-1}) + (1, 0) \otimes (1, 0) \otimes (1, -\epsilon^{-1}) = t + O(\epsilon).$$
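The approximation $t(\epsilon)$ can be checked numerically; a small NumPy sketch (the particular $\epsilon$ values are arbitrary test points):

```python
import numpy as np

# Tensor t of multiplication mod X^2: f0 = a0*b0, f1 = a1*b0 + a0*b1.
t = np.zeros((2, 2, 2))
t[0, 0, 0] = 1                # f0
t[1, 0, 1] = t[0, 1, 1] = 1   # f1

# The two triads from the slide: t(eps) = t + O(eps).
def rank_two_approx(eps):
    t1 = np.einsum('i,j,k->ijk', [1.0, eps], [1.0, eps], [0.0, 1/eps])
    t2 = np.einsum('i,j,k->ijk', [1.0, 0.0], [1.0, 0.0], [1.0, -1/eps])
    return t1 + t2

for eps in (1e-2, 1e-3, 1e-4):
    err = np.abs(rank_two_approx(eps) - t).max()
    assert err <= 2 * eps  # error shrinks linearly with eps
```

The only residual term is $\epsilon\, a_1 b_1$ in the $f_1$ slot, so the error is exactly $\epsilon$.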
14 Proof of observation: restrictions
Definition. Let $A : U \to U'$, $B : V \to V'$, $C : W \to W'$ be homomorphisms. $(A \otimes B \otimes C)(u \otimes v \otimes w) = A(u) \otimes B(v) \otimes C(w)$, and $(A \otimes B \otimes C)t = \sum_{i=1}^{r} A(u_i) \otimes B(v_i) \otimes C(w_i)$ for $t = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i$. $t' \le t$ if there are $A, B, C$ such that $t' = (A \otimes B \otimes C)t$ ("restriction").
Lemma. If $t' \le t$, then $R(t') \le R(t)$. Moreover, $R(t) \le r$ iff $t \le \langle r \rangle$ ($\langle r \rangle$ = diagonal of size $r$).
15 Proof of observation
Suppose $R(t) \le 2$ and let $t = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i$ with $r \le 2$. Since $\operatorname{lin}\{w_1, \dots, w_r\} = K^2$, we have $r = 2$; assume w.l.o.g. that $w_r = (1, 1)$. Let $C$ be the projection along $\operatorname{lin}\{w_r\}$ onto $\operatorname{lin}\{(0, 1)\}$. Then $(I \otimes I \otimes C)t$ consists of at most one triad and hence has rank at most 1, but its only nonzero slice is $f_1 - f_0$, a matrix of rank 2 — contradiction.
16 Border rank
Definition. Let $h \in \mathbb{N}$, $t \in K^{k \times m \times n}$.
1. $R_h(t) = \min\{r : \exists\, u_\rho \in K[\epsilon]^k,\ v_\rho \in K[\epsilon]^m,\ w_\rho \in K[\epsilon]^n : \sum_{\rho=1}^{r} u_\rho \otimes v_\rho \otimes w_\rho = \epsilon^h t + O(\epsilon^{h+1})\}$.
2. $\underline{R}(t) = \min_h R_h(t)$. $\underline{R}(t)$ is called the border rank of $t$.
Bini, Capovani, Lotti, Romani: $\underline{R}(\langle 2,2,3 \rangle) \le 10$.
Lemma. If $\underline{R}(\langle k,m,n \rangle) \le r$, then $\omega \le 3 \log r / \log(kmn)$.
Corollary. $\omega \le 2.78$.
17 Schönhage's τ-theorem
Schönhage: $\underline{R}(\langle k,1,n \rangle \oplus \langle 1,(k-1)(n-1),1 \rangle) \le kn + 1$.
Theorem (Schönhage's τ-theorem). If $\underline{R}\bigl(\bigoplus_{i=1}^{p} \langle k_i, m_i, n_i \rangle\bigr) \le r$ with $r > p$, then $\omega \le 3\tau$, where $\tau$ is defined by $\sum_{i=1}^{p} (k_i m_i n_i)^\tau = r$.
Corollary. $\omega \le 2.55$.
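The τ-theorem bound can be evaluated numerically. A sketch that solves the defining equation for τ by bisection and applies it to Schönhage's example with $k = n = 4$ (volumes $4 \cdot 1 \cdot 4 = 16$ and $1 \cdot 9 \cdot 1 = 9$, border rank at most $17$):

```python
from math import log

# Solve sum_i (k_i m_i n_i)^tau = r for tau by bisection; return omega <= 3*tau.
# The left-hand side is increasing in tau, so bisection applies.
def tau_theorem(volumes, r):
    lo, hi = 0.0, 3.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(v ** mid for v in volumes) < r:
            lo = mid
        else:
            hi = mid
    return 3 * (lo + hi) / 2

# Schoenhage's example: <4,1,4> (+) <1,9,1> has border rank <= 4*4 + 1 = 17.
omega = tau_theorem([16, 9], 17)   # about 2.55
```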
18 Strassen's tensor
$$\mathrm{Str} = \sum_{i=1}^{q} \bigl(\underbrace{e_i \otimes e_0 \otimes e_i}_{\langle q,1,1 \rangle} + \underbrace{e_0 \otimes e_i \otimes e_i}_{\langle 1,1,q \rangle}\bigr) = \frac{1}{\epsilon} \sum_{i=1}^{q} (e_0 + \epsilon e_i) \otimes (e_0 + \epsilon e_i) \otimes e_i \;-\; \frac{1}{\epsilon}\, e_0 \otimes e_0 \otimes \sum_{i=1}^{q} e_i \;+\; O(\epsilon).$$
Hence $\underline{R}(\mathrm{Str}) \le q + 1$.
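The $\epsilon$-expression above can be verified numerically for a small $q$ ($q = 3$ and $\epsilon = 10^{-4}$ are arbitrary choices here):

```python
import numpy as np

def basis(i, d):
    e = np.zeros(d)
    e[i] = 1.0
    return e

q, eps = 3, 1e-4
d = q + 1
triad = lambda u, v, w: np.einsum('i,j,k->ijk', u, v, w)

# Strassen's tensor Str = sum_i (e_i (x) e_0 (x) e_i + e_0 (x) e_i (x) e_i).
Str = sum(triad(basis(i, d), basis(0, d), basis(i, d)) +
          triad(basis(0, d), basis(i, d), basis(i, d)) for i in range(1, d))

# q + 1 triads approximating Str up to O(eps).
approx = sum(triad(basis(0, d) + eps * basis(i, d),
                   basis(0, d) + eps * basis(i, d),
                   basis(i, d)) for i in range(1, d)) / eps
approx -= triad(basis(0, d), basis(0, d),
                sum(basis(i, d) for i in range(1, d))) / eps

err = np.abs(approx - Str).max()
```

The residual is $\epsilon \sum_i e_i \otimes e_i \otimes e_i$, so the error is $\epsilon$ up to floating-point noise, witnessing $\underline{R}(\mathrm{Str}) \le q + 1$.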
19 Rank versus border rank
Theorem. $R(\mathrm{Str}) = 2q$.
Proof sketch: Let $\mathrm{Str} = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i$. W.l.o.g. assume that $u_r \notin \operatorname{lin}\{e_1, \dots, e_q\}$. Let $A$ be the projection along $u_r$ onto $\operatorname{lin}\{e_1, \dots, e_q\}$ and let $B$ be the projection along $e_q$ onto $\operatorname{lin}\{e_0, \dots, e_{q-1}\}$. Then $R((A \otimes I \otimes B)\,\mathrm{Str}) \le R(\mathrm{Str}) - 1$, and $(A \otimes I \otimes B)\,\mathrm{Str}$ is like $\mathrm{Str}$ with one inner tensor now being $\langle q-1, 1, 1 \rangle$. Do this $q$ times and kill $q$ triads. We are left with a matrix of rank $q$.
Gap of almost 2 between rank and border rank.
20 Laser method
Think of Strassen's tensor as having an outer and an inner structure: cut $\mathrm{Str}$ into (combinatorial) cuboids!
Inner tensors: $\langle q,1,1 \rangle$, $\langle 1,1,q \rangle$. Outer structure: put a 1 in every cuboid that is nonzero, giving $\langle 1,2,1 \rangle$.
$(\mathrm{Str} \otimes \pi\,\mathrm{Str} \otimes \pi^2\,\mathrm{Str})^{\otimes s}$ (with $\pi$ cyclically permuting the three dimensions) has inner tensors $\langle x,y,z \rangle$ with $xyz = q^{3s}$ and outer tensor $\langle 2^s, 2^s, 2^s \rangle$.
21 Degeneration
Definition.
1. Let $t = \sum_{\rho=1}^{r} u_\rho \otimes v_\rho \otimes w_\rho \in K^{k \times m \times n}$, $A(\epsilon) \in K[\epsilon]^{k' \times k}$, $B(\epsilon) \in K[\epsilon]^{m' \times m}$, and $C(\epsilon) \in K[\epsilon]^{n' \times n}$. Define $(A(\epsilon) \otimes B(\epsilon) \otimes C(\epsilon))t = \sum_{\rho=1}^{r} A(\epsilon)u_\rho \otimes B(\epsilon)v_\rho \otimes C(\epsilon)w_\rho$.
2. $t'$ is a degeneration of $t \in K^{k \times m \times n}$ ($t' \trianglelefteq t$) if there are $A(\epsilon)$, $B(\epsilon)$, $C(\epsilon)$, and $q$ such that $\epsilon^q t' = (A(\epsilon) \otimes B(\epsilon) \otimes C(\epsilon))t + O(\epsilon^{q+1})$.
Remark. $\underline{R}(t) \le r$ iff $t \trianglelefteq \langle r \rangle$.
22 Laser method (2)
A degeneration $(A(\epsilon), B(\epsilon), C(\epsilon))$ is called monomial if all entries are monomials.
Lemma (Strassen). $\langle \tfrac{3}{4} n^2 \rangle \trianglelefteq \langle n,n,n \rangle$ by a monomial degeneration.
Inner tensors $\langle x,y,z \rangle$ with $xyz = q^{3s}$, outer tensor $\langle 2^s, 2^s, 2^s \rangle$, so we obtain $\tfrac{3}{4} \cdot 2^{2s}$ independent matrix products $\langle x,y,z \rangle$ with $xyz = q^{3s}$. Now apply the τ-theorem!
Corollary (Strassen). $\omega \le 2.48$.
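Plugging the $\tfrac{3}{4} \cdot 2^{2s}$ independent products of total volume $q^{3s}$ into the τ-theorem and letting $s \to \infty$ yields the condition $4\, q^{3\tau} = (q+1)^3$, i.e. $\omega \le \log\bigl((q+1)^3/4\bigr) / \log q$. Optimizing over $q$ numerically (a sketch; the search range is arbitrary):

```python
from math import log

# In the limit s -> infinity, the tau-theorem condition
#   (3/4) * 2^(2s) * q^(3s*tau) = (q+1)^(3s)
# becomes 4 * q^(3*tau) = (q+1)^3, so omega <= log((q+1)^3 / 4) / log(q).
bounds = {q: log((q + 1) ** 3 / 4) / log(q) for q in range(2, 50)}
best = min(bounds.values())   # about 2.48, attained at q = 5
```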
23 Coppersmith–Winograd tensor
$$\epsilon^5\, \mathrm{CW} = \epsilon \sum_{i=1}^{q} (e_0 + \epsilon^2 e_i) \otimes (e_0 + \epsilon^2 e_i) \otimes (e_0 + \epsilon^2 e_i) \;-\; \Bigl(e_0 + \epsilon^3 \sum_{i=1}^{q} e_i\Bigr) \otimes \Bigl(e_0 + \epsilon^3 \sum_{i=1}^{q} e_i\Bigr) \otimes \Bigl(e_0 + \epsilon^3 \sum_{i=1}^{q} e_i\Bigr) \;+\; (1 - q\epsilon)\, e_0 \otimes e_0 \otimes e_0 \;+\; O(\epsilon^6),$$
where $\mathrm{CW} = \sum_{i=1}^{q} (e_0 \otimes e_i \otimes e_i + e_i \otimes e_0 \otimes e_i + e_i \otimes e_i \otimes e_0)$. Hence $\underline{R}(\mathrm{CW}) \le q + 2$.
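Again this can be checked numerically for small $q$ ($q = 3$; $\epsilon = 10^{-2}$ is chosen so that floating-point cancellation in the $\epsilon^{-5}$ scaling stays harmless):

```python
import numpy as np

def basis(i, d):
    e = np.zeros(d)
    e[i] = 1.0
    return e

q, eps = 3, 1e-2
d = q + 1
triad = lambda u, v, w: np.einsum('i,j,k->ijk', u, v, w)
cube = lambda u: triad(u, u, u)

# CW = sum_i (e_0 e_i e_i + e_i e_0 e_i + e_i e_i e_0).
CW = sum(triad(basis(0, d), basis(i, d), basis(i, d)) +
         triad(basis(i, d), basis(0, d), basis(i, d)) +
         triad(basis(i, d), basis(i, d), basis(0, d)) for i in range(1, d))

# q + 2 triads whose sum is eps^5 * CW + O(eps^6); divide back by eps^5.
u = sum(basis(i, d) for i in range(1, d))
approx = eps * sum(cube(basis(0, d) + eps**2 * basis(i, d)) for i in range(1, d))
approx -= cube(basis(0, d) + eps**3 * u)
approx += (1 - q * eps) * cube(basis(0, d))
approx /= eps**5

err = np.abs(approx - CW).max()
```

The error is of order $\epsilon$, witnessing $\underline{R}(\mathrm{CW}) \le q + 2$.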
24 Coppersmith–Winograd tensor
Remark (last time, a doable open question). $R(\mathrm{CW}) = 2q + 1$.
25 Laser method (3)
CW has inner structure $\langle q,1,1 \rangle$, $\langle 1,q,1 \rangle$, $\langle 1,1,q \rangle$, and an outer structure. There is a general method to degenerate large diagonals from arbitrary tensors; apply it to the outer tensor.
Corollary (Coppersmith & Winograd). $\omega \le 2.41$.
Coppersmith & Winograd, Stothers, Vassilevska-Williams, Le Gall: $\omega \le 2.3728639$.
26 What did we learn so far?
How to multiply matrices of astronomic sizes fast!
If we want to multiply matrices of astronomic sizes even faster, we need tensors with border rank close to $\max\{\dim U, \dim V, \dim W\}$ with a rich structure, or completely new methods.
30 Cheap approaches that do not work (not yet?)
Main tools (rank $R$ and border rank $\underline{R}$):
$\langle 2,2,2 \rangle$: $R = 7$, $\underline{R} = 7$
$\langle 2,2,3 \rangle$: $R = 11$, $\underline{R} \in [9, 10]$
$\langle 2,2,4 \rangle$: $R = 14$, $\underline{R} \in [12, 14]$
$\langle 2,3,3 \rangle$: $R \in [14, 15]$, $\underline{R} \in [10, 15]$
$\langle 3,3,3 \rangle$: $R \in [19, 23]$, $\underline{R} \in [15, 20]$
Rank: substitution method (Pan), de Groote's twist of it. Border rank: vanishing equations (Strassen, Lickteig, Landsberg & Ottaviani), in combination with the substitution method (Landsberg & Michalek, B & Lysikov). These did not find any new upper bounds.
31 Characterization problem
Definition. $S_n(q) = \{t \in K^n \otimes K^n \otimes K^n : R(t) \le q\}$, $X_n(q) = \{t \in K^n \otimes K^n \otimes K^n : \underline{R}(t) \le q\}$.
These definitions are in complexity-theoretic terms; we need algebraic terms. But $\{t : t \trianglelefteq \langle q \rangle\}$ is not very useful: we need easy-to-check algebraic criteria.
Remark: all tensors considered are tight.
32 $S_n(n)$
Theorem. $t \in S_n(n)$ iff $t \cong \langle n \rangle$.
The multiplication in any finite-dimensional algebra $A$ can be described by a set of bilinear forms, giving a tensor $t_A$. Example: $A_\epsilon = K[X]/(X^n - \epsilon) \cong K^n$ for $\epsilon \neq 0$, but $A_\epsilon \to K[X]/(X^n)$ as $\epsilon \to 0$, and $R(K[X]/(X^n)) = 2n - 1$.
Theorem (Alder–Strassen). $R(A) \ge 2 \dim A - \text{(number of maximal twosided ideals)}$.
34 $X_n(n)$
Definition. Let $t \in U \otimes V \otimes W$. $t$ is $1_U$-generic (analogously $1_V$-, $1_W$-generic) if the span of its $U$-slices (matrices in $V \otimes W$) contains an invertible element.
Proposition (B & Lysikov). Let $t$ be $1_U$- and $1_V$-generic. Then there is an algebra $A$ with structural tensor $t_A$ such that $t_A \cong t$.
35 $X_n(n)$
Theorem (B & Lysikov). Let $A$ and $B$ be algebras with tensors $t_A$ and $t_B$. Then $t_A \trianglelefteq_{\mathrm{GL}_n^3} t_B$ iff $t_A \trianglelefteq_{\mathrm{GL}_n} t_B$ (degeneration under the full group iff under the diagonal action).
Theorem (B & Lysikov). Let $t$ be $1_U$- and $1_V$-generic. Then $t \in X_n(n)$ iff there is an algebra $A$ such that $t_A \cong t$ and $t_A \trianglelefteq_{\mathrm{GL}_n} \langle n \rangle$.
36 Smoothable algebras
Definition. An algebra $A$ of dimension $n$ of the form $K[X_1, \dots, X_m]/I$ for some ideal $I$ is called smoothable if $I$ is a degeneration of an ideal whose zero set consists of $n$ distinct points.
Theorem (B & Lysikov). Let $t$ be $1_U$- and $1_V$-generic. Then $t \in X_n(n)$ iff there is a smoothable algebra $A$ such that $t_A \cong t$.
37 Examples
Cartwright et al.: All (commutative) algebras of dimension $\le 7$ are smoothable. All algebras generated by two elements are smoothable. All algebras with $\dim \mathrm{rad}(A)^2/\mathrm{rad}(A)^3 = 1$. All algebras defined by a monomial ideal.
$\mathrm{Str}^+$ has minimal border rank; its structural tensor is isomorphic to $k[X_1, \dots, X_q]/(X_i X_j : 1 \le i, j \le q)$.
$\mathrm{CW}^+$ has minimal border rank; its structural tensor is isomorphic to $k[X_1, \dots, X_{q+1}]/(X_i X_j,\ X_i^2 - X_j^2,\ X_i^3 : i \neq j)$.
38 Comon's conjecture
Symmetric tensor = invariant under permutation of the dimensions; symmetric rank = use symmetric rank-one tensors $v \otimes v \otimes v$.
Conjecture (Comon). For symmetric tensors, the rank equals the symmetric rank.
Proposition. The border-rank Comon conjecture is true for $1$-generic tensors of minimal border rank.
39 $X_n(n+1)$
Theorem. $t \in S_n(n+1) \setminus S_n(n)$ iff $t$ is isomorphic to the multiplication tensor of $K[X]/(X^2) \times K^{n-2}$ or of $T_2 \times K^{n-3}$, where $T_2$ is the algebra of upper triangular $2 \times 2$-matrices.
Open question (doable). What about $X_n(n+1)$ (for $1$-generic tensors)?
40 The asymptotic rank of CW
We know that $\underline{R}(\mathrm{CW}_q) = q + 2$. For fast matrix multiplication, good upper bounds on $R(\mathrm{CW}_q^{\otimes N})$ are sufficient. In particular, $R(\mathrm{CW}_3^{\otimes N})^{1/N} \to 4$ would imply $\omega = 2$.
Theorem (B. & Lysikov). $R(\mathrm{CW}_q^{\otimes N}) \ge (q + 1)^N + 2^N$.
Formal power series rings, inverse limits, and I-adic completions of rings Formal semigroup rings and formal power series rings We next want to explore the notion of a (formal) power series ring in finitely
More informationALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA
ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND
More informationGeneric properties of Symmetric Tensors
2006 1/48 PComon Generic properties of Symmetric Tensors Pierre COMON - CNRS other contributors: Bernard MOURRAIN INRIA institute Lek-Heng LIM, Stanford University 2006 2/48 PComon Tensors & Arrays Definitions
More informationSYMMETRIC DETERMINANTAL REPRESENTATIONS IN CHARACTERISTIC 2
SYMMETRIC DETERMINANTAL REPRESENTATIONS IN CHARACTERISTIC 2 BRUNO GRENET, THIERRY MONTEIL, AND STÉPHAN THOMASSÉ Abstract. This paper studies Symmetric Determinantal Representations (SDR) in characteristic
More informationMath 121 Homework 5: Notes on Selected Problems
Math 121 Homework 5: Notes on Selected Problems 12.1.2. Let M be a module over the integral domain R. (a) Assume that M has rank n and that x 1,..., x n is any maximal set of linearly independent elements
More informationMATH 323 Linear Algebra Lecture 6: Matrix algebra (continued). Determinants.
MATH 323 Linear Algebra Lecture 6: Matrix algebra (continued). Determinants. Elementary matrices Theorem 1 Any elementary row operation σ on matrices with n rows can be simulated as left multiplication
More informationLINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS
LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in
More informationMATRICES ARE SIMILAR TO TRIANGULAR MATRICES
MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,
More information1 Determinants. 1.1 Determinant
1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to
More informationLectures on Linear Algebra for IT
Lectures on Linear Algebra for IT by Mgr. Tereza Kovářová, Ph.D. following content of lectures by Ing. Petr Beremlijski, Ph.D. Department of Applied Mathematics, VSB - TU Ostrava Czech Republic 11. Determinants
More informationTHE MINIMAL POLYNOMIAL AND SOME APPLICATIONS
THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise
More information