Examensarbete. Tensor Rank Elias Erdtman, Carl Jönsson. LiTH - MAT - EX / SE



Tensor Rank

Applied Mathematics, Linköpings Universitet

Elias Erdtman, Carl Jönsson

LiTH - MAT - EX / SE

Examensarbete: 30 hp
Level: A
Supervisor: Göran Bergqvist, Applied Mathematics, Linköpings Universitet
Examiner: Milagros Izquierdo Barrios, Applied Mathematics, Linköpings Universitet

Linköping, June 2012


Abstract

This master's thesis addresses numerical methods of computing the typical ranks of tensors over the real numbers and explores some properties of tensors over finite fields. We present three numerical methods to compute typical tensor rank. Two of these have already been published and can be used, respectively, to calculate the lowest typical ranks of tensors and to estimate the percentage of tensors having the lowest typical rank (for some tensor formats). The third method was developed by the authors with the intent of discerning whether there is more than one typical rank. Some results from the method are presented, but they are inconclusive.

In the area of tensors over finite fields some new results are shown, namely that there are eight GL_q(2) × GL_q(2) × GL_q(2)-orbits of tensors over any finite field and that some tensors over F_q have lower rank when considered as tensors over F_{q^2}. Furthermore, it is shown that some symmetric tensors over F_2 do not have a symmetric rank and that there are tensors over some other finite fields which have a larger symmetric rank than rank.

Keywords: generic rank, symmetric tensor, tensor rank, tensors over finite fields, typical rank.

URL for electronic version:


Preface

Tensors? Richard had no idea what a tensor was, but he had noticed that when math geeks started throwing the word around, it meant that they were headed in the general direction of actually getting something done.
- Neal Stephenson, Reamde (2011).

This text is a master's thesis, written by Elias Erdtman and Carl Jönsson at Linköpings universitet in 2012, with Göran Bergqvist as supervisor and Milagros Izquierdo Barrios as examiner.

Background

The study of tensors of order greater than two has recently had an upswing, both from a theoretical point of view and in applications, and there are lots of unanswered questions in both areas. Questions of interest are, for example: what does a generic tensor look like, what are useful tensor decompositions and how can one calculate them, how can one find equations for sets of tensors, etc. Basically, one wants a theory of tensors as well-developed and easy to use as the theory of matrices.

Purpose

In this thesis we aim to show some basic results on tensor rank and investigate methods for discerning generic and typical ranks of tensors, i.e., searching for an answer to the question: which ranks are the most common?

Chapter outline

Chapter 1. Introduction
In the first chapter we present theory relevant to tensors. It is divided into four major parts: the first part is about multilinear algebra, the second part is a short introduction to the CP decomposition, and the third part gives the reader the background in algebraic geometry necessary to understand the results in chapter 2. The fourth and last part of the chapter gives an example of an application of tensor decomposition, more specifically the multiplication tensor for 2 × 2 matrices and Strassen's algorithm for matrix multiplication.

Chapter 2. Tensor rank
In the second chapter we introduce different notions of rank: tensor rank, multilinear rank, Kruskal rank, etc. We show some basic results on tensors using algebraic geometry, among them some results on generic ranks over C and typical ranks over R.

Chapter 3. Numerical methods and results
Numerical results for determining typical ranks are presented in chapter three. We present an algorithm which can calculate the generic rank for any format of tensor spaces, and another algorithm from which one can infer if there is more than one typical rank over R for some tensor space formats. A method developed by the authors is also presented, along with results indicating that the method does not seem to work.

Chapter 4. Tensors over finite fields
This chapter contains some results on finite fields. We present a classification and the sizes of the eight GL_q(2) × GL_q(2) × GL_q(2)-orbits of F_q^2 ⊗ F_q^2 ⊗ F_q^2 and show that the elements of one of the orbits have lower rank when considered as tensors over F_{q^2}. Finally, we show that there are symmetric tensors over F_2 which do not have a symmetric rank, and that over some other finite fields a symmetric tensor can have a symmetric rank which is greater than its rank.

Chapter 5. Summary and future work
The results of the thesis are summarized and some directions of future work are indicated.

Appendix A. Programs
Program code for Mathematica or MATLAB used to produce the results in the thesis is given in this appendix.

Distribution of work

Since this is a master's thesis we give an account of who has done what in the table below.

Section   Author
1.1       CJ/EE
1.2       EE
1.3       CJ
1.4       CJ/EE
2.1       CJ/EE
          CJ
          EE/CJ
3.3       EE
4         CJ
5         CJ & EE

Nomenclature

Most of the recurring abbreviations and symbols are described here.

Symbols

F is a field.
F_q is the finite field with q elements.
I(V) is the ideal of an algebraic set V.
V(I) is the algebraic set of zeros of an ideal I.
Seg is the Segre mapping.
σ_r(X) is the r:th secant variety of X.
S_d is the symmetric group on d elements.
⊗ is the tensor product.
⊠ is the matrix Kronecker product.
X̂ is the affine cone of a set X ⊆ PV.
⌈x⌉ is the number x rounded up to the nearest integer.


Contents

1 Introduction
  Multilinear algebra
    Tensor products and multilinear maps
    Symmetric and skew-symmetric tensors
    GL(V_1) × ... × GL(V_k) acts on V_1 ⊗ ... ⊗ V_k
  Tensor decomposition
  Algebraic geometry
    Basic definitions
    Varieties and ideals
    Projective spaces and varieties
    Dimension of an algebraic set
    Cones, joins, and secant varieties
    Real algebraic geometry
  Application to matrix multiplication

2 Tensor rank
  Different notions of rank
  Results on tensor rank
    Symmetric tensor rank
    Kruskal rank
    Multilinear rank
  Varieties of matrices over C
  Varieties of tensors over C
    Equations for the variety of tensors of rank one
    Varieties of higher ranks
  Real tensors

3 Numerical methods and results
  Comon, Ten Berge, Lathauwer and Castaing's method
    Numerical results
    Discussion
  Choulakian's method
    Numerical results
    Discussion
  Surjectivity check
    Results
    Discussion

4 Tensors over finite fields
  Finite fields and linear algebra
  GL_q(2) × GL_q(2) × GL_q(2)-orbits of F_q^2 ⊗ F_q^2 ⊗ F_q^2
    Rank zero and rank one orbits
    Rank two orbits
    Rank three orbits
    Main result
  Lower rank over field extensions
  Symmetric rank

5 Summary and future work
  Summary
  Future work

A Programs
  A.1 Numerical methods
    A.1.1 Comon, Ten Berge, Lathauwer and Castaing's method
    A.1.2 Choulakian's method
    A.1.3 Surjectivity check
  A.2 Tensors over finite fields
    A.2.1 Rank partitioning
    A.2.2 Orbit partitioning

Bibliography

List of Tables

3.1 Known typical ranks for 2 × N_2 × N_3 arrays over R
Known typical ranks for 3 × N_2 × N_3 arrays over R
Known typical ranks for 4 × N_2 × N_3 arrays over R
Known typical ranks for 5 × N_2 × N_3 arrays over R
Known typical ranks for N^d arrays over R
Number of real solutions to (3.7) for random tensors
Number of real solutions to (3.7) for random tensors
Number of real solutions to (3.7) for random tensors
Number of real solutions to (3.7) for random tensors
Number of real solutions to (3.7) for random tensors
Approximate probability that a random I × J × K tensor has rank I
Euclidean distances depending on the fraction of the area on the n-sphere
Number of points from φ_2 close to some control points for the tensor
Number of points from φ_3 close to some control points for the tensor
Number of points from φ_3 close to some control points for the tensor
Number of points from φ_5 close to some control points for the tensor
Orbits of F_q^2 ⊗ F_q^2 ⊗ F_q^2 under the action of GL_q(2) × GL_q(2) × GL_q(2) for q = 2, 3
Orbits of F_q^2 ⊗ F_q^2 ⊗ F_q^2 under the action of GL_q(2) × GL_q(2) × GL_q(2)
Number of symmetric tensors generated by symmetric rank one tensors over some small finite fields
Number of N × N × N symmetric tensors generated by symmetric rank one tensors over F_2


List of Figures

1.1 The image of t ↦ (t, t^2, t^3) for −1 ≤ t ≤ 1
1.2 The intersection of the surfaces defined by y − x^2 = 0 and z − x^3 = 0, namely the twisted cubic, for (−1, 0, −1) ≤ (x, y, z) ≤ (1, 1, 1)
1.3 The cuspidal cubic
1.4 An example of a semi-algebraic set
Connection between Euclidean distance and an angle on a 2-dimensional intersection of a sphere


Chapter 1

Introduction

This first chapter will introduce basic notions, definitions and results concerning multilinear algebra, tensor decomposition, tensor rank and algebraic geometry. A general reference for this chapter is [25].

The simplest way to look at tensors is as a generalization of matrices; they are objects in which one can arrange multidimensional data in a natural way. For instance, if one wants to analyze a sequence of images with small differences in some property, e.g. lighting or facial expression, one can use matrix decomposition algorithms, but then one has to vectorize the images and lose their natural structure. Using tensors, one can keep the natural structure of the pictures, which is a significant advantage. The problem then becomes that one needs new results and algorithms for tensor decomposition.

The study of decomposition of higher order tensors has its origins in articles by Hitchcock from 1927 [19, 20]. Tensor decomposition was introduced in psychometrics by Tucker in the 1960s [41], and in chemometrics by Appellof and Davidson in the 1980s [2]. Strassen published his algorithm for matrix multiplication in 1969 [37], and since then tensor decomposition has received attention in the area of algebraic complexity theory. An overview of the subject, its literature and applications can be found in [1, 24].

Tensor rank, as introduced later in this chapter, is a natural generalization of matrix rank. Kruskal [23] states that it is so natural that it was introduced independently at least three times before he introduced it himself. Tensors have recently been studied from the viewpoint of algebraic geometry, yielding results on typical ranks, which are the ranks a random tensor takes with non-zero probability. The recent book [25] summarizes the results in the field.
Results often concern the typical ranks of certain formats of tensors, methods for discerning the rank of a tensor, or algorithms for computing tensor decompositions. Algorithms for tensor decompositions are often of interest in application areas, where one wants to find structures and patterns in data. In some cases, just finding a decomposition is not enough; one wants the decomposition to be essentially unique. In these cases one wants an algorithm to find a decomposition of a tensor and some way of determining if it is unique. In other fields of application, one wants to find decompositions of important tensors, since this will yield better performing algorithms in the field, e.g. Strassen's algorithm. Of course, an algorithm for finding a decomposition would be of high interest also in this case, but uniqueness is not important. However, in this case, just

knowing that a tensor has a certain rank tells one that there is a better algorithm; but if the decomposition is the important part, just knowing the rank is of little help. We take a look at efficient matrix multiplication and Strassen's algorithm as an example application at the end of the chapter.

There are other examples of applications of tensor decomposition and rank, e.g. face recognition in the area of pattern recognition, modeling fluorescence excitation-emission data in chemistry, blind deconvolution of DS-CDMA signals in wireless communications, Bayesian networks in algebraic statistics, tensor network states in quantum information theory [25], and, in neuroscience, the study of effects of new drugs on brain activity [1, 24]. Efficient matrix multiplication is a special case of efficient evaluation of bilinear forms [21, 22], which, among other things, is studied in algebraic complexity theory [9, 25].

Historically, tensors over R and C have been investigated. In chapter 4, we investigate tensors over finite fields and show some new results.

1.1 Multilinear algebra

In this section we introduce the basics of multilinear algebra, which is an extension of linear algebra obtained by expanding the domain from one vector space to several. For an easy introduction to tensor products of vector spaces see [42].

1.1.1 Tensor products and multilinear maps

Definition 1.1.1 (Dual space, dual basis). For a vector space V over the field F, the dual space V* of V is the vector space of all linear maps V → F. If {v_1, v_2, ..., v_n} is a basis for V, the dual basis {α_1, α_2, ..., α_n} in V* is defined by

    α_i(v_j) = 1 if i = j,    α_i(v_j) = 0 if i ≠ j,

and extending linearly.

Theorem 1.1.2. If V is of finite dimension, the dual basis is a basis of V*. Furthermore, V is isomorphic to V*. The dual of the dual, (V*)*, is naturally isomorphic to V.

Definition 1.1.3 (Tensor product).
For vector spaces V, W we define the tensor product V ⊗ W to be the vector space of all expressions of the form

    v_1 ⊗ w_1 + ... + v_k ⊗ w_k

where v_i ∈ V, w_i ∈ W, and the following equalities hold for the operator ⊗:

    λ(v ⊗ w) = (λv) ⊗ w = v ⊗ (λw),
    (v_1 + v_2) ⊗ w = v_1 ⊗ w + v_2 ⊗ w,
    v ⊗ (w_1 + w_2) = v ⊗ w_1 + v ⊗ w_2,

i.e., ⊗ is linear in both arguments.
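In coordinates, the tensor product of two vectors can be realized as the Kronecker product, and the bilinearity rules above can be checked numerically. A minimal sketch in Python with NumPy (ours; the thesis's own code, in Appendix A, is in Mathematica/MATLAB):

```python
import numpy as np

# Coordinate check of the bilinearity rules for the tensor product,
# realizing v ⊗ w as the Kronecker product of coordinate vectors.
v1 = np.array([1.0, 2.0])
v2 = np.array([-1.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
lam = 2.5

# λ(v ⊗ w) = (λv) ⊗ w = v ⊗ (λw)
assert np.allclose(lam * np.kron(v1, w), np.kron(lam * v1, w))
assert np.allclose(lam * np.kron(v1, w), np.kron(v1, lam * w))

# (v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w, and similarly in the second argument
assert np.allclose(np.kron(v1 + v2, w), np.kron(v1, w) + np.kron(v2, w))
assert np.allclose(np.kron(v1, w + w), np.kron(v1, w) + np.kron(v1, w))
```

The result of `np.kron(v1, w)` is a vector of length 6, reflecting dim(R^2 ⊗ R^3) = 2 · 3 = 6.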

Since V ⊗ W is a vector space, we can iteratively form tensor products V_1 ⊗ V_2 ⊗ ... ⊗ V_k of an arbitrary number of vector spaces V_1, V_2, ..., V_k. An element of V_1 ⊗ V_2 ⊗ ... ⊗ V_k is said to be a tensor of order k.

Theorem 1.1.4. If {v_i}_{i=1}^{n_V} and {w_j}_{j=1}^{n_W} are bases for V and W respectively, then {v_i ⊗ w_j}_{i=1,j=1}^{n_V,n_W} is a basis for V ⊗ W, and dim(V ⊗ W) = dim(V) dim(W).

Proof. Any T ∈ V ⊗ W can be written

    T = Σ_{k=1}^n a_k ⊗ b_k

for a_k ∈ V, b_k ∈ W. Since {v_i} and {w_j} are bases, we can write a_k = Σ_{i=1}^{n_V} a_{ki} v_i and b_k = Σ_{j=1}^{n_W} b_{kj} w_j, and thus

    T = Σ_{k=1}^n (Σ_{i=1}^{n_V} a_{ki} v_i) ⊗ (Σ_{j=1}^{n_W} b_{kj} w_j)
      = Σ_{k=1}^n Σ_{i=1}^{n_V} Σ_{j=1}^{n_W} a_{ki} b_{kj} v_i ⊗ w_j
      = Σ_{i=1}^{n_V} Σ_{j=1}^{n_W} (Σ_{k=1}^n a_{ki} b_{kj}) v_i ⊗ w_j,

so {v_i ⊗ w_j} spans V ⊗ W; that it is a basis follows, and this in turn implies dim(V ⊗ W) = dim(V) dim(W).

If {v^(i)_j}_{j=1}^{n_i} is a basis for V_i, this implies that {v^(1)_{j_1} ⊗ v^(2)_{j_2} ⊗ ... ⊗ v^(k)_{j_k}}_{j_1=1,...,j_k=1}^{n_1,...,n_k} is a basis for V_1 ⊗ V_2 ⊗ ... ⊗ V_k. Furthermore, if we have chosen a basis for each V_i, we can identify a tensor T ∈ V_1 ⊗ V_2 ⊗ ... ⊗ V_k with a k-dimensional array of size dim V_1 × dim V_2 × ... × dim V_k, where the element in position (j_1, j_2, ..., j_k) is the coefficient of v^(1)_{j_1} ⊗ v^(2)_{j_2} ⊗ ... ⊗ v^(k)_{j_k} in the expansion of T in the induced basis for V_1 ⊗ V_2 ⊗ ... ⊗ V_k. If k = 2, one gets matrices.

If one describes a third order tensor as a three-dimensional array, one can describe the tensor as a tuple of matrices. For example, say the I × J × K tensor T has the entries t_{ijk} in its array. Then T can be described as the tuple (T_1, T_2, ..., T_I) where T_i = (t_{ijk})_{j=1,k=1}^{J,K}, but it can also be described as the tuples (T'_1, T'_2, ..., T'_J) or (T''_1, T''_2, ..., T''_K), where T'_j = (t_{ijk})_{i=1,k=1}^{I,K} and T''_k = (t_{ijk})_{i=1,j=1}^{I,J}. The matrices in the tuples are called the slices of the array. Sometimes the adjectives frontal, horizontal and lateral are used to distinguish the different kinds of slices.

Example 1.1.5 (Arrays). Let {e_1, e_2} be a basis for R^2.
Then e_1 ⊗ e_1 + 2 e_1 ⊗ e_2 + 3 e_2 ⊗ e_1 ∈ R^2 ⊗ R^2 can be expressed as the matrix

    ( 1 2
      3 0 ).

The third order tensor e_1 ⊗ e_1 ⊗ e_1 + 2 e_1 ⊗ e_2 ⊗ e_2 + 3 e_2 ⊗ e_1 ⊗ e_2 + 4 e_2 ⊗ e_2 ⊗ e_2 ∈ R^2 ⊗ R^2 ⊗ R^2 can be expressed as a 3-dimensional array, and the slices of the array are

    ( 1 0    ( 0 3
      0 2 ),   0 4 ),

    ( 1 0    ( 0 2
      0 3 ),   0 4 ),

    ( 1 0    ( 0 2
      0 0 ),   3 4 ),

where each pair arises from a different way of cutting the tensor.

Definition 1.1.6 (Tensor rank). The smallest R for which T ∈ V_1 ⊗ ... ⊗ V_k can be written

    T = Σ_{r=1}^R v^(1)_r ⊗ ... ⊗ v^(k)_r,    (1.1)

for arbitrary vectors v^(i)_r ∈ V_i, is called the tensor rank of T.

Definition 1.1.7 (Multilinear map). Let V_1, ..., V_k be vector spaces over F. A map f : V_1 × ... × V_k → F is a multilinear map if f is linear in each factor V_i.

Theorem 1.1.8. The set of all multilinear maps V_1 × ... × V_k → F can be identified with V_1* ⊗ ... ⊗ V_k*.

Proof. Let V_i have dimension n_i and basis {v^(i)_1, ..., v^(i)_{n_i}}, and let the dual basis be {α^(i)_1, ..., α^(i)_{n_i}}. Then f ∈ V_1* ⊗ ... ⊗ V_k* can be written

    f = Σ_{i_1,...,i_k} β_{i_1,...,i_k} α^(1)_{i_1} ⊗ ... ⊗ α^(k)_{i_k}

and acts on (u_1, ..., u_k) ∈ V_1 × ... × V_k as a multilinear mapping by

    f(u_1, ..., u_k) = Σ_{i_1,...,i_k} β_{i_1,...,i_k} α^(1)_{i_1}(u_1) ... α^(k)_{i_k}(u_k).

Conversely, let f : V_1 × ... × V_k → F be a multilinear mapping. Pick a basis {v^(i)_1, ..., v^(i)_{n_i}} for V_i and let the dual basis be {α^(i)_1, ..., α^(i)_{n_i}}. Define

    β_{i_1,...,i_k} = f(v^(1)_{i_1}, ..., v^(k)_{i_k});

then Σ_{i_1,...,i_k} β_{i_1,...,i_k} α^(1)_{i_1} ⊗ ... ⊗ α^(k)_{i_k} ∈ V_1* ⊗ ... ⊗ V_k* acts as the multilinear map f by the description above.
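In coordinates this identification is concrete: if the β's are stored as a k-dimensional array, the multilinear map is evaluated by contracting each index with one argument vector. A NumPy sketch (ours, not from the thesis):

```python
import numpy as np

# Coefficient array beta of an element of V1* ⊗ V2* ⊗ V3*,
# with dim V1, dim V2, dim V3 = 2, 3, 2.
rng = np.random.default_rng(0)
beta = rng.standard_normal((2, 3, 2))

def f(u1, u2, u3):
    """Evaluate the multilinear map: contract each index of beta with one argument."""
    return np.einsum('ijk,i,j,k->', beta, u1, u2, u3)

u1, v1 = rng.standard_normal(2), rng.standard_normal(2)
u2 = rng.standard_normal(3)
u3 = rng.standard_normal(2)
lam = 1.7

# Linearity in the first factor (and similarly in the other two).
assert np.isclose(f(u1 + lam * v1, u2, u3), f(u1, u2, u3) + lam * f(v1, u2, u3))
```

The same contraction pattern works for any order k by extending the `einsum` subscripts.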

A multilinear mapping (V_1 × ... × V_k) × W* → F can be seen as an element of V_1* ⊗ ... ⊗ V_k* ⊗ W, and can also be seen as a map V_1 × ... × V_k → W. Explicitly, if f : (V_1 × ... × V_k) × W* → F is written f = Σ_i α^(1)_i ⊗ ... ⊗ α^(k)_i ⊗ w_i, it acts on an element of V_1 × ... × V_k × W* by

    f(v_1, ..., v_k, β) = Σ_i α^(1)_i(v_1) ... α^(k)_i(v_k) w_i(β) ∈ F,

but it can also act on an element of V_1 × ... × V_k by

    f(v_1, ..., v_k) = Σ_i α^(1)_i(v_1) ... α^(k)_i(v_k) w_i ∈ W.

Example 1.1.9 (Linear maps). Given two vector spaces V, W, the set of all linear maps V → W can be identified with V* ⊗ W. If f = Σ_{i=1}^n α_i ⊗ w_i, then f acts as a linear map V → W by

    f(v) = Σ_{i=1}^n α_i(v) w_i,

or, going in the other direction, if f is a linear map f : V → W, we can describe it as a member of V* ⊗ W by taking a basis {v_1, v_2, ..., v_n} for V and its dual basis {α_1, α_2, ..., α_n} and setting w_i = f(v_i), so we get

    f = Σ_{i=1}^n α_i ⊗ w_i.

1.1.2 Symmetric and skew-symmetric tensors

Two important subspaces of the second order tensors V ⊗ V are the symmetric tensors and the skew-symmetric tensors. First, define the map τ : V ⊗ V → V ⊗ V by τ(v_1 ⊗ v_2) = v_2 ⊗ v_1 and extending linearly (τ can be interpreted as the non-trivial permutation on two elements). The spaces of symmetric tensors, S^2 V, and skew-symmetric tensors, Λ^2 V, can then be defined as:

    S^2 V := span{v ⊗ v : v ∈ V} = {T ∈ V ⊗ V : τ(T) = T},
    Λ^2 V := span{v ⊗ w − w ⊗ v : v, w ∈ V} = {T ∈ V ⊗ V : τ(T) = −T}.

Let us define two operators that give the symmetric and anti-symmetric parts of a second order tensor. For v_1, v_2 ∈ V, define the symmetric part of v_1 ⊗ v_2 to be v_1 ⊙ v_2 = (1/2)(v_1 ⊗ v_2 + v_2 ⊗ v_1) ∈ S^2 V and the anti-symmetric part of v_1 ⊗ v_2 to be v_1 ∧ v_2 = (1/2)(v_1 ⊗ v_2 − v_2 ⊗ v_1) ∈ Λ^2 V; we then have v_1 ⊗ v_2 = v_1 ⊙ v_2 + v_1 ∧ v_2.

To extend the definition of symmetric and skew-symmetric tensors, over R and C, to higher order, we need to generalize these operators. Denote the tensor product of the same vector space k times by V^{⊗k}.
Then, for the symmetric case, the map π_S : V^{⊗k} → V^{⊗k} is defined on rank-one tensors by

    π_S(v_1 ⊗ ... ⊗ v_k) = (1/k!) Σ_{τ ∈ S_k} v_{τ(1)} ⊗ ... ⊗ v_{τ(k)} = v_1 ⊙ v_2 ⊙ ... ⊙ v_k,

where S_k is the symmetric group on k elements. For the skew-symmetric tensors, the map π_Λ : V^{⊗k} → V^{⊗k} is defined on rank-one elements by

    π_Λ(v_1 ⊗ ... ⊗ v_k) = (1/k!) Σ_{τ ∈ S_k} sgn(τ) v_{τ(1)} ⊗ ... ⊗ v_{τ(k)} = v_1 ∧ ... ∧ v_k.

π_S and π_Λ are then extended linearly to act on the entire space.

Definition 1.1.10 (S^k V, Λ^k V). Let V be a vector space. The space of symmetric tensors S^k V is defined as

    S^k V = π_S(V^{⊗k}) = {X ∈ V^{⊗k} : π_S(X) = X}.

The space of skew-symmetric tensors, or alternating tensors, is defined as

    Λ^k V = π_Λ(V^{⊗k}) = {X ∈ V^{⊗k} : π_Λ(X) = X}.

The space S^k V can be seen as the space of symmetric k-linear forms on V, but also as the space of homogeneous polynomials of degree k on V, so we can identify homogeneous polynomials of degree k with symmetric k-linear forms. We do this through a process called polarization.

Theorem 1.1.11 (Polarization identity). Let f be a homogeneous polynomial of degree k. Then

    f̄(x_1, x_2, ..., x_k) = (1/k!) Σ_{I ⊆ [k], I ≠ ∅} (−1)^{k − |I|} f(Σ_{i ∈ I} x_i)

is a symmetric k-linear form. Here [k] = {1, 2, ..., k}.

Example 1.1.12. Let P(s, t, u) be a cubic homogeneous polynomial in three variables. Plugging this into the polarization identity yields the following multilinear form:

    P̄((s_1, t_1, u_1), (s_2, t_2, u_2), (s_3, t_3, u_3)) =
      (1/3!) [ P(s_1 + s_2 + s_3, t_1 + t_2 + t_3, u_1 + u_2 + u_3)
               − P(s_1 + s_2, t_1 + t_2, u_1 + u_2) − P(s_1 + s_3, t_1 + t_3, u_1 + u_3) − P(s_2 + s_3, t_2 + t_3, u_2 + u_3)
               + P(s_1, t_1, u_1) + P(s_2, t_2, u_2) + P(s_3, t_3, u_3) ].

For example, if P(s, t, u) = stu one gets

    P̄ = (1/6)(s_1 t_2 u_3 + s_1 t_3 u_2 + s_2 t_1 u_3 + s_2 t_3 u_1 + s_3 t_1 u_2 + s_3 t_2 u_1).
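The projection π_S can be implemented directly from its definition by averaging an array over all permutations of its axes. A small NumPy sketch (ours; the function name pi_S is our choice):

```python
import numpy as np
from itertools import permutations
from math import factorial

def pi_S(T):
    """Symmetrize an order-k tensor: average T over all permutations of its axes."""
    k = T.ndim
    return sum(np.transpose(T, p) for p in permutations(range(k))) / factorial(k)

# Symmetric part v1 ⊙ v2 ⊙ v3 of the rank-one tensor v1 ⊗ v2 ⊗ v3.
v1, v2, v3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
T = np.einsum('i,j,k->ijk', v1, v2, v3)
S = pi_S(T)

# pi_S is a projection (idempotent), and its image is permutation-invariant.
assert np.allclose(pi_S(S), S)
assert np.allclose(S, np.transpose(S, (1, 0, 2)))
assert np.allclose(S, np.transpose(S, (2, 1, 0)))
```

Replacing the plain average by a sign-weighted one (`sgn(p)`) gives π_Λ in the same way.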

1.1.3 GL(V_1) × ... × GL(V_k) acts on V_1 ⊗ ... ⊗ V_k

GL(V) is the group of invertible linear maps V → V. An element (g_1, g_2, ..., g_k) ∈ GL(V_1) × ... × GL(V_k) acts on an element v_1 ⊗ v_2 ⊗ ... ⊗ v_k ∈ V_1 ⊗ ... ⊗ V_k by

    (g_1, g_2, ..., g_k) · (v_1 ⊗ ... ⊗ v_k) = g_1(v_1) ⊗ ... ⊗ g_k(v_k)

and on the whole space V_1 ⊗ ... ⊗ V_k by extending linearly. If one picks a basis for each V_1, ..., V_k, say {v^(i)_j}_{j=1}^{n_i} is a basis for V_i, one can write

    g_i(v^(i)_j) = Σ_{l=1}^{n_i} α^(i)_{j,l} v^(i)_l,    (1.2)

and if T ∈ V_1 ⊗ ... ⊗ V_k,

    T = Σ_{j_1,...,j_k} β_{j_1,...,j_k} v^(1)_{j_1} ⊗ ... ⊗ v^(k)_{j_k}.    (1.3)

Thus, if g = (g_1, ..., g_k),

    g · T = Σ_{j_1,...,j_k} β_{j_1,...,j_k} g_1(v^(1)_{j_1}) ⊗ ... ⊗ g_k(v^(k)_{j_k})
          = Σ_{j_1,...,j_k} β_{j_1,...,j_k} Σ_{l_1,...,l_k} α^(1)_{j_1,l_1} ... α^(k)_{j_k,l_k} v^(1)_{l_1} ⊗ ... ⊗ v^(k)_{l_k}
          = Σ_{l_1,...,l_k} ( Σ_{j_1,...,j_k} β_{j_1,...,j_k} α^(1)_{j_1,l_1} ... α^(k)_{j_k,l_k} ) v^(1)_{l_1} ⊗ ... ⊗ v^(k)_{l_k}.    (1.4)

One can note that the α's in (1.2) give the matrix of g_i, and that the β's in (1.3) give the tensor T as a k-dimensional array. Thus the scalars

    Σ_{j_1,...,j_k} β_{j_1,...,j_k} α^(1)_{j_1,l_1} ... α^(k)_{j_k,l_k}

in (1.4) give the coefficients in the k-dimensional array representing g · T.

1.2 Tensor decomposition

Let us start by considering how factorization and decomposition work for tensors of order two, in other words matrices. Depending on the application and the resources for calculation, different decompositions are used. A very important decomposition is the singular value decomposition (SVD). It decomposes a matrix M into a sum of outer products (tensor products) of vectors as

    M = Σ_{r=1}^R σ_r u_r v_r^T = Σ_{r=1}^R σ_r u_r ⊗ v_r.

Here the u_r and v_r are pairwise orthonormal vectors, the σ_r are the singular values, and R is the rank of the matrix M; these conditions make the decomposition essentially unique. The rank of M is the number of non-zero singular values, and the best low-rank approximations of M are given by truncating the sum.
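These properties of the SVD (rank from the singular values, best low-rank approximation by truncation) can be illustrated with NumPy; a sketch of ours, not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 4))

# M = sum_r sigma_r u_r v_r^T, with singular values in decreasing order.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_rebuilt = sum(s[r] * np.outer(U[:, r], Vt[r, :]) for r in range(len(s)))
assert np.allclose(M, M_rebuilt)

# The rank is the number of non-zero singular values.
assert np.linalg.matrix_rank(M) == np.sum(s > 1e-12)

# Truncating the sum after k terms gives the best rank-k approximation in the
# Frobenius norm; its error is sqrt(s_k^2 + ... + s_{R-1}^2) (Eckart-Young).
k = 2
M_k = sum(s[r] * np.outer(U[:, r], Vt[r, :]) for r in range(k))
assert np.isclose(np.linalg.norm(M - M_k), np.sqrt(np.sum(s[k:] ** 2)))
```

The exact error identity holds because the discarded terms are mutually orthogonal in the Frobenius inner product.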

For tensors of order greater than two the situation is different. A decomposition that generalizes the SVD, though not all of its properties, is called CANDECOMP (canonical decomposition), PARAFAC (parallel factors analysis) or the CP decomposition [24]. It is also a sum of tensor products of vectors:

    T = Σ_{r=1}^R v^(1)_r ⊗ ... ⊗ v^(k)_r,

where the V_j are vector spaces and v^(j)_r ∈ V_j. As one can see, the CP decomposition is used to define the rank of a tensor: R is the rank of T if R is the smallest possible number such that equality holds (definition 1.1.6).

A big issue with higher order tensors is that there is no method or algorithm to calculate the CP decomposition exactly, which would also give the rank of a tensor. A common algorithm to calculate the CP decomposition approximately is the alternating least squares (ALS) algorithm. It can be summarized as a least squares method where we let the values from one vector space change while the others are fixed. Then the same is done for the next vector space, and so forth for all vector spaces. If the difference between the approximation and the given tensor is too large, the whole procedure is repeated until the difference is small enough. The algorithm is described in algorithm 1, where T is a tensor of size d_1 × ... × d_N. The norm used is the Frobenius norm, defined as

    ||T||^2 = Σ_{i_1=1,...,i_N=1}^{d_1,...,d_N} |T_{i_1,...,i_N}|^2,    (1.5)

where T_{i_1,...,i_N} denotes the (i_1, ..., i_N) component of T. One thing to notice is that the rank is needed as a parameter for the calculations, so if the rank is not known it needs to be approximated before the algorithm can start.

Algorithm 1: ALS algorithm to calculate the CP decomposition
Require: T, R
Initialize a^(n)_r ∈ R^{d_n} for n = 1, ..., N and r = 1, ..., R.
repeat
  for n = 1, ..., N do
    Solve min over a^(n)_i, i = 1, ..., R of || T − Σ_{r=1}^R a^(1)_r ⊗ ... ⊗ a^(N)_r ||^2.
    Update a^(n)_i to its newly calculated value, for i = 1, ..., R.
  end for
until || T − Σ_{r=1}^R a^(1)_r ⊗ ... ⊗ a^(N)_r ||^2 < threshold, or the maximum number of iterations is reached.
return a^(1)_r, ..., a^(N)_r for r = 1, ..., R.

This is actually a way to determine the rank of a tensor, but the method has a few problems. First of all there is the issue of border rank (see section 2.1), which makes it possible to approximate some tensors arbitrarily well with tensors of lower rank (see example 2.1.1). Furthermore, the algorithm is not guaranteed to converge to a global optimum, and even if it does converge, it might need a large number of iterations [24].
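For order-three tensors, each inner minimization above is an ordinary linear least-squares problem against an unfolding of T. A minimal Python/NumPy sketch of this scheme (ours, not the thesis's Appendix A code; the helper names khatri_rao, cp_als and cp_reconstruct are our choices, and a fixed number of sweeps replaces the threshold test):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R), giving J*K x R."""
    return np.stack([np.kron(B[:, r], C[:, r]) for r in range(B.shape[1])], axis=1)

def cp_als(T, R, sweeps=50, seed=0):
    """Minimal ALS sketch for an order-3 CP decomposition T ~ sum_r a_r ⊗ b_r ⊗ c_r."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    for _ in range(sweeps):
        # Each factor update is a linear least-squares solve against the
        # matching unfolding of T, with the other two factors held fixed.
        A = np.linalg.lstsq(khatri_rao(B, C), T.reshape(I, J * K).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), np.moveaxis(T, 1, 0).reshape(J, I * K).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), np.moveaxis(T, 2, 0).reshape(K, I * J).T, rcond=None)[0].T
    return A, B, C

def cp_reconstruct(A, B, C):
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Recover a rank-one 2 x 2 x 2 tensor (for rank one, a single sweep already suffices).
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
T = np.einsum('i,j,k->ijk', a, b, c)
A, B, C = cp_als(T, R=1, sweeps=5)
assert np.linalg.norm(T - cp_reconstruct(A, B, C)) < 1e-8
```

Note that the rank R is an input, exactly as in Algorithm 1, and that this sketch inherits the convergence caveats discussed above.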

1.3 Algebraic geometry

In this section we introduce basic notions of algebraic geometry, which is the study of objects defined by polynomial equations. References for this section are [13, 17, 25, 31], and, for section 1.3.6, [6].

1.3.1 Basic definitions

Definition (Monomial). A monomial in variables x_1, x_2, ..., x_n is a product of variables x_1^{α_1} x_2^{α_2} ... x_n^{α_n}, where α_i ∈ N = {0, 1, 2, ...}. Another notation for this is x^α, where x = (x_1, x_2, ..., x_n) and α = (α_1, α_2, ..., α_n) ∈ N^n; α is called a multi-index.

Definition (Polynomial). Given a field F, a polynomial is a finite linear combination of monomials with coefficients in F, i.e. if f is a polynomial over F it can be written

    f = Σ_{α ∈ A} c_α x^α

for some finite set A and c_α ∈ F. A homogeneous polynomial is a polynomial where all the multi-indices α ∈ A sum to the same integer; in other words, all the monomials have the same degree.

The set F[x_1, x_2, ..., x_n] of all polynomials over the field F in variables x_1, x_2, ..., x_n forms a commutative ring. Since it will be important in the sequel, we recall some important definitions and results from ring theory.

Definition (Ideal). If R is a commutative ring (e.g. F[x_1, x_2, ..., x_n]), an ideal in R is a set I for which the following holds:

    If x, y ∈ I, then x + y ∈ I (I is a subgroup of (R, +)).
    If x ∈ I and r ∈ R, then rx ∈ I.

If f_1, f_2, ..., f_k ∈ R, the ideal generated by f_1, f_2, ..., f_k, denoted ⟨f_1, f_2, ..., f_k⟩, is defined as:

    ⟨f_1, f_2, ..., f_k⟩ = { Σ_{i=1}^k q_i f_i : q_i ∈ R }.

The next theorem is a special case of Hilbert's basis theorem.

Theorem. Every ideal in the polynomial ring F[x_1, x_2, ..., x_n] is finitely generated, i.e. for every ideal I there exist polynomials f_1, f_2, ..., f_k such that I = ⟨f_1, f_2, ..., f_k⟩.
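Membership in a finitely generated polynomial ideal can be tested in practice with a Gröbner basis: a polynomial lies in the ideal exactly when its remainder on division by the basis is zero. A small sympy sketch (ours), using the generators of the twisted cubic's ideal as a concrete example:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A Groebner basis for the ideal <y - x**2, z - x**3>.
G = sp.groebner([y - x**2, z - x**3], x, y, z, order='lex')

# x*z - y**2 is a consequence of the generators, so it lies in the ideal;
# x + y does not vanish on the common zero set, so it does not.
assert G.contains(x*z - y**2)
assert not G.contains(x + y)
```

Here G.contains reduces the polynomial modulo the basis and checks that the remainder vanishes.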

1.3.2 Varieties and ideals

Definition (Affine algebraic set). An affine algebraic set is the set X ⊆ F^n of solutions to a system of polynomial equations

    f_1 = 0, f_2 = 0, ..., f_k = 0

for a given set {f_1, f_2, ..., f_k} of polynomials in n variables. We write X = V(f_1, f_2, ..., f_k) for this affine algebraic set. An algebraic set X is called irreducible, or a variety, if it cannot be written as X = X_1 ∪ X_2 for algebraic sets X_1, X_2 ⊊ X.

Definition (Ideal of an affine algebraic set). For an algebraic set X ⊆ F^n, the ideal of X, denoted I(X), is the set of polynomials f ∈ F[x_1, x_2, ..., x_n] such that f(a_1, a_2, ..., a_n) = 0 for every (a_1, a_2, ..., a_n) ∈ X.

When one works with algebraic sets one wants to find equations for the set, and this can mean different things. A set of polynomials P = {p_1, p_2, ..., p_k} is said to cut out the algebraic set X set-theoretically if the set of common zeros of p_1, p_2, ..., p_k is X. P is said to cut out X ideal-theoretically if P is a generating set for I(X).

Example (Twisted cubic). The twisted cubic is a curve in R^3 which can be given as the image of R under the mapping t ↦ (t, t^2, t^3), fig. 1.1. However, the twisted cubic can also be viewed as an algebraic set, namely V(y − x^2, z − x^3), fig. 1.2.

Figure 1.1: The image of t ↦ (t, t^2, t^3) for −1 ≤ t ≤ 1.
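That the two descriptions of the twisted cubic agree can be checked symbolically: both defining polynomials vanish identically along the parametrization. A small sympy sketch (ours):

```python
import sympy as sp

t = sp.symbols('t')
x, y, z = t, t**2, t**3   # the parametrization t -> (t, t^2, t^3)

# Both generators of V(y - x^2, z - x^3) vanish identically on the curve.
assert sp.expand(y - x**2) == 0
assert sp.expand(z - x**3) == 0

# Other elements of the ideal, such as x*z - y**2, vanish on the curve too.
assert sp.expand(x*z - y**2) == 0
```

The converse inclusion (every common zero of the two polynomials is on the curve) follows by solving y = x^2 and z = x^3 with t = x.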

Figure 1.2: The intersection of the surfaces defined by y − x^2 = 0 and z − x^3 = 0, namely the twisted cubic, for (−1, 0, −1) ≤ (x, y, z) ≤ (1, 1, 1).

Example (Matrices of rank r). Given vector spaces V, W of dimensions n and m and bases {v_i}_{i=1}^n and {w_j}_{j=1}^m respectively, V* ⊗ W can be identified with the set of m × n matrices. The set of matrices of rank at most r is a variety in this space, namely the variety defined as the zero set of all (r + 1) × (r + 1) minors, since a matrix has rank less than or equal to r if and only if all of its (r + 1) × (r + 1) minors are zero. For example, if n = 4 and m = 3, a matrix defining a map between V and W can be written

    ( x_11 x_12 x_13 x_14
      x_21 x_22 x_23 x_24
      x_31 x_32 x_33 x_34 )

and the variety of matrices of rank 2 or less consists of the matrices satisfying

    | x_11 x_12 x_13 |        | x_11 x_12 x_14 |
    | x_21 x_22 x_23 | = 0,   | x_21 x_22 x_24 | = 0,
    | x_31 x_32 x_33 |        | x_31 x_32 x_34 |

    | x_11 x_13 x_14 |        | x_12 x_13 x_14 |
    | x_21 x_23 x_24 | = 0,   | x_22 x_23 x_24 | = 0.
    | x_31 x_33 x_34 |        | x_32 x_33 x_34 |

That these equations cut out the set of 3 × 4 matrices of rank 2 or less set-theoretically is easy to prove. They also generate the ideal of the variety, but this is harder to prove.

1.3.3 Projective spaces and varieties

Definition (Projective space). The n-dimensional projective space over F, denoted P^n(F), is the set F^{n+1} \ {0} modulo the equivalence relation where x ∼ y if and only if x = λy for some λ ∈ F \ {0}. For a vector space V we write PV for the projectivization of V, and if v ∈ V, we write [v] for the equivalence

class to which v belongs, i.e. [v] is the element of PV corresponding to the line λv in V. For a subset X ⊆ PV we will write X̂ for the affine cone of X in V, i.e. X̂ = {v ∈ V : [v] ∈ X}.

We will now define what is meant by a projective algebraic set. Note that the zero locus of a polynomial is not defined in projective space, since in general f(x) ≠ f(λx) for a polynomial f, but x = λx in projective space. However, for a polynomial F which is homogeneous of degree d the zero locus is well defined, since F(λx) = λ^d F(x). Note that even though the zero locus of a homogeneous polynomial is well defined on projective space, homogeneous polynomials are not functions on projective space.

Definition (Projective algebraic set). A projective algebraic set X ⊆ P^n(F) is the solution set to a system of polynomial equations

    F_1(x) = 0, F_2(x) = 0, ..., F_k(x) = 0

for a set {F_1, F_2, ..., F_k} of homogeneous polynomials in n + 1 variables. A projective algebraic set is called irreducible, or a projective variety, if it is not the union of two proper projective algebraic sets.

Definition (Ideal of a projective algebraic set). If X ⊆ P^n(F) is an algebraic set, its ideal I(X) is the set of all homogeneous polynomials which vanish on X, i.e. I(X) consists of all homogeneous polynomials F such that

    F(a_1, a_2, ..., a_{n+1}) = 0

for all (a_1, a_2, ..., a_{n+1}) ∈ X.

Definition (Zariski topology). The Zariski topology on P^n(F) (or F^n) is defined by its closed sets, which are taken to be all the sets X for which there exists a set S of homogeneous polynomials (or arbitrary polynomials in the case of F^n) such that X = {α : f(α) = 0 for all f ∈ S}. The Zariski closure of a set X is the set V(I(X)).

1.3.4 Dimension of an algebraic set

Definition (Tangent space). Let M be a subset of a vector space V over F = R or C and let x ∈ M. The tangent space T̂_x M ⊆ V is the span of the vectors which arise as derivatives α'(0) of smooth curves α : F → M with α(0) = x.
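The earlier remark that the zero locus of a homogeneous polynomial is well defined on projective space can be checked symbolically: scaling the point scales the value by λ^d. A sympy sketch (ours; the degree-2 polynomial F = xz − y^2 is our choice of example):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam')
F = x*z - y**2          # homogeneous of degree d = 2
d = 2

# F(lam * p) = lam**d * F(p), so F(p) = 0 implies F(lam * p) = 0:
# the zero locus descends to projective space.
scaled = F.subs({x: lam*x, y: lam*y, z: lam*z}, simultaneous=True)
assert sp.expand(scaled - lam**d * F) == 0
```

A non-homogeneous polynomial such as x + y^2 fails this identity, which is why its zero set is not well defined projectively.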
For a projective algebraic set X ⊆ PV, the affine tangent space to X at [x] ∈ X is T̂_[x] X := T̂_x X̂.

Definition (Smooth and singular points). If dim T̂_x X is constant at and near x, then x is called a smooth point of X. If x is not smooth, it is called a singular point. For a variety X, let X_smooth and X_sing denote the smooth and singular points of X, respectively.

Definition (Dimension of a variety). For an affine algebraic set X, define the dimension of X as dim(X) := dim(ˆT_x X) for x ∈ X_smooth. For a projective algebraic set X, define the dimension of X as dim(X) := dim(ˆT_x X) − 1 for x ∈ X_smooth.

Example (Cuspidal cubic). The variety X in R² given by X = V(y² − x³) is called the cuspidal cubic, see figure 1.3. The cuspidal cubic has one singular point, namely (0, 0). One can see that both the unit vector in the x-direction and the unit vector in the y-direction are tangent vectors to the variety at the point (0, 0). Thus dim ˆT_{(0,0)} X = 2, but for all points x ≠ (0, 0) on the cuspidal cubic we have dim ˆT_x X = 1, so (0, 0) is a singular point while all other points are smooth, and the dimension of the cuspidal cubic is one.

Figure 1.3: The cuspidal cubic.

Example (Matrices of rank r). Going back to the example of the m × n matrices of rank r or less, these can also be seen as a projective variety. We form the projective space P^{mn−1}(F) (i.e. the space of matrices where matrices A and B are identified iff A = λB for some λ ≠ 0; note that if A and B are identified they have the same rank). The equations will still be the same: the minors of size (r + 1) × (r + 1), which are homogeneous of degree r + 1.

Example (Segre variety). This variety will be very important in the sequel. Let V_1, V_2, ... be complex vector spaces. The two-factor Segre variety is the variety defined as the image of the map

Seg : PV_1 × PV_2 → P(V_1 ⊗ V_2), Seg([v_1], [v_2]) = [v_1 ⊗ v_2],

and it can be seen that the image of this map is the projectivization of the set of rank one tensors in V_1 ⊗ V_2. We can in a similar fashion define the n-factor Segre variety as the image of

Seg : PV_1 × ⋯ × PV_n → P(V_1 ⊗ ⋯ ⊗ V_n), Seg([v_1], ..., [v_n]) = [v_1 ⊗ ⋯ ⊗ v_n]

and the image is once again the projectivization of the set of rank one tensors in V_1 ⊗ ⋯ ⊗ V_n. That the 2-factor Segre variety is an algebraic set follows from the fact that the 2 × 2 minors furnish equations for the variety. In the next chapter we will work with the 3-factor Segre variety, for which equations are provided in chapter 2. For a general proof for the n-factor Segre, see [25, page 103].

Any curve in Seg(PV_1 × PV_2) is of the form v_1(t) ⊗ v_2(t), and its derivative at 0 is v_1′(0) ⊗ v_2(0) + v_1(0) ⊗ v_2′(0). Thus

ˆT_{[v_1 ⊗ v_2]} Seg(PV_1 × PV_2) = V_1 ⊗ v_2 + v_1 ⊗ V_2,

and the intersection of V_1 ⊗ v_2 and v_1 ⊗ V_2 is the one-dimensional space spanned by v_1 ⊗ v_2. Therefore the dimension of the Segre variety is n_1 + n_2 − 2, where n_1, n_2 are the dimensions of V_1, V_2 respectively.

Cones, joins, and secant varieties

Definition (Cone). Let X ⊆ P^n(F) be a projective variety and p ∈ P^n(F) a point. The cone over X with vertex p, J(p, X), is the Zariski closure of the union of all the lines pq joining p with a point q ∈ X, i.e.

J(p, X) = the closure of ⋃_{q ∈ X} pq.

Definition (Join of varieties). Let X_1, X_2 ⊆ P^n(F) be two varieties. The join of X_1 and X_2 is the set

J(X_1, X_2) = the closure of ⋃_{p_1 ∈ X_1, p_2 ∈ X_2, p_1 ≠ p_2} p_1 p_2,

which can be interpreted as the Zariski closure of the union of all cones over X_2 with a vertex in X_1, or vice versa. The join of several varieties X_1, X_2, ..., X_k is defined inductively: J(X_1, X_2, ..., X_k) = J(X_1, J(X_2, ..., X_k)).

Definition (Secant variety). Let X be a variety. The r:th secant variety of X is the set

σ_r(X) = J(X, ..., X) (r copies of X).

Lemma (Secant varieties are varieties). Secant varieties of irreducible algebraic sets are irreducible, i.e. they are varieties.

Proof. See [17, p. 144].

Let X ⊆ P^n(F) be an algebraic set of dimension k. The expected dimension of σ_r(X) is min{rk + r − 1, n}. However, the dimension is not always the expected one.

Definition (Degenerate secant variety).
Let X ⊆ P^n(F) be a projective variety with dim(X) = k. If dim σ_r(X) < min{rk + r − 1, n}, then σ_r(X) is called degenerate with defect δ_r(X) = rk + r − 1 − dim σ_r(X).
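As an illustrative sketch of ours (not taken from the thesis): the expected-dimension count above yields a naive prediction for the generic rank of tensors, namely the smallest r for which rk + r − 1 reaches the ambient dimension. For the three-factor Segre variety of n_1 × n_2 × n_3 tensors we have k = n_1 + n_2 + n_3 − 3 and ambient dimension n_1 n_2 n_3 − 1, giving the prediction ⌈n_1 n_2 n_3 / (n_1 + n_2 + n_3 − 2)⌉, which is correct exactly when no secant variety involved is degenerate. The helper below just evaluates this count.

```python
from math import prod

def expected_generic_rank(dims):
    """Naive generic-rank prediction for a tensor format dims = (n1, n2, ...),
    valid only when the secant varieties of the Segre variety are
    non-degenerate (this fails for some formats, e.g. 3 x 3 x 3)."""
    k = sum(d - 1 for d in dims)   # dimension of the Segre variety
    n = prod(dims) - 1             # dimension of the ambient projective space
    # smallest r with r*k + r - 1 >= n, i.e. r >= (n + 1)/(k + 1)
    return -(-(n + 1) // (k + 1))  # integer ceiling

print(expected_generic_rank((2, 2, 2)))  # 2
print(expected_generic_rank((9, 9, 9)))  # 30
```

For 2 × 2 × 2 tensors the prediction 2 agrees with the true generic rank over C, and for 9 × 9 × 9 tensors it gives 30, the generic rank quoted in section 1.4 below.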

Definition (X-rank). If V is a vector space over C, X ⊆ PV is a projective variety and p ∈ PV is a point, the X-rank of p is the smallest number r of points in X such that p lies in their linear span. The X-border rank of p is the least r such that p lies in σ_r(X), the r:th secant variety of X. The generic X-rank is the smallest r such that σ_r(X) = PV. These notions of X-rank and X-border rank coincide with the notions of tensor rank and tensor border rank (see section 2.1) when X is taken to be the Segre variety.

Lemma (Terracini's lemma). Let x_i, for i = 1, ..., r, be general points of ˆX_i, where the X_i are projective varieties in PV for a complex vector space V, and let [u] = [x_1 + ⋯ + x_r] ∈ J(X_1, ..., X_r). Then

ˆT_{[u]} J(X_1, ..., X_r) = ˆT_{[x_1]} X_1 + ⋯ + ˆT_{[x_r]} X_r.

Proof. It is enough to consider the case u = x_1 + x_2 for x_1 ∈ ˆX_1, x_2 ∈ ˆX_2, where X_1, X_2 ⊆ PV are varieties, and to derive the expression for ˆT_{[u]} J(X_1, X_2). The addition map a : V × V → V is defined by a(v_1, v_2) = v_1 + v_2. Then ˆJ(X_1, X_2) is the closure of a(ˆX_1 × ˆX_2), and so, for general points x_1, x_2, the tangent space ˆT_{[u]} J(X_1, X_2) is obtained by differentiating curves x_1(t) ∈ ˆX_1, x_2(t) ∈ ˆX_2 with x_1(0) = x_1, x_2(0) = x_2. Thus the tangent space at x_1 + x_2 in J(X_1, X_2) is the sum of the tangent spaces at x_1 in X_1 and at x_2 in X_2.

Real algebraic geometry

In section 2.4 we will need the following definition.

Definition (Affine semi-algebraic set). An affine semi-algebraic set is a subset of R^n of the form

⋃_{i=1}^s ⋂_{j=1}^{r_i} {x ∈ R^n : f_{i,j} □_{i,j} 0},

where f_{i,j} ∈ R[x_1, ..., x_n] and each relation □_{i,j} is < or =.

Example (Semi-algebraic set). Consider the semi-algebraic set given by

f_{1,1} = x² + y² − 2, f_{1,2} = x³ − 2y, f_{1,3} = y,
f_{2,1} = x² + y² − 2, f_{2,2} = x − y, f_{2,3} = y,
f_{3,1} = (x − 2)² + y² − 1,
f_{4,1} = (x − 7/2)² + y² − 1/4,

and all □_{i,j} being <. The set can be visualised as in figure 1.4.

Figure 1.4: An example of a semi-algebraic set.

1.4 Application to matrix multiplication

We take a look at the problem of efficient computation of the product of 2 × 2 matrices. Let A, B, C be copies of the space of n × n matrices, and let the multiplication map m_n : A × B → C be given by m_n(M_1, M_2) = M_1 M_2. To compute the matrix M_3 = m_2(M_1, M_2) = M_1 M_2 one can naively use eight multiplications and four additions using the standard method for matrix multiplication. Explicitly, if

M_1 = (a^1_1 a^1_2 ; a^2_1 a^2_2), M_2 = (b^1_1 b^1_2 ; b^2_1 b^2_2),

one can compute M_3 = M_1 M_2 by

c^1_1 = a^1_1 b^1_1 + a^1_2 b^2_1
c^1_2 = a^1_1 b^1_2 + a^1_2 b^2_2
c^2_1 = a^2_1 b^1_1 + a^2_2 b^2_1
c^2_2 = a^2_1 b^1_2 + a^2_2 b^2_2.

However, this is not optimal. Strassen [37] showed that one can calculate M_3 = M_1 M_2 using only seven multiplications. First, one calculates

k_1 = (a^1_1 + a^2_2)(b^1_1 + b^2_2)
k_2 = (a^2_1 + a^2_2) b^1_1
k_3 = a^1_1 (b^1_2 − b^2_2)
k_4 = a^2_2 (−b^1_1 + b^2_1)
k_5 = (a^1_1 + a^1_2) b^2_2
k_6 = (−a^1_1 + a^2_1)(b^1_1 + b^1_2)
k_7 = (a^1_2 − a^2_2)(b^2_1 + b^2_2)

and the coefficients of M_3 = M_1 M_2 can then be calculated as

c^1_1 = k_1 + k_4 − k_5 + k_7
c^2_1 = k_2 + k_4
c^1_2 = k_3 + k_5
c^2_2 = k_1 − k_2 + k_3 + k_6.

Now, the map m_n : A × B → C is obviously a bilinear map and as such can be expressed as a tensor. Let us take a look at m_2. Equip A, B, C with the same basis of matrix units

{(1 0; 0 0), (0 1; 0 0), (0 0; 1 0), (0 0; 0 1)}.

For clarity, let m_2 : A × B → C and let the bases be {a^j_i}, {b^j_i}, {c^j_i} for i, j = 1, 2. Let the dual bases of A, B be {α^j_i}, {β^j_i} respectively. Thus m_2 ∈ A ⊗ B ⊗ C, and the standard algorithm for matrix multiplication corresponds to the following rank eight decomposition of m_2:

m_2 = (α^1_1 ⊗ β^1_1 + α^1_2 ⊗ β^2_1) ⊗ c^1_1 + (α^1_1 ⊗ β^1_2 + α^1_2 ⊗ β^2_2) ⊗ c^1_2 + (α^2_1 ⊗ β^1_1 + α^2_2 ⊗ β^2_1) ⊗ c^2_1 + (α^2_1 ⊗ β^1_2 + α^2_2 ⊗ β^2_2) ⊗ c^2_2,

whereas Strassen's algorithm corresponds to a rank seven decomposition of m_2:

m_2 = (α^1_1 + α^2_2) ⊗ (β^1_1 + β^2_2) ⊗ (c^1_1 + c^2_2)
+ (α^2_1 + α^2_2) ⊗ β^1_1 ⊗ (c^2_1 − c^2_2)
+ α^1_1 ⊗ (β^1_2 − β^2_2) ⊗ (c^1_2 + c^2_2)
+ α^2_2 ⊗ (−β^1_1 + β^2_1) ⊗ (c^1_1 + c^2_1)
+ (α^1_1 + α^1_2) ⊗ β^2_2 ⊗ (−c^1_1 + c^1_2)
+ (−α^1_1 + α^2_1) ⊗ (β^1_1 + β^1_2) ⊗ c^2_2
+ (α^1_2 − α^2_2) ⊗ (β^2_1 + β^2_2) ⊗ c^1_1.

It has been proven that both the rank and the border rank of m_2 are seven [26]. This can be seen from the fact that σ_7(Seg(PA × PB × PC)) = P(A ⊗ B ⊗ C). However, the rank of m_n for n ≥ 3 is still unknown. Even for m_3, all that is known is that the rank lies between 19 and 23 [25, chapter 11]. It is interesting to note that this is lower than the generic rank for 9 × 9 × 9 tensors, which is 30 (theorem 2.3.8). The rank of m_2, however, equals the generic rank seven.
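The two algorithms above can be checked against each other numerically. The sketch below is our illustration (not code from the thesis): it implements both the naive eight-multiplication method and Strassen's seven-multiplication method for 2 × 2 matrices, indexing entries so that a[i][j] is the entry in row i, column j.

```python
def naive_2x2(a, b):
    # standard algorithm: eight scalar multiplications
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def strassen_2x2(a, b):
    # Strassen's algorithm: seven scalar multiplications k1..k7
    k1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1])
    k2 = (a[1][0] + a[1][1]) * b[0][0]
    k3 = a[0][0] * (b[0][1] - b[1][1])
    k4 = a[1][1] * (-b[0][0] + b[1][0])
    k5 = (a[0][0] + a[0][1]) * b[1][1]
    k6 = (-a[0][0] + a[1][0]) * (b[0][0] + b[0][1])
    k7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1])
    return [[k1 + k4 - k5 + k7, k3 + k5],
            [k2 + k4,           k1 - k2 + k3 + k6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(naive_2x2(A, B) == strassen_2x2(A, B))  # True
```

Applied recursively to block matrices, the seven-multiplication scheme is what drives the sub-cubic complexity of Strassen's matrix multiplication.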


Chapter 2. Tensor rank

In this chapter we present some results on tensor rank, mainly from the viewpoint of algebraic geometry. We introduce different types of rank of a tensor and show some basic results concerning these different types of rank. We derive equations for the Segre variety and show some basic results on secant defects of the Segre variety and generic ranks. A general reference for this chapter is [25].

2.1 Different notions of rank

If T : U → V is a linear operator and U, V are vector spaces, the rank of T is the dimension of the image T(U). If one considers T as an element of U* ⊗ V, the rank of T coincides with the smallest integer R such that T can be written

T = Σ_{i=1}^R α_i ⊗ v_i.

However, a tensor T ∈ V_1 ⊗ V_2 ⊗ ⋯ ⊗ V_k can be viewed as a linear operator V_i* → V_1 ⊗ ⋯ ⊗ V_{i−1} ⊗ V_{i+1} ⊗ ⋯ ⊗ V_k for any 1 ≤ i ≤ k, so T can be viewed as a linear operator in these k different ways, and each way gives a possibly different rank. The k-tuple (dim T(V_1*), ..., dim T(V_k*)) is known as the multilinear rank of T. The smallest integer R such that T can be written

T = Σ_{i=1}^R v^(1)_i ⊗ ⋯ ⊗ v^(k)_i

is known as the rank of T (sometimes called the outer product rank). If T is a tensor, let R(T) denote the rank of T. The idea of tensor rank gets more complicated still. If a tensor T has rank R, it is possible that there exist tensors of rank R′ < R such that T is the limit of these tensors, in which case T is said to have border rank R′. Let R̲(T) denote the border rank of the tensor T.
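A hedged sketch of ours (not part of the thesis): the multilinear rank is directly computable, since dim T(V_i*) is the matrix rank of the flattening of T along the i:th factor. The snippet below computes it for a rank-one 3 × 4 × 5 tensor, for which every flattening has rank one, and for a random tensor of the same format, whose flattenings generically have full rank.

```python
import numpy as np

rng = np.random.default_rng(1)

def multilinear_rank(T):
    # rank of each flattening of T, i.e. dim T(V_i*) for i = 1, 2, 3
    I, J, K = T.shape
    return (int(np.linalg.matrix_rank(T.reshape(I, J * K))),
            int(np.linalg.matrix_rank(np.moveaxis(T, 1, 0).reshape(J, I * K))),
            int(np.linalg.matrix_rank(np.moveaxis(T, 2, 0).reshape(K, I * J))))

# a rank-one 3x4x5 tensor: every flattening has matrix rank one
T1 = np.einsum('i,j,k->ijk', rng.standard_normal(3),
               rng.standard_normal(4), rng.standard_normal(5))
# a random ("generic") 3x4x5 tensor: flattenings have full rank
G = rng.standard_normal((3, 4, 5))

print(multilinear_rank(T1), multilinear_rank(G))  # (1, 1, 1) (3, 4, 5)
```

Note that, unlike the multilinear rank, the outer product rank and the border rank cannot be read off from matrix ranks of flattenings in this simple way.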

Example (Border rank). Consider a 2 × 2 × 2 tensor of the form

T = a_1 ⊗ b_1 ⊗ c_1 + a_2 ⊗ b_1 ⊗ c_1 + a_1 ⊗ b_2 ⊗ c_1 + a_1 ⊗ b_1 ⊗ c_2.

One can show that T has rank 3, for instance with a method for p × p × 2 tensors used in [36]. Now consider the rank-two tensor

T(ε) = (1/ε) ((ε − 1) a_1 ⊗ b_1 ⊗ c_1 + (a_1 + ε a_2) ⊗ (b_1 + ε b_2) ⊗ (c_1 + ε c_2)).

Calculating T(ε) numerically for a few values of ε, say ε = 1, 10^{−1}, 10^{−3}, 10^{−5}, gives a clear indication that T(ε) → T when ε → 0. Indeed, expanding the product,

T(ε) = (1/ε) (ε a_1 ⊗ b_1 ⊗ c_1 + ε a_2 ⊗ b_1 ⊗ c_1 + ε a_1 ⊗ b_2 ⊗ c_1 + ε a_1 ⊗ b_1 ⊗ c_2 + O(ε²)) → T, when ε → 0.

So T has rank three, but there are tensors of rank two arbitrarily close to it; the border rank of T is two.

There is a well-known result for matrices which states that if one fills an n × m matrix with random entries, the matrix will have maximal rank, min{n, m}, with probability one. In the case of square matrices, a random matrix will be invertible with probability one. For tensors over C the situation is similar: a random tensor will have a certain rank with probability one; this rank is called the generic rank. Over R, however, there can be multiple ranks, called typical ranks, which a random tensor takes with non-zero probability; see more in section 2.4. For now, we recall the definition of generic X-rank, and that the generic rank is the smallest r such that the r:th secant variety of the Segre variety is the whole space. Compare these observations and definitions with the fact that GL(n, C) is an n²-dimensional manifold in the n²-dimensional space of n × n matrices, and a random matrix in this space is invertible with probability one.
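To see the convergence concretely, here is a small numerical sketch of ours (the choice a_i = b_i = c_i = e_i of standard basis vectors is our illustration, not the thesis's particular numbers): it builds the rank-three tensor T and the rank-two tensors T(ε), and prints the maximal entrywise error, which shrinks linearly with ε.

```python
import numpy as np

def outer3(a, b, c):
    # the rank-one tensor a ⊗ b ⊗ c as a numpy array
    return np.einsum('i,j,k->ijk', a, b, c)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# the rank-three tensor T above, with a_i = b_i = c_i = e_i
T = (outer3(e1, e1, e1) + outer3(e2, e1, e1)
     + outer3(e1, e2, e1) + outer3(e1, e1, e2))

def T_eps(eps):
    # the rank-two tensor (1/eps)((eps-1) e1⊗e1⊗e1 + u⊗u⊗u), u = e1 + eps*e2
    u = e1 + eps * e2
    return ((eps - 1.0) * outer3(e1, e1, e1) + outer3(u, u, u)) / eps

for eps in (1e-1, 1e-3, 1e-5):
    print(eps, np.abs(T_eps(eps) - T).max())  # error shrinks like eps
```

The leftover terms of order ε are exactly the O(ε²)-part of the expansion divided by ε, which is why the error decays at the same rate as ε itself.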

Results on tensor rank

Theorem. Given an I × J × K tensor T, R(T) is the minimal number p of rank one J × K matrices S_1, ..., S_p such that T_i ∈ span(S_1, ..., S_p) for all slices T_i of T.

Proof. For a tensor T one can write

T = Σ_{k=1}^{R(T)} a_k ⊗ b_k ⊗ c_k,

and thus, if a_k = (a^1_k, ..., a^I_k)^T, we have

T_i = Σ_{k=1}^{R(T)} a^i_k b_k ⊗ c_k,

so T_i ∈ span(b_1 ⊗ c_1, ..., b_{R(T)} ⊗ c_{R(T)}) for i = 1, ..., I, which proves p ≤ R(T). Conversely, if T_i ∈ span(S_1, ..., S_p) with rank(S_k) = 1, say S_k = y_k ⊗ z_k, for i = 1, ..., I, we can write

T_i = Σ_{k=1}^p x^i_k S_k = Σ_{k=1}^p x^i_k y_k ⊗ z_k,

and thus with x_k = (x^1_k, ..., x^I_k) we get

T = Σ_{k=1}^p x_k ⊗ y_k ⊗ z_k,

which proves R(T) ≤ p, resulting in R(T) = p.

Corollary. For an I × J × K tensor T, R(T) ≤ min{IJ, IK, JK}.

Proof. One observes from the theorem above that one can slice T along any of the three directions, and thus one can pick the direction which results in the smallest matrices, say of size m × n. The space of m × n matrices is spanned by the mn rank one matrices M_kl = {δ_kl(i, j)}_{i,j=1}^{m,n}. Thus no more than mn rank one matrices are needed to get all the slices in the linear span.

Symmetric tensor rank

Definition (Symmetric rank). Given a tensor T ∈ S^d V, the symmetric rank of T, denoted R_S(T), is defined as the smallest R such that

T = Σ_{r=1}^R v_r ⊗ ⋯ ⊗ v_r (d factors),

for v_r ∈ V. The symmetric border rank of T is defined as the smallest R such that T is the limit of symmetric tensors of symmetric rank R.

Since we, over R and C, can put symmetric tensors of order d in bijective correspondence with homogeneous polynomials of degree d, and vectors in bijective correspondence with linear forms, the symmetric rank of a given symmetric

tensor can be translated to the number R of linear forms needed for a given homogeneous polynomial of degree d to be expressed as a sum of linear forms to the power of d. That is, if P is a homogeneous polynomial of degree d over C, what is the least R such that P = l_1^d + ⋯ + l_R^d for linear forms l_i? Over C, the following theorem gives an answer to this question in the generic case.

Theorem (Alexander-Hirschowitz). The generic symmetric rank in S^d C^n is

⌈ (1/n) (n + d − 1 choose d) ⌉   (2.1)

except for d = 2, where the generic symmetric rank is n, and for (d, n) ∈ {(3, 5), (4, 3), (4, 4), (4, 5)}, where the generic symmetric rank is (2.1) plus one.

Proof. A proof can be found in [7]. An overview and introduction to the proof can be found in [25, chapter 15].

During the American Institute of Mathematics (AIM) workshop in Palo Alto, USA, 2008 (see [33]), P. Comon stated the following conjecture:

Conjecture. For a symmetric tensor T ∈ S^d C^n, its symmetric rank R_S(T) and tensor rank R(T) are equal.

So far the conjecture has been proved true for R(T) = 1, 2, for R(T) ≤ n and for sufficiently large d with respect to n [10], and for tensors of border rank two [3]. Furthermore, during the AIM workshop D. Gross showed that the conjecture is also true when R(T) = R(T_{k,d−k}) with k < d/2; here T_{k,d−k} ∈ S^k V ⊗ S^{d−k} V is a way to view T as a second order tensor.

Kruskal rank

The Kruskal rank is named after Joseph B. Kruskal and is also called k-rank. For a matrix A the k-rank is the largest number κ_A such that any κ_A columns of A are linearly independent. Let T = Σ_{r=1}^R a_r ⊗ b_r ⊗ c_r and let A, B and C denote the matrices with a_1, ..., a_R, b_1, ..., b_R and c_1, ..., c_R as column vectors, respectively. Then the k-rank of T is the tuple (κ_A, κ_B, κ_C) of the k-ranks of the matrices A, B, C. With the k-rank of T, Kruskal showed that the condition

κ_A + κ_B + κ_C ≥ 2R(T) + 2

is sufficient for T to have a unique, up to trivialities, CP decomposition ([22]).
This result has been generalized in [35] to tensors of order d as

Σ_{i=1}^d κ_{A_i} ≥ 2R(T) + d − 1,   (2.2)

where A_i is the matrix corresponding to the i:th vector space, with k-rank κ_{A_i}.
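A minimal sketch of ours (not code from the thesis) for checking Kruskal's condition: the k-rank of a matrix is computed by brute force over column subsets, and the order-3 condition κ_A + κ_B + κ_C ≥ 2R + 2 is then tested for a random rank-3 decomposition, where the factor matrices generically have full k-rank.

```python
import numpy as np
from itertools import combinations

def k_rank(A, tol=1e-10):
    """Largest k such that every set of k columns of A is linearly independent."""
    n_cols = A.shape[1]
    k = 0
    for size in range(1, n_cols + 1):
        if all(np.linalg.matrix_rank(A[:, list(cols)], tol=tol) == size
               for cols in combinations(range(n_cols), size)):
            k = size
        else:
            break
    return k

rng = np.random.default_rng(2)
R = 3  # number of rank-one terms in the decomposition
A, B, C = (rng.standard_normal((4, R)) for _ in range(3))
kappas = (k_rank(A), k_rank(B), k_rank(C))
print(kappas, sum(kappas) >= 2 * R + 2)  # generically (3, 3, 3) and True
```

Note that the inequality certifies uniqueness of the decomposition with these factors only when R is actually the rank of the assembled tensor; the brute-force subset search is exponential in the number of columns and only meant for small examples.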


More information

Yuriy Drozd. Intriduction to Algebraic Geometry. Kaiserslautern 1998/99

Yuriy Drozd. Intriduction to Algebraic Geometry. Kaiserslautern 1998/99 Yuriy Drozd Intriduction to Algebraic Geometry Kaiserslautern 1998/99 CHAPTER 1 Affine Varieties 1.1. Ideals and varieties. Hilbert s Basis Theorem Let K be an algebraically closed field. We denote by

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Mic ael Flohr Representation theory of semi-simple Lie algebras: Example su(3) 6. and 20. June 2003

Mic ael Flohr Representation theory of semi-simple Lie algebras: Example su(3) 6. and 20. June 2003 Handout V for the course GROUP THEORY IN PHYSICS Mic ael Flohr Representation theory of semi-simple Lie algebras: Example su(3) 6. and 20. June 2003 GENERALIZING THE HIGHEST WEIGHT PROCEDURE FROM su(2)

More information

ALGEBRAIC GEOMETRY (NMAG401) Contents. 2. Polynomial and rational maps 9 3. Hilbert s Nullstellensatz and consequences 23 References 30

ALGEBRAIC GEOMETRY (NMAG401) Contents. 2. Polynomial and rational maps 9 3. Hilbert s Nullstellensatz and consequences 23 References 30 ALGEBRAIC GEOMETRY (NMAG401) JAN ŠŤOVÍČEK Contents 1. Affine varieties 1 2. Polynomial and rational maps 9 3. Hilbert s Nullstellensatz and consequences 23 References 30 1. Affine varieties The basic objects

More information

Upper triangular matrices and Billiard Arrays

Upper triangular matrices and Billiard Arrays Linear Algebra and its Applications 493 (2016) 508 536 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Upper triangular matrices and Billiard Arrays

More information

Linear Algebra Lecture Notes-I

Linear Algebra Lecture Notes-I Linear Algebra Lecture Notes-I Vikas Bist Department of Mathematics Panjab University, Chandigarh-6004 email: bistvikas@gmail.com Last revised on February 9, 208 This text is based on the lectures delivered

More information

The geometry of projective space

The geometry of projective space Chapter 1 The geometry of projective space 1.1 Projective spaces Definition. A vector subspace of a vector space V is a non-empty subset U V which is closed under addition and scalar multiplication. In

More information

ON PARTIAL AND GENERIC UNIQUENESS OF BLOCK TERM TENSOR DECOMPOSITIONS IN SIGNAL PROCESSING MING YANG

ON PARTIAL AND GENERIC UNIQUENESS OF BLOCK TERM TENSOR DECOMPOSITIONS IN SIGNAL PROCESSING MING YANG ON PARTIAL AND GENERIC UNIQUENESS OF BLOCK TERM TENSOR DECOMPOSITIONS IN SIGNAL PROCESSING MING YANG Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements

More information

REPRESENTATION THEORY OF S n

REPRESENTATION THEORY OF S n REPRESENTATION THEORY OF S n EVAN JENKINS Abstract. These are notes from three lectures given in MATH 26700, Introduction to Representation Theory of Finite Groups, at the University of Chicago in November

More information

to appear insiam Journal on Matrix Analysis and Applications.

to appear insiam Journal on Matrix Analysis and Applications. to appear insiam Journal on Matrix Analysis and Applications. SYMMETRIC TENSORS AND SYMMETRIC TENSOR RANK PIERRE COMON, GENE GOLUB, LEK-HENG LIM, AND BERNARD MOURRAIN Abstract. A symmetric tensor is a

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS. 1. Lie groups

LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS. 1. Lie groups LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS 1. Lie groups A Lie group is a special smooth manifold on which there is a group structure, and moreover, the two structures are compatible. Lie groups are

More information

SYLLABUS. 1 Linear maps and matrices

SYLLABUS. 1 Linear maps and matrices Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V

More information

SUMMARY ALGEBRA I LOUIS-PHILIPPE THIBAULT

SUMMARY ALGEBRA I LOUIS-PHILIPPE THIBAULT SUMMARY ALGEBRA I LOUIS-PHILIPPE THIBAULT Contents 1. Group Theory 1 1.1. Basic Notions 1 1.2. Isomorphism Theorems 2 1.3. Jordan- Holder Theorem 2 1.4. Symmetric Group 3 1.5. Group action on Sets 3 1.6.

More information

2. Intersection Multiplicities

2. Intersection Multiplicities 2. Intersection Multiplicities 11 2. Intersection Multiplicities Let us start our study of curves by introducing the concept of intersection multiplicity, which will be central throughout these notes.

More information

MATH JORDAN FORM

MATH JORDAN FORM MATH 53 JORDAN FORM Let A,, A k be square matrices of size n,, n k, respectively with entries in a field F We define the matrix A A k of size n = n + + n k as the block matrix A 0 0 0 0 A 0 0 0 0 A k It

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Abstract Vector Spaces

Abstract Vector Spaces CHAPTER 1 Abstract Vector Spaces 1.1 Vector Spaces Let K be a field, i.e. a number system where you can add, subtract, multiply and divide. In this course we will take K to be R, C or Q. Definition 1.1.

More information

Topological K-theory

Topological K-theory Topological K-theory Robert Hines December 15, 2016 The idea of topological K-theory is that spaces can be distinguished by the vector bundles they support. Below we present the basic ideas and definitions

More information

CHAPTER 1. AFFINE ALGEBRAIC VARIETIES

CHAPTER 1. AFFINE ALGEBRAIC VARIETIES CHAPTER 1. AFFINE ALGEBRAIC VARIETIES During this first part of the course, we will establish a correspondence between various geometric notions and algebraic ones. Some references for this part of the

More information

MOTIVES ASSOCIATED TO SUMS OF GRAPHS

MOTIVES ASSOCIATED TO SUMS OF GRAPHS MOTIVES ASSOCIATED TO SUMS OF GRAPHS SPENCER BLOCH 1. Introduction In quantum field theory, the path integral is interpreted perturbatively as a sum indexed by graphs. The coefficient (Feynman amplitude)

More information

3.1. Derivations. Let A be a commutative k-algebra. Let M be a left A-module. A derivation of A in M is a linear map D : A M such that

3.1. Derivations. Let A be a commutative k-algebra. Let M be a left A-module. A derivation of A in M is a linear map D : A M such that ALGEBRAIC GROUPS 33 3. Lie algebras Now we introduce the Lie algebra of an algebraic group. First, we need to do some more algebraic geometry to understand the tangent space to an algebraic variety at

More information

Math 396. Quotient spaces

Math 396. Quotient spaces Math 396. Quotient spaces. Definition Let F be a field, V a vector space over F and W V a subspace of V. For v, v V, we say that v v mod W if and only if v v W. One can readily verify that with this definition

More information

1 Fields and vector spaces

1 Fields and vector spaces 1 Fields and vector spaces In this section we revise some algebraic preliminaries and establish notation. 1.1 Division rings and fields A division ring, or skew field, is a structure F with two binary

More information

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations

Definitions. Notations. Injective, Surjective and Bijective. Divides. Cartesian Product. Relations. Equivalence Relations Page 1 Definitions Tuesday, May 8, 2018 12:23 AM Notations " " means "equals, by definition" the set of all real numbers the set of integers Denote a function from a set to a set by Denote the image of

More information

Math 429/581 (Advanced) Group Theory. Summary of Definitions, Examples, and Theorems by Stefan Gille

Math 429/581 (Advanced) Group Theory. Summary of Definitions, Examples, and Theorems by Stefan Gille Math 429/581 (Advanced) Group Theory Summary of Definitions, Examples, and Theorems by Stefan Gille 1 2 0. Group Operations 0.1. Definition. Let G be a group and X a set. A (left) operation of G on X is

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Polynomial mappings into a Stiefel manifold and immersions

Polynomial mappings into a Stiefel manifold and immersions Polynomial mappings into a Stiefel manifold and immersions Iwona Krzyżanowska Zbigniew Szafraniec November 2011 Abstract For a polynomial mapping from S n k to the Stiefel manifold Ṽk(R n ), where n k

More information

div(f ) = D and deg(d) = deg(f ) = d i deg(f i ) (compare this with the definitions for smooth curves). Let:

div(f ) = D and deg(d) = deg(f ) = d i deg(f i ) (compare this with the definitions for smooth curves). Let: Algebraic Curves/Fall 015 Aaron Bertram 4. Projective Plane Curves are hypersurfaces in the plane CP. When nonsingular, they are Riemann surfaces, but we will also consider plane curves with singularities.

More information

A Do It Yourself Guide to Linear Algebra

A Do It Yourself Guide to Linear Algebra A Do It Yourself Guide to Linear Algebra Lecture Notes based on REUs, 2001-2010 Instructor: László Babai Notes compiled by Howard Liu 6-30-2010 1 Vector Spaces 1.1 Basics Definition 1.1.1. A vector space

More information

REPRESENTATION THEORY. WEEKS 10 11

REPRESENTATION THEORY. WEEKS 10 11 REPRESENTATION THEORY. WEEKS 10 11 1. Representations of quivers I follow here Crawley-Boevey lectures trying to give more details concerning extensions and exact sequences. A quiver is an oriented graph.

More information

Algebraic structures I

Algebraic structures I MTH5100 Assignment 1-10 Algebraic structures I For handing in on various dates January March 2011 1 FUNCTIONS. Say which of the following rules successfully define functions, giving reasons. For each one

More information

Patrick Iglesias-Zemmour

Patrick Iglesias-Zemmour Mathematical Surveys and Monographs Volume 185 Diffeology Patrick Iglesias-Zemmour American Mathematical Society Contents Preface xvii Chapter 1. Diffeology and Diffeological Spaces 1 Linguistic Preliminaries

More information

(1) A frac = b : a, b A, b 0. We can define addition and multiplication of fractions as we normally would. a b + c d

(1) A frac = b : a, b A, b 0. We can define addition and multiplication of fractions as we normally would. a b + c d The Algebraic Method 0.1. Integral Domains. Emmy Noether and others quickly realized that the classical algebraic number theory of Dedekind could be abstracted completely. In particular, rings of integers

More information

MATH32062 Notes. 1 Affine algebraic varieties. 1.1 Definition of affine algebraic varieties

MATH32062 Notes. 1 Affine algebraic varieties. 1.1 Definition of affine algebraic varieties MATH32062 Notes 1 Affine algebraic varieties 1.1 Definition of affine algebraic varieties We want to define an algebraic variety as the solution set of a collection of polynomial equations, or equivalently,

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

On Strassen s Conjecture

On Strassen s Conjecture On Strassen s Conjecture Elisa Postinghel (KU Leuven) joint with Jarek Buczyński (IMPAN/MIMUW) Daejeon August 3-7, 2015 Elisa Postinghel (KU Leuven) () On Strassen s Conjecture SIAM AG 2015 1 / 13 Introduction:

More information

LECTURE 3: REPRESENTATION THEORY OF SL 2 (C) AND sl 2 (C)

LECTURE 3: REPRESENTATION THEORY OF SL 2 (C) AND sl 2 (C) LECTURE 3: REPRESENTATION THEORY OF SL 2 (C) AND sl 2 (C) IVAN LOSEV Introduction We proceed to studying the representation theory of algebraic groups and Lie algebras. Algebraic groups are the groups

More information

ALGEBRAIC GROUPS. Disclaimer: There are millions of errors in these notes!

ALGEBRAIC GROUPS. Disclaimer: There are millions of errors in these notes! ALGEBRAIC GROUPS Disclaimer: There are millions of errors in these notes! 1. Some algebraic geometry The subject of algebraic groups depends on the interaction between algebraic geometry and group theory.

More information

Algebraic Geometry. Andreas Gathmann. Class Notes TU Kaiserslautern 2014

Algebraic Geometry. Andreas Gathmann. Class Notes TU Kaiserslautern 2014 Algebraic Geometry Andreas Gathmann Class Notes TU Kaiserslautern 2014 Contents 0. Introduction......................... 3 1. Affine Varieties........................ 9 2. The Zariski Topology......................

More information

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016 Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:

More information

ALGEBRAIC GEOMETRY I - FINAL PROJECT

ALGEBRAIC GEOMETRY I - FINAL PROJECT ALGEBRAIC GEOMETRY I - FINAL PROJECT ADAM KAYE Abstract This paper begins with a description of the Schubert varieties of a Grassmannian variety Gr(k, n) over C Following the technique of Ryan [3] for

More information

T -equivariant tensor rank varieties and their K-theory classes

T -equivariant tensor rank varieties and their K-theory classes T -equivariant tensor rank varieties and their K-theory classes 2014 July 18 Advisor: Professor Anders Buch, Department of Mathematics, Rutgers University Overview 1 Equivariant K-theory Overview 2 Determinantal

More information