Decomposing Commutative Representations of Two-dimensional Quivers

Decomposing Commutative Representations of Two-dimensional Quivers

May 20, 2017

Abstract

In this paper, a means of determining the decomposition type of commutative representations of the quivers CL(f) and CL(f, f) in a computationally facile way is presented. A linear relationship is established between some numerical invariants, defined for every representation of the quivers in question, and the numbers of its different constituent indecomposable representations. The theoretical background as well as the practical motivation for this work is covered briefly in the first sections of the text. The limitations of the method for representations of other quivers are discussed, and a brief treatment of the outlook for the procedure as well as a review of other approaches for determining decomposition types of quiver representations are considered.

1 Preliminaries

A quiver is a finite directed graph: intuitively a bunch of dots with arrows between them, more formally a quadruple (Q_v, Q_a, s, t) where Q_v and Q_a are finite sets (the vertices and arrows, respectively), and s, t : Q_a → Q_v are functions assigning to each arrow in Q_a its source and target, respectively. The free category generated by a quiver Q is the category FQ whose objects are the vertices of Q and whose morphisms are the paths in Q (including an identity path at every vertex), with composition given by concatenation of paths. A representation of a quiver, in turn, is a functor from the quiver (viewed as a category) to the category Vect_K, or more precisely to the subcategory of finite-dimensional vector spaces. (Actually, the definition is more general: a representation of a quiver is a functor to R-Mod, but since this text only deals with vector spaces, the restricted definition above will suffice.)

Given two representations V and W of a quiver Q, it is often natural to ask how similar they are (for reasons to be covered in the background section), or even, given the arbitrariness in choosing bases for vector spaces, which might result in the shrouding of some information on their structure, whether they might be considered the same: two representations V and W of a quiver Q are said to be isomorphic if there exists a natural transformation τ : V → W, that is, a morphism in the functor category Fun(FQ, Vect_K), which simply enough is an isomorphism. In the language of linear algebra, this translates to the existence of invertible linear transformations between the individual vector spaces in the representations: V and W are isomorphic if for every pair of vertices v_i and v_j in Q_v that are joined by an arrow f in Q_a there exist invertible linear transformations ϕ : V_i → W_i and ψ : V_j → W_j, where V_i is the vector space assigned to the vertex v_i in V (and analogously for V_j, W_i and W_j), such that the following diagram commutes (f_V and f_W are the linear transformations representing the arrow f in V and W, respectively):

        f_V
  V_i -------> V_j
   |            |
 ϕ |            | ψ
   v            v
  W_i -------> W_j
        f_W

2 Background

Any measurement data on some population (for instance the weight of a number of individuals chosen on the basis of a given selection requirement) is readily interpreted as a discrete metric space (M, d). In the case of weight as the (only) measurement variable, M could be, for instance, all individuals being weighed, and the distance between two elements (persons) x and y in M could simply be the absolute value of their difference in weight.

Now imagine that the same measurement has been made on another population chosen according to the same selection requirements. What is of immediate interest is to be able to compare these populations (to answer "Are pensioners more prone to obesity in rural or in urban areas?", for instance), i.e. to be able to compare the metric spaces representing the two populations, to determine how close they are to being isometric. There exists a notion for this, known as the Gromov-Hausdorff distance [1], a measure of the dissimilarity of compact metric spaces with respect to isometry, by which one could determine the likeness of two sets of data. However, computing Gromov-Hausdorff distances is extremely demanding, and is thus only feasible for very small populations.

The practical shortcomings of the Gromov-Hausdorff distance call for another approach to comparing metric spaces; some sort of simplification is needed. Instead of scrutinizing the spaces point by point, one could look at larger parts of the space at a time, and simply be content with determining how many elements lie close to each other in one part or another (using some sort of clustering method, to spell it out). This leads to the following:

Definition 1. In a metric space (M, d), given some ε ≥ 0, call x ∈ M epsilon-equivalent to y, denoted x ~_ε y, if there exist points x_i ∈ M, i = 1, ..., n, such that x_1 = x, x_n = y and d(x_i, x_{i+1}) ≤ ε for i = 1, ..., n-1.

Epsilon-equivalence is easily checked to be an equivalence relation on the elements of M (hence the name), partitioning them into equivalence classes. A class of elements that are equivalent for a given ε will be denoted by [x]_ε, where x is any representative from the class in question. Now the metric space (M, d) of measurement data can be regarded from the perspective of this equivalence relation: by successively increasing ε (perhaps most reasonably by pacing with the step-length ε, which is what will be done in the following) and looking at which elements (equivalence classes) become equivalent with each increase of ε, i.e. which equivalence classes become grouped together as the partitioning gets less and less fine-grained, one gets a simplified picture of the proximity of the different data points.

To make things more clear-cut, first let X_ε denote the set of equivalence classes of M under ~_ε. If, for instance, it holds for [x]_ε and [y]_ε in X_ε that x ~_2ε y, these become part of the same equivalence class [x]_2ε = [y]_2ε in X_2ε = {[z]_2ε : z ∈ M}. We can in general, for k ≥ 1, define a function γ_kε : X_(k-1)ε → X_kε by γ_kε([x]_(k-1)ε) = [x]_kε. When these successive groupings γ_iε(X_(i-1)ε) are visualized juxtaposed in order, a so-called dendrogram is obtained. For example, for the two-dimensional metric space in Figure 1, endowed with the Euclidean distance, one gets the following dendrogram:

[Dendrogram: at scale ε the classes are [x_1]_ε = {x_1}, [x_2]_ε = {x_2}, [x_3]_ε = {x_3, x_4} and [x_5]_ε = {x_5}; at scale 2ε these merge into [x_1]_2ε = {x_1, x_2} and [x_3]_2ε = {x_3, x_4, x_5}; at scale 3ε everything is collected in [x_1]_3ε. The horizontal axis runs through 0, ε, 2ε, 3ε.]

The chain of maps M = X_0 → X_ε → ... → X_(n-1)ε can be viewed as a sort of representation of the so-called A_n-quiver • → • → ... → • (n vertices); it is an object in Fun(FQ, Sets) rather than a usual representation in Fun(FQ, Vect_K). These two kinds of objects will from now on be referred to simply as representations in Sets and representations in Vect_K respectively.

Now this representation in Sets is indeed obtained by a simplification; lots of information is lost in the lowering of the resolution, so to speak, when grouping the elements of the metric space together. However, determining the likeness of two such representations still turns out to be too demanding a task [1]. To mold the data into a manageable form, the free vector spaces V_i over the corresponding sets of equivalence classes X_iε can be constructed, i.e. V_i = ⊕_{x ∈ X_iε} Kx, where K is some appropriate field. (Several fields are often used for a given set of data, but a discussion on how to choose these is beyond the scope of this text.) Since the γ_iε are maps between the basis elements of the V_i, they each uniquely determine a linear transformation f_i : V_{i-1} → V_i, simply by imposing the condition that they be linear, so that f_i(Σ_j α_j [x_j]_(i-1)ε) = Σ_j α_j γ_iε([x_j]_(i-1)ε) for α_j ∈ K. The resulting chain of vector spaces and linear transformations V_0 --f_1--> V_1 --f_2--> ... --f_{n-1}--> V_{n-1} is clearly a representation in Vect_K of the A_n-quiver. Although perhaps not obvious at first, this turns out to be quite a simplification compared to the previous representation in Sets.
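To make the construction concrete, here is a small Python sketch of the procedure just described (the function names, the use of NumPy and the toy point set are illustrative assumptions made for this example, not part of the construction above). It computes the ε-equivalence classes of a finite metric space at the scales ε, 2ε and 3ε, and the 0/1 matrices of the induced linear maps between the free vector spaces on the classes; the points are chosen so that the output reproduces the dendrogram above.

```python
import numpy as np
from itertools import combinations

def eps_classes(points, eps):
    """Partition indices 0..n-1 into epsilon-equivalence classes:
    x ~ y if they are joined by a chain of steps of length <= eps."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if np.linalg.norm(np.asarray(points[i]) - np.asarray(points[j])) <= eps:
            parent[find(i)] = find(j)
    roots = sorted({find(i) for i in range(n)})
    return [[i for i in range(n) if find(i) == r] for r in roots]

def induced_matrix(coarse, fine):
    """0/1 matrix of the induced linear map: the basis vector [x] at the finer
    scale is sent to the (unique) class at the coarser scale containing x."""
    M = np.zeros((len(coarse), len(fine)), dtype=int)
    for j, cls in enumerate(fine):
        i = next(k for k, c in enumerate(coarse) if cls[0] in c)
        M[i, j] = 1
    return M

# Five points on a line, mimicking the dendrogram example above.
points = [(0.0, 0.0), (1.5, 0.0), (4.0, 0.0), (4.9, 0.0), (6.5, 0.0)]
eps = 1.0
X = [eps_classes(points, k * eps) for k in (1, 2, 3)]   # X_eps, X_2eps, X_3eps
f = [induced_matrix(X[k], X[k - 1]) for k in (1, 2)]    # matrices of the induced maps

print(X)   # [[[0], [1], [2, 3], [4]], [[0, 1], [2, 3, 4]], [[0, 1, 2, 3, 4]]]
print(f)   # [[1 1 0 0; 0 0 1 1]] and [[1 1]] as integer matrices
```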

Figure 1: Partitionings of a metric space {x_i} for some ε, 2ε and 3ε, from left to right. [The three panels show the five points x_1, ..., x_5 with the clusters {x_1}, {x_2}, {x_3, x_4}, {x_5}, then {x_1, x_2}, {x_3, x_4, x_5}, then all five points in a single cluster.]

To appreciate wherein the simplification in going from sets to vector spaces lies, regard the following two dendrogram-like structures:

  {x_1, x_2, x_3, x_4} → {y_1, y_2} → {z_1},   with  x_1, x_2 ↦ y_1,   x_3, x_4 ↦ y_2,   y_1, y_2 ↦ z_1,

  {x'_1, x'_2, x'_3, x'_4} → {y'_1, y'_2} → {z'_1},   with  x'_1 ↦ y'_1,   x'_2, x'_3, x'_4 ↦ y'_2,   y'_1, y'_2 ↦ z'_1.

The corresponding sets in these dendrogram-like structures are of the same sizes, but the (right-going) functions described by the edges obviously differ in the two cases. It is relatively simple to prove that the two objects above are not isomorphic as representations in Sets (of course the general notion, the definition, of an isomorphism in Fun(FQ, Sets) is an invertible natural transformation, just as in Fun(FQ, Vect_K)); that is, there exists no bijection ϕ between the base sets (between the x_i and the x'_i), together with corresponding bijections of the y- and z-sets, that makes the diagram

  {x_i} ----> {y_i} ----> {z_1}
    |            |           |
  ϕ |            |           |
    v            v           v
  {x'_i} ---> {y'_i} ---> {z'_1}

commute. The construction of the vector spaces V_x = ⊕_{i=1}^4 K x_i, V_y = ⊕_{i=1}^2 K y_i and V_z = K z_1, and the similar construction of V_{x'}, V_{y'} and V_{z'} for the sets with primed elements, makes the edges induce linear transformations as described above.

The resulting quiver representations are

  V_x --A--> V_y --B--> V_z     with   A = [[1, 1, 0, 0], [0, 0, 1, 1]],   B = [1, 1],

and

  V_{x'} --A'--> V_{y'} --B'--> V_{z'}     with   A' = [[1, 0, 0, 0], [0, 1, 1, 1]],   B' = [1, 1].

Remarkably, these representations in Vect_K are isomorphic, as can be seen by checking the commutativity of the diagram

  V_x -----A-----> V_y -----B-----> V_z
   |                |                |
   | ϕ_x            | ϕ_y            | ϕ_z
   v                v                v
  V_{x'} ---A'---> V_{y'} ---B'---> V_{z'}

for a suitable choice of invertible vertical maps; for instance ϕ_y and ϕ_z can be taken to be identity maps and ϕ_x to be the invertible matrix

  ϕ_x = [[1,  1, 0, 0],
         [0,  1, 0, 0],
         [0,  0, 1, 0],
         [0, -1, 0, 1]].

The isomorphisms between representations in Sets consist of permutations of the elements in corresponding sets, whereas the morphisms in Fun(FQ, Vect_K) permit linear combinations of the elements. This relaxation allows for invertible maps between the induced vector spaces that are not permutations (thus the matrix ϕ_x above is not a permutation matrix, unlike the matrices A, B, A' and B', which were formed directly from the maps in the representations in Sets). As a consequence, there are in general more isomorphisms between representations of a quiver in Fun(FQ, Vect_K) than between representations of the same quiver in Sets.
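The commutativity of the diagram above can also be checked mechanically. The following NumPy snippet (a verification sketch written for this text; the variable names are not from the original argument) confirms that the maps ϕ_x, ϕ_y, ϕ_z given above make both squares commute and that ϕ_x is invertible.

```python
import numpy as np

# Maps of the two A_3-representations, in the bases given by the x_i, y_i, z_1.
A  = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])   # V_x  -> V_y
B  = np.array([[1, 1]])                        # V_y  -> V_z
Ap = np.array([[1, 0, 0, 0], [0, 1, 1, 1]])   # V_x' -> V_y'
Bp = np.array([[1, 1]])                        # V_y' -> V_z'

# The vertical isomorphisms chosen above (phi_x is invertible, not a permutation).
phi_x = np.array([[1,  1, 0, 0],
                  [0,  1, 0, 0],
                  [0,  0, 1, 0],
                  [0, -1, 0, 1]])
phi_y = np.eye(2, dtype=int)
phi_z = np.eye(1, dtype=int)

assert np.array_equal(Ap @ phi_x, phi_y @ A)      # left square commutes
assert np.array_equal(Bp @ phi_y, phi_z @ B)      # right square commutes
assert round(abs(np.linalg.det(phi_x))) == 1       # phi_x is invertible
print("the two representations in Vect_K are isomorphic")
```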

* * *

Representations of quivers of other shapes than that of the A_n-quiver appear very naturally when analyzing data this way. Returning to the above example, imagine that another quantity than weight, say length, is being measured as well. One can treat this data in the same fashion, with the result being another metric d' on the same population set M. Through the same procedure of grouping into equivalence classes as for d, but now perhaps pacing with some δ instead of ε (in this example δ is most likely different from ε; after all, d and d' are measuring totally different physical quantities, but it is emphasized that the step-length is chosen), eventually another chain of vector spaces

  V_0^0 --g_0^1--> V_0^1 --g_0^2--> ... --g_0^{m_0}--> V_0^{m_0}

can be constructed (the indexing in which will become clear in a moment). For each vector space V_0^i in this new chain, its basis elements (which still are elements of M, or groups of them) can be regarded under ε-equivalence, 2ε-equivalence, and so on, obtaining a representation

  V_0^i --f_1^i--> V_1^i --f_2^i--> ... --f_{n_i - 1}^i--> V_{n_i - 1}^i

of an A_{n_i}-quiver for each i. The same thing can also be done the other way around: the bases for the V_j from before (which are hereby renamed V_j^0, for the sake of coherence) can be regarded under δ-equivalence, 2δ-equivalence, etc., yielding vector space chains

  V_j^0 --g_j^1--> V_j^1 --g_j^2--> ... --g_j^{m_j}--> V_j^{m_j}.

Constructing all possible chains in this manner results in a grid of the form

  V_0^0 --f_1^0--> V_1^0 --f_2^0--> ... --f_{n-1}^0--> V_{n-1}^0
    | g_0^1           | g_1^1                             | g_{n-1}^1
    v                 v                                   v
  V_0^1 --f_1^1--> V_1^1 --f_2^1--> ... --f_{n-1}^1--> V_{n-1}^1
    | g_0^2           | g_1^2                             | g_{n-1}^2
    v                 v                                   v
    :                 :                                   :
    v                 v                                   v
  V_0^{m-1} ------> V_1^{m-1} -----> ... ----------> V_{n-1}^{m-1}.

The grid thus obtained will be commutative, a fact that can be understood either by simply checking that the order in which the different metrics are regarded when grouping the elements together is immaterial, or, in an only slightly different manner, by noticing that equivalence under two (or more) equivalence relations induces another equivalence relation: in this case the relations ~_δ and ~_ε induce the equivalence relation ~_{δ,ε}, defined by x ~_{δ,ε} y ⟺ x ~_δ y ∧ x ~_ε y, and this means that g_1^1 ∘ f_1^0 must be the same as f_1^1 ∘ g_0^1, since both map each x ∈ V_0^0 to [x]_{δ,ε} = [[x]_ε]_δ = [[x]_δ]_ε ∈ V_1^1.

One measurement variable thus results in a chain of vector spaces, and two measurement variables give rise to a grid. If a third variable were taken into account, the outcome would be a three-dimensional cubic lattice, and so on for higher numbers of variables. This text will deal with the two-dimensional case, more specifically with commutative representations of 2-by-2 and 2-by-3 grids, the so-called commutative ladders CL(f) and CL(f, f), of the respective shapes

  • ---> •          • ---> • ---> •
  |      |    and   |      |      |
  v      v          v      v      v
  • ---> •          • ---> • ---> •.

3 Prospectus

As descanted about in the background section, it is of interest for data-analytical purposes to be able to compare different representations of a quiver. There exist different approaches regarding how to go about such a comparison. For the chief technique in this text the following observation is needed: from two representations V and W of a quiver Q, i.e. from any two objects V and W in Fun(FQ, Vect_K), the usual category-theoretical coproduct V ⊕ W can be formed. Explicitly, in the category at hand, this is defined so that for any pair of corresponding vector spaces in V and W (corresponding in the sense that they are assigned to the same vertices in Q), say (V_i, V_j) and (W_i, W_j), joined by linear transformations f_V and f_W respectively, the corresponding pair of vector spaces in V ⊕ W is (V_i ⊕ W_i, V_j ⊕ W_j), and the map between them is f_V ⊕ f_W. More succinctly: the coproduct in Fun(FQ, Vect_K) is a pointwise direct sum of the vector spaces and maps. For this reason the direct sum symbol ⊕, rather than the generic coproduct symbol, will be used to denote the coproduct in Fun(FQ, Vect_K).

The zero representation of a quiver Q is the representation that assigns to every vertex in Q the zero vector space. With this in mind, the following can be stated:

Definition 2. A nonzero representation R of a quiver Q is decomposable if R = S ⊕ T where S and T are also nonzero representations of Q. If R is not decomposable, it is said to be indecomposable.

What will be done in the following is to find a way to express an arbitrary commutative representation of either of the quivers CL(f) and CL(f, f) as a coproduct of indecomposable representations, that is, to decompose the representation. The fact that a representation can be expressed in a unique way (up to isomorphism and ordering of the summands) as a coproduct of indecomposable representations (indecomposables, in short) enables one to facilely compare different representations. It was discovered relatively recently that the indecomposables for the two commutative ladders in question are finitely many [2]. This result opens up the possibility of completely knowing the decompositions of CL(f)- and CL(f, f)-representations, i.e. of knowing the numbers of all their constituent indecomposable representations, and of classifying them thereby. Given a method for determining the decomposition of any CL(f)- or CL(f, f)-representation, comparing representations would amount to no more than counting how many of each kind of indecomposable are included in the coproduct expressions for the respective representations. In this paper, a method is presented for determining the decomposition of a representation by checking simple numbers, invariant under isomorphism, related to the maps assigned to the quiver arrows.
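As a small illustration of how concrete the pointwise coproduct introduced above is, the sketch below (the dictionary-of-matrices encoding of a representation is an assumption made for this example) forms the coproduct of two CL(f)-representations by taking a block-diagonal sum arrow by arrow.

```python
import numpy as np

def direct_sum(a, b):
    """Block-diagonal sum: the matrix of f ⊕ g in the concatenated bases."""
    out = np.zeros((a.shape[0] + b.shape[0], a.shape[1] + b.shape[1]), dtype=int)
    out[:a.shape[0], :a.shape[1]] = a
    out[a.shape[0]:, a.shape[1]:] = b
    return out

def coproduct(V, W):
    """Pointwise direct sum of two CL(f)-representations, each stored as a
    dictionary of matrices keyed by the four arrows of the square."""
    return {arrow: direct_sum(V[arrow], W[arrow]) for arrow in V}

one = np.ones((1, 1), dtype=int)
# V: K at every vertex with identity maps.  W: K at the top-left vertex only.
V = {"f01": one, "f02": one, "f13": one, "f23": one}
W = {"f01": np.zeros((0, 1), int), "f02": np.zeros((0, 1), int),
     "f13": np.zeros((0, 0), int), "f23": np.zeros((0, 0), int)}

R = coproduct(V, W)
print({arrow: M.shape for arrow, M in R.items()})
# {'f01': (1, 2), 'f02': (1, 2), 'f13': (1, 1), 'f23': (1, 1)}: dim V_0 = 2, the rest 1
```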

4 Notation and conventions

In the following, K will denote an arbitrary field, and all representations will be in Vect_K. K will also denote any one-dimensional vector space over K, and K^2 any two-dimensional vector space over K.

5 The commutative square CL(f)

5.1 Finding indecomposables

Proposition 1. If any of the vector spaces in a commutative representation V of the square has dimension greater than or equal to two, then V is decomposable.

To prove this the following, which is found in [3], is needed:

Lemma 1. A representation R of a quiver is indecomposable iff every endomorphism ε : R → R is either nilpotent or an isomorphism.

An exhaustive proof of Proposition 1 would be unnecessarily lengthy in this context, which is why only the following is presented.

Sketch of proof of Prop. 1. Assume that for a commutative CL(f)-representation

  V_0 --f_01--> V_1
   |             |
   | f_02        | f_13
   v             v
  V_2 --f_23--> V_3

we have dim V_3 > 1. Choose a basis {v_i^3} for V_3 and define an endomorphism of the representation in which the map ϕ_3 : V_3 → V_3 is the projection that sends v_1^3 to zero and fixes the other basis vectors. Thereafter define the maps from the other V_i to themselves so as to make the diagram commute (which is possible). Since the image of ϕ_3 has nonzero dimension strictly less than that of V_3, and ϕ_3 ∘ ϕ_3 = ϕ_3, the maps thus defined comprise an endomorphism of the representation that is neither nilpotent nor invertible. Similar constructions can be made when any of the other spaces has dimension greater than or equal to two.

This leaves only 2^4 - 1 = 15 possible configurations of dimensions as candidates for indecomposable representations of the square (the zero representation, with only zero vector spaces, is by definition not an indecomposable); they are listed in Figure 2. Because of their linearity, the maps in these representations can only be multiplication by an invertible element of K or the zero map. However, the possibility of zero maps between vector spaces of nonzero dimension is quickly ruled out. Regard, for the sake of simplicity, the representation K --0--> K of the A_2-quiver, where 0 denotes the zero map.

Figure 2: Candidates for being indecomposable representations of CL(f), given by their patterns of dimensions at the vertices (V_0 V_1 / V_2 V_3):

   1: K 0 / 0 0     2: 0 K / 0 0     3: 0 0 / K 0     4: 0 0 / 0 K     5: K K / 0 0
   6: K 0 / K 0     7: 0 K / 0 K     8: 0 0 / K K     9: K K / K 0    10: 0 K / K K
  11: K K / K K    12: K 0 / 0 K    13: 0 K / K 0    14: K K / 0 K    15: K 0 / K K

An elementary result in linear algebra (found in [3], for example) says that any representation A --f--> B of the A_2-quiver decomposes as (ker f → 0) ⊕ (C --≅--> im f) ⊕ (0 → D), where C is any complement of ker f in A and D is any complement of im f in B. With f = 0 this gives precisely (K --0--> K) ≅ (K → 0) ⊕ (0 → K), so K --0--> K is decomposable. The same sort of reasoning applies whenever two adjacent one-dimensional vector spaces are joined by a zero map in a compound diagram such as the square, so it will from now on be assumed that the transformations between any pair of neighbouring spaces of dimension one are identity maps (what is meant here is that it is always possible to regard an isomorphic copy of the representation in which these maps are identities). This narrows down the list of candidates for indecomposable representations of the square to precisely the ones in Figure 2, with identity maps between neighbouring one-dimensional spaces.

However, the requirement that the representations be commutative rules out candidates number 14 and 15: with nonzero maps between adjacent copies of K they obviously cannot commute, since in each of them one composite V_0 → V_3 is nonzero while the other passes through a zero space. Moreover, candidates 12 and 13 are disqualified because an indecomposable representation needs to have its spaces of nonzero dimension next to each other; any map passing through a zero space maps everything to zero, so that in fact candidate 12 is isomorphic to candidate 1 plus candidate 4, and number 13 is isomorphic to number 2 plus number 3. Candidates 1-11 can be shown to be indecomposable using Lemma 1. Without presenting a proof (one can be found in [2]), it can be said that the principle behind one would be that in constructing a noninvertible endomorphism for any of these, at least one of the maps in the construction would have to be zero, and that this in all possible cases would force the endomorphism to be nilpotent.

5.2 Decomposing the square

Lemma 2. For finite-dimensional vector spaces V and W, dim(V ⊕ W) = dim V + dim W.

Proof. Choose a basis {v_i} for V and a basis {w_j} for W. Any vector (v, w) in V ⊕ W can be written as (Σ_i α_i v_i, Σ_j β_j w_j) = (Σ_i α_i v_i, 0) + (0, Σ_j β_j w_j) = Σ_i α_i (v_i, 0) + Σ_j β_j (0, w_j) for some α_i and β_j in K, so {(v_i, 0), (0, w_j)} spans V ⊕ W. Now suppose that for some α_i, β_j ∈ K, not all zero, Σ_i α_i (v_i, 0) + Σ_j β_j (0, w_j) = (Σ_i α_i v_i, Σ_j β_j w_j) = (0, 0). This means (by definition) that Σ_i α_i v_i = 0 and Σ_j β_j w_j = 0, which, since not all coefficients are zero, contradicts the linear independence of {v_i} or of {w_j}; so {(v_i, 0), (0, w_j)} is linearly independent. Thus {(v_i, 0), (0, w_j)}, which consists of dim V + dim W elements, is a basis for V ⊕ W.

Let n_i denote the number of indecomposable representations of type i (with the enumeration from Figure 2) in the decomposition of a representation

  V_0 --f_01--> V_1
   |             |
   | f_02        | f_13
   v             v
  V_2 --f_23--> V_3

of CL(f). From Lemma 2 it follows, for instance, that the dimension of V_0 is n_1 + n_5 + n_6 + n_9 + n_11; that is, the dimension of the direct sum of some number of one-dimensional vector spaces is precisely this same number. The numbers n_i (obviously) determine dim V_i and all other dimensions of vector subspaces found in the representation. What is interesting is that a converse also holds: by establishing some dimensions of subspaces, enough information to determine the decomposition of the representation is obtained. Since all the dimensions of the subspaces depend linearly on the n_i (and the inverse relation thus is linear as well), this leads to a system of linear equations that can be solved for the n_i once these dimensions d_i are known. Eleven dimensions must be known, since the matrix describing this transformation must have full rank. However, not just any collection of eleven subspace dimensions would suffice: if, for instance, the dimensions of V_0 and ker f_01 are known, considering the dimension of im f_01 would be of no avail, since it can be inferred from the previous two using the rank-nullity theorem.

This hints at the arbitrariness in choosing spaces to consider: for a map f : V → W, the pair (dim V, dim ker f) imparts the same information as the pair (dim V, dim im f), and these in turn carry the same data as (dim ker f, dim im f) does, just to mention an example. Regrettably, no method for determining which dimensions should be added to an incomplete list of d_i in order to obtain eleven independent equations is known to the author, so determining which spaces carry new information requires some trial and error. However, one list of d_i that entails enough data for determining the decomposition of a representation is

  d_1 = dim V_0
  d_2 = dim V_1
  d_3 = dim V_2
  d_4 = dim V_3
  d_5 = dim ker f_01
  d_6 = dim ker f_02
  d_7 = dim ker f_13
  d_8 = dim ker f_23
  d_9 = dim ker(f_13 ∘ f_01) = dim ker(f_23 ∘ f_02)
  d_10 = dim ker(f_01, f_02)     (the kernel of the map V_0 → V_1 ⊕ V_2, i.e. ker f_01 ∩ ker f_02)
  d_11 = dim im(f_13 + f_23)     (the image of the map V_1 ⊕ V_2 → V_3, i.e. im f_13 + im f_23).

As stated above, d_1 = n_1 + n_5 + n_6 + n_9 + n_11. Analogously, d_5 = n_1 + n_6, since indecomposables 1 and 6 are precisely the ones that have a nontrivial kernel for the map assigned to the top arrow (and these kernels are one-dimensional, hence the coefficients in front of n_1 and n_6 are both 1). The same straightforward reasoning applies to all other d_i. Collecting all the coefficients for the n_i in a matrix C_f, an equation determining the decomposition of the representation, C_f n = d, where n and d contain the n_i and d_i respectively, is obtained. Explicitly, this says that

  | 1 0 0 0 1 1 0 0 1 0 1 |   | n_1  |   | dim V_0               |
  | 0 1 0 0 1 0 1 0 1 1 1 |   | n_2  |   | dim V_1               |
  | 0 0 1 0 0 1 0 1 1 1 1 |   | n_3  |   | dim V_2               |
  | 0 0 0 1 0 0 1 1 0 1 1 |   | n_4  |   | dim V_3               |
  | 1 0 0 0 0 1 0 0 0 0 0 |   | n_5  |   | dim ker f_01          |
  | 1 0 0 0 1 0 0 0 0 0 0 | * | n_6  | = | dim ker f_02          |
  | 0 1 0 0 1 0 0 0 1 0 0 |   | n_7  |   | dim ker f_13          |
  | 0 0 1 0 0 1 0 0 1 0 0 |   | n_8  |   | dim ker f_23          |
  | 1 0 0 0 1 1 0 0 1 0 0 |   | n_9  |   | dim ker(f_13 ∘ f_01)  |
  | 1 0 0 0 0 0 0 0 0 0 0 |   | n_10 |   | dim ker(f_01, f_02)   |
  | 0 0 0 0 0 0 1 1 0 1 1 |   | n_11 |   | dim im(f_13 + f_23)   |.

Given the vector with the d_i, all that is needed to obtain the decomposition is solving this equation.
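The whole procedure for the square can be carried out with elementary rank computations. The sketch below (the encoding of a representation as four matrices, the helper names and the test representation are assumptions made for this illustration) computes the eleven invariants d_i and solves C_f n = d, using the matrix C_f written out above.

```python
import numpy as np

def rank(M):
    return 0 if M.size == 0 else np.linalg.matrix_rank(M)

def invariants(f01, f02, f13, f23):
    """The eleven isomorphism invariants d_1,...,d_11 of a commutative
    CL(f)-representation given by its four matrices."""
    dV0, dV1, dV2, dV3 = f01.shape[1], f01.shape[0], f02.shape[0], f13.shape[0]
    return np.array([
        dV0, dV1, dV2, dV3,
        dV0 - rank(f01),                       # dim ker f01
        dV0 - rank(f02),                       # dim ker f02
        dV1 - rank(f13),                       # dim ker f13
        dV2 - rank(f23),                       # dim ker f23
        dV0 - rank(f13 @ f01),                 # dim ker(f13 ∘ f01)
        dV0 - rank(np.vstack([f01, f02])),     # dim ker(f01, f02)
        rank(np.hstack([f13, f23])),           # dim im(f13 + f23)
    ])

C_f = np.array([
    [1,0,0,0,1,1,0,0,1,0,1],
    [0,1,0,0,1,0,1,0,1,1,1],
    [0,0,1,0,0,1,0,1,1,1,1],
    [0,0,0,1,0,0,1,1,0,1,1],
    [1,0,0,0,0,1,0,0,0,0,0],
    [1,0,0,0,1,0,0,0,0,0,0],
    [0,1,0,0,1,0,0,0,1,0,0],
    [0,0,1,0,0,1,0,0,1,0,0],
    [1,0,0,0,1,1,0,0,1,0,0],
    [1,0,0,0,0,0,0,0,0,0,0],
    [0,0,0,0,0,0,1,1,0,1,1],
])

# Test representation: the direct sum of indecomposables 1, 7 and 11 from Figure 2.
f01 = np.array([[1, 0], [0, 0]]); f02 = np.array([[1, 0]])
f13 = np.array([[1, 0], [0, 1]]); f23 = np.array([[1], [0]])

d = invariants(f01, f02, f13, f23)
n = np.rint(np.linalg.solve(C_f, d)).astype(int)
print(n)   # [1 0 0 0 0 0 1 0 0 0 1]: one copy each of nos. 1, 7 and 11
```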

The d_i are easily calculated given, for example, a set of matrices describing the transformations in some bases. However, what is considerably useful is that the dimensions are invariant under isomorphism and do not depend on the choice of bases. The problem of determining the distance between two metric spaces representing sets of data is thus reduced to determining the distance between a pair of points in a single metric space, that is, between the two n_i-vectors in Z^11.

6 The commutative figure eight CL(f, f)

In this section, the notation for a representation of the figure eight quiver will be

  V_0 --f_01--> V_1 --f_12--> V_2
   |             |             |
   | f_03        | f_14        | f_25
   v             v             v
  V_3 --f_34--> V_4 --f_45--> V_5,

and f_ij will denote the map from V_i to V_j, so that f_04 = f_14 ∘ f_01 = f_34 ∘ f_03, for example.

Methods analogous to the ones used for finding indecomposables for the commuting square can be applied to the figure eight grid CL(f, f). The process will however be slightly more tedious, mainly due to the large number of indecomposables that need to be determined. The indecomposable CL(f, f)-representations are therefore presented without proof in Figure 3; a proof is found in [2]. The maps between adjacent copies of K are identities, for the same reasons as for the indecomposables of the square. Representations 28 and 29 have a special trait: they each comprise a two-dimensional space. The transformations in these indecomposable representations involve diagonal maps to and from the two-dimensional spaces; one choice of maps realizing them is

  K --Δ--> K^2 --p_1--> K          0 ------> K --1--> K
  |         |           |          |         |        |
  | 1       | p_2       |    and   |         | Δ      | 1
  v         v           v          v         v        v
  K --1--> K --------> 0           K --i_1--> K^2 --p_1--> K.

Here Δ denotes the diagonal map 1 ↦ (1, 1), and i_j and p_j denote the inclusion of and the projection onto the j-th coordinate, respectively.

When decomposing a representation of the figure eight, more elaborate strategies than for the commuting square are needed to find a list of subspace dimensions that yields the required 29 independent equations (29 dimensions are of course needed, since there must be one equation per indecomposable). To start with, it is fruitful to regard precisely the same type of invariants as for CL(f): in the figure eight, six commuting squares are ingrained, each conveying some independent information about the decomposition.
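Returning briefly to representations 28 and 29, the sketch below (the dictionary encoding and the check itself are additions made for this text) writes out the choice of matrices displayed above and verifies the two commutativity relations f_14 f_01 = f_34 f_03 and f_25 f_12 = f_45 f_14.

```python
import numpy as np

def check_commutativity(R):
    """A CL(f,f)-representation is stored as a dict of its seven matrices; the two
    square relations are f14 f01 = f34 f03 and f25 f12 = f45 f14."""
    left  = np.array_equal(R["f14"] @ R["f01"], R["f34"] @ R["f03"])
    right = np.array_equal(R["f25"] @ R["f12"], R["f45"] @ R["f14"])
    return left and right

# Representation 28:  K --Δ--> K^2 --p1--> K   over   K --1--> K --> 0
R28 = {"f01": np.array([[1], [1]]),     # Δ
       "f12": np.array([[1, 0]]),       # p1
       "f03": np.array([[1]]),
       "f14": np.array([[0, 1]]),       # p2
       "f25": np.zeros((0, 1), int),
       "f34": np.array([[1]]),
       "f45": np.zeros((0, 1), int)}

# Representation 29:  0 --> K --1--> K   over   K --i1--> K^2 --p1--> K
R29 = {"f01": np.zeros((1, 0), int),
       "f12": np.array([[1]]),
       "f03": np.zeros((1, 0), int),
       "f14": np.array([[1], [1]]),     # Δ
       "f25": np.array([[1]]),
       "f34": np.array([[1], [0]]),     # i1
       "f45": np.array([[1, 0]])}       # p1

print(check_commutativity(R28), check_commutativity(R29))   # True True
```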

Figure 3: The indecomposable representations of CL(f, f), given by their patterns of dimensions at the vertices (V_0 V_1 V_2 / V_3 V_4 V_5):

   1: K 0 0 / 0 0 0     2: 0 K 0 / 0 0 0     3: 0 0 K / 0 0 0
   4: 0 0 0 / K 0 0     5: 0 0 0 / 0 K 0     6: 0 0 0 / 0 0 K
   7: K K 0 / 0 0 0     8: K 0 0 / K 0 0     9: 0 K K / 0 0 0
  10: 0 K 0 / 0 K 0    11: 0 0 K / 0 0 K    12: 0 0 0 / K K 0
  13: 0 0 0 / 0 K K    14: K K K / 0 0 0    15: 0 0 0 / K K K
  16: K K 0 / K 0 0    17: 0 K 0 / K K 0    18: 0 K K / 0 K 0
  19: 0 0 K / 0 K K    20: K K 0 / K K 0    21: 0 K K / 0 K K
  22: K K K / K 0 0    23: 0 0 K / K K K    24: 0 K K / K K 0
  25: K K K / K K 0    26: 0 K K / K K K    27: K K K / K K K
  28: K K^2 K / K K 0                       29: 0 K K / K K^2 K

These six squares are

  V_0 → V_1     V_1 → V_2     V_0 → V_2     V_0 → V_2     V_0 → V_1     V_0 → V_1
  ↓      ↓       ↓      ↓       ↓      ↓       ↓      ↓       ↓      ↓       ↓      ↓
  V_3 → V_4,    V_4 → V_5,    V_3 → V_5,    V_4 → V_5,    V_3 → V_5,    V_4 → V_5,

where the maps are the appropriate f_ij (composites such as f_02 = f_12 ∘ f_01, f_04 = f_14 ∘ f_01, f_15 = f_25 ∘ f_12 and f_35 = f_45 ∘ f_34 included). By treating these interwoven squares as CL(f)-representations in their own right, determining the dimensions of all spaces and kernels, intersections of kernels, etc., 26 independent equations are obtained. For the remaining three, some trickery is needed. Three invariants that turn out to suffice are dim(f_14(ker f_12) ∩ im f_34), dim(f_12(ker f_14) ∩ im f_02) and dim(im f_14 ∩ im f_34 ∩ ker f_45), but just as for the square these choices are of a highly arbitrary nature. (For example dim f_12(f_14^{-1}(im f_14 ∩ im f_34)) could be used instead of dim(f_14(ker f_12) ∩ im f_34). It should be pointed out that these are the only invariants mentioned that involve precisely the spaces V_1, V_2, V_3 and V_4.) Very reasonably, these latter invariants are of a nature that makes them impossible to find in a simple square. Thus a list of dimensions resulting in an invertible matrix is

  d_1  = dim V_0                d_16 = dim ker f_05
  d_2  = dim V_1                d_17 = dim ker f_15
  d_3  = dim V_2                d_18 = dim ker f_35
  d_4  = dim V_3                d_19 = dim ker(f_01, f_03)
  d_5  = dim V_4                d_20 = dim ker(f_12, f_14)
  d_6  = dim V_5                d_21 = dim ker(f_02, f_04)
  d_7  = dim ker f_01           d_22 = dim ker(f_02, f_03)
  d_8  = dim ker f_12           d_23 = dim im(f_14 + f_34)
  d_9  = dim ker f_03           d_24 = dim im(f_15 + f_35)
  d_10 = dim ker f_14           d_25 = dim im(f_25 + f_35)
  d_11 = dim ker f_25           d_26 = dim im(f_25 + f_45)
  d_12 = dim ker f_34           d_27 = dim(f_14(ker f_12) ∩ im f_34)
  d_13 = dim ker f_45           d_28 = dim(f_12(ker f_14) ∩ im f_02)
  d_14 = dim ker f_02           d_29 = dim(im f_14 ∩ im f_34 ∩ ker f_45),
  d_15 = dim ker f_04

with ker(f, g) and im(f + g) shorthand, as before, for an intersection of kernels and a sum of images respectively.
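All of the invariants in this list reduce to rank computations. The three less obvious ones can be obtained as in the following sketch (the helper functions are illustrative and work numerically over the real numbers): a basis of a kernel is extracted from a singular value decomposition, dimensions of sums of subspaces are ranks of block matrices, and a basis of an intersection of two column spaces is obtained by solving A u = B v.

```python
import numpy as np

TOL = 1e-10

def rank(M):
    return 0 if M.size == 0 else np.linalg.matrix_rank(M, tol=TOL)

def kernel(M):
    """Columns span ker M (computed numerically, over the reals, for illustration)."""
    if M.shape[1] == 0:
        return np.zeros((0, 0))
    if M.shape[0] == 0:
        return np.eye(M.shape[1])
    _, s, vh = np.linalg.svd(M)
    r = int(np.sum(s > TOL))
    return vh[r:].T

def meet(A, B):
    """Columns span (col A) ∩ (col B): solve A u = B v, then push the u-part forward."""
    if A.shape[1] == 0 or B.shape[1] == 0:
        return np.zeros((A.shape[0], 0))
    N = kernel(np.hstack([A, -B]))
    return A @ N[:A.shape[1], :]

# The three "extra" invariants of a CL(f,f)-representation given by its matrices:
def d27(f12, f14, f34):   # dim( f14(ker f12) ∩ im f34 )
    return rank(meet(f14 @ kernel(f12), f34))

def d28(f01, f12, f14):   # dim( f12(ker f14) ∩ im f02 ), with f02 = f12 ∘ f01
    return rank(meet(f12 @ kernel(f14), f12 @ f01))

def d29(f14, f34, f45):   # dim( im f14 ∩ im f34 ∩ ker f45 )
    return rank(meet(meet(f14, f34), kernel(f45)))
```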

In forming the matrix equation C_ff n = d describing the decomposition of a representation of the figure eight quiver, the reasoning is about as plain as for CL(f), albeit possibly a little more involved, due to the presence of the diagonal maps and of the invariants of types not found in the square. With the choice of invariants as above, the resulting equation relates the vector n of the 29 multiplicities n_1, ..., n_29 to the vector d of the 29 dimensions listed above through a 29-by-29 coefficient matrix C_ff whose entries are 0, 1 or 2, the entries equal to 2 stemming from the two-dimensional spaces in representations 28 and 29. [The full 29-by-29 matrix C_ff is not reproduced here.]

7 The matrix sets C_f and C_ff

Since the choice of dimensions to consider when finding the equations for the decomposition is arbitrary to a considerable extent for both CL(f) and CL(f, f), the matrices C_f and C_ff above are not the only ones that can be used to find the decompositions of these representations. Let C_f and C_ff now denote the sets of matrices that, given some vector of subspace dimensions, can be used to find the decomposition of CL(f)- and CL(f, f)-representations respectively. It is not very thoroughly investigated how the matrices comprising these sets behave. The first question to ask could be how many such matrices there are. It is tempting to conjecture that they all would have some fundamental properties in common, in addition to the more obvious characteristics they must share.

Regard the set of CL(f)-matrices. All matrices therein describe a bijection from N^11 onto a proper subset of N^11: any configuration of indecomposables makes up a valid representation, so every point in N^11 is in the domain of the transformation described by any of these matrices, but there are restrictions on the dimensions, as for example dim ker f_01 cannot be greater than dim V_0. The image of such a map will thus lie in a convex polytope (the linear image of a convex set is convex, a fact that is easily proven) in the first orthant of R^11, with an apex at the origin, skewed in some direction, since the dimensions of the V_i impose conditions on the dim ker f_ij wholly different from the restrictions enforced the other way around.

This can be seen by looking at the image of the vector (1, 1, ..., 1)^T, i.e. at the row sums of the matrix, which at least in the matrices C_f and C_ff chosen above vary heavily between the rows, so that this vector, originally directed towards the center of the first orthant, gets rotated palpably (but of course stays in the same orthant).

The determinant of the matrix C_f is 1, and the determinant of C_ff is 1. The question whether determinants of matrices in these sets can take other values than ±1 might be raised, especially since there are four elements equal to two in C_ff. This query corresponds to asking if a choice of dimensions can be made that makes the image of the map considerably more sparse. (This might even be the case for a matrix in the CL(f)-set if other spaces, or rather direct sums thereof, are considered.)

The inverse of C_f is

  |  0  0  0  0  0  0  0  0  0  1  0 |
  |  0  0  0  0  1  0  1  0 -1  0  0 |
  |  0  0  0  0  0  1  0  1 -1  0  0 |
  |  0  0  0  1  0  0  0  0  0  0 -1 |
  |  0  0  0  0  0  1  0  0  0 -1  0 |
  |  0  0  0  0  1  0  0  0  0 -1  0 |
  |  0  0 -1  0  0  0  0  1  0  0  1 |
  |  0 -1  0  0  0  0  1  0  0  0  1 |
  |  0  0  0  0 -1 -1  0  0  1  1  0 |
  | -1  1  1  0  0  0 -1 -1  1  0 -1 |
  |  1  0  0  0  0  0  0  0 -1  0  0 |,

and the inverse of C_ff is likewise an integer matrix with many negative entries [not reproduced here]. The high numbers of negative elements can be thought of as reflecting the extent of the folding of the first orthant by the maps described by C_f and C_ff: the image of the map will occupy a smaller multi-dimensional solid angle than the points in the domain.
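These properties are easy to check mechanically. A short sketch follows (it reuses the matrix C_f written out in Section 5.2; the invalid-dimension-vector example is an illustration added here): the determinant is ±1, so the inverse is an integer matrix, and a dimension vector that no representation can realize is betrayed by a negative entry in the corresponding n-vector.

```python
import numpy as np

# C_f from Section 5.2 (rows d_1..d_11, columns n_1..n_11).
C_f = np.array([
    [1,0,0,0,1,1,0,0,1,0,1],
    [0,1,0,0,1,0,1,0,1,1,1],
    [0,0,1,0,0,1,0,1,1,1,1],
    [0,0,0,1,0,0,1,1,0,1,1],
    [1,0,0,0,0,1,0,0,0,0,0],
    [1,0,0,0,1,0,0,0,0,0,0],
    [0,1,0,0,1,0,0,0,1,0,0],
    [0,0,1,0,0,1,0,0,1,0,0],
    [1,0,0,0,1,1,0,0,1,0,0],
    [1,0,0,0,0,0,0,0,0,0,0],
    [0,0,0,0,0,0,1,1,0,1,1],
])

print(round(np.linalg.det(C_f)))                    # +-1, so C_f is unimodular
C_inv = np.rint(np.linalg.inv(C_f)).astype(int)     # hence the inverse is integral
assert np.array_equal(C_inv @ C_f, np.eye(11, dtype=int))

# dim V_0 = 1 with every other invariant equal to zero is impossible (f_01 would
# have to embed a one-dimensional space into the zero space); the inverse detects
# this through a negative entry (here n_10 = -1).
d_bad = np.zeros(11, dtype=int)
d_bad[0] = 1
print(C_inv @ d_bad)
```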

This means that quite a few points in N^11, closer to the planes spanned by the coordinate axes, will not describe a valid configuration of dimensions; their image under the inverse would be a vector with negative components, and there is no such thing as a negative number of indecomposable summands.

By answering such questions, maybe some light can be shed on what lists of dimensions would be an optimal choice for decomposing CL(f) and CL(f, f) (if a notion of an optimal choice is at all meaningful; in any case the invariants above are very easily computed given the maps in some bases). Knowing what properties a matrix in one of these sets must have might also be valuable for constructing decomposition matrices for representations of quivers of other shapes than the square and the figure eight; perhaps there is a possibility of first constructing a matrix and only afterwards decoding from it what the indecomposable representations of the quiver generating it are.

8 Generalizing the method

The construction of the decomposition matrices presented in this paper is made possible by the fact that there are finitely many indecomposable representations of CL(f) and CL(f, f); they are said to be of finite representation type. Two-dimensional grid quivers are in general not of finite representation type. That commutativity of the small ladder-shaped quiver representations is a sufficient condition for their number of indecomposables to be finite is a main result in [2]. Therein it is also explained that the commutative ladder representations CL(f, f, f), of the form

  • ---> • ---> • ---> •
  |      |      |      |
  v      v      v      v
  • ---> • ---> • ---> •,

are of finite representation type as well, but that ladders of greater length fail to have finitely many indecomposables. Most likely, a matrix equation similar to the ones for CL(f) and CL(f, f) can be found for the 72 indecomposables of CL(f, f, f). Quiver representations of other shapes than those of the ladders could also be considered, and generalizations to higher-dimensional quivers, describing correlations of more variables in the data-analytical applications, could be attempted. Perhaps some aspect of the method for determining decompositions presented in this paper can be carried over to quiver representations with infinitely many indecomposables; finding equations describing the numbers of some indecomposables might be valuable in itself, even if an exhaustive decomposition procedure may turn out to need other mathematical devices.

9 Acknowledgements

I would like to thank my project collaborators, Erik Lindell and Daniel Nyman, for their contributions to this work, and for making the project as enjoyable an experience as could ever be wished.

I would also like to direct my dearest thanks to our supervisor, Professor Wojtek Chacholski, for his great enthusiasm for the project, his encouraging devotion to his students, and his inspiring mathematical nursery tales.

References

[1] G. Carlsson, F. Mémoli, Characterization, Stability and Convergence of Hierarchical Clustering Methods, Journal of Machine Learning Research 11 (2010), 1425-1470.

[2] E. Escolar, Y. Hiraoka, Persistence Modules on Commutative Ladders of Finite Representation Type, arXiv:1404.7588v2 [math.AT], 2015.

[3] I. Assem, D. Simson, A. Skowronski, Elements of the Representation Theory of Associative Algebras, 1: Techniques of Representation Theory, London Mathematical Society Student Texts 65, Cambridge University Press, 2006.