The Neural Ring: Using Algebraic Geometry to Analyze Neural Codes

University of Nebraska - Lincoln, DigitalCommons@University of Nebraska - Lincoln: Dissertations, Theses, and Student Research Papers in Mathematics (Mathematics, Department of). Youngs, Nora, "The Neural Ring: Using Algebraic Geometry to Analyze Neural Codes" (2014). Nora Youngs, University of Nebraska-Lincoln, s-nyoungs1@math.unl.edu.

THE NEURAL RING: USING ALGEBRAIC GEOMETRY TO ANALYZE NEURAL CODES by Nora Esther Youngs. A DISSERTATION Presented to the Faculty of The Graduate College at the University of Nebraska In Partial Fulfilment of Requirements For the Degree of Doctor of Philosophy. Major: Mathematics. Under the Supervision of Professor Carina Curto. Lincoln, Nebraska, August, 2014.

THE NEURAL RING: USING ALGEBRAIC GEOMETRY TO ANALYZE NEURAL CODES. Nora Esther Youngs, Ph.D. University of Nebraska, 2014. Adviser: Carina Curto. Neurons in the brain represent external stimuli via neural codes. These codes often arise from stimulus-response maps, associating to each neuron a convex receptive field. An important problem confronted by the brain is to infer properties of a represented stimulus space without knowledge of the receptive fields, using only the intrinsic structure of the neural code. How does the brain do this? To address this question, it is important to determine what stimulus space features can - in principle - be extracted from neural codes. This motivates us to define the neural ring and a related neural ideal, algebraic objects that encode the full combinatorial data of a neural code. We find that these objects can be expressed in a canonical form that directly translates to a minimal description of the receptive field structure intrinsic to the neural code. We consider the algebraic properties of homomorphisms between neural rings, which naturally relate to maps between neural codes. We show that maps between two neural codes are in bijection with ring homomorphisms between the respective neural rings, and define the notion of neural ring homomorphism, a special restricted class of ring homomorphisms which preserve neuron structure. We also find connections to Stanley-Reisner rings, and use ideas similar to those in the theory of monomial ideals to obtain an algorithm for computing the canonical form associated to any neural code, providing the groundwork for inferring stimulus space features from neural activity alone.

DEDICATION. To Mama and Papa (the Paradox!), and Thea and Merike, for supporting me through 23 long years of education. I love you! And to Lauren and Becky. Les Haricots Toujours!

Contents

1 Introduction
2 Neural Codes
   2.1 Receptive field codes (RF codes)
   2.2 Stimulus space constraints arising from convex RF codes
      Helly's theorem and the Nerve theorem
      Beyond the simplicial complex of the neural code
   2.3 The receptive field structure (RF structure) of a neural code
3 Neural Rings and Neural Ideals
   3.1 Basic algebraic geometry background
   3.2 Definition of the neural ring
   3.3 The spectrum of the neural ring
   3.4 The neural ideal & an explicit set of relations for the neural ring
   3.5 Proof of Lemmas 3 and 4
4 How to infer RF structure using the neural ideal
   4.1 An alternative set of relations for the neural ring
   4.2 Interpreting neural ring relations as receptive field relationships
   4.3 Pseudo-monomials & a canonical form for the neural ideal
   4.4 Proof of Theorem
   4.5 Comparison to the Stanley-Reisner ideal
5 Algorithms for the Canonical Form
   5.1 Algorithm #1
   5.2 Algorithm #2
   5.3 An example
6 Primary Decomposition
   6.1 Primary decomposition of the neural ideal
   6.2 Decomposing the neural code via intervals of the Boolean lattice
   6.3 An algorithm for primary decomposition of pseudo-monomial ideals
7 Neural Ring Homomorphisms and Maps between codes
   7.1 Elementary maps between codes
   7.2 Ring Homomorphisms between Neural Rings
   7.3 Module Homomorphisms between Neural Rings
      Neural rings as modules
      Note on modules under different rings
      Compatible τ extend code maps
   7.4 Neural Ring Homomorphisms
      Neuron-preserving homomorphisms
      Neuron-preserving code maps
      Neural ring homomorphisms
   7.5 The effect of elementary code maps on the canonical form
A Neural codes on three neurons
B MATLAB Code
Bibliography

Chapter 1

Introduction

Building accurate representations of the world is one of the basic functions of the brain. It is well-known that when a stimulus is paired with pleasure or pain, an animal quickly learns the association. Animals also learn, however, the (neutral) relationships between stimuli of the same type. For example, a bar held at a 45-degree angle appears more similar to one held at 50 degrees than to a perfectly vertical one. Upon hearing a triple of distinct pure tones, one seems to fall in between the other two. An explored environment is perceived not as a collection of disjoint physical locations, but as a spatial map. In summary, we do not experience the world as a stream of unrelated stimuli; rather, our brains organize different types of stimuli into highly structured stimulus spaces.

The relationship between neural activity and stimulus space structure has, nonetheless, received remarkably little attention. In the field of neural coding, much has been learned about the coding properties of individual neurons by investigating stimulus-response functions, such as place fields [1, 2], orientation tuning curves [3, 4], and other examples of receptive fields obtained by measuring neural activity in response to experimentally-controlled stimuli. Moreover, numerous studies have shown that neural activity, together with knowledge of the appropriate stimulus-response functions, can be used to accurately estimate a newly presented stimulus [5, 6, 7]. This paradigm is being actively extended and revised to include information present in populations of neurons, spurring debates on the role of correlations in neural coding [8, 9, 10]. In each case, however, the underlying structure of the stimulus space is assumed to be known, and is not treated as itself emerging from the activity of neurons. This approach is particularly problematic when one considers that the brain does not have access to stimulus-response functions, and must represent the world without the aid of dictionaries that lend meaning to neural activity [11]. In coding theory parlance, the brain does not have access to the encoding map, and must therefore represent stimulus spaces via the intrinsic structure of the neural code. How does the brain do this?

In order to eventually answer this question, we must first tackle a simpler one:

Question: What can be inferred about the underlying stimulus space from neural activity alone? I.e., what stimulus space features are encoded in the intrinsic structure of the neural code, and can thus be extracted without knowing the individual stimulus-response functions?

Recently we have shown that, in the case of hippocampal place cell codes, certain topological features of the animal's environment can be inferred from the neural code alone, without knowing the place fields [11]. As will be explained in the next section, this information can be extracted from a simplicial complex associated to the neural code. What other stimulus space features can be inferred from the neural code? For this, we turn to algebraic geometry. Algebraic geometry provides a useful framework for inferring geometric and topological characteristics of spaces by associating rings of functions to these spaces. All relevant features of the underlying space are encoded in

the intrinsic structure of the ring, where coordinate functions become indeterminates, and the space itself is defined in terms of ideals in the ring. Inferring features of a space from properties of functions without specified domains is similar to the task confronted by the brain, so it is natural to expect that this framework may shed light on our question.

Here we introduce the neural ring, an algebro-geometric object that can be associated to any combinatorial neural code. Much like the simplicial complex of a code, the neural ring encodes information about the underlying stimulus space in a way that discards specific knowledge of receptive field maps, and thus gets closer to the essence of how the brain might represent stimulus spaces. Unlike the simplicial complex, the neural ring retains the full combinatorial data of a neural code, packaging this data in a more computationally tractable manner. We find that this object, together with a closely related neural ideal, can be used to algorithmically extract a compact, minimal description of the receptive field structure dictated by the code. This enables us to more directly tie combinatorial properties of neural codes to features of the underlying stimulus space, a critical step towards answering our motivating question.

Although the use of an algebraic construction such as the neural ring is quite novel in the context of neuroscience, the neural code (as we define it) is at its core a combinatorial object, and there is a rich tradition of associating algebraic objects to combinatorial ones [12]. The most well-known example is perhaps the Stanley-Reisner ring [13], which turns out to be closely related to the neural ring. Within mathematical biology, associating polynomial ideals to combinatorial data has also been fruitful. Recent examples include inferring wiring diagrams in gene-regulatory networks [14, 15] and applications to chemical reaction networks [16].
Our work also has parallels to the study of design ideals in algebraic statistics [17]. From a data analysis perspective, it is useful to consider the codes as related to

one another, not merely as isolated objects. A single code can give rise to a host of relatives through natural operations such as adding codewords or dropping neurons. Understanding how these relationships translate to structural information will allow us to extract information from multiple codes simultaneously. From an algebraic perspective, relationships between neural rings stem from ring homomorphisms. We characterize the set of homomorphisms between neural rings, relating each to a code map via the pullback.

The organization of this dissertation is as follows. In Chapter 2, we discuss in greater depth the type of neural codes which motivate this work, and explore some previous results which give partial answers to our open questions. In Chapter 3, we introduce our main object of study, the neural ring, an algebraic object which stores information from neural codes. We investigate some of its properties, and in Chapter 4 we determine a preferred canonical presentation that allows us to extract stimulus space features. In Chapter 5, we give two variations on an algorithm for obtaining this canonical form. In Chapter 6, we consider the primary decomposition of the neural ideal and its interpretations. Finally, in Chapter 7, we consider the maps which relate one neural ring to another, and the relationship between these maps and the functions which relate one neural code to another. Much of Chapters 1-6 appears in our recent paper [18]; however, substantial changes have been made to the algorithm for obtaining the canonical form. Finally, in the appendix, we present all possible codes on 3 neurons, illustrating the wide variety of possibilities even for a small set of neurons, along with MATLAB code for some of our algorithms.

Chapter 2

Neural Codes

In this chapter, we introduce the basic objects of study: neural codes, receptive field codes, and convex receptive field codes. We then discuss various ways in which the structure of a convex receptive field code can constrain the underlying stimulus space. These constraints emerge most obviously from the simplicial complex of a neural code, but (as will be made clear) there are also constraints that arise from aspects of a neural code's structure that go well beyond what is captured by the simplicial complex of the code. First, we give a few basic definitions.

Definition. Given a set of neurons labelled {1, ..., n} =: [n], we define a neural code C ⊆ {0, 1}^n as a set of binary patterns of neural activity. An element of a neural code is called a codeword, c = (c_1, ..., c_n) ∈ C, and corresponds to a subset of neurons

supp(c) := {i ∈ [n] | c_i = 1} ⊆ [n].

Similarly, the entire code C can be identified with a set of subsets of neurons,

supp C := {supp(c) | c ∈ C} ⊆ 2^[n],

where 2^[n] denotes the set of all subsets of [n]. Because we discard the details of the precise timing and/or rate of neural activity, what we mean by neural code is often referred to in the neural coding literature as a combinatorial code [19, 20]. For simplicity's sake, we will henceforth dispense with vector notation and reduce to the simpler binary notation; e.g., the codeword (1, 0, 1) will be written 101.

Example. Consider the code C = {000, 100, 010, 110, 001}. Here, we have supp C = {∅, {1}, {2}, {1, 2}, {3}}. As a neural code, we interpret this as a set of activity patterns for 3 neurons, where we have observed the following:

- At some point, no neurons were firing (∅).
- Each neuron fired alone at some point ({1}, {2}, and {3}).
- At some point, neurons 1 and 2 fired together, while 3 was silent ({1, 2}).

Definition. A set of subsets Δ ⊆ 2^[n] is an (abstract) simplicial complex if σ ∈ Δ and τ ⊆ σ implies τ ∈ Δ. We will say that a neural code C is a simplicial complex if supp C is a simplicial complex.

In cases where the code is not a simplicial complex, we can complete the code to a simplicial complex by simply adding in missing subsets of codewords. This allows us to define the simplicial complex of the code as

Δ(C) := {σ ⊆ [n] | σ ⊆ supp(c) for some c ∈ C}.

Alternatively, Δ(C) can be defined as the smallest simplicial complex that contains supp C.

Example. Our code in the previous example, C = {000, 100, 010, 110, 001}, is a simplicial complex.
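These definitions are easy to check computationally. The sketch below (in Python, purely illustrative; the function names supp, simplicial_complex, and is_simplicial_complex are our own) computes supports, completes a code to Δ(C), and tests the two example codes:

```python
from itertools import chain, combinations

def supp(c):
    """Support of a codeword given as a 0/1 string, e.g. supp('110') = {1, 2}."""
    return frozenset(i + 1 for i, b in enumerate(c) if b == '1')

def simplicial_complex(code):
    """Delta(C): all subsets of supports of codewords in C."""
    cplx = set()
    for c in code:
        s = sorted(supp(c))
        # add every subset of supp(c), including the empty set
        for sigma in chain.from_iterable(combinations(s, k) for k in range(len(s) + 1)):
            cplx.add(frozenset(sigma))
    return cplx

def is_simplicial_complex(code):
    """C is a simplicial complex iff supp C is closed under taking subsets."""
    return {supp(c) for c in code} == simplicial_complex(code)

C = {'000', '100', '010', '110', '001'}
D = {'000', '100', '010', '110', '011'}
print(is_simplicial_complex(C))  # True
print(is_simplicial_complex(D))  # False: {2,3} is in supp D but {3} is not
```

The completion simplicial_complex(D) adds the missing face {3}, matching the example Δ(D) discussed below.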

However, the code D = {000, 100, 010, 110, 011} is not, because the set {2, 3} is in supp D, but its subset {3} is not. We can take the simplicial complex of the code D by adding in the necessary subsets, to obtain Δ(D) = {000, 100, 010, 110, 011, 001}.

2.1 Receptive field codes (RF codes)

Neurons in many brain areas have activity patterns that can be characterized by receptive fields.¹ Abstractly, a receptive field is a map f_i : X → ℝ_{≥0} from a space of stimuli, X, to the average firing rate of a single neuron, i, in response to each stimulus. Receptive fields are computed by correlating neural responses to independently measured external stimuli. We follow a common abuse of language, where both the map and its support (i.e., the subset U_i ⊆ X where f_i takes on positive values) are referred to as receptive fields. Convex receptive fields are convex subsets of the stimulus space, for X ⊆ ℝ^d.

Definition. A subset B ⊆ ℝ^n is convex if, given any pair of points x, y ∈ B, the point z = tx + (1−t)y is contained in B for any t ∈ [0, 1].

The paradigmatic examples are orientation-selective neurons in visual cortex [3, 4] and hippocampal place cells [1, 2]. Orientation-selective neurons have tuning curves that reflect a neuron's preference for a particular angle. When an animal is presented with stimuli in the form of bars at a certain angle, these neurons have a marked preference for one particular angle: the neuron fires at higher and higher rates as the angle of the bars approaches the preferred angle, producing a tuning curve (see Figure 1A).

¹ In the vision literature, the term "receptive field" is reserved for subsets of the visual field; we use the term in a more general sense, applicable to any modality.

Place cells are neurons that have place fields; i.e., each neuron has a preferred (convex) region of the animal's physical environment where it has a high firing rate (see Figure 1B). When the animal occupies that particular region, the neuron fires markedly more frequently; when the animal is in any other area, the neuron's firing rate is comparatively very low. Both tuning curves and place fields are examples of receptive fields. In both cases, the receptive field for each neuron is convex (an interval of angles about the preferred angle, or a place field), but not all receptive fields are convex. Grid cells are another type of neuron with a receptive field, very like place cells, but their receptive field consists of a set of distinct regions which form a triangular grid, and thus in this case the receptive field is not convex, or even connected [21].

Figure 2.1: Receptive field overlaps determine codewords in 1D and 2D RF codes. (A) Neurons in a 1D RF code have receptive fields that overlap on a line segment (or circle, in the case of orientation-tuning). Each stimulus on the line corresponds to a binary codeword. Gaussians depict graded firing rates for neural responses; this additional information is discarded by the RF code. (B) Neurons in a 2D RF code, such as a place field code, have receptive fields that partition a two-dimensional stimulus space into non-overlapping regions, as illustrated by the shaded area. All stimuli within one of these regions will activate the same set of neurons, and hence have the same corresponding codeword.

A receptive field code (RF code) is a neural code that corresponds to the brain's representation of the stimulus space covered by the receptive fields. When a stimulus lies in the intersection of several receptive fields, the corresponding neurons may co-fire while the rest remain silent. The active subset σ of neurons can be identified with a binary codeword c ∈ {0, 1}^n via σ = supp(c). Unless otherwise noted, a stimulus space X need only be a topological space. However, we usually have in mind X ⊆ ℝ^d, and this becomes important when we consider convex RF codes.

Definition. Let X be a stimulus space (e.g., X ⊆ ℝ^d), and let U = {U_1, ..., U_n} be a collection of open sets, with each U_i ⊆ X the receptive field of the i-th neuron in a population of n neurons. The receptive field code (RF code) C(U) ⊆ {0, 1}^n is the set of all binary codewords corresponding to stimuli in X:

C(U) := {c ∈ {0, 1}^n | (∩_{i ∈ supp(c)} U_i) \ (∪_{j ∉ supp(c)} U_j) ≠ ∅}.

If X ⊆ ℝ^d and each of the U_i's is also a convex subset of X, then we say that C(U) is a convex RF code. Our convention is that ∩_{i ∈ ∅} U_i = X and ∪_{i ∈ ∅} U_i = ∅. This means that if ∪_{i=1}^n U_i ⊊ X, then C(U) includes the all-zeros codeword, corresponding to an "outside point" not covered by the receptive fields; on the other hand, if ∩_{i=1}^n U_i ≠ ∅, then C(U) includes the all-ones codeword. Figure 1 shows examples of convex receptive fields covering one- and two-dimensional stimulus spaces, and examples of codewords corresponding to regions defined by the receptive fields.

Returning to our discussion in the Introduction, we have the following question: if we can assume C = C(U) is a RF code, then what can be learned about the underlying stimulus space X from knowledge only of C, and not of U? The answer to this question will depend critically on whether or not we can assume that the RF code is convex. In particular, if we don't make any assumptions about the receptive fields beyond openness, then any code can be realized as a RF code in any dimension. Thus,

without some kind of assumption like convexity, the answer to the above question is "nothing useful."

Lemma 1. Let C ⊆ {0, 1}^n be a neural code. Then, for any d ≥ 1, there exists a stimulus space X ⊆ ℝ^d and a collection of open sets U = {U_1, ..., U_n} (not necessarily convex), with U_i ⊆ X for each i ∈ [n], such that C = C(U).

Proof. Let C ⊆ {0, 1}^n be any neural code, and order the elements of C as {c_1, ..., c_m}, where m = |C|. For each c ∈ C, choose a distinct point x_c ∈ ℝ^d and an open neighborhood N_c of x_c such that no two neighborhoods intersect. Define

U_j := ∪_{{k | j ∈ supp(c_k)}} N_{c_k},

let U = {U_1, ..., U_n}, and X = ∪_{i=1}^m N_{c_i}. Observe that if the all-zeros codeword is in C, then N_0 = X \ ∪_{i=1}^n U_i corresponds to the "outside point" not covered by any of the U_i's. By construction, C = C(U).

Although any neural code C ⊆ {0, 1}^n can be realized as a RF code, it is not true that any code can be realized as a convex RF code. Counterexamples can be found in codes having as few as three neurons.

Lemma 2. The neural code C = {0, 1}^3 \ {111, 001} on three neurons cannot be realized as a convex RF code.

Proof. Suppose to the contrary that U = {U_1, U_2, U_3} is a set of convex open sets in ℝ^d such that C = C(U). The code necessitates that U_1 ∩ U_2 ≠ ∅ (since 110 ∈ C), (U_1 ∩ U_3) \ U_2 ≠ ∅ (since 101 ∈ C), and (U_2 ∩ U_3) \ U_1 ≠ ∅ (since 011 ∈ C). Let p_1 ∈ (U_1 ∩ U_3) \ U_2 and p_2 ∈ (U_2 ∩ U_3) \ U_1. Since p_1, p_2 ∈ U_3 and U_3 is convex, the line segment ℓ = {(1−t)p_1 + tp_2 | t ∈ [0, 1]} must also be contained in U_3.

Figure 2.2: Two cases in the proof of Lemma 2.

Every point in ℓ is in U_3. However, as there are no points in U_1 ∩ U_2 ∩ U_3 or in U_3 \ (U_1 ∪ U_2), every point on ℓ is in either U_1 or U_2, but no point may be in both. Thus U_1 ∩ ℓ and U_2 ∩ ℓ are disjoint nonempty open sets which cover ℓ, and hence they disconnect ℓ. But ℓ is a line segment, so it is connected in the subspace topology. This is a contradiction, so no such realization can exist.

Figure 2 illustrates the impossibility of such a realization. There are really only two possibilities. Case 1: ℓ passes through U_1 ∩ U_2 (see Figure 2, left). This implies U_1 ∩ U_2 ∩ U_3 ≠ ∅, and hence 111 ∈ C, a contradiction. Case 2: ℓ does not intersect U_1 ∩ U_2. Since U_1, U_2 are open sets, this implies ℓ passes outside of U_1 ∪ U_2 (see Figure 2, right), and hence 001 ∈ C, a contradiction.

2.2 Stimulus space constraints arising from convex RF codes

It is clear from Lemma 1 that there is essentially no constraint on the stimulus space for realizing a code as a RF code. However, if we demand that C is a convex RF code, then the overlap structure of the U_i's sharply constrains the geometric and topological properties of the underlying stimulus space X. To see how this works, we first consider the simplicial complex of a neural code, Δ(C). Classical results in convex geometry and topology provide constraints on the underlying stimulus space X for convex RF codes, based on the structure of Δ(C). We will discuss these next. We then turn to the question of constraints that arise from combinatorial properties of a neural code C that are not captured by Δ(C).
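For intuition, the defining condition of C(U) can be evaluated directly once X is discretized into finitely many sample points and each U_i is replaced by the set of sample points it contains. The following Python sketch (the function name rf_code and the particular arrangement are our own, purely illustrative) reads off C(U) for two overlapping "intervals" U_1, U_2 sitting inside a third set U_3 that covers all of X:

```python
def rf_code(n, X, U):
    """C(U): the codewords realized by the cover U of the stimulus space X.
    X is a finite set of (discretized) stimulus points; U[i] is the subset
    of X where neuron i (i = 1..n) fires."""
    code = set()
    for x in X:
        # the codeword of x records exactly which receptive fields contain it
        code.add(''.join('1' if x in U[i] else '0' for i in range(1, n + 1)))
    return code

# U1 and U2 are overlapping intervals inside U3 = X (so 000 never occurs,
# and every codeword has a 1 in position 3).
X = set(range(6))
U = {1: {0, 1, 2}, 2: {2, 3, 4}, 3: X}
print(sorted(rf_code(3, X, U)))  # ['001', '011', '101', '111']
```

Since every stimulus lies in U_3, neuron 3 appears in every codeword, reflecting the containments U_1 ⊆ U_3 and U_2 ⊆ U_3; this kind of containment information is exactly what the RF structure of Section 2.3 captures.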

2.2.1 Helly's theorem and the Nerve theorem

Here we briefly review two classical and well-known theorems in convex geometry and topology, Helly's theorem and the Nerve theorem, as they apply to convex RF codes. Both theorems can be used to relate the structure of the simplicial complex of a code, Δ(C), to topological features of the underlying stimulus space X.

Suppose U = {U_1, ..., U_n} is a finite collection of convex open subsets of ℝ^d, with dimension d < n. We can associate to U a simplicial complex N(U) called the nerve of U. A subset {i_1, ..., i_k} ⊆ [n] belongs to N(U) if and only if the corresponding intersection ∩_{l=1}^k U_{i_l} is nonempty. If we think of the U_i's as receptive fields, then N(U) = Δ(C(U)). In other words, the nerve of the cover corresponds to the simplicial complex of the associated (convex) RF code.

Helly's theorem. Consider k convex subsets, U_1, ..., U_k ⊆ ℝ^d, for d < k. If the intersection of every d+1 of these sets is nonempty, then the full intersection ∩_{i=1}^k U_i is also nonempty.

A nice exposition of this theorem and its consequences can be found in [22]. One straightforward consequence is that the nerve N(U) is completely determined by its d-skeleton, and corresponds to the largest simplicial complex with that d-skeleton. For example, if d = 1, then N(U) is a clique complex (fully determined by its underlying graph). Since N(U) = Δ(C(U)), Helly's theorem imposes constraints on the minimal dimension of the stimulus space X when C = C(U) is assumed to be a convex RF code. For example, if three (or more) neurons are such that each pair fires together in some codeword, but no codeword has all of them firing together, then the minimal dimension of a stimulus space in which this code could be realized as a convex receptive field code is at least 2.

Nerve theorem. The homotopy type of X(U) := ∪_{i=1}^n U_i is equal to the homotopy

type of the nerve of the cover, N(U). In particular, X(U) and N(U) have exactly the same homology groups.

The Nerve theorem is an easy consequence of [23, Corollary 4G.3]. This is a powerful theorem relating the simplicial complex of a RF code, Δ(C(U)) = N(U), to topological features of the underlying space, such as homology groups and other homotopy invariants. In [11], this theorem is used in the context of two-dimensional RF codes (specifically, place field codes for place cells in rat hippocampus) to show that topological features of the animal's environment could be inferred from the observed neural code, without knowing the place fields.

Note, however, that the similarities between X(U) and N(U) only go so far. In particular, X(U) and N(U) typically have very different dimension. It is also important to keep in mind that the Nerve theorem concerns the topology of X(U) = ∪_{i=1}^n U_i. In our setup, if the stimulus space X is larger, so that ∪_{i=1}^n U_i ⊊ X, then the Nerve theorem tells us only about the homotopy type of X(U), not of X. Since the U_i are open sets, however, conclusions about the dimension of X can still be inferred.

In addition to Helly's theorem and the Nerve theorem, there is a great deal known about Δ(C(U)) = N(U) for collections of convex sets in ℝ^d. In particular, the f-vectors of such simplicial complexes have been completely characterized by G. Kalai in [24, 25].

2.2.2 Beyond the simplicial complex of the neural code

We have just seen how the simplicial complex of a neural code, Δ(C), yields constraints on the stimulus space X if we assume C can be realized as a convex RF code. Consider the example described in Lemma 2. Nothing from Helly's theorem expressly said that C could not be realized in ℝ^2; indeed, Δ(C) can be realized easily. Yet we have proven

it is impossible to realize the code C in any dimension at all. This implies that other kinds of constraints on X may emerge from the combinatorial structure of a neural code, even if there is no obstruction stemming from Δ(C).

In Figure 3 we show four possible arrangements of three convex receptive fields in the plane. Each convex RF code has the same corresponding simplicial complex Δ(C) = 2^[3], since 111 ∈ C for each code. Nevertheless, the arrangements clearly have different combinatorial properties. In Figure 3C, for instance, we have U_1 ⊆ U_2 ⊆ U_3, while Figure 3A has no special containment relationships among the receptive fields.

Figure 2.3: Four arrangements of three convex receptive fields, U = {U_1, U_2, U_3}, each having Δ(C(U)) = 2^[3]. Square boxes denote the stimulus space X in cases where U_1 ∪ U_2 ∪ U_3 ⊊ X. (A) C(U) = 2^[3], including the all-zeros codeword 000. (B) C(U) = {111, 101, 011, 001}, with X = U_3. (C) C(U) = {111, 011, 001, 000}. (D) C(U) = {111, 101, 011, 110, 100, 010}, and X = U_1 ∪ U_2. The minimal embedding dimension for the codes in panels A and D is d = 2, while for panels B and C it is d = 1.

This receptive field structure (RF structure) of the code has implications for the underlying stimulus space. Let d be the minimal integer for which the code can be realized as a convex RF code in ℝ^d; we will refer to this as the minimal embedding dimension of C. Note that the codes in Figure 3A,D have d = 2, whereas the codes in Figure 3B,C have d = 1. The simplicial complex, Δ(C), is thus not sufficient to determine the minimal embedding dimension of a convex RF code, but this information is somehow present in the RF structure of the code. Similarly, in Lemma 2 we saw that Δ(C) does not provide sufficient information to determine whether or not C can be realized as a

convex RF code; after working out the RF structure, however, it was easy to see that the given code was not realizable.

2.3 The receptive field structure (RF structure) of a neural code

As we have just seen, the intrinsic structure of a neural code contains information about the underlying stimulus space that cannot be inferred from the simplicial complex of the code alone. This information is, however, present in what we have loosely referred to as the RF structure of the code. We now explain more carefully what we mean by this term.

Given a set of receptive fields U = {U_1, ..., U_n} in a stimulus space X, there are certain containment relations between intersections and unions of the U_i's that are obvious, and carry no information about the particular arrangement in question. These relationships are merely a result of unavoidable set relationships. For example, U_1 ∩ U_2 ⊆ U_2 ∪ U_3 ∪ U_4 is always guaranteed to be true, because it follows from U_2 ⊆ U_2. On the other hand, a relationship such as U_3 ⊆ U_1 ∪ U_2 (as in Figure 3D) is not always present, and thus reflects something about the structure of a particular receptive field arrangement.

Let C ⊆ {0, 1}^n be a neural code, and let U = {U_1, ..., U_n} be any arrangement of receptive fields in a stimulus space X such that C = C(U) (this is guaranteed to exist by Lemma 1). The RF structure of C refers to the set of relations among the U_i's that are not obvious, and have the form:

∩_{i ∈ σ} U_i ⊆ ∪_{j ∈ τ} U_j, for σ ∩ τ = ∅.

In particular, this includes any empty intersections ∩_{i ∈ σ} U_i = ∅ (here τ = ∅). In the examples in Figure 3, the panel A code has no unusual RF structure relations and is as general as possible; panel B has U_1 ⊆ U_3 and U_2 ⊆ U_3; panel C has U_1 ⊆ U_2 ⊆ U_3; and panel D has U_3 ⊆ U_1 ∪ U_2.

Our central goal is to develop a method to algorithmically extract a minimal description of the RF structure directly from a neural code C, without first realizing it as C(U) for some arrangement of receptive fields. We view this as a first step towards inferring stimulus space features that cannot be obtained from the simplicial complex Δ(C). To do this we turn to an algebro-geometric framework, that of neural rings and ideals. These objects are defined in Chapter 3 so as to capture the full combinatorial data of a neural code, but in a way that allows us to naturally and algorithmically infer a compact description of the desired RF structure, as shown in Chapter 4.
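Relations of this form can in fact be read off from the code itself: one can check that ∩_{i∈σ} U_i ⊆ ∪_{j∈τ} U_j holds in every realization C = C(U) precisely when every codeword whose support contains σ also has support meeting τ (a codeword violating this would carve out a region inside ∩_σ U_i but outside ∪_τ U_j). The Python sketch below (function names are our own) brute-forces this criterion; it deliberately lists many redundant relations, whereas the canonical form developed in later chapters yields a minimal description:

```python
from itertools import combinations

def supp(c):
    """Support of a codeword given as a 0/1 string."""
    return {i + 1 for i, b in enumerate(c) if b == '1'}

def rf_relations(n, code, max_sigma=2):
    """Pairs (sigma, tau) with sigma ∩ tau = ∅ such that the relation
    ∩_{i∈sigma} U_i ⊆ ∪_{j∈tau} U_j is forced by the code: every codeword
    whose support contains sigma must also meet tau."""
    neurons = range(1, n + 1)
    supports = [supp(c) for c in code]
    relations = []
    for k in range(1, max_sigma + 1):
        for sigma in combinations(neurons, k):
            s = set(sigma)
            rest = [j for j in neurons if j not in s]
            for m in range(len(rest) + 1):
                for tau in combinations(rest, m):
                    t = set(tau)
                    # tau = ∅ here encodes the empty intersection ∩_sigma U_i = ∅
                    if all(t & sup for sup in supports if s <= sup):
                        relations.append((s, t))
    return relations

# The panel D code from Figure 3: the relation U3 ⊆ U1 ∪ U2 is detected.
code_D = {'111', '101', '011', '110', '100', '010'}
print(({3}, {1, 2}) in rf_relations(3, code_D))  # True
```

Running the same search on the panel C code {111, 011, 001, 000} recovers the containments U_1 ⊆ U_2 and U_2 ⊆ U_3 in the same way.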

Chapter 3

Neural Rings and Neural Ideals

In this chapter we define the neural ring R_C and a closely-related neural ideal, J_C. First, we briefly review some basic algebraic geometry background needed throughout the following sections.

3.1 Basic algebraic geometry background

The following definitions are standard (see, for example, [26]).

Rings and ideals. Let R be a commutative ring. A subset I ⊆ R is an ideal of R if it has the following properties: (i) I is a subgroup of R under addition. (ii) If a ∈ I, then ra ∈ I for all r ∈ R.

An ideal I is said to be generated by a set A, and we write I = ⟨A⟩, if

I = {r_1 a_1 + ... + r_n a_n | a_i ∈ A, r_i ∈ R, and n ∈ ℕ}.

In other words, I is the set of all finite combinations of elements of A with coefficients in R. An ideal I ⊆ R is proper if I ≠ R. An ideal I ⊆ R is prime if it is proper and has the following property: if rs ∈ I for some r, s ∈ R, then r ∈ I or s ∈ I. An ideal m ⊆ R is maximal if it is proper and if for any ideal I such that m ⊆ I ⊆ R, either I = m or I = R. An ideal I ⊆ R is radical if r^n ∈ I implies r ∈ I, for any r ∈ R and n ∈ ℕ. An ideal I ⊆ R is primary if rs ∈ I implies r ∈ I or s^n ∈ I for some n ∈ ℕ. A primary decomposition of an ideal I expresses I as an intersection of finitely many primary ideals.

Ideals and varieties. Let k be a field, n the number of neurons, and k[x_1, ..., x_n] a polynomial ring with one indeterminate x_i for each neuron. We will consider k^n to be the neural activity space, where each point v = (v_1, ..., v_n) ∈ k^n is a vector tracking the state v_i of each neuron. Note that any polynomial f ∈ k[x_1, ..., x_n] can be evaluated at a point v ∈ k^n by setting x_i = v_i each time x_i appears in f. We will denote this value f(v). Let J ⊆ k[x_1, ..., x_n] be an ideal, and define the variety

V(J) := {v ∈ k^n | f(v) = 0 for all f ∈ J}.

Similarly, given a subset S ⊆ k^n, we can define the ideal of functions that vanish on this subset as

I(S) := {f ∈ k[x_1, ..., x_n] | f(v) = 0 for all v ∈ S}.

The ideal-variety correspondence [26] gives us the usual order-reversing relationships: I ⊆ J ⟹ V(J) ⊆ V(I), and S ⊆ T ⟹ I(T) ⊆ I(S). Furthermore, V(I(V)) = V

for any variety V, but it is not always true that I(V(J)) = J for an ideal J (see Section 3.5). We will regard neurons as having only two states, on or off, and thus choose k = F_2 = {0, 1}.

3.2 Definition of the neural ring

Let C ⊆ {0, 1}^n = F_2^n be a neural code, and define the ideal I_C ⊆ F_2[x_1, ..., x_n] corresponding to the set of polynomials that vanish on all codewords in C:

I_C := I(C) = {f ∈ F_2[x_1, ..., x_n] | f(c) = 0 for all c ∈ C}.

By design, V(I_C) ⊇ C; we will show that in fact V(I_C) = C and hence I(V(I_C)) = I_C. To see this, define an ideal m_v = ⟨x_1 − v_1, ..., x_n − v_n⟩ for every v ∈ F_2^n; note that V(m_v) = {v}. Then, for a code C ⊆ F_2^n, define the ideal J = ∩_{v∈C} m_v. As this intersection is finite, C = V(J), and thus we have V(I_C) = V(I(C)) = V(I(V(J))) = V(J) = C.

Note that the ideal generated by the Boolean relations, B := ⟨x_1^2 − x_1, ..., x_n^2 − x_n⟩, is automatically contained in I_C, irrespective of C. The neural ring R_C corresponding to the code C is the quotient ring

R_C := F_2[x_1, ..., x_n]/I_C,

together with the set of indeterminates x_1, ..., x_n. We say that two neural rings are equivalent if there is a bijection between the sets of indeterminates that yields a ring homomorphism.

Remark. Due to the Boolean relations, any element y ∈ R_C satisfies y^2 = y (cross-terms vanish because 2 = 0 in F_2), so the neural ring is a Boolean ring isomorphic to F_2^{|C|}. It is important to keep in mind, however, that R_C comes equipped with a privileged set of functions, x_1, ..., x_n; this allows the ring to keep track of considerably more structure than just the size of the neural code. The importance of using this presentation will be clear as we begin to extract receptive field information.

3.3 The spectrum of the neural ring

We can think of R_C as the ring of functions of the form f : C → F_2 on the neural code, where each function assigns a 0 or 1 to each codeword c ∈ C by evaluating f ∈ F_2[x_1, ..., x_n]/I_C through the substitutions x_i = c_i for i = 1, ..., n. To see this, note that two polynomials are in the same equivalence class in R_C if and only if they evaluate the same on every c ∈ C. That is,

f = g in R_C ⟺ f − g ∈ I_C ⟺ f(c) − g(c) = 0 for all c ∈ C, i.e., f(c) = g(c) for all c ∈ C.

Quotienting the original polynomial ring by I_C ensures that there is only one zero function in R_C. The spectrum of the neural ring, Spec(R_C), consists of all prime ideals in R_C. We will see shortly that the elements of Spec(R_C) are in one-to-one correspondence with the elements of the neural code C. Indeed, our definition of R_C was designed for this to be true. For any point v ∈ {0, 1}^n of the neural activity space, let

m_v := I(v) = {f ∈ F_2[x_1, ..., x_n] | f(v) = 0}
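Concretely, an element of R_C is determined by its evaluation table on C. The Python sketch below (ours; the polynomial encoding and the example code are illustrative, not notation from the text) checks that x_1 and x_1 x_2 define the same function on a code in which neuron 1 never fires without neuron 2, and hence are equal in R_C:

```python
def evaluate(poly, c):
    """Evaluate a polynomial over F_2 at a point c in {0,1}^n.
    A polynomial is encoded as a list of monomials, each monomial
    a tuple of 0-indexed variable indices."""
    total = 0
    for mono in poly:
        term = 1
        for i in mono:
            term *= c[i]
        total += term
    return total % 2

# Code on 2 neurons in which neuron 1 fires only together with neuron 2:
C = [(0, 0), (0, 1), (1, 1)]
f = [(0,)]      # f = x_1
g = [(0, 1)]    # g = x_1 * x_2
# f and g agree on every codeword, so f = g in R_C, i.e. f - g lies in I_C:
assert all(evaluate(f, c) == evaluate(g, c) for c in C)
```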

be the maximal ideal of F_2[x_1, ..., x_n] consisting of all functions that vanish on v. We can also write m_v = ⟨x_1 − v_1, ..., x_n − v_n⟩ (see Lemma 6 in Section 3.5). Using this, we can characterize the spectrum of the neural ring.

Lemma 3. Spec(R_C) = {m̄_v | v ∈ C}, where m̄_v is the quotient of m_v in R_C.

The proof is given in Section 3.5. Note that because R_C is a Boolean ring, the maximal ideal spectrum and the prime ideal spectrum coincide.

3.4 The neural ideal & an explicit set of relations for the neural ring

The definition of the neural ring is rather impractical, as it does not give us explicit relations for generating I_C and R_C. Here we define another ideal, J_C, via an explicit set of generating relations. Although J_C is closely related to I_C, it turns out that J_C is a more convenient object to study, which is why we will use the term neural ideal to refer to J_C rather than I_C. For any v ∈ {0, 1}^n, consider the function ρ_v ∈ F_2[x_1, ..., x_n] defined as

ρ_v := ∏_{i=1}^n (1 − v_i − x_i) = ∏_{i : v_i = 1} x_i ∏_{j : v_j = 0} (1 − x_j) = ∏_{i ∈ supp(v)} x_i ∏_{j ∉ supp(v)} (1 − x_j).

Note that ρ_v(x) can be thought of as a characteristic function for v, since it satisfies ρ_v(v) = 1 and ρ_v(x) = 0 for any other x ∈ F_2^n. Now consider the ideal J_C ⊆ F_2[x_1, ..., x_n] generated by all functions ρ_v, for v ∉ C:

J_C := ⟨{ρ_v | v ∉ C}⟩.
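The characteristic-function property of ρ_v is easy to confirm by direct evaluation. A minimal Python sketch (ours; the helper name `rho` is illustrative, not notation from the text):

```python
from itertools import product

def rho(v, x):
    """Evaluate rho_v at x over F_2: the product of x_i over supp(v)
    times (1 - x_j) over the complement of supp(v)."""
    val = 1
    for vi, xi in zip(v, x):
        val *= xi if vi == 1 else (1 - xi)
    return val % 2

v = (1, 0, 1)
# rho_v is the indicator function of v on F_2^3:
for x in product([0, 1], repeat=3):
    assert rho(v, x) == (1 if x == v else 0)
```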

We call J_C the neural ideal corresponding to the neural code C. If C = 2^{[n]} is the complete code, we simply set J_C = 0, the zero ideal. J_C is related to I_C as follows, giving us explicit relations for the neural ring.

Lemma 4. Let C ⊆ {0, 1}^n be a neural code. Then

I_C = J_C + B = ⟨{ρ_v | v ∉ C}, {x_i(1 − x_i) | i ∈ [n]}⟩,

where B = ⟨{x_i(1 − x_i) | i ∈ [n]}⟩ is the ideal generated by the Boolean relations, and J_C is the neural ideal. The proof is given in Section 3.5.

3.5 Proof of Lemmas 3 and 4

To prove Lemmas 3 and 4, we need a version of the Nullstellensatz for finite fields. The original Hilbert's Nullstellensatz applies when k is an algebraically closed field. It states that if f ∈ k[x_1, ..., x_n] vanishes on V(J), then f ∈ √J. In other words, I(V(J)) = √J. Because we have chosen k = F_2 = {0, 1}, we have to be a little careful about the usual ideal-variety correspondence, as there are some subtleties introduced in the case of finite fields. In particular, √J = J in F_2[x_1, ..., x_n] does not imply I(V(J)) = J. The following lemma and theorem are well-known. Let F_q be a finite field of size q, and F_q[x_1, ..., x_n] the n-variate polynomial ring over F_q.

Lemma 5. For any ideal J ⊆ F_q[x_1, ..., x_n], the ideal J + ⟨x_1^q − x_1, ..., x_n^q − x_n⟩ is a radical ideal.

Theorem 1 (Strong Nullstellensatz in Finite Fields). For an arbitrary finite field F_q, let J ⊆ F_q[x_1, ..., x_n] be an ideal. Then

I(V(J)) = J + ⟨x_1^q − x_1, ..., x_n^q − x_n⟩.

Proof of Lemma 3

We begin by describing the maximal ideals of F_2[x_1, ..., x_n]. Recall that

m_v := I(v) = {f ∈ F_2[x_1, ..., x_n] | f(v) = 0}

is the maximal ideal of F_2[x_1, ..., x_n] consisting of all functions that vanish on v ∈ F_2^n. We will use the notation m̄_v to denote the quotient of m_v in R_C, in cases where m_v ⊇ I_C.

Lemma 6. m_v = ⟨x_1 − v_1, ..., x_n − v_n⟩ ⊆ F_2[x_1, ..., x_n], and is a radical ideal.

Proof. Denote A_v = ⟨x_1 − v_1, ..., x_n − v_n⟩, and observe that V(A_v) = {v}. It follows that I(V(A_v)) = I(v) = m_v. On the other hand, using the Strong Nullstellensatz in Finite Fields we have

I(V(A_v)) = A_v + ⟨x_1^2 − x_1, ..., x_n^2 − x_n⟩ = A_v,

where the last equality is obtained by observing that, since v_i ∈ {0, 1} and x_i^2 − x_i = x_i(1 − x_i) in F_2, each generator of ⟨x_1^2 − x_1, ..., x_n^2 − x_n⟩ is already contained in A_v. We conclude that A_v = m_v, and the ideal is radical by Lemma 5.

In the proof of Lemma 3, we make use of the following correspondence: for any quotient ring R/I, the maximal ideals of R/I are exactly the quotients m̄ = m/I,

where m is a maximal ideal of R that contains I [27].

Proof of Lemma 3. First, recall that because R_C is a Boolean ring, Spec(R_C) = maxSpec(R_C), the set of all maximal ideals of R_C. We also know that any maximal ideal of F_2[x_1, ..., x_n] which contains I_C is of the form m_v for v ∈ F_2^n. To see this, we only need show that for a maximal ideal m ⊇ I_C, we have V(m) ≠ ∅ (since if v ∈ V(m), then m ⊆ m_v, and as m is maximal, m = m_v). To show this, suppose that V(m) = ∅. Using the Strong Nullstellensatz, since m ⊇ I_C ⊇ ⟨x_1^2 − x_1, ..., x_n^2 − x_n⟩, we have

m = m + ⟨x_1^2 − x_1, ..., x_n^2 − x_n⟩ = I(V(m)) = I(∅) = F_2[x_1, ..., x_n],

which is a contradiction. By the correspondence stated above, to show that maxSpec(R_C) = {m̄_v | v ∈ C} it suffices to show m_v ⊇ I_C if and only if v ∈ C. To see this, note that for each v ∈ C, I_C ⊆ m_v because, by definition, all elements of I_C are functions that vanish on each v ∈ C. On the other hand, if v ∉ C then m_v ⊉ I_C; in particular, the characteristic function ρ_v ∈ I_C for v ∉ C, but ρ_v ∉ m_v because ρ_v(v) = 1. Hence, the maximal ideals of R_C are exactly those of the form m̄_v for v ∈ C.

We have thus verified that the points in Spec(R_C) correspond to codewords in C. This was expected given our original definition of the neural ring, and suggests that the relations on F_2[x_1, ..., x_n] imposed by I_C are simply relations ensuring that V(m̄_v) = ∅ for all v ∉ C.

Proof of Lemma 4

Here we find explicit relations for I_C in the case of an arbitrary neural code. Recall that

ρ_v = ∏_{i=1}^n (1 − v_i − x_i) = ∏_{i : v_i = 1} x_i ∏_{j : v_j = 0} (1 − x_j),

and that ρ_v(x) can be thought of as a characteristic function for v, since it satisfies ρ_v(v) = 1 and ρ_v(x) = 0 for any other x ∈ F_2^n. This immediately implies that

V(J_C) = V(⟨{ρ_v | v ∉ C}⟩) = C.

We can now prove Lemma 4.

Proof of Lemma 4. Observe that I_C = I(C) = I(V(J_C)), since V(J_C) = C. On the other hand, the Strong Nullstellensatz in Finite Fields implies

I(V(J_C)) = J_C + ⟨x_1^2 − x_1, ..., x_n^2 − x_n⟩ = J_C + B.
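The fact that V(J_C) = C, which drives this proof, can be confirmed by brute force on small examples. A Python sketch (ours; the example code is hypothetical):

```python
from itertools import product

def rho(v, x):
    """Indicator polynomial rho_v, evaluated at x over F_2."""
    val = 1
    for vi, xi in zip(v, x):
        val *= xi if vi == 1 else (1 - xi)
    return val % 2

def variety_of_neural_ideal(C, n):
    """Brute-force V(J_C): the points of F_2^n at which rho_v vanishes
    for every v outside the code C."""
    points = list(product([0, 1], repeat=n))
    non_code = [v for v in points if v not in C]
    return {x for x in points if all(rho(v, x) == 0 for v in non_code)}

# A hypothetical code on 3 neurons:
C = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)}
assert variety_of_neural_ideal(C, 3) == C   # V(J_C) = C, as in the text
```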

Chapter 4

How to infer RF structure using the neural ideal

We begin by presenting an alternative set of relations that can be used to define the neural ring. These relations enable us to easily interpret elements of I_C as receptive field relationships, clarifying the connection between the neural ring and ideal and the RF structure of the code. We next introduce pseudo-monomials and pseudo-monomial ideals, and use these notions to obtain a minimal description of the neural ideal, which we call the canonical form. Theorem 3 enables us to use the canonical form of J_C to read off a minimal description of the RF structure of the code. Finally, we present an algorithm that takes as input a neural code C and outputs the canonical form CF(J_C), and illustrate its use in a detailed example.
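As a computational companion to this chapter, the RF code C(U) of a cover can be computed directly by recording, for each stimulus, which receptive fields contain it. A Python sketch (ours; the stimulus space and receptive fields below are hypothetical):

```python
def rf_code(U, X):
    """Compute the RF code C(U): the set of response vectors
    (x_1(p), ..., x_n(p)) as the stimulus p ranges over X.
    U is a list of sets of stimuli; X is the stimulus space."""
    return {tuple(1 if p in Ui else 0 for Ui in U) for p in X}

# A hypothetical 1-D stimulus space covered by three overlapping intervals:
X = range(10)
U = [set(range(0, 5)), set(range(3, 8)), set(range(6, 10))]
C = rf_code(U, X)
assert (1, 1, 0) in C        # stimuli 3 and 4 lie in U_1 and U_2 only
assert (1, 0, 1) not in C    # U_1 and U_3 are disjoint in this example
assert (0, 0, 0) not in C    # the U_i cover X, so no all-zeros codeword
```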

4.1 An alternative set of relations for the neural ring

Let C ⊆ {0, 1}^n be a neural code, and recall by Lemma 1 that C can always be realized as a RF code C = C(U), provided we don't require the U_i's to be convex. Let X be a stimulus space and U = {U_i}_{i=1}^n a collection of open sets in X, and consider the RF code C(U). The neural ring corresponding to this code is R_{C(U)}. Observe that the functions f ∈ R_{C(U)} can be evaluated at any point p ∈ X by assigning

x_i(p) = 1 if p ∈ U_i, and x_i(p) = 0 if p ∉ U_i,

each time x_i appears in the polynomial f. The vector (x_1(p), ..., x_n(p)) ∈ {0, 1}^n represents the neural response to the stimulus p. Note that if p ∉ ∪_{i=1}^n U_i, then (x_1(p), ..., x_n(p)) = (0, ..., 0) is the all-zeros codeword. For any σ ⊆ [n], define

U_σ := ∩_{i∈σ} U_i and x_σ := ∏_{i∈σ} x_i.

Our convention is that x_∅ = 1 and U_∅ = X, even in cases where X ≠ ∪_{i=1}^n U_i. Note that for any p ∈ X,

x_σ(p) = 1 if p ∈ U_σ, and x_σ(p) = 0 if p ∉ U_σ.

The relations in I_{C(U)} encode the combinatorial data of U. For example, if U_σ = ∅ then we cannot have x_σ = 1 at any point of the stimulus space X, and must therefore impose the relation x_σ to knock off those points. On the other hand, if U_σ ⊆ U_i ∪ U_j, then x_σ = 1 implies either x_i = 1 or x_j = 1, something that is guaranteed by imposing the relation x_σ(1 − x_i)(1 − x_j). These observations lead us to an alternative

ideal, I_U ⊆ F_2[x_1, ..., x_n], defined directly from the arrangement of receptive fields U = {U_1, ..., U_n}:

I_U := ⟨{x_σ ∏_{i∈τ} (1 − x_i) | U_σ ⊆ ∪_{i∈τ} U_i}⟩.

Note that if τ = ∅, we only get a relation for U_σ = ∅, and this is x_σ. If σ = ∅, then U_σ = X, and we only get relations of this type if X is contained in the union of the U_i's. This is equivalent to the requirement that there is no outside point corresponding to the all-zeros codeword. Perhaps unsurprisingly, it turns out that I_U and I_{C(U)} exactly coincide, so I_U provides an alternative set of relations that can be used to define R_{C(U)}.

Theorem 2. I_U = I_{C(U)}.

Recall that for a given set of receptive fields U = {U_1, ..., U_n} in some stimulus space X, the ideal I_U ⊆ F_2[x_1, ..., x_n] was defined as:

I_U := ⟨{x_σ ∏_{i∈τ} (1 − x_i) | U_σ ⊆ ∪_{i∈τ} U_i}⟩.

The Boolean relations are present in I_U irrespective of U, as it is always true that U_i ⊆ U_i, and this yields the relation x_i(1 − x_i) for each i. By analogy with our definition of J_C, it makes sense to define an ideal J_U which is obtained by stripping away the Boolean relations. This will then be used in the proof of Theorem 2. Note that if σ ∩ τ ≠ ∅, then for any i ∈ σ ∩ τ we have U_σ ⊆ U_i ⊆ ∪_{j∈τ} U_j, and the corresponding relation is a multiple of the Boolean relation x_i(1 − x_i). We can thus restrict attention to relations in I_U that have σ ∩ τ = ∅, so long as we include separately the Boolean relations. These observations are summarized by the following lemma.

Lemma 7. I_U = J_U + ⟨x_1^2 − x_1, ..., x_n^2 − x_n⟩, where

J_U := ⟨{x_σ ∏_{i∈τ} (1 − x_i) | σ ∩ τ = ∅ and U_σ ⊆ ∪_{i∈τ} U_i}⟩.

Proof of Theorem 2. We will show that J_U = J_{C(U)} (and thus that I_U = I_{C(U)}) by showing that each ideal contains the generators of the other.

First, we show that all generating relations of J_{C(U)} are contained in J_U. Recall that the generators of J_{C(U)} are of the form ρ_v = ∏_{i∈supp(v)} x_i ∏_{j∉supp(v)} (1 − x_j) for v ∉ C(U). If ρ_v is a generator of J_{C(U)}, then v ∉ C(U), and this implies (by the definition of C(U)) that U_{supp(v)} ⊆ ∪_{j∉supp(v)} U_j. Taking σ = supp(v) and τ = [n] \ supp(v), we have U_σ ⊆ ∪_{j∈τ} U_j with σ ∩ τ = ∅. This in turn tells us (by the definition of J_U) that x_σ ∏_{j∈τ} (1 − x_j) is a generator of J_U. Since ρ_v = x_σ ∏_{j∈τ} (1 − x_j) for our choice of σ and τ, we conclude that ρ_v ∈ J_U. Hence, J_{C(U)} ⊆ J_U.

Next, we show that all generating relations of J_U are contained in J_{C(U)}. If J_U has generator x_σ ∏_{i∈τ} (1 − x_i), then U_σ ⊆ ∪_{i∈τ} U_i and σ ∩ τ = ∅. This in turn implies that ∩_{i∈σ} U_i \ ∪_{j∈τ} U_j = ∅, and thus (by the definition of C(U)) we have v ∉ C(U) for any v such that supp(v) ⊇ σ and supp(v) ∩ τ = ∅. It follows that J_{C(U)} contains the relation x_{supp(v)} ∏_{j∉supp(v)} (1 − x_j) for any such v. This includes all relations of the form x_σ ∏_{j∈τ} (1 − x_j) ∏_{k∉σ∪τ} P_k, where P_k ∈ {x_k, 1 − x_k}. Taking f = x_σ ∏_{j∈τ} (1 − x_j) in Lemma 8 (below), we can conclude that J_{C(U)} contains x_σ ∏_{j∈τ} (1 − x_j). Hence, J_U ⊆ J_{C(U)}.

Lemma 8. For any f ∈ k[x_1, ..., x_n] and τ ⊆ [n], the ideal ⟨{f ∏_{i∈τ} P_i | P_i ∈ {x_i, 1 − x_i}}⟩ = ⟨f⟩.

Proof. First, denote I_f(τ) := ⟨{f ∏_{i∈τ} P_i | P_i ∈ {x_i, 1 − x_i}}⟩. We wish to prove that I_f(τ) = ⟨f⟩, for any τ ⊆ [n]. Clearly, I_f(τ) ⊆ ⟨f⟩, since every generator of I_f(τ) is a multiple of f. We will prove I_f(τ) ⊇ ⟨f⟩ by induction on |τ|.

If |τ| = 0, then τ = ∅ and I_f(τ) = ⟨f⟩. If |τ| = 1, so that τ = {i} for some i ∈ [n], then I_f(τ) = ⟨f(1 − x_i), fx_i⟩. Note that f(1 − x_i) + fx_i = f, so f ∈ I_f(τ), and thus I_f(τ) ⊇ ⟨f⟩.

Now, assume that for some l ≥ 1 we have I_f(σ) ⊇ ⟨f⟩ for any σ ⊆ [n] with |σ| ≤ l. If l ≥ n, we are done, so we need only show that if l < n, then I_f(τ) ⊇ ⟨f⟩ for any τ of size l + 1. Consider τ ⊆ [n] with |τ| = l + 1, and let j ∈ τ be any element. Define τ' = τ \ {j}, and note that |τ'| = l. By our inductive assumption, I_f(τ') ⊇ ⟨f⟩. We will show that I_f(τ) ⊇ I_f(τ'), and hence I_f(τ) ⊇ ⟨f⟩. Let g = f ∏_{i∈τ'} P_i be any generator of I_f(τ'), and observe that f(1 − x_j) ∏_{i∈τ'} P_i and fx_j ∏_{i∈τ'} P_i are both generators of I_f(τ). It follows that their sum, g, is also in I_f(τ), and hence g ∈ I_f(τ) for any generator g of I_f(τ'). We conclude that I_f(τ) ⊇ I_f(τ'), as desired.

4.2 Interpreting neural ring relations as receptive field relationships

Theorem 2 suggests that we can interpret elements of I_C in terms of relationships between receptive fields.

Lemma 9. Let C ⊆ {0, 1}^n be a neural code, and let U = {U_1, ..., U_n} be any collection of open sets (not necessarily convex) in a stimulus space X such that C =

C(U). Then, for any pair of subsets σ, τ ⊆ [n],

x_σ ∏_{i∈τ} (1 − x_i) ∈ I_C ⟺ U_σ ⊆ ∪_{i∈τ} U_i.

Proof. (⇐) This is a direct consequence of Theorem 2.

(⇒) We distinguish two cases, based on whether or not σ and τ intersect. If x_σ ∏_{i∈τ} (1 − x_i) ∈ I_C and σ ∩ τ ≠ ∅, then x_σ ∏_{i∈τ} (1 − x_i) ∈ B, where B = ⟨{x_i(1 − x_i) | i ∈ [n]}⟩ is the ideal generated by the Boolean relations. Consequently, the relation does not give us any information about the code, and U_σ ⊆ ∪_{i∈τ} U_i follows trivially from the observation that U_i ⊆ U_i for any i ∈ σ ∩ τ.

If, on the other hand, x_σ ∏_{i∈τ} (1 − x_i) ∈ I_C and σ ∩ τ = ∅, then ρ_v ∈ I_C for each v ∈ {0, 1}^n such that supp(v) ⊇ σ and supp(v) ∩ τ = ∅. Since ρ_v(v) = 1, it follows that v ∉ C for any v with supp(v) ⊇ σ and supp(v) ∩ τ = ∅. To see this, recall from the original definition of I_C that for all c ∈ C, f(c) = 0 for any f ∈ I_C; it follows that ρ_v(c) = 0 for all c ∈ C. Because C = C(U), the fact that v ∉ C for any v such that supp(v) ⊇ σ and supp(v) ∩ τ = ∅ implies ∩_{i∈σ} U_i \ ∪_{j∈τ} U_j = ∅. We can thus conclude that U_σ ⊆ ∪_{j∈τ} U_j.

Lemma 9 allows us to extract RF structure from the different types of relations that appear in I_C:

Boolean relations: {x_i(1 − x_i)}. The relation x_i(1 − x_i) corresponds to U_i ⊆ U_i, which does not contain any information about the code C.

Type 1 relations: {x_σ}. The relation x_σ corresponds to U_σ = ∅.

Type 2 relations: {x_σ ∏_{i∈τ} (1 − x_i) | σ, τ ≠ ∅, σ ∩ τ = ∅, U_σ ≠ ∅, and ∪_{i∈τ} U_i ≠ X}. The relation x_σ ∏_{i∈τ} (1 − x_i) corresponds to U_σ ⊆ ∪_{i∈τ} U_i.
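Lemma 9 turns ideal membership into pure evaluation on codewords: x_σ ∏_{i∈τ}(1 − x_i) lies in I_C exactly when it vanishes on every c ∈ C. This makes the relation types easy to enumerate on small examples. A Python sketch (ours; the example code and helper names are illustrative, and this is a naive enumeration, not the canonical form algorithm presented later):

```python
from itertools import combinations

def in_IC(sigma, tau, C):
    """Check whether x_sigma * prod_{i in tau}(1 - x_i) lies in I_C
    by evaluating it on every codeword (it must vanish on all of C)."""
    def val(c):
        out = 1
        for i in sigma:
            out *= c[i]
        for i in tau:
            out *= 1 - c[i]
        return out
    return all(val(c) == 0 for c in C)

n = 3
C = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)}
# Type 1 relations x_sigma: subsets sigma with U_sigma empty.
type1 = [s for r in range(1, n + 1) for s in combinations(range(n), r)
         if in_IC(s, (), C)]
assert (0, 2) in type1          # neurons 1 and 3 never fire together
# A Type 2 relation: x_3(1 - x_2) in I_C, i.e. U_3 is contained in U_2.
assert in_IC((2,), (1,), C)
```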


More information

Category Theory. Categories. Definition.

Category Theory. Categories. Definition. Category Theory Category theory is a general mathematical theory of structures, systems of structures and relationships between systems of structures. It provides a unifying and economic mathematical modeling

More information

4.4 Noetherian Rings

4.4 Noetherian Rings 4.4 Noetherian Rings Recall that a ring A is Noetherian if it satisfies the following three equivalent conditions: (1) Every nonempty set of ideals of A has a maximal element (the maximal condition); (2)

More information

THE CLOSED-POINT ZARISKI TOPOLOGY FOR IRREDUCIBLE REPRESENTATIONS. K. R. Goodearl and E. S. Letzter

THE CLOSED-POINT ZARISKI TOPOLOGY FOR IRREDUCIBLE REPRESENTATIONS. K. R. Goodearl and E. S. Letzter THE CLOSED-POINT ZARISKI TOPOLOGY FOR IRREDUCIBLE REPRESENTATIONS K. R. Goodearl and E. S. Letzter Abstract. In previous work, the second author introduced a topology, for spaces of irreducible representations,

More information

CW-complexes. Stephen A. Mitchell. November 1997

CW-complexes. Stephen A. Mitchell. November 1997 CW-complexes Stephen A. Mitchell November 1997 A CW-complex is first of all a Hausdorff space X equipped with a collection of characteristic maps φ n α : D n X. Here n ranges over the nonnegative integers,

More information

Math 145. Codimension

Math 145. Codimension Math 145. Codimension 1. Main result and some interesting examples In class we have seen that the dimension theory of an affine variety (irreducible!) is linked to the structure of the function field in

More information

Rings and groups. Ya. Sysak

Rings and groups. Ya. Sysak Rings and groups. Ya. Sysak 1 Noetherian rings Let R be a ring. A (right) R -module M is called noetherian if it satisfies the maximum condition for its submodules. In other words, if M 1... M i M i+1...

More information

ON THE STANLEY DEPTH OF SQUAREFREE VERONESE IDEALS

ON THE STANLEY DEPTH OF SQUAREFREE VERONESE IDEALS ON THE STANLEY DEPTH OF SQUAREFREE VERONESE IDEALS MITCHEL T. KELLER, YI-HUANG SHEN, NOAH STREIB, AND STEPHEN J. YOUNG ABSTRACT. Let K be a field and S = K[x 1,...,x n ]. In 1982, Stanley defined what

More information

TROPICAL SCHEME THEORY

TROPICAL SCHEME THEORY TROPICAL SCHEME THEORY 5. Commutative algebra over idempotent semirings II Quotients of semirings When we work with rings, a quotient object is specified by an ideal. When dealing with semirings (and lattices),

More information

MTH 428/528. Introduction to Topology II. Elements of Algebraic Topology. Bernard Badzioch

MTH 428/528. Introduction to Topology II. Elements of Algebraic Topology. Bernard Badzioch MTH 428/528 Introduction to Topology II Elements of Algebraic Topology Bernard Badzioch 2016.12.12 Contents 1. Some Motivation.......................................................... 3 2. Categories

More information

Using persistent homology to reveal hidden information in neural data

Using persistent homology to reveal hidden information in neural data Using persistent homology to reveal hidden information in neural data Department of Mathematical Sciences, Norwegian University of Science and Technology ACAT Final Project Meeting, IST Austria July 9,

More information

arxiv: v1 [math.ac] 25 Jul 2017

arxiv: v1 [math.ac] 25 Jul 2017 Primary Decomposition in Boolean Rings David C. Vella, Skidmore College arxiv:1707.07783v1 [math.ac] 25 Jul 2017 1. Introduction Let R be a commutative ring with identity. The Lasker- Noether theorem on

More information

fy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f))

fy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f)) 1. Basic algebra of vector fields Let V be a finite dimensional vector space over R. Recall that V = {L : V R} is defined to be the set of all linear maps to R. V is isomorphic to V, but there is no canonical

More information

Contents. Index... 15

Contents. Index... 15 Contents Filter Bases and Nets................................................................................ 5 Filter Bases and Ultrafilters: A Brief Overview.........................................................

More information

Tree sets. Reinhard Diestel

Tree sets. Reinhard Diestel 1 Tree sets Reinhard Diestel Abstract We study an abstract notion of tree structure which generalizes treedecompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked

More information

CHAPTER 0 PRELIMINARY MATERIAL. Paul Vojta. University of California, Berkeley. 18 February 1998

CHAPTER 0 PRELIMINARY MATERIAL. Paul Vojta. University of California, Berkeley. 18 February 1998 CHAPTER 0 PRELIMINARY MATERIAL Paul Vojta University of California, Berkeley 18 February 1998 This chapter gives some preliminary material on number theory and algebraic geometry. Section 1 gives basic

More information

THE FUNDAMENTAL GROUP AND CW COMPLEXES

THE FUNDAMENTAL GROUP AND CW COMPLEXES THE FUNDAMENTAL GROUP AND CW COMPLEXES JAE HYUNG SIM Abstract. This paper is a quick introduction to some basic concepts in Algebraic Topology. We start by defining homotopy and delving into the Fundamental

More information

Decomposition Methods for Representations of Quivers appearing in Topological Data Analysis

Decomposition Methods for Representations of Quivers appearing in Topological Data Analysis Decomposition Methods for Representations of Quivers appearing in Topological Data Analysis Erik Lindell elindel@kth.se SA114X Degree Project in Engineering Physics, First Level Supervisor: Wojtek Chacholski

More information

CHAPTER 7. Connectedness

CHAPTER 7. Connectedness CHAPTER 7 Connectedness 7.1. Connected topological spaces Definition 7.1. A topological space (X, T X ) is said to be connected if there is no continuous surjection f : X {0, 1} where the two point set

More information

A Version of the Grothendieck Conjecture for p-adic Local Fields

A Version of the Grothendieck Conjecture for p-adic Local Fields A Version of the Grothendieck Conjecture for p-adic Local Fields by Shinichi MOCHIZUKI* Section 0: Introduction The purpose of this paper is to prove an absolute version of the Grothendieck Conjecture

More information

THE BUCHBERGER RESOLUTION ANDA OLTEANU AND VOLKMAR WELKER

THE BUCHBERGER RESOLUTION ANDA OLTEANU AND VOLKMAR WELKER THE BUCHBERGER RESOLUTION ANDA OLTEANU AND VOLKMAR WELKER arxiv:1409.2041v2 [math.ac] 11 Sep 2014 Abstract. We define the Buchberger resolution, which is a graded free resolution of a monomial ideal in

More information

Math 210B. Artin Rees and completions

Math 210B. Artin Rees and completions Math 210B. Artin Rees and completions 1. Definitions and an example Let A be a ring, I an ideal, and M an A-module. In class we defined the I-adic completion of M to be M = lim M/I n M. We will soon show

More information

1 Categorical Background

1 Categorical Background 1 Categorical Background 1.1 Categories and Functors Definition 1.1.1 A category C is given by a class of objects, often denoted by ob C, and for any two objects A, B of C a proper set of morphisms C(A,

More information

Lattices, closure operators, and Galois connections.

Lattices, closure operators, and Galois connections. 125 Chapter 5. Lattices, closure operators, and Galois connections. 5.1. Semilattices and lattices. Many of the partially ordered sets P we have seen have a further valuable property: that for any two

More information

0.1 Spec of a monoid

0.1 Spec of a monoid These notes were prepared to accompany the first lecture in a seminar on logarithmic geometry. As we shall see in later lectures, logarithmic geometry offers a natural approach to study semistable schemes.

More information

ABSTRACT ALGEBRA: A PRESENTATION ON PROFINITE GROUPS

ABSTRACT ALGEBRA: A PRESENTATION ON PROFINITE GROUPS ABSTRACT ALGEBRA: A PRESENTATION ON PROFINITE GROUPS JULIA PORCINO Our brief discussion of the p-adic integers earlier in the semester intrigued me and lead me to research further into this topic. That

More information

A GLIMPSE OF ALGEBRAIC K-THEORY: Eric M. Friedlander

A GLIMPSE OF ALGEBRAIC K-THEORY: Eric M. Friedlander A GLIMPSE OF ALGEBRAIC K-THEORY: Eric M. Friedlander During the first three days of September, 1997, I had the privilege of giving a series of five lectures at the beginning of the School on Algebraic

More information

A MODEL-THEORETIC PROOF OF HILBERT S NULLSTELLENSATZ

A MODEL-THEORETIC PROOF OF HILBERT S NULLSTELLENSATZ A MODEL-THEORETIC PROOF OF HILBERT S NULLSTELLENSATZ NICOLAS FORD Abstract. The goal of this paper is to present a proof of the Nullstellensatz using tools from a branch of logic called model theory. In

More information

Division Algebras and Parallelizable Spheres, Part II

Division Algebras and Parallelizable Spheres, Part II Division Algebras and Parallelizable Spheres, Part II Seminartalk by Jerome Wettstein April 5, 2018 1 A quick Recap of Part I We are working on proving the following Theorem: Theorem 1.1. The following

More information

MULTIPLICITIES OF MONOMIAL IDEALS

MULTIPLICITIES OF MONOMIAL IDEALS MULTIPLICITIES OF MONOMIAL IDEALS JÜRGEN HERZOG AND HEMA SRINIVASAN Introduction Let S = K[x 1 x n ] be a polynomial ring over a field K with standard grading, I S a graded ideal. The multiplicity of S/I

More information

FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 2

FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 2 FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 2 RAVI VAKIL CONTENTS 1. Where we were 1 2. Yoneda s lemma 2 3. Limits and colimits 6 4. Adjoints 8 First, some bureaucratic details. We will move to 380-F for Monday

More information

Math 248B. Applications of base change for coherent cohomology

Math 248B. Applications of base change for coherent cohomology Math 248B. Applications of base change for coherent cohomology 1. Motivation Recall the following fundamental general theorem, the so-called cohomology and base change theorem: Theorem 1.1 (Grothendieck).

More information

Nonabelian Poincare Duality (Lecture 8)

Nonabelian Poincare Duality (Lecture 8) Nonabelian Poincare Duality (Lecture 8) February 19, 2014 Let M be a compact oriented manifold of dimension n. Then Poincare duality asserts the existence of an isomorphism H (M; A) H n (M; A) for any

More information

Extensions Of S-spaces

Extensions Of S-spaces University of Central Florida Electronic Theses and Dissertations Doctoral Dissertation (Open Access) Extensions Of S-spaces 2013 Bernd Losert University of Central Florida Find similar works at: http://stars.library.ucf.edu/etd

More information

AN INTRODUCTION TO AFFINE SCHEMES

AN INTRODUCTION TO AFFINE SCHEMES AN INTRODUCTION TO AFFINE SCHEMES BROOKE ULLERY Abstract. This paper gives a basic introduction to modern algebraic geometry. The goal of this paper is to present the basic concepts of algebraic geometry,

More information

Algebraic Topology Homework 4 Solutions

Algebraic Topology Homework 4 Solutions Algebraic Topology Homework 4 Solutions Here are a few solutions to some of the trickier problems... Recall: Let X be a topological space, A X a subspace of X. Suppose f, g : X X are maps restricting to

More information

Chapter 3. Rings. The basic commutative rings in mathematics are the integers Z, the. Examples

Chapter 3. Rings. The basic commutative rings in mathematics are the integers Z, the. Examples Chapter 3 Rings Rings are additive abelian groups with a second operation called multiplication. The connection between the two operations is provided by the distributive law. Assuming the results of Chapter

More information

Lebesgue Measure on R n

Lebesgue Measure on R n CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets

More information

121B: ALGEBRAIC TOPOLOGY. Contents. 6. Poincaré Duality

121B: ALGEBRAIC TOPOLOGY. Contents. 6. Poincaré Duality 121B: ALGEBRAIC TOPOLOGY Contents 6. Poincaré Duality 1 6.1. Manifolds 2 6.2. Orientation 3 6.3. Orientation sheaf 9 6.4. Cap product 11 6.5. Proof for good coverings 15 6.6. Direct limit 18 6.7. Proof

More information

MINKOWSKI THEORY AND THE CLASS NUMBER

MINKOWSKI THEORY AND THE CLASS NUMBER MINKOWSKI THEORY AND THE CLASS NUMBER BROOKE ULLERY Abstract. This paper gives a basic introduction to Minkowski Theory and the class group, leading up to a proof that the class number (the order of the

More information

TORIC WEAK FANO VARIETIES ASSOCIATED TO BUILDING SETS

TORIC WEAK FANO VARIETIES ASSOCIATED TO BUILDING SETS TORIC WEAK FANO VARIETIES ASSOCIATED TO BUILDING SETS YUSUKE SUYAMA Abstract. We give a necessary and sufficient condition for the nonsingular projective toric variety associated to a building set to be

More information

MATH 326: RINGS AND MODULES STEFAN GILLE

MATH 326: RINGS AND MODULES STEFAN GILLE MATH 326: RINGS AND MODULES STEFAN GILLE 1 2 STEFAN GILLE 1. Rings We recall first the definition of a group. 1.1. Definition. Let G be a non empty set. The set G is called a group if there is a map called

More information

Math 530 Lecture Notes. Xi Chen

Math 530 Lecture Notes. Xi Chen Math 530 Lecture Notes Xi Chen 632 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca 1991 Mathematics Subject Classification. Primary

More information

Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35

Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35 Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35 1. Let R be a commutative ring with 1 0. (a) Prove that the nilradical of R is equal to the intersection of the prime

More information

A NICE PROOF OF FARKAS LEMMA

A NICE PROOF OF FARKAS LEMMA A NICE PROOF OF FARKAS LEMMA DANIEL VICTOR TAUSK Abstract. The goal of this short note is to present a nice proof of Farkas Lemma which states that if C is the convex cone spanned by a finite set and if

More information

MATHEMATICAL ENGINEERING TECHNICAL REPORTS. Boundary cliques, clique trees and perfect sequences of maximal cliques of a chordal graph

MATHEMATICAL ENGINEERING TECHNICAL REPORTS. Boundary cliques, clique trees and perfect sequences of maximal cliques of a chordal graph MATHEMATICAL ENGINEERING TECHNICAL REPORTS Boundary cliques, clique trees and perfect sequences of maximal cliques of a chordal graph Hisayuki HARA and Akimichi TAKEMURA METR 2006 41 July 2006 DEPARTMENT

More information

Semimatroids and their Tutte polynomials

Semimatroids and their Tutte polynomials Semimatroids and their Tutte polynomials Federico Ardila Abstract We define and study semimatroids, a class of objects which abstracts the dependence properties of an affine hyperplane arrangement. We

More information

A finite universal SAGBI basis for the kernel of a derivation. Osaka Journal of Mathematics. 41(4) P.759-P.792

A finite universal SAGBI basis for the kernel of a derivation. Osaka Journal of Mathematics. 41(4) P.759-P.792 Title Author(s) A finite universal SAGBI basis for the kernel of a derivation Kuroda, Shigeru Citation Osaka Journal of Mathematics. 4(4) P.759-P.792 Issue Date 2004-2 Text Version publisher URL https://doi.org/0.890/838

More information

9. Integral Ring Extensions

9. Integral Ring Extensions 80 Andreas Gathmann 9. Integral ing Extensions In this chapter we want to discuss a concept in commutative algebra that has its original motivation in algebra, but turns out to have surprisingly many applications

More information