RESEARCH STATEMENT. Nora Youngs, University of Nebraska - Lincoln
1. Introduction

Understanding how the brain encodes information is a major part of neuroscience research. In the field of neural coding, the relationship between stimulus and response, and the creation of a dictionary to relate the two, has been investigated very frequently [1, 2, 3]. Though this question has been studied heavily, it is not the only useful approach. Much can also be learned by investigating the structure of the neural code on its own. This question is important from a coding theory perspective, as it addresses error-correcting properties and decoding efficiency [4]. It has also been shown that the structure of the neural code can reflect the topological structure of the stimulus space [5]. My research has thus far focused on this second question: what can be learned from the intrinsic structure of a neural code?

Given a set of neurons $\{1, \dots, n\}$, we define a neural code $C \subseteq \{0,1\}^n$ as a set of binary patterns of neural activity. Each codeword $c = (c_1, \dots, c_n)$ corresponds to a subset of neurons $\mathrm{supp}(c) \stackrel{\mathrm{def}}{=} \{i \mid c_i = 1\}$ which were recorded to be active simultaneously at some point. This type of code (without specific firing rates or times) is often referred to as a combinatorial neural code [6, 7].

One common way to obtain a neural code is with receptive fields. A receptive field is a map $f_i : X \to \mathbb{R}_{\geq 0}$ from a space of stimuli $X$ to the average firing rate of a neuron $i$ in response to each stimulus. Commonly, the set $U_i \subseteq X$ where $f_i$ takes on positive values is called a receptive field for the $i$th neuron. We obtain a neural code from a collection of receptive fields as follows:

Definition. Let $X$ be a stimulus space and $U = \{U_1, \dots, U_n\}$ a collection of receptive fields with $U_i \subseteq X$. The receptive field (RF) code of $U$ is
$$C(U) \stackrel{\mathrm{def}}{=} \Big\{ c \in \{0,1\}^n \ \Big|\ \Big( \bigcap_{i \in \mathrm{supp}(c)} U_i \Big) \setminus \Big( \bigcup_{j \notin \mathrm{supp}(c)} U_j \Big) \neq \emptyset \Big\}.$$

If $X \subseteq \mathbb{R}^d$, and $U_i$ is convex for all $i$, then we say $C(U)$ is a convex receptive field code.

The particular example of receptive fields that has inspired much of my research is hippocampal place cells [8, 9]. Place cells are neurons whose receptive field is a 2-dimensional spatial region, the place field. These 2D regions overlap to create a set of smaller regions, each of which activates a particular set of neurons and thus has the same corresponding codeword. Receptive field codes are found in many other situations, such as orientation-selective neurons in visual cortex [10, 11], or neurons which code for head direction. In such situations, how can we extract the structure of the receptive fields from the neural code? What can we learn about the dimension of the underlying space? To make progress on these questions and others like them, we turn to algebraic geometry, which provides a natural framework for extracting topological and geometric structures from a space.
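To make the definition of the RF code concrete, the following short Python sketch (illustrative only; it is not the Matlab tooling described later, and the interval endpoints and sampling grid are invented) reads off $C(U)$ for three overlapping open intervals on the line, approximating the stimulus space by a finite grid:

```python
def rf_code(intervals, samples):
    """Receptive field code C(U) for 1-D open-interval receptive fields,
    approximating the stimulus space X by a finite grid of sample points."""
    codewords = set()
    for x in samples:
        # codeword of x: which receptive fields U_i contain the stimulus x
        codewords.add(tuple(1 if lo < x < hi else 0 for (lo, hi) in intervals))
    return codewords

# three overlapping open intervals on the line (hypothetical 1-D "place fields")
U = [(0.0, 2.0), (1.0, 3.0), (2.5, 4.0)]
samples = [i / 100 for i in range(-50, 451)]   # grid covering [-0.5, 4.5]
C = rf_code(U, samples)
```

Since $U_1 \cap U_3 = \emptyset$ here, codewords such as $101$ and $111$ never appear in $C$; combinatorial facts of exactly this kind are what the algebraic approach is designed to extract.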
2. The Neural Ring

Although any code may be realized as a receptive field code in $\mathbb{R}$, there are codes which cannot be realized as convex receptive field codes in $\mathbb{R}^d$ for any $d$. Examples of this occur on as few as 3 neurons. The questions we hope to answer are the following: given a code $C$, is $C$ a convex receptive field code? If so, what is the minimal dimension $d$ for which we can realize this code as a receptive field code in $\mathbb{R}^d$? To work towards answers for these questions, we define an algebraic object called the neural ring. This, and much of the following section, is laid out in detail in our recent paper [12].

2.1. The neural ideal for general codes.

Definition. Given a code $C \subseteq \{0,1\}^n$, we define the corresponding neural ideal $I_C \subseteq \mathbb{F}_2[x_1, \dots, x_n]$ as
$$I_C \stackrel{\mathrm{def}}{=} \{ f \in \mathbb{F}_2[x_1, \dots, x_n] \mid f(c) = 0 \text{ for all } c \in C \}.$$

That is, we define $I_C$ to be the set of polynomials which evaluate to 0 on all codewords: $C$ is exactly the variety $V(I_C)$. We define the neural ring to be $\mathbb{F}_2[x_1, \dots, x_n]/I_C$, which is exactly the ring of functions $C \to \{0,1\}$. This abstract definition is a natural way to store the code, but in order to extract the combinatorial information that defines the code in a simpler way, we must look further. By finding a set of generators for $I_C$, we can understand the structure more concretely. To this end, define the ideal $B$ generated by the so-called Boolean relations $\{x_i(1 - x_i) \mid i = 1, \dots, n\}$. Then, for each $v \in \{0,1\}^n$, define the polynomial $\rho_v = \prod_{v_i = 1} x_i \prod_{v_i = 0} (1 - x_i)$; note $\rho_v$ is essentially an indicator polynomial for $v$, i.e., $\rho_v(c) = 1 \Leftrightarrow c = v$. Define $J_C = \langle \rho_v \mid v \notin C \rangle$. Then, I have proven that we can decompose $I_C$ as follows:

Theorem. With $B$ and $J_C$ defined as above, $I_C = B + J_C$.

Here $B$, as the ideal generated by the Boolean relations, does not depend on $C$ at all: because the code is binary, the polynomial $x_i(1 - x_i)$ will evaluate to 0 on any $v \in \{0,1\}^n$. Thus, all the specific information in the code is contained within $J_C$.
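As a quick numerical sanity check on these definitions (an illustrative Python sketch with an invented toy code, not the tooling from the paper), one can evaluate the indicator polynomials $\rho_v$ over $\mathbb{F}_2$ and confirm that every generator $\rho_v$ with $v \notin C$ vanishes on all of $C$, so that $J_C \subseteq I_C$:

```python
from itertools import product

def rho(v, c):
    """Evaluate the indicator polynomial rho_v at c over F_2:
    rho_v(c) = prod_{v_i = 1} c_i * prod_{v_i = 0} (1 - c_i)."""
    out = 1
    for vi, ci in zip(v, c):
        out *= ci if vi == 1 else (1 - ci)
    return out % 2

n = 3
C = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)}  # invented toy code

# rho_v is the indicator of v: it is 1 at v and 0 elsewhere on {0,1}^n ...
for v in product((0, 1), repeat=n):
    for c in product((0, 1), repeat=n):
        assert rho(v, c) == (1 if v == c else 0)

# ... hence each generator rho_v of J_C (with v not in C) vanishes on all of C
for v in product((0, 1), repeat=n):
    if v not in C:
        assert all(rho(v, c) == 0 for c in C)
```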
Note that we can define this ideal for any binary code $C$, not just a receptive field code. The generators of $J_C$ belong to a class of polynomials we have called pseudo-monomials. This list of pseudo-monomial generators for $J_C$ is often no simpler to deal with than the code itself; indeed, if the code has fewer than $2^{n-1}$ words, there are more generators for $J_C$ than there were codewords in $C$. Therefore, we would like to find a different set of generators of the same ideal. The most efficient choice we have found so far is a set of generators we refer to as the canonical form of $J_C$ (written $CF(J_C)$), made up of the minimal pseudo-monomials of $J_C$. The combinatorial information that defines the code is still contained within these generators, and for receptive field codes they have a nice interpretation.

2.2. The neural ideal for receptive field codes.

Given a set $U$ of receptive fields, we can encode their intersection information in an ideal:

Definition. Given a set of receptive fields $U = \{U_1, \dots, U_n\}$, define the corresponding ideal $I_U$ as follows:
$$I_U \stackrel{\mathrm{def}}{=} \Big\langle x_i(1 - x_i),\ \ \prod_{i \in \sigma} x_i \prod_{j \in \tau} (1 - x_j) \ \Big|\ \bigcap_{i \in \sigma} U_i \subseteq \bigcup_{j \in \tau} U_j \Big\rangle.$$
Since the sets $\sigma, \tau$ need not be disjoint, this seems different from the similarly-generated $I_C$. However, I proved the following useful theorem:

Theorem. If $C = C(U)$ for $U = \{U_1, \dots, U_n\}$ a set of receptive fields, then $I_C = I_U$.

By pulling out the trivial combinatorial information (e.g. that $U_1 \subseteq U_1$) into $B$, I then showed that we can interpret the canonical form to give minimal combinatorial information about the receptive fields:

Theorem. Let $C \subseteq \{0,1\}^n$ be a neural code, and let $U = \{U_1, \dots, U_n\}$ be any collection of open sets (not necessarily convex) in a nonempty stimulus space $X$ such that $C = C(U)$. Writing $x_\sigma = \prod_{i \in \sigma} x_i$ and $U_\sigma = \bigcap_{i \in \sigma} U_i$, the canonical form of $J_C$ is:
$$CF(J_C) = \Big\{ x_\sigma \ \Big|\ \sigma \text{ is minimal w.r.t. } U_\sigma = \emptyset \Big\}$$
$$\cup\ \Big\{ x_\sigma \prod_{i \in \tau} (1 - x_i) \ \Big|\ \sigma, \tau \neq \emptyset,\ \sigma \cap \tau = \emptyset,\ U_\sigma \neq \emptyset,\ \bigcup_{i \in \tau} U_i \neq X, \text{ and } \sigma, \tau \text{ are each minimal w.r.t. } U_\sigma \subseteq \bigcup_{i \in \tau} U_i \Big\}$$
$$\cup\ \Big\{ \prod_{i \in \tau} (1 - x_i) \ \Big|\ \tau \text{ is minimal w.r.t. } X \subseteq \bigcup_{i \in \tau} U_i \Big\}.$$

We call the above three (disjoint) sets of relations comprising $CF(J_C)$ the minimal Type 1 relations, the minimal Type 2 relations, and the minimal Type 3 relations, respectively.

The minimal pseudo-monomial generators that give the canonical form (and thus the minimal combinatorial information) can be found algorithmically from the code itself. An algorithm for this process is published in our paper. I have written Matlab code (since many neuroscience researchers use Matlab for their own data) which can take a neural code as input and give the canonical form for the ideal. The workflow runs as follows:
$$\text{neural code } C \subseteq \{0,1\}^n \ \to\ \text{neural ideal } J_C = \langle \rho_v \mid v \notin C \rangle \ \to\ \text{primary decomposition of } J_C \ \to\ \text{canonical form } CF(J_C) \ \to\ \text{minimal RF structure of } C.$$

As an example of the power of this algorithm, consider the following code on 5 neurons:
$$C = \{00000, 10000, 01000, 00100, 00001, 11000, 10001, 01100, 00110, 00101, 00011, 11100, 00111\}.$$
After applying the algorithm as indicated in the workflow above, we get minimal information from which to draw a corresponding arrangement of receptive fields.
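For codes this small, the minimal pseudo-monomials can also be found by brute force, without the primary decomposition step. The sketch below (my own illustrative Python, not the published Matlab algorithm) enumerates every pseudo-monomial on 5 neurons, keeps those vanishing on all codewords of the example code above, and discards any divisible by a smaller vanishing pseudo-monomial; by the results of the paper, the survivors give the canonical form:

```python
from itertools import product

def pm_value(sigma, tau, c):
    """Evaluate the pseudo-monomial x_sigma * prod_{i in tau}(1 - x_i) at c."""
    if any(c[i] == 0 for i in sigma):
        return 0
    if any(c[i] == 1 for i in tau):
        return 0
    return 1

def canonical_form(C, n):
    """Minimal pseudo-monomials vanishing on every codeword of C,
    by brute force over all disjoint pairs (sigma, tau)."""
    vanishing = []
    for mask in product((0, 1, 2), repeat=n):  # 0: absent, 1: in sigma, 2: in tau
        sigma = frozenset(i for i in range(n) if mask[i] == 1)
        tau = frozenset(i for i in range(n) if mask[i] == 2)
        if not sigma and not tau:
            continue                           # the constant 1 never vanishes
        if all(pm_value(sigma, tau, c) == 0 for c in C):
            vanishing.append((sigma, tau))
    # keep (sigma, tau) only if no strictly smaller vanishing pair divides it
    return [(s, t) for (s, t) in vanishing
            if not any((s2, t2) != (s, t) and s2 <= s and t2 <= t
                       for (s2, t2) in vanishing)]

words = ["00000", "10000", "01000", "00100", "00001", "11000", "10001",
         "01100", "00110", "00101", "00011", "11100", "00111"]
C = [tuple(int(b) for b in w) for w in words]
CF = canonical_form(C, 5)
```

For instance, since no codeword activates neurons 1 and 4 together, the monomial $x_1 x_4$ appears among the minimal Type 1 relations.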
3. Ongoing Work and Future Projects

3.1. Maps Between Codes.

Currently, I am taking this research in two different directions that stem from the approach outlined above. Firstly, I am investigating the relationship between natural maps between codes and the corresponding maps between the algebraic objects associated to each code. Several natural maps on codes exist, namely permutation of the indices, inclusion of one code into another with more codewords, and truncation to a shorter code. However, considering the induced maps on
the neural rings does not lead to natural ring maps. In particular, two neural rings are isomorphic as rings exactly when the two corresponding codes have the same number of codewords, regardless of the number of neurons or any other structure. Therefore, I have been considering the neural rings as $\mathbb{F}_2[x_1, \dots, x_n]/B$-modules instead, which preserves the structure from the code and the inherent correspondence between neuron $i$ and the variable $x_i$. This has led to a much more natural notion of neural ring homomorphism as a module homomorphism.

3.2. Data analysis.

I am also working (along with Aubrey Thompson, an undergraduate student in our research group) to optimize the existing Matlab code and improve our algorithm for finding the canonical form. For example, the algorithm currently requires finding the code's complement, which is computationally expensive, but we are confident this step can be avoided. Once the code is optimized, we intend to apply these methods to real neural data from place cells, to see how the approach performs and what kinds of structures are obtained from actual place cells. The natural maps which motivate my most recent work are especially important in data analysis. When analyzing an actual neural code from data, it is vital to consider possible sources of error, such as missing neurons or activity patterns, and how the structures would look if the errors were corrected.

3.3. Future directions.

We know that some codes can be written as convex receptive field codes in $\mathbb{R}^d$ for some $d$, while others cannot. For those which can, there is a minimal $d$ which makes this possible. Though some broad classes of convex receptive field codes are known (e.g. simplicial complex codes), it is not yet known how to detect, for an arbitrary code, whether or not it is convex. For those codes $C$ that are convex, it is also unknown how to detect the minimum dimension $d$ for which $C$ is convex in $\mathbb{R}^d$.
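As a tiny illustration of the first of the natural code maps discussed above, permuting the neuron indices relabels every codeword coordinate-wise; it preserves the number of codewords and all combinatorial structure, whereas a ring isomorphism of neural rings, as noted above, sees only the number of codewords. (Illustrative Python; the toy code and permutation are invented.)

```python
def permute_code(C, pi):
    """Apply a permutation pi of neuron indices to every codeword:
    neuron i of the new code reads neuron pi[i] of the old one."""
    return {tuple(c[pi[i]] for i in range(len(pi))) for c in C}

C = {(0, 0, 0), (1, 0, 0), (1, 1, 0)}  # invented toy code
pi = (2, 0, 1)                          # new neuron 0 reads old neuron 2, etc.
D = permute_code(C, pi)
# D is a relabeling of C: same number of codewords, same intersection pattern
```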
Some lower bounds are given by classical results such as Helly's theorem [13], but I am interested in improving these results. For those codes which are known to be convex in $\mathbb{R}^2$, particularly place cell codes, we know that the canonical form of the neural ideal gives us all the information necessary to describe the accompanying layout of place fields. However, we do not yet know an algorithmic process by which this picture can be generated, and developing one is an interesting problem.

3.4. Problems for Undergraduate Students.

The mathematical problems motivated by neuroscience research can be attacked from many different angles, so the work is very approachable for undergraduate students with many types of mathematical background. A student with more interest in programming or algorithms could work on the (probably difficult) problem of how to take a convex code and draw the corresponding receptive fields in some automated way. For students who are learning to code, work on these algorithms or similar ones, or on particular classes of examples, could be done in Matlab, Macaulay2, or Sage. For those who want to try working in algebra or even topology, projects are possible researching the algebraic properties of specific classes of examples and their related simplicial complexes.
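As a small, self-contained example of the Helly-type obstructions to convexity mentioned above (an illustrative Python sketch with an invented code; this is a special case, not a general convexity test): for intervals in $\mathbb{R}^1$, Helly's theorem says that if every pair of convex sets intersects, then all of them do. So a code whose codewords witness every pairwise intersection $U_i \cap U_j \neq \emptyset$ but contain no all-ones word cannot be a convex RF code in $\mathbb{R}^1$, forcing the minimal embedding dimension above 1.

```python
def helly_obstruction_1d(C):
    """Detect the d = 1 Helly obstruction: every pair of receptive fields is
    forced to intersect (some codeword turns both neurons on), yet no codeword
    turns all neurons on. Such a code has no convex (interval) realization
    in R^1."""
    n = len(next(iter(C)))
    pairwise = all(any(c[i] and c[j] for c in C)
                   for i in range(n) for j in range(i + 1, n))
    total = any(all(c) for c in C)
    return pairwise and not total

# all pairwise intersections occur, but there is no triple intersection
C = {(1, 1, 0), (0, 1, 1), (1, 0, 1), (1, 0, 0), (0, 0, 0)}
obstructed = helly_obstruction_1d(C)  # True: not convex in R^1
```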
References

[1] E. N. Brown, L. M. Frank, D. Tang, M. C. Quirk, and M. A. Wilson. A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. J Neurosci, 18(18).
[2] S. Deneve, P. E. Latham, and A. Pouget. Reading population codes: a neural implementation of ideal observers. Nat Neurosci, 2(8):740-5.
[3] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nat Neurosci, 9(11):1432-8.
[4] C. Curto, V. Itskov, K. Morrison, Z. Roth, and J. L. Walker. Combinatorial neural codes from a mathematical coding theory perspective. Neural Computation, 25(25).
[5] C. Curto and V. Itskov. Cell groups reveal structure of stimulus space. PLoS Computational Biology, 4(10).
[6] E. Schneidman, J. Puchalla, R. Segev, R. Harris, W. Bialek, and M. Berry II. Synergy from silence in a combinatorial neural code. arXiv:q-bio.NC.
[7] L. Osborne, S. Palmer, S. Lisberger, and W. Bialek. The neural basis for combinatorial coding in a cortical population response. Journal of Neuroscience, 28(50).
[8] J. O'Keefe and J. Dostrovsky. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1).
[9] B. L. McNaughton, F. P. Battaglia, O. Jensen, E. I. Moser, and M. B. Moser. Path integration and the neural basis of the cognitive map. Nat Rev Neurosci, 7(8):663-78.
[10] D. W. Watkins and M. A. Berkley. The orientation selectivity of single neurons in cat striate cortex. Experimental Brain Research, 19.
[11] R. Ben-Yishai, R. L. Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc Natl Acad Sci U S A, 92(9):3844-8.
[12] C. Curto, V. Itskov, A. Veliz-Cuba, and N. Youngs. The neural ring: an algebraic tool for analyzing the intrinsic structure of neural codes. Bulletin of Mathematical Biology, 75(9).
[13] L. Danzer, B. Grünbaum, and V. Klee. Helly's theorem and its relatives. In Proc. Sympos. Pure Math., Vol. VII. Amer. Math. Soc., Providence, R.I.
UMIACS-TR-2007-14 CS-TR-4861 Global Layout Optimization of Olfactory Cortex and of Amygdala of Rat Raul Rodriguez-Esteban and Christopher Cherniak June 2005 Committee for Philosophy and the Sciences, Department
More informationSmith theory. Andrew Putman. Abstract
Smith theory Andrew Putman Abstract We discuss theorems of P. Smith and Floyd connecting the cohomology of a simplicial complex equipped with an action of a finite p-group to the cohomology of its fixed
More informationAN INTRODUCTION TO AFFINE SCHEMES
AN INTRODUCTION TO AFFINE SCHEMES BROOKE ULLERY Abstract. This paper gives a basic introduction to modern algebraic geometry. The goal of this paper is to present the basic concepts of algebraic geometry,
More informationAn Investigation on an Extension of Mullineux Involution
An Investigation on an Extension of Mullineux Involution SPUR Final Paper, Summer 06 Arkadiy Frasinich Mentored by Augustus Lonergan Project Suggested By Roman Bezrukavnikov August 3, 06 Abstract In this
More informationMATH 8253 ALGEBRAIC GEOMETRY WEEK 12
MATH 8253 ALGEBRAIC GEOMETRY WEEK 2 CİHAN BAHRAN 3.2.. Let Y be a Noetherian scheme. Show that any Y -scheme X of finite type is Noetherian. Moreover, if Y is of finite dimension, then so is X. Write f
More informationOn certain Regular Maps with Automorphism group PSL(2, p) Martin Downs
BULLETIN OF THE GREEK MATHEMATICAL SOCIETY Volume 53, 2007 (59 67) On certain Regular Maps with Automorphism group PSL(2, p) Martin Downs Received 18/04/2007 Accepted 03/10/2007 Abstract Let p be any prime
More informationRepresentation Stability and FI Modules. Background: Classical Homological Stability. Examples of Homologically Stable Sequences
This talk was given at the 2012 Topology Student Workshop, held at the Georgia Institute of Technology from June 11 15, 2012. Representation Stability and FI Modules Exposition on work by Church Ellenberg
More informationCHARACTERIZATION OF NONLINEAR NEURON RESPONSES
CHARACTERIZATION OF NONLINEAR NEURON RESPONSES Matt Whiteway whit8022@umd.edu Dr. Daniel A. Butts dab@umd.edu Neuroscience and Cognitive Science (NACS) Applied Mathematics and Scientific Computation (AMSC)
More information8. Prime Factorization and Primary Decompositions
70 Andreas Gathmann 8. Prime Factorization and Primary Decompositions 13 When it comes to actual computations, Euclidean domains (or more generally principal ideal domains) are probably the nicest rings
More informationLongest element of a finite Coxeter group
Longest element of a finite Coxeter group September 10, 2015 Here we draw together some well-known properties of the (unique) longest element w in a finite Coxeter group W, with reference to theorems and
More informationSPATIOTEMPORAL ANALYSIS OF SYNCHRONIZATION OF NEURAL ENSEMBLES FOR SPATIAL DISCRIMINATIONS IN CAT STRIATE CORTEX
SPATIOTEMPORAL ANALYSIS OF SYNCHRONIZATION OF NEURAL ENSEMBLES FOR SPATIAL DISCRIMINATIONS IN CAT STRIATE CORTEX Jason M Samonds Department of Biomedical Engineering Vanderbilt University The auto-content
More informationPermutation groups/1. 1 Automorphism groups, permutation groups, abstract
Permutation groups Whatever you have to do with a structure-endowed entity Σ try to determine its group of automorphisms... You can expect to gain a deep insight into the constitution of Σ in this way.
More informationTo cognize is to categorize revisited: Category Theory is where Mathematics meets Biology
To cognize is to categorize revisited: Category Theory is where Mathematics meets Biology Autonomous Systems Laboratory Technical University of Madrid Index 1. Category Theory 2. The structure-function
More informationDepth of some square free monomial ideals
Bull. Math. Soc. Sci. Math. Roumanie Tome 56(104) No. 1, 2013, 117 124 Depth of some square free monomial ideals by Dorin Popescu and Andrei Zarojanu Dedicated to the memory of Nicolae Popescu (1937-2010)
More information(iv) Whitney s condition B. Suppose S β S α. If two sequences (a k ) S α and (b k ) S β both converge to the same x S β then lim.
0.1. Stratified spaces. References are [7], [6], [3]. Singular spaces are naturally associated to many important mathematical objects (for example in representation theory). We are essentially interested
More informationRate- and Phase-coded Autoassociative Memory
Rate- and Phase-coded Autoassociative Memory Máté Lengyel Peter Dayan Gatsby Computational Neuroscience Unit, University College London 7 Queen Square, London WCN 3AR, United Kingdom {lmate,dayan}@gatsby.ucl.ac.uk
More informationSampling-based probabilistic inference through neural and synaptic dynamics
Sampling-based probabilistic inference through neural and synaptic dynamics Wolfgang Maass for Robert Legenstein Institute for Theoretical Computer Science Graz University of Technology, Austria Institute
More informationANNIHILATOR IDEALS IN ALMOST SEMILATTICE
BULLETIN OF THE INTERNATIONAL MATHEMATICAL VIRTUAL INSTITUTE ISSN (p) 2303-4874, ISSN (o) 2303-4955 www.imvibl.org /JOURNALS / BULLETIN Vol. 7(2017), 339-352 DOI: 10.7251/BIMVI1702339R Former BULLETIN
More informationSPARSE signal representations have gained popularity in recent
6958 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 10, OCTOBER 2011 Blind Compressed Sensing Sivan Gleichman and Yonina C. Eldar, Senior Member, IEEE Abstract The fundamental principle underlying
More informationA Catalog of Elemental Neural Circuits
A Catalog of Elemental Neural Circuits Jacob Feldman Dept. of Psychology, Center for Cognitive Science, Rutgers University Abstract It is well-known that abstract neurons of the McCulloch-Pitts type compute
More informationToric Ideals, an Introduction
The 20th National School on Algebra: DISCRETE INVARIANTS IN COMMUTATIVE ALGEBRA AND IN ALGEBRAIC GEOMETRY Mangalia, Romania, September 2-8, 2012 Hara Charalambous Department of Mathematics Aristotle University
More informationMonomial equivariant embeddings of quasitoric manifolds and the problem of existence of invariant almost complex structures.
Monomial equivariant embeddings of quasitoric manifolds and the problem of existence of invariant almost complex structures. Andrey Kustarev joint work with V. M. Buchstaber, Steklov Mathematical Institute
More informationIntersections of Leray Complexes and Regularity of Monomial Ideals
Intersections of Leray Complexes and Regularity of Monomial Ideals Gil Kalai Roy Meshulam Abstract For a simplicial complex X and a field K, let h i X) = dim H i X; K). It is shown that if X,Y are complexes
More informationAutoassociative Memory Retrieval and Spontaneous Activity Bumps in Small-World Networks of Integrate-and-Fire Neurons
Autoassociative Memory Retrieval and Spontaneous Activity Bumps in Small-World Networks of Integrate-and-Fire Neurons Anastasia Anishchenko Department of Physics and Brain Science Program Brown University,
More informationDynamical Systems and Computational Mechanisms
Dynamical Systems and Computational Mechanisms Belgium Workshop July 1-2, 2002 Roger Brockett Engineering and Applied Sciences Harvard University Issues and Approach 1. Developments in fields outside electrical
More informationREPRESENTATIONS OF S n AND GL(n, C)
REPRESENTATIONS OF S n AND GL(n, C) SEAN MCAFEE 1 outline For a given finite group G, we have that the number of irreducible representations of G is equal to the number of conjugacy classes of G Although
More informationDynamic Analysis of Neural Encoding by Point Process Adaptive Filtering
LETTER Communicated by Emanuel Todorov Dynamic Analysis of Neural Encoding by Point Process Adaptive Filtering Uri T. Eden tzvi@neurostat.mgh.harvard.edu Neuroscience Statistics Research Laboratory, Department
More information