Using persistent homology to reveal hidden information in neural data
1 Using persistent homology to reveal hidden information in neural data. Gard Spreemann, Department of Mathematical Sciences, Norwegian University of Science and Technology. ACAT Final Project Meeting, IST Austria, July 9, 2015.
2–3 Collaborators and inspiration
Benjamin Adric Dunn, Kavli Institute for Systems Neuroscience; Magnus Bakke Botnan, NTNU Dept. of Mathematical Sciences; Yasser Roudi, Kavli Institute for Systems Neuroscience; Nils Baas, NTNU Dept. of Mathematical Sciences.
Builds on ideas by C. Giusti and collaborators, and by Y. Dabaghian and collaborators.
4 Place cells
Fundamental question: How do mammals navigate space?
"In 1971, John O'Keefe discovered the first component of this positioning system. He found a type of nerve cell in an area of the brain called the hippocampus that was always activated when a rat was at a certain place in a room. Other nerve cells were activated when the rat was at other places. O'Keefe concluded that these place cells formed a map of the room." (The Nobel Committee, October 2014)
5–8 Place cells idealized
1. Animal enters room (loosely defined environment).
2. Place cells establish a cover by place fields.
3. Animal moves around, place cells fire.
[Figure: spike trains of neurons 1–3 over time; windows where three trains overlap are indicative of a triple intersection of place fields, windows where two overlap of a double intersection.]
9–10 Can we see the environment from neuron recordings?
Definition. Let $\mathcal{U} = \{U_i\}_{i \in I}$ be an open cover. Its nerve $N\mathcal{U}$ is the simplicial complex with simplices $\{i_0, i_1, \ldots, i_k\} \in N\mathcal{U} \iff U_{i_0} \cap U_{i_1} \cap \cdots \cap U_{i_k} \neq \emptyset$.
Theorem (Nerve theorem). Let $X$ be a paracompact space and $\mathcal{U}$ an open cover of $X$ such that every covering set and every non-empty intersection of covering sets is contractible. Then $N\mathcal{U} \simeq X$.
Firing data → intersection data → homotopy type.
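To make the definition concrete, here is a minimal sketch (not part of the talk) that computes the nerve of a finite cover represented as point sets; `nerve` and the toy cover are illustrative names.

```python
from itertools import combinations

def nerve(cover, max_dim=3):
    """Nerve of a finite cover given as a list of sets: a simplex
    {i_0, ..., i_k} is included iff U_{i_0} ∩ ... ∩ U_{i_k} is non-empty."""
    simplices = []
    for k in range(1, max_dim + 2):  # simplices with k vertices
        for idx in combinations(range(len(cover)), k):
            if set.intersection(*(cover[i] for i in idx)):
                simplices.append(idx)
    return simplices

# Three overlapping "place fields" covering {0, ..., 9}:
cover = [set(range(0, 5)), set(range(3, 8)), set(range(6, 10))]
print(nerve(cover))  # [(0,), (1,), (2,), (0, 1), (1, 2)]: a path, no triangle
```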
11–15 Firing data → intersection data → homotopy type
[Figure sequence illustrating the construction on an example environment.]
(The embedding is of course not known.)
16 Reminder: Biology is messy
Real place fields (and these are nice): [Figure.]
Generated using data from Mizuseki, Sirota, Pastalkova, Diba, and Buzsáki: "Multiple single unit recordings from different rat hippocampal and entorhinal regions while the animals were performing multiple behavioral tasks" (CRCNS.org, 2013).
17–19 Pretend biology is nice and clean
[Figure: neurons with place fields covering an environment with an obstruction; neuroscientists observe only the spike trains over time, and a simplicial complex is constructed from cofiring.]
Made without spatial data! But what qualifies as cofiring?
20 Firing data → correlation
Record firing data ("spike trains") from $N$ neurons for a time $T$ ($T \approx 10$ min.).
Bin the data, keeping fire ($+1$) or no fire ($0$ or $-1$) for each time interval $[t, t + \Delta t)$ ($\Delta t \approx 1$ ms). The data thus consist of $S_1, \ldots, S_N \in (\mathbb{Z}/2\mathbb{Z})^{T/\Delta t}$.
For each pair $(i, j)$,
$$\mathrm{xcorr}_n(S_i, S_j) = \frac{1}{\|S_i\|_1 \|S_j\|_1} \langle S_i, S_j[n] \rangle, \qquad \mathrm{xcorr}(S_i, S_j) = \frac{1}{L} \max\left( \sum_{n=0}^{L} \mathrm{xcorr}_n(S_i, S_j),\ \sum_{n=0}^{L} \mathrm{xcorr}_n(S_j, S_i) \right),$$
where $S_j[n]$ denotes $S_j$ delayed by $n$ time bins.
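The transcription of these formulas is garbled, so the reconstruction above is one plausible reading: lag-wise normalized inner products, symmetrized over the direction of the lag. A minimal sketch under exactly those assumptions (`binned`, `xcorr_lag` and `xcorr` are illustrative names):

```python
import numpy as np

def binned(spike_times, T, dt):
    """Bin spike times in [0, T) into a ±1-valued train: +1 fire, -1 silent."""
    s = -np.ones(int(T / dt))
    s[(np.asarray(spike_times) / dt).astype(int)] = 1.0
    return s

def xcorr_lag(si, sj, n):
    """Correlation of si with sj delayed by n bins, normalized as above."""
    return np.dot(si[: len(si) - n], sj[n:]) / (
        np.linalg.norm(si, 1) * np.linalg.norm(sj, 1))

def xcorr(si, sj, L=20):
    """Symmetrized correlation over lags 0..L, as in the reconstruction."""
    fwd = sum(xcorr_lag(si, sj, n) for n in range(L + 1))
    bwd = sum(xcorr_lag(sj, si, n) for n in range(L + 1))
    return max(fwd, bwd) / L
```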
21–24 Order complex
We have a complete graph with weights $C_{i,j} = \mathrm{xcorr}(S_i, S_j)$, so we could consider its flag complex. But neuron firing is a highly non-linear process: we are really observing $C_{i,j} = f(\text{real relationship between neurons } i \text{ and } j)$ for some (highly non-linear) monotone $f$.
Definition. Let $G$ be a complete graph with real edge weights $W_{i,j}$. Sort the edges $W_{e(0)} < W_{e(1)} < \cdots < W_{e(\binom{N}{2} - 1)}$ (breaking ties arbitrarily), and let $\tilde{G}$ have edge weights $\tilde{W}_{e(k)} = k / \binom{N}{2}$. The order complex of $G$ is $\mathrm{OC}(G) = \mathrm{flag}(\tilde{G})$.
As filtered simplicial complexes, $\mathrm{OC}(C) = \mathrm{OC}(f(\text{real neuron relations})) = \mathrm{OC}(\text{real neuron relations})$.
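A sketch of the rank normalization in the definition (`order_complex_weights` is an illustrative name); since any monotone $f$ preserves the sort order of the weights, the output is unchanged under $C \mapsto f(C)$, which is exactly the invariance the slide exploits:

```python
import numpy as np

def order_complex_weights(C):
    """Rank-normalize the edge weights of a complete graph: the k-th
    smallest weight becomes k / binom(N, 2), ties broken arbitrarily."""
    N = C.shape[0]
    iu = np.triu_indices(N, k=1)
    ranks = np.argsort(np.argsort(C[iu]))  # 0, 1, ..., binom(N, 2) - 1
    W = np.zeros_like(C, dtype=float)
    W[iu] = ranks / len(ranks)
    return W + W.T                          # symmetric, zero diagonal
```

For a correlation matrix one would in practice negate $C$ first, so that strongly correlated pairs enter the filtration early; the resulting weights can then be passed to a flag-complex persistence tool such as Ripser.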
25–26 Toy example
[Figures: a toy example and its persistence diagram (birth/death).]
27–28 Real data?
Similar techniques have been applied to real data:
Curto and Itskov (2008): "Cell groups reveal structure of stimulus space".
Giusti, Pastalkova, Curto, and Itskov (2015): "Clique topology reveals intrinsic geometric structure in neural correlations".
29–34 Place cells care about more than space
Neuroscientists suspect (know) that firing is also influenced by:
- Theta phase preference/precession
- Head orientation
- Neuron couplings
- Tactile sensory input
- Olfactory sensory input
- ???
35–36 Uncovering such hidden information
Suppose we have a list of candidate stimuli influencing place cell firing, a recording of the corresponding animal state $x(\Delta t), x(2\Delta t), \ldots, x(T) \in M$, and spike trains $S_1, \ldots, S_N$ recorded from $N$ neurons.
Two questions:
1. Is our list of candidate stimuli complete?
2. If not, what can we say about the topology of the remaining (unknown) stimuli?
37–38 Kinetic Ising model for neuron activity
The Ising model has seen success as a model of neuron firing.
Definition. Let $G$ be the complete graph on vertices $1, \ldots, N$ with edge weights $J_{i,j}$, and let $E_1, \ldots, E_N : \mathbb{R} \to \mathbb{R}$. A set of $\pm 1$-valued random variables $S_i(t)$, with $1 \le i \le N$ and $t = k\Delta t$, is said to obey the kinetic Ising model with couplings $J$ and external fields $E_1, \ldots, E_N$ if it has the conditional probabilities
$$P\left(S_i(t + \Delta t) = s \mid S_1(t) = s_1, \ldots, S_N(t) = s_N\right) = \frac{\exp\left(s \left(E_i(t) + \sum_{j=1}^{N} J_{i,j} s_j\right)\right)}{2 \cosh\left(E_i(t) + \sum_{j=1}^{N} J_{i,j} s_j\right)}.$$
$F_i(t) = E_i(t) + \sum_{j=1}^{N} J_{i,j} s_j$ is called the system's Hamiltonian.
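For intuition, a minimal simulation sketch of these dynamics (my own, not from the talk), using synchronous updates and an externally supplied field function. It relies on the identity $P(S_i = +1) = e^{F}/(2\cosh F) = 1/(1 + e^{-2F})$; `simulate_kinetic_ising` is an illustrative name.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_kinetic_ising(J, E, n_steps):
    """Simulate ±1 spins under the kinetic Ising dynamics defined above.

    J       : (N, N) coupling matrix
    E       : function mapping step t to the (N,) external fields E_i(t)
    n_steps : number of synchronous updates
    """
    N = J.shape[0]
    S = np.empty((n_steps + 1, N))
    S[0] = rng.choice([-1.0, 1.0], size=N)
    for t in range(n_steps):
        F = E(t) + J @ S[t]                      # the Hamiltonian F_i(t)
        p_fire = 1.0 / (1.0 + np.exp(-2.0 * F))  # P(S_i = +1)
        S[t + 1] = np.where(rng.random(N) < p_fire, 1.0, -1.0)
    return S
```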
39–40 $F_i(t) = E_i(t) + \sum_{j=1}^{N} J_{i,j} s_j$
[Figure: firing probability as a sigmoidal function of $F_i(t)$.]
Internal couplings $J_{i,j}$ capture activation and inhibition by neighboring neurons. External fields $E_i$ capture stimuli.
Assume $E_i(t) = E_{\mathrm{spatial},i}(x(t)) + E_{\mathrm{head},i}(x(t)) + E_{\theta,i}(x(t)) + \cdots + \text{???}$, where each term is a sum of Gaussians on factors of some nice space.
41–42 Kinetic Ising model for neuron activity; example
Space and head direction only. Physical space $[0,1]^2$, head configuration $S^1$, so $x(t) \in M = [0,1]^2 \times S^1$.
Then assume $E_i(t) = E_{\mathrm{spatial},i}(x(t)) + E_{\mathrm{head},i}(x(t))$ with
$$E_{\mathrm{spatial},i}(x, y, \alpha) = e^{-(x - p_i)^2 - (y - q_i)^2}, \qquad E_{\mathrm{head},i}(x, y, \alpha) = e^{-(\alpha - a_i)^2}.$$
That is, a spatial place field centered at $(p_i, q_i)$ and a head direction field centered at $a_i$.
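As a sketch, the example fields might be coded as follows. One assumption is mine and not on the slide: the angular difference is wrapped to $(-\pi, \pi]$ since $\alpha$ lives on $S^1$; `make_field` is an illustrative name.

```python
import numpy as np

def make_field(p, q, a):
    """Gaussian tuning for one neuron in the example above: spatial place
    field centered at (p, q), head direction field centered at angle a."""
    def E(x, y, alpha):
        d = np.angle(np.exp(1j * (alpha - a)))  # wrap difference to (-pi, pi]
        return np.exp(-(x - p) ** 2 - (y - q) ** 2) + np.exp(-d ** 2)
    return E
```

Evaluating one such field per neuron along the recorded trajectory yields the field function consumed by the simulation sketch above.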
43–45 Inference
We will just write $E_i(t) = \sum_{k=1}^{K} A_{i,k} \exp\left(-d(x(t), c_k)^2\right)$.
It is easy to see that the log-likelihood of the observed data under the model is concave in $A$, so maximizing it is a convex problem. Then we can:
1. Maximize the log-likelihood using, say, BFGS.
2. Get the contributions of $E_{\mathrm{spatial},i}$, $E_{\mathrm{head},i}$, and any other suspected covariates.
3. Remove said (expected) contributions from the data.
4. Get new spike trains (residuals); we can still compute correlations and persistence.
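Since $F_i(t)$ is linear in $A_{i,\cdot}$ and $\log 2\cosh$ is convex, the negative log-likelihood is convex and each neuron can be fit independently. A minimal sketch with SciPy's BFGS, under assumptions of my own: no couplings $J$, and precomputed Gaussian basis values; `fit_fields` is an illustrative name.

```python
import numpy as np
from scipy.optimize import minimize

def fit_fields(S, Phi):
    """Maximum-likelihood fit of the weights A, one neuron at a time.

    S   : (T, N) array of ±1 states
    Phi : (T, K) array of basis values phi_k(x(t)) = exp(-d(x(t), c_k)^2)
    Couplings are omitted; they would simply add more linear terms to F.
    """
    T, N = S.shape
    K = Phi.shape[1]
    A = np.zeros((N, K))
    for i in range(N):
        s, X = S[1:, i], Phi[:-1]   # predict S_i(t + dt) from x(t)

        def nll(a):                 # negative log-likelihood; convex in a
            F = X @ a
            return -np.sum(s * F - np.logaddexp(F, -F))  # log 2cosh F

        def grad(a):
            return -(X.T @ (s - np.tanh(X @ a)))

        A[i] = minimize(nll, np.zeros(K), jac=grad, method="BFGS").x
    return A
```

With $A$ in hand, the model's expected activity is $\tanh F_i(t)$; subtracting it from the observed spins is one simple way (an assumption here, not necessarily the authors' exact construction) to form the residual trains of step 4.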
46 The pipeline:
1. Pick a phenomenon/stimulus; e.g. spatial position, head direction, $\theta$-phase preference, ???
2. Infer its contribution to the spike trains (from experiment or simulation).
3. Remove the contribution and recompute persistent homology (persistence diagrams and Betti curves).
If the result looks random, there is no more to learn. If it does not, and the stimuli list is exhausted, the remaining persistence carries information about the topology of the hidden stimuli. The loop below sketches this flow.
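Read as pseudocode, the flowchart amounts to the following loop (a schematic only; all callables are placeholders standing in for the steps on the previous slides):

```python
def pipeline(S, stimuli, infer, remove, persistence, looks_random):
    """Schematic of the flowchart: peel off candidate stimuli one by one."""
    diagrams = persistence(S)
    for stim in stimuli:                # e.g. position, head direction, theta
        if looks_random(diagrams):
            return "random", diagrams   # no more to learn
        S = remove(S, infer(S, stim))   # residual spike trains
        diagrams = persistence(S)
    # Candidate list exhausted: whatever persistence remains carries
    # information about the topology of hidden (unmodeled) stimuli.
    return "hidden structure", diagrams
```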
47–54 Synthetic data
Real recordings are now plentiful 1, but:
- Experiments tend not to be topologically minded, and they take a long time to do.
- There is a lot of black magic that goes into postprocessing; the data should say "consult your neuroscientist before trusting".
So, synthesize data: create firing data using the same (kinetic Ising) model as for the inference. Covariates are now limited only by your imagination:
- The spatial environment can be anything.
- Why stop at one head? ("Given an $n$-headed rat exploring a Klein bottle…")
- Theta phase preference/precession.
- Neurons can be coupled at will.
1 E.g. Collaborative Research in Computational Neuroscience data sharing at CRCNS.org.
55–56 Proof of concept 1
Space is an annulus (specifically a square box with an impassable disc in the middle). Spatial tuning is quite strong. Head direction tuning is quite strong.
We pretend that we are neuroscientists who have not heard of head direction tuning; we believe spatial location is the only thing influencing neural activity.
57–65 Proof of concept 1; results
[Persistence diagrams (birth/death, $H_1$) for the original data, for the data with the spatial contribution removed, with the head direction contribution removed, and with both removed; Betti numbers are shown alongside the last diagram.]
66 Proof of concept 1; it's random
[Figure: Betti curves over the filtration for the original data, spatial removed, head dir. removed, spatial + head dir. removed, and shuffled correlations.]
67 Thank you!
The talk is based on: Gard Spreemann, Benjamin Dunn, Magnus Bakke Botnan, and Nils A. Baas, "Using persistent homology to reveal hidden information in neural data", arXiv:1510.06629v1 [q-bio.NC], 22 Oct 2015.