Using persistent homology to reveal hidden information in neural data


1 Using persistent homology to reveal hidden information in neural data. Department of Mathematical Sciences, Norwegian University of Science and Technology. ACAT Final Project Meeting, IST Austria, July 9, 2015.

3 Collaborators and inspiration: Benjamin Adric Dunn (Kavli Institute for Systems Neuroscience); Magnus Bakke Botnan (NTNU Dept. of Mathematical Sciences); Yasser Roudi (Kavli Institute for Systems Neuroscience); Nils Baas (NTNU Dept. of Mathematical Sciences). Builds on ideas by C. Giusti and collaborators, and by Y. Dabaghian and collaborators.

4 Place cells. Fundamental question: How do mammals navigate space? "In 1971, John O'Keefe discovered the first component of this positioning system. He found a type of nerve cell in an area of the brain called the hippocampus that was always activated when a rat was at a certain place in a room. Other nerve cells were activated when the rat was at other places. O'Keefe concluded that these 'place cells' formed a map of the room." — The Nobel Committee, October 2014

8 Place cells idealized. 1 Animal enters room (loosely defined environment). 2 Place cells establish a cover by place fields. 3 Animal moves around, place cells fire. [Spike raster for three neurons over time: simultaneous firing of three neurons is indicative of a triple intersection of place fields; of two neurons, a double intersection.]

10 Can we see the environment from neuron recordings?
Definition. Let U = {U_i}_{i∈I} be an open cover. Its nerve NU is the simplicial complex with simplices {i_0, i_1, …, i_k} ∈ NU ⟺ U_{i_0} ∩ U_{i_1} ∩ … ∩ U_{i_k} ≠ ∅.
Theorem (Nerve theorem). Let X be a paracompact space and U an open cover of X such that every covering set and every non-empty intersection of covering sets is contractible. Then NU ≃ X.
Firing data → intersection data → homotopy type.
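The nerve construction above can be sketched in a few lines. A minimal sketch, assuming each covering set is represented by the finite set of sample points it contains (the function name `nerve` and the toy cover `U` are illustrative, not from the talk):

```python
from itertools import combinations

def nerve(cover, max_dim=2):
    """Nerve of a cover, with each covering set given as a set of
    sample points. A simplex {i0, ..., ik} is included exactly when
    the corresponding sets have a common point."""
    simplices = []
    n = len(cover)
    for k in range(1, max_dim + 2):          # simplices on k vertices
        for idx in combinations(range(n), k):
            common = set.intersection(*(cover[i] for i in idx))
            if common:
                simplices.append(idx)
    return simplices

# Toy cover by three pairwise-overlapping "place fields" with no
# triple overlap: the nerve is a hollow triangle, i.e. a circle.
U = [{(0, 0), (0, 1)}, {(0, 1), (1, 1)}, {(1, 1), (0, 0)}]
print(nerve(U))  # → [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
```

The hollow-triangle output is exactly the "hole in the environment seen from cofiring" story of the following slides.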

15 Firing data → intersection data → homotopy type. (The embedding is of course not known.)

16 Reminder: biology is messy. Real place fields (and these are nice ones): [figure]. Generated using data from Mizuseki, Sirota, Pastalkova, Diba, and Buzsáki, "Multiple single unit recordings from different rat hippocampal and entorhinal regions while the animals were performing multiple behavioral tasks" (CRCNS.org, 2013).

17 Pretend biology is nice and clean. [Schematic: neurons firing over time in an environment with an obstruction; neuroscientists observe only the spike trains, and construct a simplicial complex from cofiring.] Made without spatial data! But what qualifies as cofiring?
20 Firing data → correlation.
Record firing data ("spike trains") from N neurons for a time T (T ≈ 10 min).
Bin the data, keeping fire (+1) or no fire (0 or −1) for each time interval [t, t + Δt) (Δt ≈ 1 ms).
The data thus consists of S_1, …, S_N ∈ (Z/2Z)^{T/Δt}.
For each pair (i, j),
xcorr_τ(S_i, S_j) = (1 / (‖S_i‖_1 ‖S_j‖_1)) Σ_n S_i[n] S_j[n + τ],
xcorr(S_i, S_j) = (1/L) max( Σ_{τ=0}^{L−1} xcorr_τ(S_i, S_j), Σ_{τ=0}^{L−1} xcorr_τ(S_j, S_i) ).
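The correlation measure above can be sketched directly. This is one consistent reading of the slide's (partially garbled) formula, assuming 0/1-valued binned trains so that ‖S‖_1 counts spikes; the function names are illustrative:

```python
def xcorr_tau(si, sj, tau):
    """Normalized cross-correlation of two 0/1 binned spike trains at
    lag tau: (1 / (||si||_1 ||sj||_1)) * sum_n si[n] * sj[n + tau]."""
    norm = sum(si) * sum(sj)
    if norm == 0:
        return 0.0
    return sum(si[n] * sj[n + tau] for n in range(len(si) - tau)) / norm

def xcorr(si, sj, L=5):
    """Average over lags 0..L-1, keeping the stronger direction, since
    neuron j may consistently fire shortly after neuron i (or vice versa)."""
    forward = sum(xcorr_tau(si, sj, t) for t in range(L))
    backward = sum(xcorr_tau(sj, si, t) for t in range(L))
    return max(forward, backward) / L

# Neuron j tends to fire one bin after neuron i:
si = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
sj = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
print(xcorr(si, sj, L=3))
```

The forward direction dominates here because all three of j's spikes follow i's at lag 1.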

22 Order complex. We have a complete graph with weights C_{i,j} = xcorr(S_i, S_j), so we could consider its flag complex. But neuron firing is a highly non-linear process. We're really observing C_{i,j} = f(real relationship between neurons i and j) for some (highly non-linear) monotone f.

24 Order complex.
Definition. Let G be a complete graph with real edge weights W_{i,j}. Sort the edges W_{e(0)} < W_{e(1)} < … < W_{e(C(N,2)−1)} (breaking ties arbitrarily), and let G̃ have edge weights W̃_{e(k)} = k / C(N,2). The order complex of G is OC(G) = flag(G̃).
As filtered simplicial complexes, OC(C) = OC(f(real neuron relations)) = OC(real neuron relations).
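The rank-normalization step in the definition is easy to make concrete. A minimal sketch of just the edge-reweighting (the flag-complex construction itself would come from a TDA library and is not shown; the function name is illustrative):

```python
def order_complex_weights(W):
    """Replace each edge weight by its rank k / C(N,2), as in the
    order-complex definition. The result depends only on the ordering
    of the weights, so it is invariant under any monotone f."""
    edges = sorted(W, key=W.get)      # ties broken arbitrarily
    m = len(edges)
    return {e: k / m for k, e in enumerate(edges)}

# Any monotone distortion f of the weights gives the same filtration:
W = {(0, 1): 0.9, (0, 2): 0.4, (1, 2): 0.7}
f_W = {e: w ** 3 for e, w in W.items()}   # highly non-linear, monotone
print(order_complex_weights(W) == order_complex_weights(f_W))  # → True
```

This is precisely why OC(C) = OC(f(real neuron relations)) = OC(real neuron relations) on the slide: only the order of the correlations survives, and that order is what f preserves.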

25 Toy example

26 Toy example. [Persistence diagram: birth vs. death.]

28 Real data? Similar techniques have been applied to real data: Curto and Itskov (2008), "Cell groups reveal structure of stimulus space"; Curto, Giusti, Itskov and Pastalkova (2015), "Clique topology reveals intrinsic geometric structure in neural correlations".

34 Place cells care about more than space. Neuroscientists suspect (know) that firing is influenced also by:
Theta phase preference/precession
Head orientation
Neuron couplings
Tactile sensory input
Olfactory sensory input
???

36 Uncovering such hidden information. Suppose we have a list of candidate stimuli influencing place cell firing, a recording of the corresponding animal state x(Δt), x(2Δt), …, x(T) ∈ M, and spike trains S_1, …, S_N recorded from N neurons. Two questions: 1 Is our list of candidate stimuli complete? 2 If not, what can we say about the topology of the remaining (unknown) stimuli?

38 Kinetic Ising model for neuron activity. The Ising model has seen success as a model of neuron firing.
Definition. Let G be the complete graph on vertices 1, …, N with edge weights J_{i,j}, and let E_1, …, E_N : R → R. A set of ±1-valued random variables S_i(t), with 1 ≤ i ≤ N and t = kΔt, is said to obey the kinetic Ising model with couplings J and external fields E_1, …, E_N if they have the conditional probabilities
P(S_i(t + Δt) = s | S_1(t) = s_1, …, S_N(t) = s_N) = exp(s (E_i(t) + Σ_{j=1}^N J_{i,j} s_j)) / (2 cosh(E_i(t) + Σ_{j=1}^N J_{i,j} s_j)).
F_i(t) = E_i(t) + Σ_{j=1}^N J_{i,j} s_j is called the system's Hamiltonian.
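Sampling from this model is straightforward, since the conditional probability for s = +1 reduces to a logistic function of F_i. A minimal sketch with synchronous updates; the function name, the field callables, and the two-neuron setup are illustrative assumptions, not from the talk:

```python
import math
import random

def simulate_kinetic_ising(J, fields, steps, seed=0):
    """Sample ±1 spike trains from the kinetic Ising model:
    P(S_i(t+Δt) = +1 | state s) = exp(F_i) / (2 cosh(F_i)),
    where F_i(t) = fields[i](t) + sum_j J[i][j] * s_j.
    All neurons update synchronously from the state at time t."""
    rng = random.Random(seed)
    n = len(J)
    s = [1] * n                       # arbitrary initial state
    trains = [[] for _ in range(n)]
    for t in range(steps):
        F = [fields[i](t) + sum(J[i][j] * s[j] for j in range(n))
             for i in range(n)]
        s = [1 if rng.random() < math.exp(F[i]) / (2 * math.cosh(F[i]))
             else -1 for i in range(n)]
        for i in range(n):
            trains[i].append(s[i])
    return trains

# Neuron 0 is driven by a constant external field; neuron 1 is
# inhibited by its own field but excited by neuron 0 via J[1][0].
J = [[0.0, 0.0], [1.5, 0.0]]
fields = [lambda t: 1.0, lambda t: -1.0]
trains = simulate_kinetic_ising(J, fields, steps=2000)
rate0 = trains[0].count(1) / 2000
```

With field 1.0 and no incoming couplings, neuron 0 fires with probability e/(2 cosh 1) ≈ 0.88 per bin, so `rate0` should be well above one half.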

40 F_i(t) = E_i(t) + Σ_{j=1}^N J_{i,j} s_j. [Figure: firing probability as a sigmoid in F_i(t).] Internal couplings J_{i,j} capture activation and inhibition by neighboring neurons. External fields E_i capture stimuli. Assume E_i(t) = E_spatial,i(x(t)) + E_head,i(x(t)) + E_θ,i(x(t)) + … + ???, where each term is a sum of Gaussians on factors of some nice space M.

42 Kinetic Ising model for neuron activity; example. Space and head direction only. Physical space [0, 1]^2, head configuration S^1, so x ∈ M = [0, 1]^2 × S^1. Then assume E_i(t) = E_spatial,i(x(t)) + E_head,i(x(t)), with
E_spatial,i(x, y, α) = e^{−(x − p_i)^2 − (y − q_i)^2},
E_head,i(x, y, α) = e^{−(α − a_i)^2}.
I.e. a spatial place field at (p_i, q_i) and a head-direction place field at a_i.
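These two field terms can be written out directly. A minimal sketch; the slide writes the head-direction term as a plain Gaussian in α, whereas here the distance is taken on the circle so that α = 0 and α = 2π agree (an assumption added to respect the S^1 factor), and all names are illustrative:

```python
import math

def e_spatial(x, y, alpha, p, q):
    """Spatial place field centered at (p, q) in [0, 1]^2."""
    return math.exp(-(x - p) ** 2 - (y - q) ** 2)

def e_head(x, y, alpha, a):
    """Head-direction field centered at angle a on S^1; the distance
    wraps around the circle (assumption beyond the slide's formula)."""
    d = min(abs(alpha - a), 2 * math.pi - abs(alpha - a))
    return math.exp(-d ** 2)

# Neuron i's field peaks when the animal sits at the place-field
# center facing the preferred direction: each term equals 1 there.
E = e_spatial(0.3, 0.7, 0.0, p=0.3, q=0.7) + e_head(0.3, 0.7, 0.0, a=0.0)
```

Summing such terms over neurons and covariates gives the external field E_i(t) fed into the Hamiltonian F_i(t) of the previous slides.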

45 Inference. We will just write E_i(t) = Σ_{k=1}^K A_{i,k} exp(−d(x(t), c_k)^2). It is easy to see that the log-likelihood of the observed data under the model is concave in A. Then we can:
1 Maximize the (log-)likelihood using, say, BFGS.
2 Get the contribution of E_spatial,i, E_head,i and any other suspected covariates.
3 Remove said (expected) contribution from the data.
4 Get new spike trains (residuals); we can still compute correlations and persistence.
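Step 1 can be sketched for a single neuron. A minimal sketch under stated simplifications: couplings J are dropped, the Gaussian bumps exp(−d(x(t), c_k)^2) are precomputed as covariate time series `phi`, and plain gradient ascent stands in for BFGS; the function name and the synthetic check are illustrative:

```python
import math
import random

def fit_amplitudes(phi, spikes, lr=0.05, iters=500):
    """Maximize the log-likelihood of one neuron's ±1 spike train under
    F(t) = sum_k A_k * phi[k][t] (couplings dropped for the sketch).
    The gradient of the mean log-likelihood in A_k is
    mean_t (s_t - tanh F(t)) * phi_k(t), and the objective is concave."""
    K, T = len(phi), len(spikes)
    A = [0.0] * K
    for _ in range(iters):
        F = [sum(A[k] * phi[k][t] for k in range(K)) for t in range(T)]
        grad = [sum((spikes[t] - math.tanh(F[t])) * phi[k][t]
                    for t in range(T)) / T for k in range(K)]
        A = [A[k] + lr * grad[k] for k in range(K)]
    return A

# Synthetic check: generate spikes from a known amplitude on one
# Gaussian covariate, then recover it approximately by fitting.
rng = random.Random(1)
phi = [[math.exp(-((t / 200) - 0.5) ** 2) for t in range(200)]]
true_A = 1.2
spikes = [1 if rng.random() < math.exp(true_A * p) /
          (2 * math.cosh(true_A * p)) else -1 for p in phi[0]]
A_hat = fit_amplitudes(phi, spikes)
```

Subtracting the fitted contribution Σ_k Â_k φ_k(t) from the data is what produces the residual spike trains of steps 3 and 4.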

46 The pipeline. [Flowchart:] Experiment or simulation → spike trains → persistent homology → persistence diagrams and Betti curves → random? If yes: no more to learn. If no: stimuli list exhausted? If yes: information about the topology of hidden stimuli. If no: 1 pick a phenomenon/stimulus (e.g. spatial position, head direction, θ-phase preference, ???); 2 infer its contribution to the spike trains; 3 remove that contribution; then recompute persistence.

54 Synthetic data. Real recordings are now plentiful¹, but:
Experiments tend not to be topologically minded, and they take a long time to do.
A lot of black magic goes into postprocessing; the data should say "consult your neuroscientist before trusting".
So, synthesize data: create firing data using the same (kinetic Ising) model as for the inference. Covariates are now limited only by your imagination:
The spatial environment can be anything.
Why stop at one head? ("Given an n-headed rat exploring a Klein bottle…")
Theta phase preference/precession.
Neurons can be coupled at will.
¹ E.g. Collaborative Research in Computational Neuroscience data sharing at CRCNS.org.

56 Proof of concept 1. Space is an annulus (specifically, a square box with an impassable disc in the middle). Spatial tuning quite strong; head direction tuning quite strong. We pretend that we are neuroscientists who have not heard of head direction tuning; we believe spatial location is the only thing influencing neural activity.

57 Proof of concept 1; original data. [Persistence diagram: birth vs. death.]

59 Proof of concept 1; space removed. [Persistence diagram: birth vs. death.]

61 Proof of concept 1; head removed. [Persistence diagram: birth vs. death.]

63 Proof of concept 1; space and head removed. [Persistence diagram: birth vs. death; Betti curve.]
66 Proof of concept 1; it's random. [Betti curves (Betti number vs. filtration): original; spatial removed; head dir. removed; spatial + head dir. removed; shuffled correlations.]

67 Thank you!

Preprint: Gard Spreemann, Benjamin Dunn, Magnus Bakke Botnan, and Nils A. Baas, "Using persistent homology to reveal hidden information in neural data", arXiv:1510.06629v1 [q-bio.NC], 22 Oct 2015.


More information

Probabilistic Models in Theoretical Neuroscience

Probabilistic Models in Theoretical Neuroscience Probabilistic Models in Theoretical Neuroscience visible unit Boltzmann machine semi-restricted Boltzmann machine restricted Boltzmann machine hidden unit Neural models of probabilistic sampling: introduction

More information

10/8/2014. The Multivariate Gaussian Distribution. Time-Series Plot of Tetrode Recordings. Recordings. Tetrode

10/8/2014. The Multivariate Gaussian Distribution. Time-Series Plot of Tetrode Recordings. Recordings. Tetrode 10/8/014 9.07 INTRODUCTION TO STATISTICS FOR BRAIN AND COGNITIVE SCIENCES Lecture 4 Emery N. Brown The Multivariate Gaussian Distribution Case : Probability Model for Spike Sorting The data are tetrode

More information

Comparing integrate-and-fire models estimated using intracellular and extracellular data 1

Comparing integrate-and-fire models estimated using intracellular and extracellular data 1 Comparing integrate-and-fire models estimated using intracellular and extracellular data 1 Liam Paninski a,b,2 Jonathan Pillow b Eero Simoncelli b a Gatsby Computational Neuroscience Unit, University College

More information

Statistical models for neural encoding

Statistical models for neural encoding Statistical models for neural encoding Part 1: discrete-time models Liam Paninski Gatsby Computational Neuroscience Unit University College London http://www.gatsby.ucl.ac.uk/ liam liam@gatsby.ucl.ac.uk

More information

Dual Nature Hidden Layers Neural Networks A Novel Paradigm of Neural Network Architecture

Dual Nature Hidden Layers Neural Networks A Novel Paradigm of Neural Network Architecture Dual Nature Hidden Layers Neural Networks A Novel Paradigm of Neural Network Architecture S.Karthikeyan 1, Ravi Prakash 2, B.B.Paul 3 1 Lecturer, Department of Computer Science, Faculty of Science, Banaras

More information

Chapter 9: The Perceptron

Chapter 9: The Perceptron Chapter 9: The Perceptron 9.1 INTRODUCTION At this point in the book, we have completed all of the exercises that we are going to do with the James program. These exercises have shown that distributed

More information

An Introductory Course in Computational Neuroscience

An Introductory Course in Computational Neuroscience An Introductory Course in Computational Neuroscience Contents Series Foreword Acknowledgments Preface 1 Preliminary Material 1.1. Introduction 1.1.1 The Cell, the Circuit, and the Brain 1.1.2 Physics of

More information

k k k 1 Lecture 9: Applying Backpropagation Lecture 9: Applying Backpropagation 3 Lecture 9: Applying Backpropagation

k k k 1 Lecture 9: Applying Backpropagation Lecture 9: Applying Backpropagation 3 Lecture 9: Applying Backpropagation K-Class Classification Problem Let us denote the -th class by C, with n exemplars or training samples, forming the sets T for = 1,, K: {( x, ) p = 1 n } T = d,..., p p The complete training set is T =

More information

A Kernel on Persistence Diagrams for Machine Learning

A Kernel on Persistence Diagrams for Machine Learning A Kernel on Persistence Diagrams for Machine Learning Jan Reininghaus 1 Stefan Huber 1 Roland Kwitt 2 Ulrich Bauer 1 1 Institute of Science and Technology Austria 2 FB Computer Science Universität Salzburg,

More information

Causal modeling of fmri: temporal precedence and spatial exploration

Causal modeling of fmri: temporal precedence and spatial exploration Causal modeling of fmri: temporal precedence and spatial exploration Alard Roebroeck Maastricht Brain Imaging Center (MBIC) Faculty of Psychology & Neuroscience Maastricht University Intro: What is Brain

More information

What is the neural code? Sekuler lab, Brandeis

What is the neural code? Sekuler lab, Brandeis What is the neural code? Sekuler lab, Brandeis What is the neural code? What is the neural code? Alan Litke, UCSD What is the neural code? What is the neural code? What is the neural code? Encoding: how

More information

Random topology and geometry

Random topology and geometry Random topology and geometry Matthew Kahle Ohio State University AMS Short Course Introduction I predict a new subject of statistical topology. Rather than count the number of holes, Betti numbers, etc.,

More information

The Fundamental Group

The Fundamental Group The Fundamental Group Renzo s math 472 This worksheet is designed to accompany our lectures on the fundamental group, collecting relevant definitions and main ideas. 1 Homotopy Intuition: Homotopy formalizes

More information

Instructor (Brad Osgood)

Instructor (Brad Osgood) TheFourierTransformAndItsApplications-Lecture26 Instructor (Brad Osgood): Relax, but no, no, no, the TV is on. It's time to hit the road. Time to rock and roll. We're going to now turn to our last topic

More information

Transductive neural decoding for unsorted neuronal spikes of rat hippocampus

Transductive neural decoding for unsorted neuronal spikes of rat hippocampus Transductive neural decoding for unsorted neuronal spikes of rat hippocampus The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation

More information

CSC2515 Winter 2015 Introduction to Machine Learning. Lecture 2: Linear regression

CSC2515 Winter 2015 Introduction to Machine Learning. Lecture 2: Linear regression CSC2515 Winter 2015 Introduction to Machine Learning Lecture 2: Linear regression All lecture slides will be available as.pdf on the course website: http://www.cs.toronto.edu/~urtasun/courses/csc2515/csc2515_winter15.html

More information

Today. Statistical Learning. Coin Flip. Coin Flip. Experiment 1: Heads. Experiment 1: Heads. Which coin will I use? Which coin will I use?

Today. Statistical Learning. Coin Flip. Coin Flip. Experiment 1: Heads. Experiment 1: Heads. Which coin will I use? Which coin will I use? Today Statistical Learning Parameter Estimation: Maximum Likelihood (ML) Maximum A Posteriori (MAP) Bayesian Continuous case Learning Parameters for a Bayesian Network Naive Bayes Maximum Likelihood estimates

More information

Hopfield Networks. (Excerpt from a Basic Course at IK 2008) Herbert Jaeger. Jacobs University Bremen

Hopfield Networks. (Excerpt from a Basic Course at IK 2008) Herbert Jaeger. Jacobs University Bremen Hopfield Networks (Excerpt from a Basic Course at IK 2008) Herbert Jaeger Jacobs University Bremen Building a model of associative memory should be simple enough... Our brain is a neural network Individual

More information

ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD

ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD ARTIFICIAL NEURAL NETWORK PART I HANIEH BORHANAZAD WHAT IS A NEURAL NETWORK? The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION Supplementary discussion 1: Most excitatory and suppressive stimuli for model neurons The model allows us to determine, for each model neuron, the set of most excitatory and suppresive features. First,

More information

Linear Models for Regression CS534

Linear Models for Regression CS534 Linear Models for Regression CS534 Example Regression Problems Predict housing price based on House size, lot size, Location, # of rooms Predict stock price based on Price history of the past month Predict

More information

Neuronal structure detection using Persistent Homology

Neuronal structure detection using Persistent Homology Neuronal structure detection using Persistent Homology J. Heras, G. Mata and J. Rubio Department of Mathematics and Computer Science, University of La Rioja Seminario de Informática Mirian Andrés March

More information

Statistical models for neural encoding, decoding, information estimation, and optimal on-line stimulus design

Statistical models for neural encoding, decoding, information estimation, and optimal on-line stimulus design Statistical models for neural encoding, decoding, information estimation, and optimal on-line stimulus design Liam Paninski Department of Statistics and Center for Theoretical Neuroscience Columbia University

More information

The Intersection of Statistics and Topology:

The Intersection of Statistics and Topology: The Intersection of Statistics and Topology: Confidence Sets for Persistence Diagrams Brittany Terese Fasy joint work with S. Balakrishnan, F. Lecci, A. Rinaldo, A. Singh, L. Wasserman 3 June 2014 [FLRWBS]

More information

Chris Bishop s PRML Ch. 8: Graphical Models

Chris Bishop s PRML Ch. 8: Graphical Models Chris Bishop s PRML Ch. 8: Graphical Models January 24, 2008 Introduction Visualize the structure of a probabilistic model Design and motivate new models Insights into the model s properties, in particular

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Matroids and Greedy Algorithms Date: 10/31/16

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Matroids and Greedy Algorithms Date: 10/31/16 60.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Matroids and Greedy Algorithms Date: 0/3/6 6. Introduction We talked a lot the last lecture about greedy algorithms. While both Prim

More information

A Polarization Operation for Pseudomonomial Ideals

A Polarization Operation for Pseudomonomial Ideals A Polarization Operation for Pseudomonomial Ideals Jeffrey Sun September 7, 2016 Abstract Pseudomonomials and ideals generated by pseudomonomials (pseudomonomial ideals) are a central object of study in

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks Stephan Dreiseitl University of Applied Sciences Upper Austria at Hagenberg Harvard-MIT Division of Health Sciences and Technology HST.951J: Medical Decision Support Knowledge

More information

Lecture 12: Quality Control I: Control of Location

Lecture 12: Quality Control I: Control of Location Lecture 12: Quality Control I: Control of Location 10 October 2005 This lecture and the next will be about quality control methods. There are two reasons for this. First, it s intrinsically important for

More information

EM Algorithm & High Dimensional Data

EM Algorithm & High Dimensional Data EM Algorithm & High Dimensional Data Nuno Vasconcelos (Ken Kreutz-Delgado) UCSD Gaussian EM Algorithm For the Gaussian mixture model, we have Expectation Step (E-Step): Maximization Step (M-Step): 2 EM

More information

Linking non-binned spike train kernels to several existing spike train metrics

Linking non-binned spike train kernels to several existing spike train metrics Linking non-binned spike train kernels to several existing spike train metrics Benjamin Schrauwen Jan Van Campenhout ELIS, Ghent University, Belgium Benjamin.Schrauwen@UGent.be Abstract. This work presents

More information

Computational and Statistical Tradeoffs via Convex Relaxation

Computational and Statistical Tradeoffs via Convex Relaxation Computational and Statistical Tradeoffs via Convex Relaxation Venkat Chandrasekaran Caltech Joint work with Michael Jordan Time-constrained Inference o Require decision after a fixed (usually small) amount

More information

STA 414/2104: Lecture 8

STA 414/2104: Lecture 8 STA 414/2104: Lecture 8 6-7 March 2017: Continuous Latent Variable Models, Neural networks With thanks to Russ Salakhutdinov, Jimmy Ba and others Outline Continuous latent variable models Background PCA

More information

Graphical Models. Outline. HMM in short. HMMs. What about continuous HMMs? p(o t q t ) ML 701. Anna Goldenberg ... t=1. !

Graphical Models. Outline. HMM in short. HMMs. What about continuous HMMs? p(o t q t ) ML 701. Anna Goldenberg ... t=1. ! Outline Graphical Models ML 701 nna Goldenberg! ynamic Models! Gaussian Linear Models! Kalman Filter! N! Undirected Models! Unification! Summary HMMs HMM in short! is a ayes Net hidden states! satisfies

More information

The Multivariate Gaussian Distribution

The Multivariate Gaussian Distribution 9.07 INTRODUCTION TO STATISTICS FOR BRAIN AND COGNITIVE SCIENCES Lecture 4 Emery N. Brown The Multivariate Gaussian Distribution Analysis of Background Magnetoencephalogram Noise Courtesy of Simona Temereanca

More information

MITOCW ocw-18_02-f07-lec02_220k

MITOCW ocw-18_02-f07-lec02_220k MITOCW ocw-18_02-f07-lec02_220k The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free.

More information

Statistics Random Variables

Statistics Random Variables 1 Statistics Statistics are used in a variety of ways in neuroscience. Perhaps the most familiar example is trying to decide whether some experimental results are reliable, using tests such as the t-test.

More information

Gaussian process based nonlinear latent structure discovery in multivariate spike train data

Gaussian process based nonlinear latent structure discovery in multivariate spike train data Gaussian process based nonlinear latent structure discovery in multivariate spike train data Anqi Wu, Nicholas A. Roy, Stephen Keeley, & Jonathan W. Pillow Princeton Neuroscience Institute Princeton University

More information

CHARACTERIZATION OF NONLINEAR NEURON RESPONSES

CHARACTERIZATION OF NONLINEAR NEURON RESPONSES CHARACTERIZATION OF NONLINEAR NEURON RESPONSES Matt Whiteway whit8022@umd.edu Dr. Daniel A. Butts dab@umd.edu Neuroscience and Cognitive Science (NACS) Applied Mathematics and Scientific Computation (AMSC)

More information

Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics

Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics arxiv:1512.07839v4 [cs.lg] 7 Jun 2017 Sacha Sokoloski Max Planck Institute for Mathematics in the Sciences Abstract

More information

arxiv: v2 [math.pr] 15 May 2016

arxiv: v2 [math.pr] 15 May 2016 MAXIMALLY PERSISTENT CYCLES IN RANDOM GEOMETRIC COMPLEXES OMER BOBROWSKI, MATTHEW KAHLE, AND PRIMOZ SKRABA arxiv:1509.04347v2 [math.pr] 15 May 2016 Abstract. We initiate the study of persistent homology

More information

(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann

(Feed-Forward) Neural Networks Dr. Hajira Jabeen, Prof. Jens Lehmann (Feed-Forward) Neural Networks 2016-12-06 Dr. Hajira Jabeen, Prof. Jens Lehmann Outline In the previous lectures we have learned about tensors and factorization methods. RESCAL is a bilinear model for

More information

Leo Kadanoff and 2d XY Models with Symmetry-Breaking Fields. renormalization group study of higher order gradients, cosines and vortices

Leo Kadanoff and 2d XY Models with Symmetry-Breaking Fields. renormalization group study of higher order gradients, cosines and vortices Leo Kadanoff and d XY Models with Symmetry-Breaking Fields renormalization group study of higher order gradients, cosines and vortices Leo Kadanoff and Random Matrix Theory Non-Hermitian Localization in

More information

Natural Image Statistics

Natural Image Statistics Natural Image Statistics A probabilistic approach to modelling early visual processing in the cortex Dept of Computer Science Early visual processing LGN V1 retina From the eye to the primary visual cortex

More information

The receptive fields of neurons are dynamic; that is, their

The receptive fields of neurons are dynamic; that is, their An analysis of neural receptive field plasticity by point process adaptive filtering Emery N. Brown*, David P. Nguyen*, Loren M. Frank*, Matthew A. Wilson, and Victor Solo *Neuroscience Statistics Research

More information

Math 350: An exploration of HMMs through doodles.

Math 350: An exploration of HMMs through doodles. Math 350: An exploration of HMMs through doodles. Joshua Little (407673) 19 December 2012 1 Background 1.1 Hidden Markov models. Markov chains (MCs) work well for modelling discrete-time processes, or

More information

Lecture 4: Feed Forward Neural Networks

Lecture 4: Feed Forward Neural Networks Lecture 4: Feed Forward Neural Networks Dr. Roman V Belavkin Middlesex University BIS4435 Biological neurons and the brain A Model of A Single Neuron Neurons as data-driven models Neural Networks Training

More information