Multi-neuron spike trains - distances and information.

1 Multi-neuron spike trains - distances and information. Conor Houghton, Computer Science, U Bristol. NETT Florence, March 2014.

2 Thanks for the invite. In 1817 Stendhal reportedly was overcome by the cultural richness of Florence he encountered when he first visited the Tuscan city. As he described in his book Naples and Florence: A Journey from Milan to Reggio: "As I emerged from the porch of Santa Croce, I was seized with a fierce palpitation of the heart (that same symptom which, in Berlin, is referred to as an attack of the nerves); the well-spring of life was dried up within me, and I walked in constant fear of falling to the ground." [From the Wikipedia article on Stendhal syndrome.]

3 Motivation Overview. Mutual information for spaces with distances. [Rederive a result due to Kraskov, Stögbauer and Grassberger (PRE 2004).] Spike trains and sets of spike trains as a space with distance.

4 Motivation Spike trains. [Figure: spiking responses in the auditory forebrain of the zebra finch, panels A-D.]

5 Motivation Classical approach I. Discretize: bin the response and split it into binary words (e.g. 0010, 0100); a sketch of this step follows below. Bialek, de Ruyter van Steveninck, Strong and other coworkers, late 1990s.
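To make the discretization step concrete, here is a minimal Python sketch; the 2 ms bins, 5-bin words and the helper name are illustrative choices, not the values used in the original studies.

```python
import numpy as np

def spikes_to_words(spike_times, t_end, bin_width=0.002, word_len=5):
    # Mark each bin 1 if it contains at least one spike, 0 otherwise.
    n_bins = int(np.ceil(t_end / bin_width))
    binned = np.zeros(n_bins, dtype=int)
    idx = (np.asarray(spike_times) / bin_width).astype(int)
    binned[idx[idx < n_bins]] = 1
    # Chop the binary sequence into non-overlapping words and encode
    # each word as an integer (so 1000 -> 8, 0010 -> 2, and so on).
    n_words = n_bins // word_len
    words = binned[:n_words * word_len].reshape(n_words, word_len)
    powers = 2 ** np.arange(word_len - 1, -1, -1)
    return words @ powers
```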

6 Motivation Classical approach II. Estimate the probability of words. For example, say $w_8 = 1000$; then estimate $p(w_8) \approx \frac{\#\text{ occurrences of } w_8}{\#\text{ words}}$ and calculate $H(W) = -\sum_i p(w_i) \log_2 p(w_i) = -\langle \log_2 p(w_i) \rangle$. Bialek, de Ruyter van Steveninck, Strong and other coworkers, late 1990s.
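As a sketch, this plug-in ("naive") estimator is a few lines of Python, taking the integer word codes from the sketch above; it is exactly this estimator that suffers the undersampling and bias problems discussed below.

```python
from collections import Counter
import numpy as np

def word_entropy(words):
    # p(w_i) is estimated as (# occurrences of w_i) / (# words) ...
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    # ... and plugged into H(W) = -sum_i p(w_i) log2 p(w_i).
    return -np.sum(p * np.log2(p))
```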

7 Motivation Classical approach III. Conditional probability. Stimuli $s_0, s_1, s_2, s_3, s_4, s_5$ give conditional entropies $H(W|s_0), H(W|s_1), H(W|s_2), H(W|s_3), H(W|s_4), H(W|s_5)$. Bialek, de Ruyter van Steveninck, Strong and other coworkers, late 1990s.

8 Motivation Classical approach IV. Mutual information: $H(W|S) = \langle H(W|s_i) \rangle_i$ (the average over equiprobable stimuli) and $I(W;S) = H(W) - H(W|S)$. Bialek, de Ruyter van Steveninck, Strong and other coworkers, late 1990s.
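Putting the pieces together, a sketch reusing word_entropy from above, and assuming equiprobable stimuli so that the conditional entropy is a plain average:

```python
import numpy as np

def word_mutual_information(words, stims):
    # H(W|S) is the average of the per-stimulus word entropies H(W|s_i);
    # word_entropy is the plug-in estimator sketched earlier.
    words, stims = np.asarray(words), np.asarray(stims)
    H_W = word_entropy(words)
    H_WgS = np.mean([word_entropy(words[stims == s]) for s in np.unique(stims)])
    return H_W - H_WgS
```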

9 Motivation Millisecond-scale information in blowfly spike trains. Bialek, de Ruyter van Steveninck, Strong and other coworkers, late 1990s.

10 Motivation Difficulties with the classical approach. Undersampling: 100 ms words and 2 ms bins gives $2^{50} \approx 10^{15}$ words. Lots of clever approaches to this, for example Nemenman et al. (PRE 2004, BMC Neuroscience 2007), where a cunning prior is used for $p(w_i)$. Sampling bias: even a uniform distribution will almost never give exactly equal counts for each word, so the estimated $p(w_i)$ differ and the entropy estimate is biased. Lots of clever approaches to this too; see Panzeri et al. (J. Neurophysiol. 2007). Ilya Nemenman, William Bialek and Rob de Ruyter van Steveninck, Physical Review E 69.5 (2004): 056111. Stefano Panzeri et al., Journal of Neurophysiology 98.3 (2007): 1064-1072.

11 Motivation Many fixes, but still... Neuron-neuron mutual information. Maze-neuron mutual information. Mutual information with multiple units.

12 KDE for spike trains. Approaches to probability estimation. Parametric approach: see Gillespie and Houghton (JCN 2011), or Yu et al. (Front. Comput. Neurosci. 2010). Non-parametric approaches: histograms, kernel density estimation (KDE) and kth nearest neighbour (kNN). We will see that KDE and kNN lead to more-or-less the same formula. Yunguo Yu et al., Frontiers in Computational Neuroscience 4 (2010).

13 KDE for spike trains. Kernel density estimation: $p(x) = \frac{1}{n}\sum_i k(x - x_i)$. [Picture from a density estimation article.]
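In one dimension the estimator is easy to write down; a minimal sketch with a Gaussian kernel (the kernel choice and width are illustrative):

```python
import numpy as np

def kde(x, samples, width=1.0):
    # Gaussian kernel, normalized so that its integral is one.
    d = (x - np.asarray(samples)) / width
    k = np.exp(-0.5 * d**2) / (width * np.sqrt(2.0 * np.pi))
    # p(x) = (1/n) * sum_i k(x - x_i)
    return k.mean()
```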

14 KDE for spike trains. Histograms and the classical approach. A histogram is built from a binning map $\mathbb{R} \to \mathbb{Z}$, sending $x$ to the $\mathrm{int}(x/\delta x)$th bin. Converting spike trains to words is the same sort of map, a discretization from the space of spike trains to $\mathbb{Z}$, sending $r$ to $w$.

15 KDE for spike trains. What we need for kernel density estimation. So we must have $p(x) = \frac{1}{n}\sum_i k(x - x_i)$ with $\int k(x)\,dx = 1$. That means we need to be able to integrate.

16 KDE for spike trains. Spike train space.

17 KDE for spike trains. Integrating in the space of spike trains I. The space of spike trains has no coordinates, but it does have a measure, given by the distribution of spike trains: $\mathrm{vol}(D) = P(r \in D)$, which can be estimated by the fraction of responses in $D$: $\mathrm{vol}(D) \approx (\#\text{ spike trains in } D)/(\#\text{ spike trains})$.
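Since the measure is the response distribution itself, estimating a volume only needs counting; a sketch for a metric ball, assuming distances to the ball's centre are precomputed:

```python
import numpy as np

def empirical_volume(dists_to_center, radius):
    # vol(D) for a ball D of given radius: the fraction of the observed
    # spike trains that fall inside it, so the response distribution
    # itself plays the role of the measure.
    return np.mean(np.asarray(dists_to_center) <= radius)
```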

18 KDE for spike trains. Integrating in the space of spike trains II. [Picture: a region of spike train space containing three of the seven sample points, so its volume is estimated as 3/7.]

19 KDE for spike trains. Basic idea. Want to calculate $I(R;S) = \sum_{s \in S} \int_R p(r,s) \log_2 \frac{p(r|s)}{p(r)}\,dr$. Use $p(r)$ to give a measure. Use KDE to estimate $p(r|s)$. Any integrals will be estimated by counting points. Kernels will be defined using volume. R. Joshua Tobin and Conor J. Houghton, Entropy 15 (2013).

20 KDE for spike trains. Result. After some fiddling this gives $I(R;S) \approx \frac{1}{n_r} \sum_i \log_2 \frac{n_s\, c(r_i, s_i; n_h)}{n_h}$, where $n_r$ is the number of responses, $n_s$ the number of stimuli, $n_h$ the size of the kernel (the number of responses it contains) and $c(r_i, s_i; n_h)$ the number of responses to stimulus $s_i$ in the kernel around $r_i$. R. Joshua Tobin and Conor J. Houghton, Entropy 15 (2013).
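A sketch of how the formula above can be evaluated, assuming precomputed pairwise distances and taking the kernel around each response to be its $n_h$ nearest neighbours (an illustrative choice; the paper constructs kernels from volume more carefully):

```python
import numpy as np

def kde_mutual_information(dist, stim, n_h):
    # dist: (n_r, n_r) matrix of spike-train distances
    # stim: length-n_r array of stimulus labels
    # n_h:  number of responses in each kernel
    dist, stim = np.asarray(dist), np.asarray(stim)
    n_r, n_s = len(stim), len(np.unique(stim))
    total = 0.0
    for i in range(n_r):
        # The kernel around r_i is its n_h nearest responses (including r_i).
        kernel = np.argsort(dist[i])[:n_h]
        # c(r_i, s_i; n_h): responses to the same stimulus inside the kernel.
        c = np.sum(stim[kernel] == stim[i])
        total += np.log2(n_s * c / n_h)
    return total / n_r
```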

21 KDE for spike trains. $c(r_i, s_i; n_h)$. [Picture: the kernel around the crossed point contains $n_h = 3$ responses, two of them responses to the red stimulus, so $c(\text{crossed}, \text{red}; 3) = 2$.]

22 KDE for spike trains. Results I. We are interested in the structure of spike train space as a metric space, so simulated test data were constructed using the sort of distribution of spike trains in metric space observed in Gillespie and Houghton (JCN 2011). James Gillespie and Conor Houghton, Journal of Computational Neuroscience 30 (2011).

23 KDE for spike trains. Results II. [Plots: estimated vs. true information, in bans, for the histogram estimator and the KDE estimator; $n_s = 10$ and $n_t = 10$.] R. Joshua Tobin and Conor J. Houghton, Entropy 15 (2013).

24 KDE for spike trains. Results III. [Plots: estimated vs. true information, in bans, for the histogram estimator and the KDE estimator; $n_s = 10$ and $n_t = 200$.] R. Joshua Tobin and Conor J. Houghton, Entropy 15 (2013).

25 KDE for spike trains. KDE and kNN. KDE: $I(R;S) \approx \frac{1}{n_r} \sum_i \log_2 \frac{n_s\, c(r_i, s_i; n_h)}{n_h}$. kNN: $I_e(R;S) \approx \digamma(n_k) + \digamma(n_t n_s) - \digamma(n_t) - \frac{1}{n_r} \sum_i \digamma[C(r_i, s_i; n_k)]$, where $\digamma$ is the digamma function. They use a Kozachenko-Leonenko estimator and, by a clever choice of how they pick $k$ for different parts of the estimate, they get all the volume-based terms to cancel. Alexander Kraskov, Harald Stögbauer and Peter Grassberger, Physical Review E 69.6 (2004): 066138.
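For comparison, a sketch of a KSG-style estimator in this stratified setting, under one reading of the formula above: the $n_k$-th neighbour is found among same-stimulus responses and $C$ counts all responses within that radius. Tie-handling and the exact neighbour conventions differ between implementations, so treat this as an illustration only.

```python
import numpy as np
from scipy.special import digamma

def knn_mutual_information(dist, stim, n_k):
    # dist: (n_r, n_r) distance matrix; stim: stimulus labels;
    # assumes a constant number n_t of trials per stimulus.
    dist, stim = np.asarray(dist), np.asarray(stim)
    n_r, n_s = len(stim), len(np.unique(stim))
    n_t = n_r // n_s
    acc = 0.0
    for i in range(n_r):
        same = np.flatnonzero(stim == stim[i])
        same = same[same != i]
        # Radius of the n_k-th nearest same-stimulus neighbour ...
        eps = np.sort(dist[i, same])[n_k - 1]
        # ... and C(r_i, s_i; n_k): all responses within that radius.
        C = np.sum(dist[i] <= eps) - 1   # exclude r_i itself
        acc += digamma(C)
    # Digamma formula, converted from nats to bits.
    return (digamma(n_k) + digamma(n_t * n_s)
            - digamma(n_t) - acc / n_r) / np.log(2)
```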

26 Distances Euclidean metric. [Picture from Google Maps.]

27 Distances Non-Euclidean metric. [Picture from Google Maps.]

28 Distances Non-metric. [Picture.]

29 Distances van Rossum metric. Spike trains are mapped to functions, and a metric on the space of functions induces a metric on the space of spike trains. Mark van Rossum, Neural Computation 13.4 (2001): 751-763.
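A sketch of the single-unit distance, using the closed form that follows from filtering each train with a causal exponential kernel $e^{-t/\tau}$ (the kind of pairwise-sum calculation discussed in Houghton and Kreuz, 2012):

```python
import numpy as np

def van_rossum(u, v, tau):
    # Closed form for the van Rossum distance with a causal exponential
    # kernel exp(-t/tau): no explicit filtering or integration needed.
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    def corr(a, b):
        return np.sum(np.exp(-np.abs(a[:, None] - b[None, :]) / tau))
    d2 = 0.5 * (corr(u, u) + corr(v, v) - 2.0 * corr(u, v))
    return np.sqrt(max(d2, 0.0))
```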

30 Distances Multi-unit van Rossum metric. There is an easily computed multi-unit version of the van Rossum metric. It relies on a time constant and a population parameter. Conor Houghton and Kamal Sen, Neural Computation 20.6 (2008): 1495-1511. Conor Houghton and Thomas Kreuz, Network: Computation in Neural Systems 23 (2012): 48-58.

31 Distances The Victor-Purpura metric I. Edit one spike train into the other: 1. insertion of a spike, with a cost of one; 2. deletion of a spike, with a cost of one; 3. moving a spike a distance $\delta t$, at a cost $q\,\delta t$. The distance is the cost of the cheapest edit; a dynamic programming sketch follows below. Jonathan D. Victor and Keith P. Purpura, Journal of Neurophysiology 76.2 (1996): 1310-1326.
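The cheapest edit can be found by the standard dynamic programming recursion; a sketch, with spike times assumed sorted:

```python
import numpy as np

def victor_purpura(u, v, q):
    # G[i, j] is the cheapest edit between the first i spikes of u
    # and the first j spikes of v.
    nu, nv = len(u), len(v)
    G = np.zeros((nu + 1, nv + 1))
    G[:, 0] = np.arange(nu + 1)   # delete every spike of u
    G[0, :] = np.arange(nv + 1)   # insert every spike of v
    for i in range(1, nu + 1):
        for j in range(1, nv + 1):
            G[i, j] = min(G[i - 1, j] + 1.0,                 # delete u_i
                          G[i, j - 1] + 1.0,                 # insert v_j
                          G[i - 1, j - 1]
                          + q * abs(u[i - 1] - v[j - 1]))    # move u_i to v_j
    return G[nu, nv]
```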

32 Distances The Victor-Purpura metric II. [Picture: spike train $u$ with spikes $u_1, u_2, u_3$ and spike train $v$ with spikes $v_1, v_2, v_3, v_4$; two spikes are moved, at costs $q\delta t_1$ and $q\delta t_2$, and the remaining three are inserted or deleted, giving $d = 3 + q(\delta t_1 + \delta t_2)$.]

33 Distances Multi-neuron Victor-Purpura I. The Victor-Purpura metric can be extended to measure a distance between a pair of population responses by adding a cost $k$ for changing the identity of a spike. Edit one population response into the other: 1. insertion of a spike, with a cost of one; 2. deletion of a spike, with a cost of one; 3. changing the neuron label of a spike, at a cost $k$; 4. moving a spike a distance $\delta t$, at a cost $q\,\delta t$. The distance is the cost of the cheapest edit. Dmitriy Aronov et al., Journal of Neurophysiology 89.6 (2003).

34 Distances Multi-neuron Victor-Purpura II. [Picture: as before, but one moved spike also changes neuron label, so that move costs $q\delta t_1 + k$; $d = 3 + q(\delta t_1 + \delta t_2) + k$.]

35 Distances Labelled-line and population codes. If $k = 0$ it does not matter which neuron a spike came from: this is a population code. It's like superimposing all the spike trains in the population and then working out the distance. If $k = 2$ it is never worth changing the label, since deleting the spike and reinserting it on the other neuron also costs two: this is a labelled-line code. The distance between the two population responses is just the sum of the distances for the individual pairs of responses for each neuron; both limits are sketched below. Dmitriy Aronov et al., Journal of Neurophysiology 89.6 (2003).
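The two limiting cases need no general multi-neuron algorithm; a sketch reusing victor_purpura from above, with trains_u and trains_v given as lists of per-neuron spike-time lists:

```python
def vp_population(trains_u, trains_v, q):
    # k = 0: superimpose each population's trains, then one VP distance.
    merged_u = sorted(t for train in trains_u for t in train)
    merged_v = sorted(t for train in trains_v for t in train)
    return victor_purpura(merged_u, merged_v, q)

def vp_labelled_line(trains_u, trains_v, q):
    # k >= 2: relabelling never pays, so sum the per-neuron distances.
    return sum(victor_purpura(tu, tv, q)
               for tu, tv in zip(trains_u, trains_v))
```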

36 Distances Problems. The multi-unit VP metric is computationally expensive. How do we choose $q$ and $k$?

37 Distances SPIKE metric. [Picture: spike train $u$ with spikes $u_1, u_2, u_3$ and spike train $v$ with spikes $v_1, v_2$; each spike contributes its time lag to the nearest spike in the other train, giving $d = (\delta t_1 + \delta t_2 + \delta t_3) + (\delta t_1 + \delta t_3) = 2\delta t_1 + \delta t_2 + 2\delta t_3$.] Thomas Kreuz, Daniel Chicharro, Conor Houghton, Ralph G. Andrzejak and Florian Mormann, Journal of Neurophysiology 109 (2013).

38 Distances Multi-neuron SPIKE metric. Extending SPIKE to multiple neurons is easy: just add a distance between neurons. [Picture: as before, but one matched pair of spikes also crosses neurons, adding $k$; $d = 2\delta t_1 + 2\delta t_2 + \delta t_3 + k$.]

39 Distances Multi-neuron SPIKE metric - problem. Still stuck with k.

40 The end Conclusions Distance-based measures of mutual information. Multi-neuron distance functions.

41 The end Acknowledgements Collaborators. Thomas Kreuz / Josh Tobin / James Gillespie / Cathal Cooney. Funding: the James S McDonnell Foundation.
