Population Coding
Maneesh Sahani, Gatsby Computational Neuroscience Unit, University College London
1 Population Coding
Maneesh Sahani, Gatsby Computational Neuroscience Unit, University College London. Term 1, Autumn 2010.
2 Coding so far...
- Time-series models for both spikes and stimuli.
- Empirically estimated encoding function(s), given measured stimuli or movements and spikes.
3 Population codes
- High dimensionality (cells × stimuli × time), so analysis is usually limited to simple rate codes; even prosthetic work assumes instantaneous (lagged) coding.
- Limited empirical data: we can record 10s-100s of neurons, while the relevant population sizes are far larger.
- Inferences are therefore more theoretical, based on single-cell and aggregate (fMRI, LFP, optical) measurements.
4 Common approach
The most common sorts of questions asked of population codes:
- Given assumed encoding functions, how well can we (or downstream areas) decode the encoded stimulus value?
- What encoding schemes would be optimal, in the sense of allowing decoders to estimate stimulus values as well as possible?
Before considering populations, we need to formulate some ideas about rate coding in the context of single cells.
5 Rate coding
In the rate coding context, we imagine that the firing rate r of a cell represents a single (possibly multidimensional) stimulus value s at any one time: r = f(s). Even if s and r are embedded in time-series, we assume:
1. that coding is instantaneous (with a fixed lag), and
2. that r (and therefore s) is constant over a short time window.
The actual number of spikes n produced in this window is then taken to be distributed around r, often according to a Poisson distribution.
6 Tuning curves
The function f(s) is known as a tuning curve. Commonly assumed forms:
- Gaussian: $r_0 + r_{\max} \exp\!\left[-\frac{1}{2\sigma^2}(x - x_{\mathrm{pref}})^2\right]$
- Cosine: $r_0 + r_{\max} \cos(\theta - \theta_{\mathrm{pref}})$
- Wrapped Gaussian: $r_0 + r_{\max} \sum_n \exp\!\left[-\frac{1}{2\sigma^2}(\theta - \theta_{\mathrm{pref}} - 2\pi n)^2\right]$
- von Mises ("circular Gaussian"): $r_0 + r_{\max} \exp\!\left[\kappa \cos(\theta - \theta_{\mathrm{pref}})\right]$
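As a minimal sketch (not part of the lecture), the four forms translate directly into NumPy; every parameter value below is an illustrative default, not from the slides:

```python
import numpy as np

# The four tuning-curve forms above, as vectorized functions.
def gaussian(x, r0=2.0, rmax=40.0, x_pref=0.0, sigma=1.0):
    return r0 + rmax * np.exp(-(x - x_pref) ** 2 / (2 * sigma ** 2))

def cosine(theta, r0=2.0, rmax=40.0, theta_pref=0.0):
    return r0 + rmax * np.cos(theta - theta_pref)

def wrapped_gaussian(theta, r0=2.0, rmax=40.0, theta_pref=0.0, sigma=0.5, n_max=5):
    # Truncating the sum at n = -n_max..n_max approximates the infinite sum.
    n = np.arange(-n_max, n_max + 1)
    wraps = np.exp(-(np.atleast_1d(theta)[:, None] - theta_pref - 2 * np.pi * n) ** 2
                   / (2 * sigma ** 2))
    return r0 + rmax * wraps.sum(axis=1)

def von_mises(theta, r0=2.0, rmax=40.0, theta_pref=0.0, kappa=2.0):
    return r0 + rmax * np.exp(kappa * np.cos(theta - theta_pref))

theta = np.linspace(-np.pi, np.pi, 5)
print(wrapped_gaussian(theta).round(2), von_mises(theta).round(2))
```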
7 Measuring the performance of rate codes: discrete choice
Suppose we want to make a binary choice based on firing rate:
- present / absent (signal detection)
- up / down
- horizontal / vertical
Call one potential stimulus $s_0$ and the other $s_1$. The relevant quantities are the conditional response distributions $P(n \mid s)$.
[Figure: probability density against response, showing the overlapping distributions $P(n \mid s_0)$ and $P(n \mid s_1)$.]
8-9 ROC curves
[Figures: the densities $P(n \mid s_0)$ and $P(n \mid s_1)$ with a sliding decision threshold; each threshold position yields a (false alarm rate, hit rate) pair, and sweeping the threshold traces out the ROC curve.]
10 Summary measures
- Area under the ROC curve: given $n_1 \sim P(n \mid s_1)$ and $n_0 \sim P(n \mid s_0)$, this equals $P(n_1 > n_0)$.
- Discriminability $d'$: for equal-variance Gaussians, $d' = \frac{\mu_1 - \mu_0}{\sigma}$; for any threshold, $d' = \Phi^{-1}(1 - FA) - \Phi^{-1}(1 - HR)$, where $\Phi$ is the standard normal cdf.
- The definition is unclear for non-Gaussian distributions.
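A Monte Carlo sketch of these measures, assuming equal-variance Gaussian response distributions (all numbers illustrative): sweep a threshold to trace the ROC curve, confirm the area under it matches $P(n_1 > n_0)$, and recover $d'$ from the hit and false-alarm rates at a single arbitrary threshold.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, mu1, sigma = 5.0, 8.0, 2.0              # illustrative parameters
n0 = rng.normal(mu0, sigma, 20_000)          # responses to s0
n1 = rng.normal(mu1, sigma, 20_000)          # responses to s1

# Sweep a threshold z: hit rate = P(n1 > z), false-alarm rate = P(n0 > z).
z = np.linspace(-5.0, 20.0, 501)
hr = (n1[:, None] > z).mean(axis=0)
fa = (n0[:, None] > z).mean(axis=0)

# Trapezoidal area under the ROC curve; should equal P(n1 > n0).
auc = -np.sum(np.diff(fa) * (hr[:-1] + hr[1:]) / 2)
print("AUC:", auc, " P(n1 > n0):", (n1 > n0).mean())

# d' recovered from a single threshold (z = 6.5 here), via
# d' = Phi^{-1}(1 - FA) - Phi^{-1}(1 - HR); compare with (mu1 - mu0)/sigma.
i = np.searchsorted(z, 6.5)
print("d':", norm.ppf(1 - fa[i]) - norm.ppf(1 - hr[i]),
      " (mu1 - mu0)/sigma:", (mu1 - mu0) / sigma)
```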
11 Continuous estimation
Now consider a (one-dimensional) stimulus that takes any real value (or angle):
- contrast
- orientation
- motion direction
- movement speed
Consider a neuron that fires n spikes in response to a stimulus s, according to $P(n \mid f(s))$. Given n, we attempt to estimate s. How well can we do?
12 Continuous estimation
It is useful to consider a limit: suppose we are given N measurements $n_i$, all generated by the same stimulus $s^*$. The posterior over s is
$$\log P(s \mid \{n_i\}) = \sum_i \log P(n_i \mid s) + \log P(s) - \log Z(\{n_i\})$$
Taking $N \to \infty$, the law of large numbers gives
$$\frac{1}{N}\log P(s \mid \{n_i\}) \to \big\langle \log P(n \mid s) \big\rangle_{n|s^*} + 0 - \mathrm{const}$$
(the prior term $\frac{1}{N}\log P(s)$ vanishes), and so
$$P(s \mid \{n_i\}) \to e^{N \langle \log P(n|s) \rangle_{n|s^*}} / Z = e^{-N\,\mathrm{KL}\left[P(n|s^*)\,\|\,P(n|s)\right]} / Z'$$
since the entropy of $P(n \mid s^*)$ is constant in s and can be absorbed into the normaliser.
13 Continuous estimation
Now Taylor expand the KL divergence in s around $s^*$:
$$\mathrm{KL}\left[P(n|s^*)\,\|\,P(n|s)\right] = \big\langle \log P(n|s^*) \big\rangle_{n|s^*} - \big\langle \log P(n|s) \big\rangle_{n|s^*}$$
$$= -\Big\langle \frac{d}{ds}\log P(n|s)\Big|_{s^*} \Big\rangle_{n|s^*} (s - s^*) - \frac{1}{2}\Big\langle \frac{d^2}{ds^2}\log P(n|s)\Big|_{s^*} \Big\rangle_{n|s^*} (s - s^*)^2 + \dots$$
The first-order term vanishes because the expected score at the true stimulus is zero, leaving
$$= \frac{1}{2}(s - s^*)^2 J(s^*) + \dots$$
So in asymptopia the posterior is $\mathcal{N}\big(s^*,\, 1/(N J(s^*))\big)$. $J(s^*)$ is called the Fisher Information:
$$J(s^*) = -\Big\langle \frac{d^2}{ds^2}\log P(n|s) \Big\rangle_{s^*} = \Big\langle \Big(\frac{d}{ds}\log P(n|s)\Big)^{\!2} \Big\rangle_{s^*}$$
(You will show that these two expressions are identical in the homework.)
14 Cramér-Rao bound
The Fisher Information is important even outside the large-data limit, due to a deeper result of Cramér and Rao. It states that for any N, any unbiased estimator $\hat{s}(\{n_i\})$ of s has the property that
$$\Big\langle \big(\hat{s}(\{n_i\}) - s^*\big)^2 \Big\rangle_{\{n_i\}|s^*} \;\ge\; \frac{1}{N\,J(s^*)}$$
(with J the single-observation Fisher Information). Thus the Fisher Information gives a lower bound on the variance of any unbiased estimator. This is called the Cramér-Rao bound. (There is also a version for biased estimators.)
The Fisher Information will be our primary tool to quantify the performance of a population code.
15 Fisher Info and tuning curves
With $n = r + \text{noise}$ and $r = f(s)$:
$$J(s^*) = \Big\langle \Big(\frac{d}{ds}\log P(n|s)\Big)^{\!2} \Big\rangle_{s^*} = \Big\langle \Big(\frac{d}{dr}\log P(n|r)\Big)^{\!2} \Big\rangle_{f(s^*)} f'(s^*)^2 = J_{\mathrm{noise}}\big(f(s^*)\big)\, f'(s^*)^2$$
[Figure: a tuning curve f(s) and the resulting Fisher information J(s), plotted against s on a common firing rate / Fisher info axis.]
16 Fisher info for Poisson neurons
For Poisson neurons,
$$P(n \mid f(s)) = \frac{e^{-f(s)}\, f(s)^n}{n!}$$
so
$$\frac{d}{df}\log P(n \mid f) = \frac{d}{df}\big(-f + n\log f - \log n!\big) = -1 + n/f$$
and
$$J_{\mathrm{noise}}\big[f(s^*)\big] = \Big\langle \big(-1 + n/f(s^*)\big)^2 \Big\rangle = \frac{\big\langle (n - f(s^*))^2 \big\rangle}{f(s^*)^2} = \frac{f(s^*)}{f(s^*)^2} = \frac{1}{f(s^*)}$$
[not surprising: for a Poisson distribution the variance equals the mean, $f(s) = \langle n \rangle$]. Hence
$$J(s^*) = f'(s^*)^2 / f(s^*)$$
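A sketch connecting this to the Cramér-Rao bound of slide 14 (all parameters illustrative): simulate N Poisson counts from one flank of a Gaussian tuning curve, decode by maximum likelihood on a grid, and compare the estimator's variance to $1/(N J(s^*))$.

```python
import numpy as np

rng = np.random.default_rng(1)
r0, rmax, sig = 1.0, 20.0, 1.0             # illustrative tuning parameters
f  = lambda s: r0 + rmax * np.exp(-s ** 2 / (2 * sig ** 2))
fp = lambda s: -rmax * s / sig ** 2 * np.exp(-s ** 2 / (2 * sig ** 2))

s_true, N = 0.5, 100
J = fp(s_true) ** 2 / f(s_true)            # J(s*) = f'(s*)^2 / f(s*)

# ML decoding by grid search, restricted to s >= 0: a single cell with a
# symmetric tuning curve cannot distinguish s from -s.
grid = np.linspace(0.0, 3.0, 301)
est = []
for _ in range(2000):
    counts = rng.poisson(f(s_true), size=N)
    loglik = counts.sum() * np.log(f(grid)) - N * f(grid)
    est.append(grid[np.argmax(loglik)])

print("empirical variance:", np.var(est), " Cramer-Rao bound:", 1 / (N * J))
```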
17 Cooperative coding
[Figures: candidate population coding schemes, each panel plotting firing rate against s: labelled line, scalar coding, and distributed encoding.]
18 Cooperative coding
All of these are found in biological systems. Issues:
1. redundancy and robustness (not scalar)
2. efficiency (not labelled line)
3. local computation (not scalar or distributed)
4. multiple values (not scalar)
19 Coding in multiple dimensions
[Figure: Cartesian and distributed arrangements of tuning-curve centres over a two-dimensional stimulus space $(s_1, s_2)$.]
- Cartesian: efficient, but has problems with multiple simultaneous values.
- Distributed: can represent multiple values, but may require more neurons.
20 Cricket cercal system
Four interneurons with rectified-cosine tuning encode the wind-velocity direction v:
$$r_a(s) = r_a^{\max}\,[\cos(\theta - \theta_a)]_+ = r_a^{\max}\,[c_a^{\mathsf T} v]_+$$
with preferred directions satisfying $c_1^{\mathsf T} c_2 = 0$, $c_3 = -c_1$, $c_4 = -c_2$. Writing $\bar{r}_a = r_a / r_a^{\max}$:
$$\begin{pmatrix} \bar{r}_1 - \bar{r}_3 \\ \bar{r}_2 - \bar{r}_4 \end{pmatrix} = \begin{pmatrix} c_1^{\mathsf T} \\ c_2^{\mathsf T} \end{pmatrix} v
\quad\Rightarrow\quad
v = (c_1\ c_2) \begin{pmatrix} \bar{r}_1 - \bar{r}_3 \\ \bar{r}_2 - \bar{r}_4 \end{pmatrix} = \bar{r}_1 c_1 + \bar{r}_3 c_3 + \bar{r}_2 c_2 + \bar{r}_4 c_4 = \sum_{a=1}^{4} \bar{r}_a c_a$$
This is called population vector decoding.
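A sketch of this decoder (noiseless, with an arbitrary illustrative $r^{\max}$): with the orthogonal arrangement above, the population vector recovers the direction exactly.

```python
import numpy as np

# Preferred directions: c1, c2 orthogonal, c3 = -c1, c4 = -c2.
angles = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
C = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # rows are c_a
rmax = 40.0                                              # illustrative

def encode(v):
    """Rectified-cosine responses r_a = rmax * [c_a^T v]_+ for a unit vector v."""
    return rmax * np.maximum(C @ v, 0.0)

def decode(r):
    """Population vector: v_hat = sum_a (r_a / rmax) c_a."""
    return (r / rmax) @ C

theta = 1.2                                              # true wind direction
v = np.array([np.cos(theta), np.sin(theta)])
v_hat = decode(encode(v))
print("true:", theta, " decoded:", np.arctan2(v_hat[1], v_hat[0]))
```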
21 Motor cortex (simplified)
Cosine tuning, randomly distributed preferred directions. In general, population vector decoding works for:
- cosine tuning,
- Cartesian or dense (tightly packed) sets of preferred directions.
But:
- is it optimal?
- does it generalise (e.g. to Gaussian tuning curves)?
- how accurate is it?
22 Bayesian decoding
Take $n_a \sim \mathrm{Poisson}[f_a(s)]$, independently for different cells. Then
$$P(n \mid s) = \prod_a \frac{e^{-f_a(s)}\,\big(f_a(s)\big)^{n_a}}{n_a!}$$
and
$$\log P(s \mid n) = \sum_a \Big[-f_a(s) + n_a \log f_a(s) - \log n_a!\Big] + \log P(s) + \mathrm{const}$$
Assume $\sum_a f_a(s)$ is independent of s (a homogeneous population) and the prior is flat. Then
$$\frac{d}{ds}\log P(s \mid n) = \frac{d}{ds}\sum_a n_a \log f_a(s) = \sum_a n_a\,\frac{f_a'(s)}{f_a(s)}$$
23 Bayesian decoding
Now consider $f_a(s) = e^{-(s - s_a)^2/2\sigma^2}$, so that
$$\frac{f_a'(s)}{f_a(s)} = -\frac{s - s_a}{\sigma^2},$$
and set the derivative to zero:
$$\sum_a n_a (s - s_a)/\sigma^2 = 0 \quad\Rightarrow\quad \hat{s}_{\mathrm{MAP}} = \frac{\sum_a n_a s_a}{\sum_a n_a}$$
So the MAP estimate is a population average of preferred values, though not exactly a population vector.
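A minimal sketch of this estimator, assuming a dense uniform array of Gaussian tuning curves with independent Poisson counts (all parameters, including the peak rate F, are illustrative; scaling every $f_a$ by F leaves $\hat{s}_{\mathrm{MAP}}$ unchanged):

```python
import numpy as np

rng = np.random.default_rng(2)
s_a = np.linspace(-5.0, 5.0, 51)       # preferred values, dense and uniform
sigma, F = 1.0, 20.0                   # common width and peak rate
s_true = 0.7

rates = F * np.exp(-(s_true - s_a) ** 2 / (2 * sigma ** 2))   # f_a(s_true)
n = rng.poisson(rates)                                        # one count vector

s_map = (n * s_a).sum() / n.sum()      # spike-count-weighted preferred values
print("true s:", s_true, " MAP estimate:", s_map)
```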
24 Population Fisher Info
Fisher Informations for independent random variates add:
$$J_n(s) = \Big\langle -\frac{d^2}{ds^2}\log P(n \mid s) \Big\rangle = \Big\langle -\frac{d^2}{ds^2}\sum_a \log P(n_a \mid s) \Big\rangle = \sum_a J_{n_a}(s) = \sum_a \frac{f_a'(s)^2}{f_a(s)}$$
where the last step uses the single-cell Poisson result.
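Continuing the same illustrative array as above, the population Fisher Information is a direct sum over cells; its inverse square root gives the Cramér-Rao floor on decoding error.

```python
import numpy as np

s_a = np.linspace(-5.0, 5.0, 51)       # same illustrative array as above
sigma, F = 1.0, 20.0

def pop_fisher(s):
    f = F * np.exp(-(s - s_a) ** 2 / (2 * sigma ** 2))
    fp = -f * (s - s_a) / sigma ** 2
    return np.sum(fp ** 2 / f)         # J(s) = sum_a f_a'(s)^2 / f_a(s)

J = pop_fisher(0.7)
print("J(0.7):", J, " Cramer-Rao std floor:", 1 / np.sqrt(J))
```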
25 Optimal tuning properties
A considerable amount of work has been done in recent years on finding optimal properties of tuning curves for rate-based population codes. Here we reproduce one such argument (from Zhang and Sejnowski, 1999).
Consider a population of cells that codes the value of a D-dimensional stimulus s. Let the a-th cell emit r spikes in an interval τ with a probability distribution that is conditionally independent of the other cells (given s) and has the form
$$P_a(r \mid s, \tau) = S\big(r, f_a(s), \tau\big)$$
The tuning curve of the a-th cell, $f_a(s)$, has the form
$$f_a(s) = F\,\phi\big((\xi^a)^2\big); \qquad (\xi^a)^2 = \sum_{i=1}^{D} (\xi^a_i)^2; \qquad \xi^a_i = \frac{s_i - c^a_i}{\sigma}$$
where F is a maximal rate and the function φ is monotonically decreasing. The parameters $c^a$ and σ give the centre of the a-th tuning curve and the (common) width.
26 Optimal tuning properties
Now, the (ij)-th term of the Fisher Information matrix for the a-th cell is (by definition)
$$J^a_{ij}(s) = E\left[\frac{\partial}{\partial s_i}\log P_a(r \mid s, \tau)\;\frac{\partial}{\partial s_j}\log P_a(r \mid s, \tau)\right]$$
Applying the chain rule repeatedly, we find
$$\frac{\partial}{\partial s_i}\log P_a(r \mid s, \tau) = \frac{1}{S(r, f_a(s), \tau)}\frac{\partial}{\partial s_i} S(r, f_a(s), \tau) = \frac{S^{(2)}(r, f_a(s), \tau)}{S(r, f_a(s), \tau)}\,\frac{\partial f_a(s)}{\partial s_i}$$
(where $S^{(2)}$ indicates differentiation with respect to the second argument)
$$= \frac{S^{(2)}(r, f_a(s), \tau)}{S(r, f_a(s), \tau)}\,F\,\phi'\big((\xi^a)^2\big)\,\frac{\partial (\xi^a)^2}{\partial s_i} = \frac{S^{(2)}(r, f_a(s), \tau)}{S(r, f_a(s), \tau)}\,F\,\phi'\big((\xi^a)^2\big)\,\frac{2(s_i - c^a_i)}{\sigma^2}$$
27 Optimal tuning properties
So,
$$J^a_{ij}(s) = E\left[\left(\frac{S^{(2)}(r, f_a(s), \tau)}{S(r, f_a(s), \tau)}\right)^{\!2}\right] 4F^2 \big(\phi'((\xi^a)^2)\big)^2\,\frac{(s_i - c^a_i)(s_j - c^a_j)}{\sigma^4} = A_\phi\big((\xi^a)^2, F, \tau\big)\,\frac{(s_i - c^a_i)(s_j - c^a_j)}{\sigma^4}$$
where the function $A_\phi$ does not depend explicitly on σ.
28 Optimal tuning properties
We assumed neurons were independent, so Fisher Information adds. Approximate the sum by an integral over the tuning-curve centres, assuming a uniform density η of neurons:
$$J_{ij}(s) = \sum_a J^a_{ij}(s) \approx \int dc^a_1 \cdots \int dc^a_D\; \eta\, A_\phi\big((\xi^a)^2, F, \tau\big)\,\frac{(s_i - c^a_i)(s_j - c^a_j)}{\sigma^4}$$
Change variables $c^a_i \to \xi^a_i$ (so $dc^a_i = \sigma\,d\xi^a_i$):
$$= \sigma^D\,\frac{\eta}{\sigma^2}\int d\xi^a_1 \cdots \int d\xi^a_D\; A_\phi\big((\xi^a)^2, F, \tau\big)\,\xi^a_i\,\xi^a_j$$
Now, if $i \ne j$, the integrand is odd in both $\xi^a_i$ and $\xi^a_j$, and the integral vanishes. If $i = j$, the integral has some value $K_\phi(F, \tau, D)$, independent of σ. Thus
$$J_{ii} = \sigma^{D-2}\,\eta\,K_\phi(F, \tau, D)$$
and the total Fisher Information is proportional to $\sigma^{D-2}$.
29 Optimal tuning properties
Thus the optimal tuning width depends on the stimulus dimension:
- D = 1: Fisher Information grows as σ → 0 (although a lower limit is encountered when the tuning width falls below the inter-cell spacing);
- D = 2: J is independent of σ;
- D > 2: Fisher Information grows as σ → ∞ (the actual limit is set by the range of valid stimuli).
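A numerical sanity check of this scaling, under assumed Gaussian tuning ($\phi(u) = e^{-u/2}$) and Poisson noise, so that $J_{ii} = \sum_a (\partial f_a/\partial s_i)^2 / f_a$, with centres on a uniform grid; doubling σ should multiply J by $2^{D-2}$.

```python
import numpy as np

def pop_fisher_ii(sigma, D, spacing=0.1, extent=3.0, F=10.0):
    """J_11 at the centre of a uniform D-dimensional grid of Gaussian tuning
    curves with Poisson noise: J_11 = sum_a f_a * ((c_a1 - s1) / sigma^2)^2."""
    g = np.arange(-extent, extent + spacing, spacing)
    centres = np.stack(np.meshgrid(*([g] * D)), axis=-1).reshape(-1, D)
    d2 = (centres ** 2).sum(axis=1)                 # evaluate at s = 0
    f = F * np.exp(-d2 / (2 * sigma ** 2))
    return np.sum(f * (centres[:, 0] / sigma ** 2) ** 2)

for D in (1, 2, 3):
    ratio = pop_fisher_ii(0.6, D) / pop_fisher_ii(0.3, D)
    print(f"D={D}: log2 J(2*sigma)/J(sigma) = {np.log2(ratio):.2f} (expect {D - 2})")
```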
30 More...
- Correlated noise
- Extended s (feature maps etc.)
- Uncertainty