Design principles for contrast gain control from an information theoretic perspective
Yuguo Yu (Center for the Neural Basis of Cognition, Carnegie Mellon University)
Brian Potetz (Computer Science Department, Carnegie Mellon University)
Tai Sing Lee (Computer Science Department, Carnegie Mellon University)

Abstract

Contrast gain control is an important and common mechanism underlying the visual system's adaptation to the statistics of visual scenes. In this paper, we first show that the threshold and saturation determine the preferred contrast sensitivity as well as the maximum information coding capacity of the neuronal model. We then investigate the design principles underlying the adaptive behavior in contrast gain control phenomena using an adaptive linear-nonlinear model. We find that an adaptive rescaling mechanism predicted by information transmission maximization can explain a variety of contrast gain control phenomena observed in neurophysiological experiments, including the divisive input-output relations and the inverse power law relation between response gain and input contrast. Our results suggest that contrast gain control in visual systems might be designed for information maximization.

1 Introduction

Visual systems exhibit great flexibility in adapting their input-output functions to the mean [1] and the contrast [2,3] of luminance intensity in the visual environment. The amplitude gains of the transfer functions of visual neurons have been found to decrease with input variance [4-7]. The relationship between the kernel gain and the input variance has been found to follow an inverse power law [6,7]. In addition, the contrast response functions of visual cortical neurons have been found to adapt to the mean contrast by shifting along the log-contrast axis to match the range of the prevailing input signals [3,8,9].
These phenomena are called contrast gain control and have been observed in many different types of neurons in the sensory systems of many species, such as neurons in the retina [2,4,5,7], striate [3,6,8] and extrastriate visual cortex [9] of mammals, and fly H1 neurons [10,11]. Recently, a number of biophysical and neural models have been advanced to account for contrast gain control, including the normalization model [12], the synaptic depression model [13], and a more recent model based on background excitatory and inhibitory synaptic modulation [14,15]. Various biophysical factors that have been
implicated in gain control include threshold [12], synaptic depression [13], synaptic noise [14], dendritic saturation [15], long-term slow adaptation [3,4], and active ionic channels in spike generation [5,8]. While it is possible that these multiple biological factors and mechanisms co-exist to affect various aspects of contrast gain adaptation [16], the rules by which the various factors are adjusted to mediate gain control, and the principles governing the determination of these factors, remain unclear. In this paper, we investigate the basic biophysical causes and the computational principles underlying contrast gain control by studying a cascade model of an adaptive linear kernel followed by a static nonlinearity.

2 Model and Analysis

Fig. 1. The adaptive linear-nonlinear (LN) model consists of an adaptive linear filter h(t) followed by a static nonlinearity g(.). The amplitude of h(t) is scaled by β, which acts as an adaptive mechanism. x(t) is the response of the linear filter h(t); y(t) is the output.

Recent studies [5-7] suggested that the adaptive behaviors observed in contrast gain control experiments can be modeled by an adaptive linear kernel cascaded with a static nonlinearity (see Fig. 1). Here, we set the adaptive linear function to

h(t) = β(σ) sin(2πt/τ_a) exp(−t/τ_b),   (1)

with τ_a = 80 ms and τ_b = 100 ms. We assume here that there exists an adaptive mechanism which can maximize the mutual information between each input signal s(t) and the output y(t) of the neuron by adjusting the adaptive rescaling factor β(σ). The linear response x(t) is given by

x(t) = ∫₀^∞ h(τ) s(t − τ) dτ.

The nonlinearity is given by

y(t) = g(x(t)) = { 0, if x(t) < θ;  x(t) − θ, if θ ≤ x(t) < η;  η − θ, if x(t) ≥ η },   (2)

where θ is the response threshold, η is the saturation level, and y(t) is the response of the neuron. We use a Gaussian white noise stimulus s(t) with zero mean and SD σ_s as the input signal.
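As a concrete illustration, the LN cascade of Eqs. (1) and (2) can be sketched in a few lines. This is a minimal sketch: the time step, signal length, and the parameter values used below are illustrative choices, not values from the paper.

```python
import numpy as np

def make_kernel(beta=1.0, tau_a=0.08, tau_b=0.10, dt=0.001, T=0.6):
    """Linear kernel of Eq. (1): h(t) = beta * sin(2*pi*t/tau_a) * exp(-t/tau_b)."""
    t = np.arange(0.0, T, dt)
    return beta * np.sin(2.0 * np.pi * t / tau_a) * np.exp(-t / tau_b)

def ln_response(s, h, theta, eta, dt=0.001):
    """Linear filtering followed by the static threshold/saturation nonlinearity g."""
    x = np.convolve(s, h)[: len(s)] * dt          # x(t) = int h(tau) s(t - tau) dtau
    return np.clip(x - theta, 0.0, eta - theta)   # g(x): 0 below theta, saturates at eta
```

The output is bounded between 0 and η − θ by construction, which is what limits the coding capacity discussed below.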
Its probability density function (PDF) is given by

p(s) = (1/(√(2π) σ_s)) exp(−s²/(2σ_s²)).

The linear response x(t) also has a Gaussian distribution, with PDF

p(x) = (1/(√(2π) σ_x)) exp(−x²/(2σ_x²)),

where σ_x is given by σ_x² = ⟨x²⟩ = σ_s² ∫₀^∞ h²(τ) dτ, and ⟨...⟩ denotes the time average. This is the adaptive LN model.

3 Gain Analysis

First we fixed the adaptive factor β(σ) = 1 and studied only the role of the nonlinearity in the sensory coding process. In experimental studies, the Wiener kernel method [17,18] is typically used to recover the linear transfer function ĥ(t) of the investigated
system based on the input s(t) and the response y(t). What is the relationship between the true linear function h(t) and the recovered linear kernel ĥ(t) in the presence of the static nonlinearity? According to Bussgang's theorem [19], for any memoryless nonlinear system y = g(x) with an input signal drawn from a Gaussian distribution, K(f), the Fourier transform of the optimal linear transfer function specifying the input-output relationship of the static nonlinearity g(x), is given by

K(f) = Y(f)X*(f) / (X(f)X*(f)) = ⟨x g(x)⟩ / σ_x²,   (3)

where Y(f) is the Fourier transform of the output y(t), X(f) is the Fourier transform of x(t), and * stands for the complex conjugate. For the entire cascade model, the optimal linear transfer function T(f), the Fourier transform of the resultant linear kernel ĥ(t), is given by

T(f) = Y(f)S*(f) / (S(f)S*(f)),   (4)

where S(f) is the Fourier transform of the input signal s(t). Combining these equations, we have

T(f) = H(f) K(f) = H(f) ⟨x g(x)⟩ / σ_x².   (5)

This indicates that the entire effect of the static nonlinearity on the recovered Wiener kernel is simply to introduce a gain scaling factor α = ⟨x g(x)⟩ / σ_x² relative to the original linear kernel. Therefore, the recovered linear kernel is ĥ(t) = α h(t), where the gain factor α quantifies how the recovered linear kernel ĥ(t) is affected by the threshold, the saturation, and the standard deviation of the stimulus. The gain factor can be determined by

α(σ_s) = ⟨x g(x)⟩ / σ_x² = [∫_θ^η x(x − θ) p(x) dx + ∫_η^∞ x(η − θ) p(x) dx] / (σ_s² ∫₀^∞ h²(τ) dτ).   (6)

Performing the integrations and simplifying yields

α(σ_s) = (1/2)[erf(η/(√2 σ_x)) − erf(θ/(√2 σ_x))] = P[x(t) ∈ [θ, η]].   (7)

The basic conclusion of this analysis is that the gain of the measured effective transfer function will change with input variance due to the effect of the static nonlinearity, even though the parameters of the model, θ, η and β, are fixed. To illustrate this phenomenon, we fix β = 1, θ = 5 and η = 40, and plot the gain α for signals with different σ_s according to the analytical equation.
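Eq. (7), and the identity α = ⟨x g(x)⟩/σ_x² from Eq. (6), are easy to check numerically. The sketch below does this by Monte Carlo; the values θ = 1, η = 5 and σ_x = 2 are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from math import erf, sqrt

def gain_alpha(sigma_x, theta, eta):
    """Eq. (7): alpha = 0.5*(erf(eta/(sqrt(2)*sigma_x)) - erf(theta/(sqrt(2)*sigma_x))),
    the probability that the filter output x(t) lies in the linear range [theta, eta)."""
    return 0.5 * (erf(eta / (sqrt(2.0) * sigma_x)) - erf(theta / (sqrt(2.0) * sigma_x)))

# Monte Carlo check of Eq. (6): alpha = <x g(x)> / sigma_x^2 for the
# threshold/saturation nonlinearity g of Eq. (2).
rng = np.random.default_rng(0)
theta, eta, sigma_x = 1.0, 5.0, 2.0
x = rng.normal(0.0, sigma_x, 1_000_000)
g = np.clip(x - theta, 0.0, eta - theta)
alpha_mc = np.mean(x * g) / sigma_x**2
```

The same function also reproduces the non-monotonic gain tuning: α is small for very small and very large σ_x, with a maximum in between.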
Fig.a shows the resultant linear kernel h (t) (the inverse Fourier transform of T(f)) is heavily dependent on the value of. Interestingly, is not monotonic, it increases with in the small range, reaches a maimum, and decreases with a further increase in (circles in Fig.b). 0 (6) To confirm these analytical results, we applied the standard Wiener kernel technique [18] to recover the linear wiener kernel for the whole cascade model with = 1 based on the input s( t ) and the output y( t ). The amplitude gain of the recovered kernel (triangles in Fig.b) in this computational study is shown to match well with the
theoretical prediction. The recovered kernel ĥ(t) exhibited only gain scaling relative to the linear kernel h(t); there is no temporal dilation or contraction of the linear kernel. The computational study therefore confirms the correctness of our theoretical results. The optimal σ_opt at which the gain is maximal can be obtained by differentiating Eq. (7):

σ_opt = √{ (η² − θ²) / [2 (ln η − ln θ) ∫₀^∞ h²(τ) dτ] }.   (8)

The obtained σ_opt is a function of the saturation η and the threshold θ. This might provide a mechanism and rules for a neuron to adjust its transfer function and gain tuning curve according to the statistical context of the input signals. However, the range of adjustment of σ_opt by changing θ and η is rather limited.

4 Information Analysis

Fig. 2. (a) Recovered kernels for θ = 3 and η = 50 for input stimuli of different σ_s. (b) Gain α as a function of σ_s, exhibiting a tuning curve. This tuning curve is predicted by the theoretical analysis, and is confirmed by the simulation result. (c) I_m(θ,η) as a function of σ_s for various θ and η. (d) I_m(θ,η) as a function of α(θ,η) for various σ_s.

The distortion or gain tuning effect due to the nonlinearity can indeed affect the information encoding process of the neuron. We now use Shannon's information theory [20] to quantify the information transmission of the LN model. For a system with input s(t) and output y(t), the total output entropy

H(y) = −Σ_y p(y) log p(y)   (9)

quantifies the system's theoretical limit on information transfer capacity, while the mutual information, in discrete form [21],

I_m = H(y) − H(y|s) = −Σ_y p(y) log p(y) + Σ_{s,y} p(s) p(y|s) log p(y|s),   (10)

measures how much of that capacity is utilized to transmit and encode the input signal. H(y|s) is the noise entropy, accounting for the variability in the response that is not due to variations in the stimulus but comes from noise sources. For simplicity, we consider the noiseless case, where H(y|s) = 0.
In this case, the mutual information equals the output entropy, I_m = H(y). The probability distribution of the output response y(t) can be derived from Eqs. (1) and (2), and we can compute the entropy of y(t) directly from this distribution using Eq. (10).
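In the noiseless case the computation of I_m = H(y) reduces to a plug-in entropy estimate over a discretized output. A sketch, in which the bin count, sample size, and parameter values are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def output_entropy_bits(y, n_bins=64):
    """Noiseless case: I_m = H(y). Plug-in estimate from a histogram of the output."""
    p = np.histogram(y, bins=n_bins)[0] / len(y)
    p = p[p > 0]                       # 0 log 0 := 0
    return float(-np.sum(p * np.log2(p)))

def info_vs_sigma(sigma_x, theta=1.0, eta=5.0, n=200_000, seed=0):
    """I_m for a Gaussian filter output of SD sigma_x passed through g of Eq. (2)."""
    x = np.random.default_rng(seed).normal(0.0, sigma_x, n)
    return output_entropy_bits(np.clip(x - theta, 0.0, eta - theta))
```

Sweeping `sigma_x` reproduces the tuning curve of Fig. 2c: I_m is small for very small σ (output mostly pinned at zero) and for very large σ (output mostly pinned at the two extremes), with a maximum at an intermediate σ.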
We fix β = 1 and compute the mutual information I_m as a function of the stimulus σ_s to examine the effect of the nonlinearity. Fig. 2c shows that the mutual information, in a way similar to the effective gain, varies nonlinearly with input σ_s, exhibiting a tuning curve with a maximum at an intermediate σ_s. This optimal σ_s is denoted by σ_opt, corresponding to the signals that induce maximum information transmission through the system. For a fixed σ_s, I_m increases with an increase in the saturation value η or with a decrease in the threshold value θ. This suggests that any nonlinear system with threshold and saturation properties can best encode or transmit signals within a particular range of σ_s, and will not adequately encode signals outside this range without adaptation of its parameters. In fact, the mutual information I_m is roughly proportional to the gain factor α (see Fig. 2d), suggesting that efficient information encoding and gain maximization are tightly correlated.

5 Information maximization in the adaptive LN model

Fig. 2c shows that for the static model (i.e., β = 1 fixed for all inputs), there exists an optimal input distribution with σ_opt that induces maximal information transmission (recall that σ_opt also maximizes gain). To maintain the maximal information rate for any given input, we propose an adaptive mechanism that rescales the amplitude of the linear kernel in the LN cascade so that the output of the linear kernel x(t) is effectively adjusted to operate in the optimal regime of the given static nonlinearity. Let the rescaling factor be β_adapt(σ); then the linear kernel is

h_A(t) = β_adapt(σ) sin(2πt/τ_a) exp(−t/τ_b),   (11)

where β_adapt(σ) is determined as the scaling factor necessary for maximizing information transmission for each input variance. The precise biophysical mechanism mediating this effect is not known at present, and could presumably be implemented by a variety of biophysical or network feedback mechanisms.
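Because the filter is linear, scaling the kernel amplitude by a factor inversely proportional to the input SD leaves the distribution of x(t) unchanged across input ensembles, which is exactly what keeps the nonlinearity operating at its optimum. A sketch (the target SD of 2.7 is an arbitrary stand-in for σ_opt, and the kernel parameters follow Eq. (1)):

```python
import numpy as np

def filter_output_sd(sigma_s, beta, dt=0.001, n=100_000, seed=0):
    """SD of the filter output x(t) when the kernel of Eq. (1) is scaled by beta
    and driven by Gaussian white noise of SD sigma_s."""
    t = np.arange(0.0, 0.6, dt)
    h = beta * np.sin(2.0 * np.pi * t / 0.08) * np.exp(-t / 0.10)  # tau_a=80ms, tau_b=100ms
    s = sigma_s * np.random.default_rng(seed).standard_normal(n)
    x = np.convolve(s, h)[:n] * dt
    return float(x.std())
```

With beta chosen as 2.7/sigma_s, the output SD is the same for weak and strong inputs, so the operating point of g(.) never moves.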
Our primary task here is to elucidate the rules underlying the choice of the scaling factor, and the ramifications of such a choice for the contrast gain control phenomena. We propose that the adaptive rescaling mechanism essentially chooses β_adapt(σ) = σ_opt/σ for each σ, so that the maximum information transmission capacity I_max of the system is reached. Fig. 3a shows that the information transmission of such an adaptive LN model is maintained at the highest level I_max, independent of the variance of the input signal. Note that for the static LN model with β = 1, I_m varies with σ, with only one global maximum I_max at a particular σ_opt (see Fig. 3a). The adaptive model thus ensures that the capacity of the system is fully utilized in different statistical contexts of the environment. The maximum information transmission is constrained only by the threshold and the saturation level (Fig. 3b): the lower the threshold (or the higher the saturation), the higher the maximum information rate. Therefore, the total gain of the adaptive LN model, i.e., the amplitude of the linear kernel recovered from input s(t) and output y(t), comes from two effects: gain due to the nonlinearity (see Eq. (6)), i.e., α, and gain due to the true adjustment β_adapt. Thus, the total gain factor for the adaptive LN model is

γ(σ) = β_adapt α(σ_opt) = α(σ_opt) σ_opt / σ.   (12)

Fig. 3c demonstrates the inverse power-law relationship between the input σ and the total gain of the adaptive LN model. This inverse power-law relationship in the gain-variance curve has been observed in several recent experimental studies [6,7] (see Fig. 3d). It is important to note that without the information-maximizing adaptive
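Eqs. (8) and (12) combine into a closed-form total gain whose log-log slope against σ is exactly −1. A sketch assuming unit kernel energy (∫₀^∞ h²(τ) dτ = 1, so σ_x = σ_s), with illustrative θ and η:

```python
from math import erf, log, sqrt

def total_gain(sigma_s, theta=1.0, eta=5.0):
    """Eq. (12): gamma(sigma) = alpha(sigma_opt) * sigma_opt / sigma.
    Assumes unit kernel energy, so sigma_x = sigma_s; theta, eta are illustrative."""
    sigma_opt = sqrt((eta**2 - theta**2) / (2.0 * (log(eta) - log(theta))))   # Eq. (8)
    alpha_opt = 0.5 * (erf(eta / (sqrt(2.0) * sigma_opt))
                       - erf(theta / (sqrt(2.0) * sigma_opt)))                # Eq. (7)
    return alpha_opt * sigma_opt / sigma_s
```

Since the numerator is a constant fixed by θ and η, γ(σ) is proportional to 1/σ: increasing σ tenfold decreases the gain tenfold, i.e., slope −1 on log-log axes.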
rescaling, the relationship between the response gain and σ is a bell-shaped curve (as shown in Fig. 2) rather than an inverse power law. Our analytical results therefore provide the connection between the empirically observed inverse power law (Fig. 3d) and the principle of information maximization.

Fig. 3. (a) I_m varies with input σ in the static (fixed β) LN model (θ = 20 and η = 50) but is kept at the maximum rate in the adaptive model by adapting β_adapt for each σ. (b) I_m is maintained at the maximum level for various θ and η. (c) The amplitude of the recovered kernel follows an inverse power-law relationship with input σ. (d) Experimental data by Truchard et al. [6] show that monocular gain decreases with stimulus contrast (from 2.5% to 50%) for two recorded cells. They are close to the inverse power-law relationship (shown for comparison as the dashed line with slope −1).

6 Adaptation of the contrast response functions

To recover the input-output relationship, experimenters typically use a stimulus that keeps an input attribute constant for a period of time Δt, and obtain the output by averaging the response of the neuron during that period [3]. We now proceed to investigate the contrast response function of the adaptive LN model using similar signals. Can adaptive rescaling by β_adapt in the adaptive LN model explain the observed adaptive shift of the contrast response curve as a function of the mean contrast? To answer this question, we simulated this experiment with our cascade LN model, using one-dimensional temporal sinewave gratings of different contrasts (see the black line in Fig. 4a) as input to the neuron. Here, a sinewave grating with a temporal frequency of 10 Hz can be considered the carrier signal (see the gray line in Fig. 4a), amplitude-modulated by the input contrast signal c(t). The signal modulated by each contrast value c(t) is presented for Δt = 4 seconds. The contrast values are drawn from a Gaussian white distribution with standard deviation σ_c.
σ_c determines the mean contrast level of the contrast signal in each sequence, which lasts for 1000 seconds. The input-output curves are obtained from sequences of four different mean contrast levels, with σ_c = 1, 5, 10, and 20 respectively. To plot the input-output curve, the model's response in each time bin Δt is averaged to obtain a mean output value for each contrast value. The contrast response functions (I/O curves) change their slopes for the four different mean contrast levels. In a log-contrast plot, this change in slope is manifested as a horizontal shift of the contrast response function (Fig. 4b). This behavior is qualitatively similar to the neurophysiological observations [3,8,9]. When the input contrast is divided by the mean contrast σ_c, the contrast response functions become superimposed on top of each other, demonstrating that the
adaptation is a divisive effect (see Fig. 4c). Thus, the predicted rescaling of the linear kernel based on information maximization can explain the divisive contrast gain adaptation observed in neurophysiological experiments [3,8,9]. A similar rescaling of input-output relations has also been observed in recent experimental work on H1 neurons of the blowfly [10,11], which provided direct evidence that the scaling of the input-output function is set to maximize information transmission for each distribution of signals. Our theoretical results thus demonstrate the underlying connection among these experimental findings on contrast gain control from an information theoretic perspective.

Figure 4. (a) An example of an input contrast signal with sinewave modulation (temporal sine frequency 10 Hz). The contrast c(t) (magnitude of the sinewave) changes every 4 seconds. The standard deviation of the contrast levels for this sequence is σ_c = 1. (b) Log-log plot of the contrast response functions, i.e., c(t) ~ y(t), recovered from four classes of input contrast signals with σ_c = 1, 3, 5, and 10 respectively. (c) This adaptive shift is a divisive effect, as the contrast response functions collapse together when the input contrast is divided by the mean contrast σ_c.

7 Discussion

In summary, we first isolated the effect of the nonlinearity on contrast gain tuning. We found that the threshold and saturation determine the selectivity and sensitivity of the neuron to the statistics of the input signals: input signals with optimal variance maximize the sensitivity of the system and result in maximal information transmission. Next, we studied the relationship between the adaptive linear kernel and contrast gain control phenomena by employing the principle of information maximization. For any signal with a given variance, the linear kernel amplitude can be adjusted to an optimal level by an adaptive mechanism so that information transmission is maximized.
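The divisive collapse described above follows directly from this rescaling rule: once the kernel gain carries a factor 1/σ_c, the time-averaged response to a sinusoidal carrier depends on contrast only through the ratio c/σ_c. A minimal sketch (θ, η, and the stand-in value σ_opt = 2.7 are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def mean_response(c, sigma_c, theta=1.0, eta=5.0, sigma_opt=2.7):
    """Time-averaged output for a sinusoidal carrier of contrast c when the
    kernel gain is rescaled by beta_adapt = sigma_opt / sigma_c."""
    phase = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
    x = (sigma_opt / sigma_c) * c * np.sin(phase)   # filter output, up to a fixed gain
    y = np.clip(x - theta, 0.0, eta - theta)        # static nonlinearity of Eq. (2)
    return float(y.mean())
```

Responses at different mean contrasts coincide exactly when contrast is expressed relative to σ_c, which is the divisive superposition of Fig. 4c.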
This model successfully reproduces three important phenomena observed in earlier experiments on contrast gain control: 1) the inverse power-law decay of the linear kernel gain with input contrast [6,7] (see Fig. 3c); 2) the divisive adjustment of the contrast response functions in adaptation to different mean contrast levels [3,8,9] (see Fig. 4b); 3) the rescaling of the input/output relationship for maximal information transmission [10,11] (see Fig. 4c). Our theoretical work therefore provides a coherent framework for understanding why the various experimental observations listed above are in fact evidence supporting the proposal that contrast gain control is a mechanism for information maximization. Further experimental investigations are needed to clarify the underlying biological factors and mechanisms for this optimizing adaptation of the linear kernel.

Acknowledgments

This research is supported by NSF CAREER, NIH P41PR for biomedical supercomputing, and NIH MH64445.
References

[1] Creutzfeldt, O.D. (1972). Transfer function of the retina. Electroencephalogr. Clin. Neurophysiol. Suppl. 31.
[2] Shapley, R.M. & Victor, J.D. (1979). The contrast gain control of the cat retina. Vision Res. 19.
[3] Ohzawa, I., Sclar, G. & Freeman, R.D. (1985). Contrast gain control in the cat's visual system. J. Neurophysiol. 54.
[4] Smirnakis, S.M., Berry, M.J., Warland, D.K., Bialek, W. & Meister, M. (1997). Adaptation of retinal processing to image contrast and spatial scale. Nature 386.
[5] Kim, K.J. & Rieke, F. (2001). Temporal contrast adaptation in the input and output signals of salamander retinal ganglion cells. J. Neurosci. 21.
[6] Truchard, A.M., Ohzawa, I. & Freeman, R.D. (2000). Contrast gain control in the visual cortex: monocular versus binocular mechanisms. J. Neurosci. 20.
[7] Chander, D. & Chichilnisky, E.J. (2001). Adaptation to temporal contrast in primate and salamander retina. J. Neurosci. 21.
[8] Sanchez-Vives, M.V., Nowak, L.G. & McCormick, D.A. (2000). Membrane mechanisms underlying contrast adaptation in cat area 17 in vivo. J. Neurosci. 20.
[9] Kohn, A. & Movshon, J.A. (2003). Neuronal adaptation to visual motion in area MT of the macaque. Neuron 39.
[10] Brenner, N., Bialek, W. & de Ruyter van Steveninck, R. (2000). Adaptive rescaling maximizes information transmission. Neuron 26.
[11] Fairhall, A.L., Lewen, G.D., Bialek, W. & de Ruyter van Steveninck, R. (2001). Efficiency and ambiguity in an adaptive neural code. Nature 412.
[12] Heeger, D.J. (1992). Normalization of cell responses in cat striate cortex. Vis. Neurosci. 9.
[13] Abbott, L.F., Varela, J.A., Sen, K. & Nelson, S.B. (1997). Synaptic depression and cortical gain control. Science 275.
[14] Chance, F.S., Abbott, L.F. & Reyes, A.D. (2002). Gain modulation from background synaptic input. Neuron 35.
[15] Prescott, S.A. & De Koninck, Y. (2003). Gain control of firing rate by shunting inhibition: roles of synaptic noise and dendritic saturation. Proc. Natl. Acad. Sci. USA 100.
[16] Demb, J.B. (2002). Multiple mechanisms for contrast adaptation in the retina. Neuron 36.
[17] Lee, Y.W. & Schetzen, M. (1965). Measurement of the Wiener kernels of a non-linear system by cross-correlation. Int. J. Control 2.
[18] Marmarelis, V.Z. (1993). Identification of nonlinear biological systems using Laguerre expansions of kernels. Annals of Biomedical Engineering 21.
[19] Bendat, J.S. (1990). Nonlinear System Analysis and Identification from Random Data. John Wiley and Sons, New York.
[20] Shannon, C.E. & Weaver, W. (1949). The Mathematical Theory of Communication. Univ. of Illinois Press, Urbana, IL.
[21] Dayan, P. & Abbott, L.F. (2001). Theoretical Neuroscience. MIT Press, Cambridge, MA, Chap. 4.
More informationNeural Modeling and Computational Neuroscience. Claudio Gallicchio
Neural Modeling and Computational Neuroscience Claudio Gallicchio 1 Neuroscience modeling 2 Introduction to basic aspects of brain computation Introduction to neurophysiology Neural modeling: Elements
More informationModelling and Analysis of Retinal Ganglion Cells Through System Identification
Modelling and Analysis of Retinal Ganglion Cells Through System Identification Dermot Kerr 1, Martin McGinnity 2 and Sonya Coleman 1 1 School of Computing and Intelligent Systems, University of Ulster,
More informationCombining biophysical and statistical methods for understanding neural codes
Combining biophysical and statistical methods for understanding neural codes Liam Paninski Department of Statistics and Center for Theoretical Neuroscience Columbia University http://www.stat.columbia.edu/
More informationComparison of objective functions for estimating linear-nonlinear models
Comparison of objective functions for estimating linear-nonlinear models Tatyana O. Sharpee Computational Neurobiology Laboratory, the Salk Institute for Biological Studies, La Jolla, CA 937 sharpee@salk.edu
More informationVisual motion processing and perceptual decision making
Visual motion processing and perceptual decision making Aziz Hurzook (ahurzook@uwaterloo.ca) Oliver Trujillo (otrujill@uwaterloo.ca) Chris Eliasmith (celiasmith@uwaterloo.ca) Centre for Theoretical Neuroscience,
More informationDimensionality reduction in neural models: an information-theoretic generalization of spiketriggered average and covariance analysis
to appear: Journal of Vision, 26 Dimensionality reduction in neural models: an information-theoretic generalization of spiketriggered average and covariance analysis Jonathan W. Pillow 1 and Eero P. Simoncelli
More informationHigh-dimensional geometry of cortical population activity. Marius Pachitariu University College London
High-dimensional geometry of cortical population activity Marius Pachitariu University College London Part I: introduction to the brave new world of large-scale neuroscience Part II: large-scale data preprocessing
More informationBroadband coding with dynamic synapses
Broadband coding with dynamic synapses Benjamin Lindner Max-Planck-Institute for the Physics of Complex Systems, Nöthnitzer Str. 38 1187 Dresden, Germany André Longtin Department of Physics and Center
More informationDecoding Poisson Spike Trains by Gaussian Filtering
LETTER Communicated by Paul Tiesinga Decoding Poisson Spike Trains by Gaussian Filtering Sidney R. Lehky sidney@salk.edu Computational Neuroscience Lab, Salk Institute, La Jolla, CA 92037, U.S.A. The temporal
More informationarxiv: v1 [q-bio.nc] 9 Mar 2016
Temporal code versus rate code for binary Information Sources Agnieszka Pregowska a, Janusz Szczepanski a,, Eligiusz Wajnryb a arxiv:1603.02798v1 [q-bio.nc] 9 Mar 2016 a Institute of Fundamental Technological
More informationComparison of receptive fields to polar and Cartesian stimuli computed with two kinds of models
Supplemental Material Comparison of receptive fields to polar and Cartesian stimuli computed with two kinds of models Motivation The purpose of this analysis is to verify that context dependent changes
More informationLateral organization & computation
Lateral organization & computation review Population encoding & decoding lateral organization Efficient representations that reduce or exploit redundancy Fixation task 1rst order Retinotopic maps Log-polar
More informationModeling and Characterization of Neural Gain Control. Odelia Schwartz. A dissertation submitted in partial fulfillment
Modeling and Characterization of Neural Gain Control by Odelia Schwartz A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy Center for Neural Science
More informationA Three-dimensional Physiologically Realistic Model of the Retina
A Three-dimensional Physiologically Realistic Model of the Retina Michael Tadross, Cameron Whitehouse, Melissa Hornstein, Vicky Eng and Evangelia Micheli-Tzanakou Department of Biomedical Engineering 617
More informationThe functional organization of the visual cortex in primates
The functional organization of the visual cortex in primates Dominated by LGN M-cell input Drosal stream for motion perception & spatial localization V5 LIP/7a V2 V4 IT Ventral stream for object recognition
More informationNeural characterization in partially observed populations of spiking neurons
Presented at NIPS 2007 To appear in Adv Neural Information Processing Systems 20, Jun 2008 Neural characterization in partially observed populations of spiking neurons Jonathan W. Pillow Peter Latham Gatsby
More informationNeurophysiology of a VLSI spiking neural network: LANN21
Neurophysiology of a VLSI spiking neural network: LANN21 Stefano Fusi INFN, Sezione Roma I Università di Roma La Sapienza Pza Aldo Moro 2, I-185, Roma fusi@jupiter.roma1.infn.it Paolo Del Giudice Physics
More informationLinearization of F-I Curves by Adaptation
LETTER Communicated by Laurence Abbott Linearization of F-I Curves by Adaptation Bard Ermentrout Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, U.S.A. We show that negative
More informationNeural information often passes through many different
Transmission of population coded information Alfonso Renart, and Mark C. W. van Rossum Instituto de Neurociencias de Alicante. Universidad Miguel Hernndez - CSIC 03550 Sant Joan d Alacant, Spain, Center
More informationMethods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits
Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits Wolfgang Maass, Robert Legenstein, Nils Bertschinger Institute for Theoretical Computer Science Technische
More informationWhat is the neural code? Sekuler lab, Brandeis
What is the neural code? Sekuler lab, Brandeis What is the neural code? What is the neural code? Alan Litke, UCSD What is the neural code? What is the neural code? What is the neural code? Encoding: how
More informationMid Year Project Report: Statistical models of visual neurons
Mid Year Project Report: Statistical models of visual neurons Anna Sotnikova asotniko@math.umd.edu Project Advisor: Prof. Daniel A. Butts dab@umd.edu Department of Biology Abstract Studying visual neurons
More informationWhen is an Integrate-and-fire Neuron like a Poisson Neuron?
When is an Integrate-and-fire Neuron like a Poisson Neuron? Charles F. Stevens Salk Institute MNL/S La Jolla, CA 92037 cfs@salk.edu Anthony Zador Salk Institute MNL/S La Jolla, CA 92037 zador@salk.edu
More informationActivity Driven Adaptive Stochastic. Resonance. Gregor Wenning and Klaus Obermayer. Technical University of Berlin.
Activity Driven Adaptive Stochastic Resonance Gregor Wenning and Klaus Obermayer Department of Electrical Engineering and Computer Science Technical University of Berlin Franklinstr. 8/9, 187 Berlin fgrewe,obyg@cs.tu-berlin.de
More informationTransformation of stimulus correlations by the retina
Transformation of stimulus correlations by the retina Kristina Simmons (University of Pennsylvania) and Jason Prentice, (now Princeton University) with Gasper Tkacik (IST Austria) Jan Homann (now Princeton
More informationPopulation Coding in Retinal Ganglion Cells
Population Coding in Retinal Ganglion Cells Reza Abbasi Asl Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-218-23 http://www2.eecs.berkeley.edu/pubs/techrpts/218/eecs-218-23.html
More informationIntrinsic gain modulation and adaptive neural coding. Abstract. Sungho Hong 1,2, Brian Nils Lundstrom 1 and Adrienne L. Fairhall 1
Intrinsic gain modulation and adaptive neural coding Sungho Hong 1,, Brian Nils Lundstrom 1 and Adrienne L. Fairhall 1 1 Physiology and Biophysics Department University of Washington Seattle, WA 98195-79
More informationLearning quadratic receptive fields from neural responses to natural signals: information theoretic and likelihood methods
Learning quadratic receptive fields from neural responses to natural signals: information theoretic and likelihood methods Kanaka Rajan Lewis-Sigler Institute for Integrative Genomics Princeton University
More informationarxiv:physics/ v1 [physics.data-an] 7 Jun 2003
Entropy and information in neural spike trains: Progress on the sampling problem arxiv:physics/0306063v1 [physics.data-an] 7 Jun 2003 Ilya Nemenman, 1 William Bialek, 2 and Rob de Ruyter van Steveninck
More informationBiological Modeling of Neural Networks
Week 4 part 2: More Detail compartmental models Biological Modeling of Neural Networks Week 4 Reducing detail - Adding detail 4.2. Adding detail - apse -cable equat Wulfram Gerstner EPFL, Lausanne, Switzerland
More informationTemporal whitening by power-law adaptation in neocortical neurons
Temporal whitening by power-law adaptation in neocortical neurons Christian Pozzorini, Richard Naud, Skander Mensi and Wulfram Gerstner School of Computer and Communication Sciences and School of Life
More informationThe homogeneous Poisson process
The homogeneous Poisson process during very short time interval Δt there is a fixed probability of an event (spike) occurring independent of what happened previously if r is the rate of the Poisson process,
More informationFeatures and dimensions: Motion estimation in fly vision
Features and dimensions: Motion estimation in fly vision William Bialek a and Rob R. de Ruyter van Steveninck b a Joseph Henry Laboratories of Physics, and Lewis Sigler Institute for Integrative Genomics
More informationStatistical models for neural encoding, decoding, information estimation, and optimal on-line stimulus design
Statistical models for neural encoding, decoding, information estimation, and optimal on-line stimulus design Liam Paninski Department of Statistics and Center for Theoretical Neuroscience Columbia University
More informationNeural variability and Poisson statistics
Neural variability and Poisson statistics January 15, 2014 1 Introduction We are in the process of deriving the Hodgkin-Huxley model. That model describes how an action potential is generated by ion specic
More informationInformation Theory and Neuroscience II
John Z. Sun and Da Wang Massachusetts Institute of Technology October 14, 2009 Outline System Model & Problem Formulation Information Rate Analysis Recap 2 / 23 Neurons Neuron (denoted by j) I/O: via synapses
More informationDimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis
Journal of Vision (2006) 6, 414 428 http://journalofvision.org/6/4/9/ 414 Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis
More informationAdaptation to a 'spatial-frequency doubled' stimulus
Perception, 1980, volume 9, pages 523-528 Adaptation to a 'spatial-frequency doubled' stimulus Peter Thompson^!, Brian J Murphy Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
More informationarxiv:cond-mat/ v2 27 Jun 1997
Entropy and Information in Neural Spike rains Steven P. Strong, 1 Roland Koberle, 1,2 Rob R. de Ruyter van Steveninck, 1 and William Bialek 1 1 NEC Research Institute, 4 Independence Way, Princeton, New
More informationIntroduction to neural spike train data for phase-amplitude analysis
Electronic Journal of Statistics Vol. 8 (24) 759 768 ISSN: 935-7524 DOI:.24/4-EJS865 Introduction to neural spike train data for phase-amplitude analysis Wei Wu Department of Statistics, Florida State
More informationunit P[r x*] C decode encode unit P[x r] f(x) x D
Probabilistic Interpretation of Population Codes Richard S. Zemel Peter Dayan Aleandre Pouget zemel@u.arizona.edu dayan@ai.mit.edu ale@salk.edu Abstract We present a theoretical framework for population
More informationIntroduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten. Lecture 2a. The Neuron - overview of structure. From Anderson (1995)
Introduction to Neural Networks U. Minn. Psy 5038 Spring, 1999 Daniel Kersten Lecture 2a The Neuron - overview of structure From Anderson (1995) 2 Lect_2a_Mathematica.nb Basic Structure Information flow:
More informationSecond Order Dimensionality Reduction Using Minimum and Maximum Mutual Information Models
Using Minimum and Maximum Mutual Information Models Jeffrey D. Fitzgerald 1,2, Ryan J. Rowekamp 1,2, Lawrence C. Sincich 3, Tatyana O. Sharpee 1,2 * 1 Computational Neurobiology Laboratory, The Salk Institute
More informationSUPPLEMENTARY INFORMATION
Spatio-temporal correlations and visual signaling in a complete neuronal population Jonathan W. Pillow 1, Jonathon Shlens 2, Liam Paninski 3, Alexander Sher 4, Alan M. Litke 4,E.J.Chichilnisky 2, Eero
More informationInfluence of Criticality on 1/f α Spectral Characteristics of Cortical Neuron Populations
Influence of Criticality on 1/f α Spectral Characteristics of Cortical Neuron Populations Robert Kozma rkozma@memphis.edu Computational Neurodynamics Laboratory, Department of Computer Science 373 Dunn
More informationThe Bayesian Brain. Robert Jacobs Department of Brain & Cognitive Sciences University of Rochester. May 11, 2017
The Bayesian Brain Robert Jacobs Department of Brain & Cognitive Sciences University of Rochester May 11, 2017 Bayesian Brain How do neurons represent the states of the world? How do neurons represent
More informationFiring Rate Distributions and Ef ciency of Information Transmission of Inferior Temporal Cortex Neurons to Natural Visual Stimuli
LETTER Communicated by Dan Ruderman Firing Rate Distributions and Ef ciency of Information Transmission of Inferior Temporal Cortex Neurons to Natural Visual Stimuli Alessandro Treves SISSA, Programme
More informationFinding a Basis for the Neural State
Finding a Basis for the Neural State Chris Cueva ccueva@stanford.edu I. INTRODUCTION How is information represented in the brain? For example, consider arm movement. Neurons in dorsal premotor cortex (PMd)
More informationInformation Maximization in Single Neurons
nformation Maximization in Single Neurons Martin Stemmler and Christof Koch Computation and Neural Systems Program Caltech 139-74 Pasadena CA 91 125 Email: stemmler@klab.caltech.edu.koch@klab.caltech.edu
More informationPacific Symposium on Biocomputing 6: (2001)
Analyzing sensory systems with the information distortion function Alexander G Dimitrov and John P Miller Center for Computational Biology Montana State University Bozeman, MT 59715-3505 falex,jpmg@nervana.montana.edu
More information4.2 Entropy lost and information gained
4.2. ENTROPY LOST AND INFORMATION GAINED 101 4.2 Entropy lost and information gained Returning to the conversation between Max and Allan, we assumed that Max would receive a complete answer to his question,
More informationEfficient Coding. Odelia Schwartz 2017
Efficient Coding Odelia Schwartz 2017 1 Levels of modeling Descriptive (what) Mechanistic (how) Interpretive (why) 2 Levels of modeling Fitting a receptive field model to experimental data (e.g., using
More informationNeural Encoding II: Reverse Correlation and Visual Receptive Fields
Chapter 2 Neural Encoding II: Reverse Correlation and Visual Receptive Fields 2.1 Introduction The spike-triggered average stimulus introduced in chapter 1 is a standard way of characterizing the selectivity
More informationCHARACTERIZATION OF NONLINEAR NEURON RESPONSES
CHARACTERIZATION OF NONLINEAR NEURON RESPONSES Matt Whiteway whit8022@umd.edu Dr. Daniel A. Butts dab@umd.edu Neuroscience and Cognitive Science (NACS) Applied Mathematics and Scientific Computation (AMSC)
More informationNeural Networks 1 Synchronization in Spiking Neural Networks
CS 790R Seminar Modeling & Simulation Neural Networks 1 Synchronization in Spiking Neural Networks René Doursat Department of Computer Science & Engineering University of Nevada, Reno Spring 2006 Synchronization
More informationFlexible Gating of Contextual Influences in Natural Vision. Odelia Schwartz University of Miami Oct 2015
Flexible Gating of Contextual Influences in Natural Vision Odelia Schwartz University of Miami Oct 05 Contextual influences Perceptual illusions: no man is an island.. Review paper on context: Schwartz,
More informationA General Mechanism for Tuning: Gain Control Circuits and Synapses Underlie Tuning of Cortical Neurons
massachusetts institute of technology computer science and artificial intelligence laboratory A General Mechanism for Tuning: Gain Control Circuits and Synapses Underlie Tuning of Cortical Neurons Minjoon
More information