Efficient and direct estimation of a neural subunit model for sensory coding


To appear in: Neural Information Processing Systems (NIPS), Lake Tahoe, Nevada. December 3-6, 2012.

Brett Vintch, Andrew D. Zaharia, J. Anthony Movshon, Eero P. Simoncelli
Center for Neural Science, and Howard Hughes Medical Institute
New York University, New York, NY 10003
vintch@cns.nyu.edu

Abstract

Many visual and auditory neurons have response properties that are well explained by pooling the rectified responses of a set of spatially shifted linear filters. These filters cannot be estimated using spike-triggered averaging (STA). Subspace methods such as spike-triggered covariance (STC) can recover multiple filters, but require substantial amounts of data, and recover an orthogonal basis for the subspace in which the filters reside rather than the filters themselves. Here, we assume a linear-nonlinear-linear-nonlinear (LN-LN) cascade model in which the first linear stage is a set of shifted ("convolutional") copies of a common filter, and the first nonlinear stage consists of rectifying scalar nonlinearities that are identical for all filter outputs. We refer to these initial LN elements as the subunits of the receptive field. The second linear stage then computes a weighted sum of the responses of the rectified subunits. We present a method for directly fitting this model to spike data, and apply it to both simulated and real neuronal data from primate V1. The subunit model significantly outperforms STA and STC in terms of cross-validated accuracy and efficiency.

1 Introduction

Advances in sensory neuroscience rely on the development of testable functional models for the encoding of sensory stimuli in neural responses. Such models require procedures for fitting their parameters to data, and should be interpretable in terms both of sensory function and of the biological elements from which they are made. The most common models in the visual and auditory literature are based on linear-nonlinear (LN) cascades, in which a linear stage serves to project the high-dimensional stimulus down to a one-dimensional signal, where it is then nonlinearly transformed to drive spiking. LN models are readily fit to data, and their linear operators specify the stimulus selectivity and invariance of the cell. The weights of the linear stage may be loosely interpreted as representing the efficacy of synapses, and the nonlinearity as a transformation from membrane potential to firing rate.

For many visual and auditory neurons, responses are not well described by projection onto a single linear filter, but instead reflect a combination of several filters. In the cat retina, the responses of Y cells have been described by linear pooling of shifted rectified linear filters, dubbed subunits [1, 2]. Similar behaviors are seen in guinea pig [3] and monkey retina [4]. In the auditory nerve, responses are described as computing the envelope of the temporally filtered sound waveform, which can be computed via summation of squared quadrature filter responses [5]. In primary visual cortex (V1), simple cells are well described using LN models [6, 7], but complex cell responses are more like a superposition of multiple spatially shifted simple cells [8], each with the same orientation and spatial frequency preference [9]. Although the description of complex cells is often reduced to a sum of two squared filters in quadrature [10], more recent experiments indicate that these cells (and indeed most simple cells) require multiple shifted filters to fully capture their responses [11, 12, 13]. Intermediate nonlinearities are also required to describe the response properties of some neurons in V2 (e.g., responses to angles [14] and depth edges [15]).

Each of these examples is consistent with a canonical but constrained LN-LN model, in which the first linear stage consists of convolution with one (or a few) filters, and the first nonlinear stage is point-wise and rectifying. The second linear stage then pools the responses of these subunits using a weighted sum, and the final nonlinearity converts this to a firing rate. Hierarchical stacks of this type of generalized complex cell model have also been proposed for machine vision [16, 17]. What is lacking is a method for validating this model by fitting it directly to spike data.

A widely used procedure for fitting a simple LN model to neural data is reverse correlation [18, 19]. The spike-triggered average of a set of Gaussian white noise stimuli provides an unbiased estimate of the linear kernel. In a subunit model, the initial linear stage projects the stimulus into a multi-dimensional subspace, which can be estimated using spike-triggered covariance (STC) [20, 21]. This has been used successfully for fly motion neurons [22], vertebrate retina [23], and primary visual cortex [24, 11]. But this method relies on a Gaussian stimulus ensemble, requires a substantial amount of data, and recovers only a set of orthogonal axes for the response subspace, not the underlying biological filters. More general methods based on information maximization alleviate some of the stimulus restrictions [25] but strongly limit the dimensionality of the recoverable subspace and still produce only a basis for the subspace.

Here, we develop a specific subunit model and a maximum likelihood procedure to estimate its parameters from spiking data. We fit the model to both simulated and real V1 neuronal data, demonstrating that it is substantially more accurate for a given amount of data than the current state-of-the-art V1 model, which is based on STC [11], and that it produces biologically interpretable filters.

2 Subunit model

We assume that neural responses arise from a weighted sum of the responses of a set of nonlinear subunits. Each subunit applies a linear filter to its input (which can be either the raw stimulus, or the responses arising from a previous stage in a hierarchical cascade), and transforms the filtered response using a memoryless rectifying nonlinearity. A critical simplification is that the subunit filters are related by a fixed transformation; here, we assume they are spatially translated copies of a common filter, and thus the population of subunits can be viewed as computing a convolution. For example, the subunits of a V1 complex cell could be simple cells in V1 that share the same orientation and spatial frequency preference, but differ in spatial location, as originally proposed by Hubel & Wiesel [8, 9]. We also assume that all subunits use the same rectifying nonlinearity.
The response to input defined over two discrete spatial dimensions and time, x(i, j, t), is written as:

$$\hat r(t) = \sum_{m,n} w_{m,n}\, f\!\Big(\sum_{i,j,\tau} k(i,j,\tau)\, x(i-m,\, j-n,\, t-\tau);\ \theta\Big) + b + \ldots \tag{1}$$

where k is the subunit filter, f is a point-wise function parameterized by vector θ, w_{m,n} are the spatial weights, and b is an additive baseline. The ellipsis indicates that we allow for multiple subunit channels, each with its own filter, nonlinearity, and pooling weights. We interpret r̂(t) as a generator potential (e.g., time-varying membrane voltage), which is converted to a firing rate by another rectifying nonlinearity.

The subunit model of Eq. (1) may be seen as a specific instance of a subspace model, in which the input is initially projected onto a linear subspace. Bialek and colleagues introduced spike-triggered covariance as a means of recovering such subspaces [20, 22]. Specifically, eigenvector analysis of the covariance matrix of the spike-triggered input ensemble exposes orthogonal axes for which the spike-triggered ensemble has a variance that differs significantly from that of the raw input ensemble. These axes may be separated into those along which variance is greater than (excitatory) or less than (suppressive) that of the input. Figure 1 demonstrates what happens when STC is applied to a simulated complex cell with 5 spatially shifted subunits.
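To make the notation of Eq. (1) concrete, the following is a minimal NumPy sketch of the response computation for a single subunit channel. The array names and the explicit loops are illustrative assumptions, not the authors' implementation; in practice the inner correlation would be computed with an FFT-based convolution.

```python
import numpy as np

def subunit_response(stim, kernel, f, w, b):
    """Single-channel subunit model response, Eq. (1).

    stim   : stimulus movie x(i, j, t), shape (T, Ny, Nx)
    kernel : subunit filter k, shape (Tk, ny, nx)
    f      : pointwise rectifying nonlinearity (callable)
    w      : spatial pooling weights, shape (Ny - ny + 1, Nx - nx + 1)
    b      : additive baseline
    """
    T, Ny, Nx = stim.shape
    Tk, ny, nx = kernel.shape
    My, Mx = Ny - ny + 1, Nx - nx + 1       # subunit positions (m, n)
    rhat = np.full(T, b, dtype=float)
    for t in range(Tk - 1, T):
        block = stim[t - Tk + 1 : t + 1]    # stimulus history feeding time t
        drive = np.zeros((My, Mx))
        for m in range(My):
            for n in range(Mx):
                # linear drive of the subunit at (m, n):
                # sum_{i,j,tau} k(i,j,tau) x(i+m, j+n, t-tau)
                # (Eq. 1, up to the sign convention on m, n)
                drive[m, n] = np.sum(kernel[::-1] * block[:, m:m+ny, n:n+nx])
        rhat[t] += np.sum(w * f(drive))     # rectify, then pool over positions
    return rhat
```

For example, passing `f = lambda s: np.maximum(s, 0.0) ** 2` gives halfwave-rectified, squared subunits.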

[Figure 1 graphic: (a) shifted filters under a position envelope; (b) the shifted-filter manifold and the best-fitting STC plane in stimulus space; (c) eigenvalue spectra (λ) and eigenvectors recovered from 10^4 and 10^6 data points.]

Figure 1: Spike-triggered covariance analysis of a simulated V1 complex cell. (a) The model output is formed by summing the rectified responses of multiple linear filter kernels which are shifted and scaled copies of a canonical form. (b) The shifted filters lie along a manifold in stimulus space (four shown), and are not mutually orthogonal in general. STC recovers an orthogonal basis for a low-dimensional subspace that contains this manifold by finding the directions in stimulus space along which spikes are elicited or suppressed. (c) STC analysis of this model cell returns a variable number of filters dependent upon the amount of acquired data. A modest amount of data typically reveals two strong STC eigenvalues (top), whose eigenvectors form a quadrature (90-degree phase-shifted) pair and span the best-fitting plane for the set of shifted model filters. These will generally have tuning properties (orientation, spatial frequency) similar to the true model filters. However, the manifold does not generally lie in a two-dimensional subspace [26], and a larger data set reveals additional eigenvectors (bottom) that serve to capture the deviations from the $\vec e_{1,2}$ plane. Due to the constraint of mutual orthogonality, these filters are usually not localized, and they have tuning properties that differ from the true model filters.

The response of this model cell is $\hat r(t) = \sum_i w_i \lfloor \vec k_i \cdot \vec x(t) \rfloor^2$, where ⌊·⌋ denotes halfwave rectification, the $\vec k_i$ are shifted filters, $w_i$ weights filters by position, and $\vec x(t)$ is the stimulus vector. The recovered STC axes span the same subspace as the shifted model filters, but there are fewer of them, and the enforced orthogonality of eigenvectors means that they are generally not a direct match to any of the model filters. This has also been observed in filters extracted from physiological data [11, 12]. Although one may follow the STC analysis by indirectly identifying a localized filter whose shifted copies span the recovered subspace [11, 13], the reliance on STC still imposes the stimulus limitations and data requirements mentioned above.

3 Direct subunit model estimation

A generic subspace method like STC does not exploit the specific structure of the subunit model. We therefore developed an estimation procedure explicitly tailored for this type of computation. We first introduce a piecewise-linear parameterization of the subunit nonlinearity:

$$f(s) = \sum_l \theta_l\, T_l(s), \tag{2}$$

where the θ_l scale a small set of overlapping tent functions, T_l(·), that represent localized portions of f(·) (we find that a dozen or so basis functions are typically sufficient to provide the needed flexibility). Incorporating this into the model response of Eq. (1) allows us to fold the second linear pooling stage and the subunit nonlinearity into a single sum:

$$\hat r(t) = \sum_{m,n,l} w_{m,n}\,\theta_l\, T_l\!\Big(\sum_{i,j,\tau} k(i,j,\tau)\, x(i-m,\, j-n,\, t-\tau)\Big) + b. \tag{3}$$

The model is now partitioned into two linear stages, separated by the fixed nonlinear functions T_l(·). In the first, the stimulus is convolved with k, and in the second, the nonlinear responses are summed with a set of weights that are separable in the indices l and m, n. The partition motivates the use of an iterative coordinate descent scheme: the linear weights of each portion are optimized in alternation, while the other portion is held constant.
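As a concrete illustration of Eq. (2), here is a sketch of the tent parameterization, assuming uniformly spaced knots; the knot placement and count (a dozen or so, per the text) are the only free choices.

```python
import numpy as np

def tent_basis(s, centers):
    """Overlapping tent functions: B[i, l] = T_l(s[i]).

    Each T_l equals 1 at centers[l], falls linearly to 0 at the two
    neighboring centers, and the basis sums to 1 inside the knot range,
    so B @ theta linearly interpolates f between the knots.
    """
    delta = centers[1] - centers[0]         # uniform knot spacing assumed
    B = 1.0 - np.abs(s[:, None] - centers[None, :]) / delta
    return np.clip(B, 0.0, None)

centers = np.linspace(-3.0, 3.0, 13)        # a dozen or so basis functions
theta = np.maximum(centers, 0.0)            # initializes f to halfwave rectification
s = np.random.randn(1000)                   # linear subunit responses
f_of_s = tent_basis(s, centers) @ theta     # f(s) = sum_l theta_l T_l(s), Eq. (2)
```

Because f is linear in θ, substituting this basis into Eq. (1) is what makes the response of Eq. (3) bilinear in the pooling weights and the tent coefficients.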

For each step, we minimized the mean square error between the observed firing rate of a cell and the firing rate predicted by the model. For models that include two subunit channels we optimize over both channels simultaneously (see section 3.3 for comments regarding two-channel initialization).

3.1 Estimating the convolutional subunit kernel

The first coordinate descent leg optimizes the convolutional subunit kernel, k, using gradient descent while fixing the subunit nonlinearity and the final linear pooling. Because the tent basis functions are fixed and piecewise linear, the gradient is easily determined. This property also ensures that the descent is locally convex: assuming that updating k does not cause any of the linear subunit responses to jump between the localized tent functions representing f, the optimization is linear and the objective function is quadratic. In practice, the full gradient descent path causes the linear subunit responses to move slowly across bins of the piecewise nonlinearity. However, we include a regularization term to impose smoothness on the nonlinearity (see below), and this yields a well-behaved minimization problem for k.

3.2 Estimating the subunit nonlinearities and linear subunit pooling

The second leg of coordinate descent optimizes the subunit nonlinearity (more specifically, the weights on the tent functions, θ_l) and the subunit pooling, w_{m,n}. As described above, the objective is bilinear in θ_l and w_{m,n} when k is fixed. Estimating both θ_l and w_{m,n} can be accomplished with alternating least squares, which assures convergence to a (local) minimum [27]. We also include two regularization terms in the objective function. The first ensures smoothness in the nonlinearity f, by penalizing the square of the second derivative of the function in the least-squares fit. This smooth nonlinearity helps to guarantee that the optimization of k is well behaved, even where finite data sets leave the function poorly constrained. We also include a cross-validated ridge prior for the pooling weights to bias w_{m,n} toward zero. The filter kernel k can also be regularized to ensure smoothness, but for the examples shown here we did not find the need to include such a term.
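The alternating-least-squares step can be sketched as follows, with the subunit kernel k held fixed. Here `B[t, p, l] = T_l(s_p(t))` is the precomputed tensor of tent responses over subunit positions p; the smoothness and ridge penalties described above, and the baseline b, are omitted for brevity, and all names are illustrative.

```python
import numpy as np

def als_fit(B, r, n_iters=20):
    """Fit rhat(t) = sum_{p,l} w_p theta_l B[t,p,l] to rates r by ALS.

    B : tent responses, shape (T, P, L); P subunit positions, L tent functions
    r : observed spike rate, shape (T,)
    """
    T, P, L = B.shape
    w = np.ones(P)                                # pooling weights
    theta = np.random.randn(L)                    # tent coefficients
    for _ in range(n_iters):
        # theta-step: with w fixed, the model is linear in theta
        Bw = np.einsum('tpl,p->tl', B, w)
        theta = np.linalg.lstsq(Bw, r, rcond=None)[0]
        # w-step: with theta fixed, the model is linear in w
        Bt = np.einsum('tpl,l->tp', B, theta)
        w = np.linalg.lstsq(Bt, r, rcond=None)[0]
    return w, theta
```

Each alternation solves its least-squares problem exactly, so the squared error cannot increase; this monotonicity is what guarantees convergence to a (local) minimum.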
3.3 Model initialization

Our objective function is non-convex and contains local minima, so the selection of initial parameter values may affect the solution. We found that initializing our two-channel subunit model to have a positive pooling function for one channel and a negative pooling function for the second channel allowed the optimization of the second channel to proceed much more quickly. This is probably due in part to a suppressive channel that is much weaker than the excitatory channel in general. We initialized the nonlinearity to halfwave rectification for the excitatory channel and fullwave rectification for the suppressive channel.

To initialize the convolutional filter we use a novel technique that we term convolutional STC. The subunit model describes a receptive field as the linear combination of nonlinear kernel responses that spatially tile the stimulus. Thus, the contribution of each localized patch of stimulus (of a size equal to the subunit kernel) is the same, up to a scale factor set by the weighting used in the subsequent pooling stage. As such, we compute an STC analysis on the union of all localized patches of stimuli. For each subunit location, {m, n}, we extract the local stimulus values in a window, g_{m,n}(i, j), the size of the convolutional kernel, and append them vertically in a local stimulus matrix. As an initial guess for the pooling weights, we weight each of these blocks by a Gaussian spatial profile, chosen to roughly match the size of the receptive field. We also generate a vector containing the vertical concatenation of copies of the measured spike train, $\vec r$ (one copy for each subunit location):

$$\begin{pmatrix} w_{1,1}\, X_{g_{1,1}(i,j)} \\ w_{1,2}\, X_{g_{1,2}(i,j)} \\ \vdots \end{pmatrix} \rightarrow X_{\rm loc}\,, \qquad \begin{pmatrix} \vec r \\ \vec r \\ \vdots \end{pmatrix} \rightarrow \vec r_{\rm loc}\,. \tag{4}$$

After performing STC analysis on the localized stimulus matrix, we use the first (largest variance) eigenvector to initialize the subunit kernel of the excitatory channel, and the last (lowest variance) eigenvector to initialize the kernel of the suppressive channel. In practice, we find that this initialization greatly reduces the number of iterations, and thus the run time, of the optimization procedure.
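A sketch of this initialization, restricted to purely spatial kernels for brevity (the actual kernels are spatiotemporal). The Gaussian profile width `sigma` and the use of a spike-count-weighted covariance are assumptions about details the text leaves open.

```python
import numpy as np

def conv_stc_init(stim, spikes, ny, nx, sigma):
    """Convolutional STC: pool all local stimulus windows, then do STC.

    stim: (T, Ny, Nx) movie; spikes: (T,) counts; (ny, nx): kernel size.
    """
    T, Ny, Nx = stim.shape
    X_loc, r_loc = [], []
    for m in range(Ny - ny + 1):
        for n in range(Nx - nx + 1):
            # Gaussian guess for the pooling weight at location (m, n)
            g = np.exp(-((m - (Ny - ny) / 2) ** 2 +
                         (n - (Nx - nx) / 2) ** 2) / (2 * sigma ** 2))
            X_loc.append(g * stim[:, m:m + ny, n:n + nx].reshape(T, -1))
            r_loc.append(spikes)
    X = np.vstack(X_loc)                    # local stimulus matrix, Eq. (4)
    r = np.concatenate(r_loc)               # replicated spike train, Eq. (4)
    sta = (r @ X) / r.sum()                 # spike-triggered average of patches
    Xc = X - sta
    C = (Xc.T * r) @ Xc / r.sum()           # spike-triggered covariance
    evals, evecs = np.linalg.eigh(C)        # eigenvalues in ascending order
    k_exc = evecs[:, -1].reshape(ny, nx)    # largest variance -> excitatory
    k_sup = evecs[:, 0].reshape(ny, nx)     # smallest variance -> suppressive
    return k_exc, k_sup
```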

[Figure 2 graphic: model performance (r) vs. minutes of simulated data, train and test curves for the subunit and Rust-STC models; (a) simulated simple cell, (b) simulated complex cell.]

Figure 2: Model fitting performance for simulated V1 neurons. Shown are correlation coefficients for the subunit model (black circles) and the Rust-STC model (blue squares) [11], computed on both the training data (open), and on a holdout test set (closed). Spike counts for each presented stimulus frame are drawn from a Poisson distribution. Shaded regions indicate ±1 s.d. for 5 simulation runs. (a) Simple cell, with spike rate determined by the halfwave-rectified and squared response of a single oriented linear filter. (b) Complex cell, with rate determined by a sum of squared Gabor filters arranged in spatial quadrature. Insets show estimated filters for the subunit (top) and Rust-STC (bottom) models with ten seconds (400 frames; left) and 20 minutes (48,000 frames; right) of data.

4 Experiments

We fit the subunit model to physiological data sets in 3 different primate cortical areas: V1, V2, and MT. The model is able to explain a significant amount of variance for each of these areas, but for illustrative purposes we show here only data for V1. Initially, we use simulated V1 cells to compare the performance of the subunit model to that of the Rust-STC model [11], which is based upon STC analysis.

4.1 Simulated V1 data

We simulated the responses of canonical V1 simple cells and complex cells to white noise stimuli. Stimuli consisted of a 16x16 spatial array of pixels whose luminance values were set to independent ternary white noise sequences, updated every 25 ms (i.e., at 40 Hz). The simulated cells use spatiotemporally oriented Gabor filters: the simple cell has one even-phase filter and a half-squaring output nonlinearity, while the complex cell has two filters (one even and one odd) whose squared responses are combined to give a firing rate. Spike counts are drawn from a Poisson distribution, and overall rates are scaled so as to yield an average of 40 ips (i.e., one spike per time bin). For consistency with the analysis of the physiological data, we fit the simulated data using a subunit model with two subunit channels (even though the simulated cells only possess an excitatory channel).

When fitting the Rust-STC model, we followed the procedure described in [11]. Briefly, after the STA and STC filters are estimated, they are weighted according to their predictive power and combined in excitatory and suppressive pools, E and S (we use cross-validation to determine the number of filters to use for each pool). These two pooled responses are then combined using a joint output nonlinearity:

$$\hat r_{\rm Rust}(t) = \alpha + \frac{\beta E^{\nu} - \gamma S^{\nu}}{1 + \delta E^{\nu} + \epsilon S^{\nu}}.$$

Parameters {α, β, γ, δ, ε, ν} are optimized to minimize the mean squared error between observed spike counts and the model rate.

Model performances, measured as the correlation between the model rate and spike count, are shown in Figure 2. In low data regimes, both models perform nearly perfectly on the training data, but poorly on separate test data not used for fitting, a clear indication of over-fitting. But as the data set increases in size, the subunit model rapidly improves, reaching near-perfect performance for modest spike counts. The Rust-STC model also improves, but much more slowly; it requires more than an order of magnitude more data to achieve the same performance as the subunit model. This inefficiency is more pronounced for the complex cell, because the simple cell is fully explained by the STA filter, which can be estimated much more reliably than the STC filters for small amounts of data. We conclude that directly fitting the subunit model is much more efficient in the use of data than using STC to estimate a subspace model.
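A sketch of the complex-cell simulation described above, using purely spatial Gabor filters for brevity (the paper's filters are spatiotemporally oriented); the Gabor envelope width and spatial period are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 48_000, 16                        # 20 minutes at 40 Hz; 16x16 pixels
stim = rng.integers(-1, 2, size=(T, N, N)).astype(float)   # ternary noise

# Quadrature pair of Gabor filters (even and odd phase)
y, x = np.mgrid[-N // 2 : N // 2, -N // 2 : N // 2]
envelope = np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
g_even = envelope * np.cos(2 * np.pi * x / 8.0)
g_odd = envelope * np.sin(2 * np.pi * x / 8.0)

# Complex cell: sum of squared quadrature filter responses sets the rate
drive = np.tensordot(stim, g_even) ** 2 + np.tensordot(stim, g_odd) ** 2
rate = drive / drive.mean()              # ~1 spike per 25 ms bin (40 ips)
spikes = rng.poisson(rate)               # Poisson spike counts per frame

# Simple-cell variant: halfwave-rectified and squared single filter
simple_rate = np.maximum(np.tensordot(stim, g_even), 0.0) ** 2
```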

[Figure 3 graphic: (a) convolutional subunit filters, nonlinearity, and position map for the excitatory and suppressive channels; (b) trial raster and firing rate (ips) over time; (c) measured, subunit-model, and Rust-STC responses vs. orientation.]

Figure 3: Two-channel subunit model fit to physiological data from a macaque V1 cell. (a) Fitted parameters for the excitatory (top row) and suppressive (bottom row) channels, including the space-time subunit filters (8 grayscale images, corresponding to different time frames), the nonlinearity, and the spatial weighting function w_{m,n} that is used to combine the subunit responses. (b) A raster showing spiking responses to 20 repeated presentations of an identical stimulus, with the average spike count (black) and model prediction (blue) plotted above. (c) Simulated (subunit model: blue; Rust-STC model: purple) and measured (black) responses to drifting sinusoidal gratings.

4.2 Physiological data from macaque V1

We presented spatio-temporal pixel noise to 38 cells recorded from V1 in anesthetized macaques (see [11] for details of experimental design). The stimulus was a 16x16 grid with luminance values set by independent ternary white noise sequences refreshed at 40 Hz. For 21 neurons we also presented 20 repeats of a sequence of stimulus frames as a validation set. The model filters were assumed to respond over a 200 ms (8 frame) causal time window in which the stimulus most strongly affected the firing of the neurons, and thus, model responses were derived from a stimulus vector with 2048 dimensions (16x16x8).

Figure 3 shows the fit of a 2-channel subunit model to data from a typical V1 cell. Figure 3a illustrates the subunit kernels and their associated nonlinearities and spatial pooling maps, for both the excitatory channel (top row) and the suppressive channel (bottom row). The two channels show clear but opposing direction selectivity, starting at a latency of 50 ms. The fact that this cell is complex is reflected in two aspects of the model parameters. First, the model shows a symmetric, full-wave rectifying nonlinearity for the excitatory channel. Second, the final linear pooling for this channel is diffuse over space, eliciting a response that is invariant to the exact spatial position and phase of the stimulus.

For this particular example the model fits well. For the cross-validated set of repeated stimuli (which have the same structure as the fitting data), the model correlates with each trial's firing rate with an average r-value of 0.54. A raster of spiking responses to twenty repetitions of a 5 s stimulus is depicted in Fig. 3b, along with the average firing rate and the model prediction, which are well matched. The model can also capture the direction selectivity of this cell's response to moving sinusoidal gratings (whose spatial and temporal frequency are chosen to best drive the cell) (Fig. 3c).

The subunit model acceptably fits most of the cells we recorded in V1. Moreover, fit quality is not correlated with modulation index (r = .8; n.s.), suggesting that the model captures the behavior of both simple and complex cells equally well. The fitted subunit model also significantly outperforms the Rust-STC model in terms of predicting responses to novel data.
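The two accuracy measures used in what follows can be sketched as below: the per-trial correlation between the model rate and spike counts, and the leave-one-out "oracle" bound discussed with Figure 4. `resp` is a repeats-by-bins spike-count matrix; the names are illustrative.

```python
import numpy as np

def per_trial_r(model_rate, resp):
    """Average correlation between the model rate and each trial's counts."""
    return np.mean([np.corrcoef(model_rate, trial)[0, 1] for trial in resp])

def oracle_r(resp):
    """Oracle bound: predict each trial from the mean of the other trials."""
    n_rep = resp.shape[0]
    rs = []
    for i in range(n_rep):
        pred = resp[np.arange(n_rep) != i].mean(axis=0)
        rs.append(np.corrcoef(pred, resp[i])[0, 1])
    return np.mean(rs)
```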
Figure 4a shows the performance of the Rust-STC and subunit models for 21 V1 neurons, for both training data and test data on single trials.

[Figure 4 graphic: (a) subunit vs. Rust-STC model accuracy (r), training and test; (b) subunit vs. oracle model accuracy (r); (c) subunit model test accuracy vs. total number of recorded spikes.]

Figure 4: Model performance comparisons on physiological data. (a) Subunit model performance vs. the Rust-STC model for V1 data. Training accuracy is computed for a single variable-length sequence extracted from the fitting data. Test accuracy is computed on the average response to 20 repeats of a 25 s stimulus. (b) Subunit model performance vs. an oracle model for V1 data (see text). Each point represents the average accuracy in predicting responses to each of 20 repeated stimuli. The oracle model uses the average spike count over the other 19 repeats as a prediction. Inset: ratio of subunit-to-oracle performance. Error bars indicate s.d. (c) Subunit model performance on test data, as a function of the total number of recorded spikes.

For the training data, the Rust-STC model performs significantly better than the subunit model (Figure 4a; ⟨r_Rust⟩ = 0.8, ⟨r_subunit⟩ = 0.33; p < 0.05). However, this is primarily due to over-fitting: visual inspection of the STC kernels for most cells reveals very little structure. For test data (that was not included in the data used to fit the models), the subunit model exhibits significantly better performance than the Rust-STC model (⟨r_Rust⟩ = 0.16, ⟨r_subunit⟩ = 0.27; p < 0.05). This is primarily due to over-fitting in the STC analysis. For a stimulus composed of a 16x16 pixel grid with 8 frames, the spike-triggered covariance matrix contains over 2 million parameters. For the same stimulus, a subunit model with two channels and an 8x8x8 subunit kernel has only about 1200 parameters.

The subunit model performs well when compared to the Rust-STC model, but we were interested in obtaining a more absolute measure of performance. Specifically, no purely stimulus-driven model can be expected to explain the response variability seen across repeated presentations of the same stimulus. We can estimate an upper bound on stimulus-driven model performance by implementing an empirical "oracle" model that uses the average response over all but one of a set of repeated stimulus trials to predict the response on the remaining trial. Over the 21 neurons with repeated stimulus data, we found that the subunit model achieved, on average, 76% of the performance of the oracle model (Figure 4b). Moreover, the cells that were least well fit by the subunit model were also the cells that responded only weakly to the stimulus (Figure 4c). We conclude that, for most cells, the fitted subunit model explains a significant fraction of the response that can be explained by any stimulus-driven model.

5 Discussion

Subunits have been proposed as a qualitative description of many types of receptive fields in sensory systems [2, 28, 8, 11, 12], and have enjoyed a recent renewal of interest by the modeling community [13, 29]. Here we have described a new parameterized canonical subunit model that can be applied to an arbitrary set of inputs (either a sensory stimulus, or a population of afferents from a previous stage of processing), and we have developed a method for directly estimating the parameters of this model from measured spiking data. Compared with STA or STC, the model fits are more accurate for a given amount of data, less sensitive to the choice of stimulus ensemble, and more interpretable in terms of biological mechanism. For V1, we have applied this model directly to the visual stimuli, adopting the simplifying assumption that subcortical pathways faithfully relay the image data to V1.
Higher visual areas build their responses on the afferent inputs arriving from lower visual areas, and we have applied this subunit model to such neurons by first simulating the responses of a population of the afferent V1 neurons, and then optimizing a subunit model that best maps these afferent responses to the spiking responses observed in the data. Specifically, for neurons in area V2, we model the afferent V1 population as a collection of simple cells that tile visual space. The V1 filters are chosen to uniformly cover the space of orientations, scales, and positions [30]. We also include four different phases. For neurons in area MT (V5), we use an afferent V1 population that also includes direction-selective subunits, because the projections from V1 to MT are known to be sensitive to the direction of visual motion [31]. Specifically, the V1 filters are a rotation-invariant set of 3-dimensional, space-space-time steerable filters [32]. We fit these models to neural responses to textured stimuli that varied in contrast and local orientation content (for MT, the local elements also drift over time). Our preliminary results show that the subunit model outperforms standard models for these higher-order areas as well.

We are currently working to refine and generalize the subunit model in a number of ways. The mean squared error objective function, while computationally appealing, does not accurately reflect the noise properties of real neurons, whose variance changes with their mean rate. A likelihood objective function, based on a Poisson or similar spiking model, can improve the accuracy of the fitted model, but it does so at a cost to the simplicity of model estimation (e.g., alternating least squares can no longer be used to solve the bilinear problem). Real neurons also possess other forms of nonlinearity, such as the local gain control that has been observed in neurons throughout the visual and auditory systems [33]. We are exploring means by which this functionality can be included directly in the model framework (e.g., [11]), while retaining the tractability of the parameter estimation.

Acknowledgments

This work was supported by the Howard Hughes Medical Institute, and by NIH grant EY444.

References

[1] H. B. Barlow and W. R. Levick. The mechanism of directionally selective units in rabbit's retina. The Journal of Physiology, 178(3):477–504, June 1965.
[2] S. Hochstein and R. M. Shapley. Linear and nonlinear spatial subunits in Y cat retinal ganglion cells. The Journal of Physiology, 262:265–284, 1976.
[3] J. B. Demb, K. Zaghloul, L. Haarsma, and P. Sterling. Bipolar cells contribute to nonlinear spatial summation in the brisk-transient (Y) ganglion cell in mammalian retina. The Journal of Neuroscience, 21(19):7447–7454, 2001.
[4] J. D. Crook, B. B. Peterson, O. S. Packer, F. R. Robinson, J. B. Troy, and D. M. Dacey. Y-cell receptive field and collicular projection of parasol ganglion cells in macaque monkey retina. The Journal of Neuroscience, 28(44):11277–11291, 2008.
[5] P. X. Joris, C. E. Schreiner, and A. Rees. Neural processing of amplitude-modulated sounds. Physiological Reviews, 84:541–577, 2004.
[6] J. P. Jones and L. A. Palmer. The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58(6):1187–1211, 1987.
[7] G. C. DeAngelis, I. Ohzawa, and R. D. Freeman. Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex. I. General characteristics and postnatal development. Journal of Neurophysiology, 69(4):1091–1117, 1993.
[8] D. H. Hubel and T. N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106–154, 1962.
[9] J. A. Movshon, I. D. Thompson, and D. J. Tolhurst. Receptive field organization of complex cells in the cat's striate cortex. The Journal of Physiology, 283(1):79–99, 1978.
[10] E. H. Adelson and J. R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284–299, 1985.
[11] N. C. Rust, O. Schwartz, J. A. Movshon, and E. P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945–956, June 2005.
[12] X. Chen, F. Han, M. M. Poo, and Y. Dan. Excitatory and suppressive receptive field subunits in awake monkey primary visual cortex (V1). Proceedings of the National Academy of Sciences, 104(48):19120–19125, November 2007.
[13] T. Lochmann, T. Blanche, and D. A. Butts. Construction of direction selectivity in V1: from simple to complex cells. Computational and Systems Neuroscience (CoSyNe), 2011.

[14] M. Ito and H. Komatsu. Representation of angles embedded within contour stimuli in area V2 of macaque monkeys. The Journal of Neuroscience, 24(13):3313–3324, 2004.
[15] C. E. Bredfeldt, J. C. A. Read, and B. G. Cumming. A quantitative explanation of responses to disparity-defined edges in macaque V2. Journal of Neurophysiology, 101(2):701–713, 2009.
[16] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980.
[17] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2:1019–1025, 1999.
[18] E. De Boer. Reverse correlation I. A heuristic introduction to the technique of triggered correlation with application to the analysis of compound systems. Proc. Kon. Nederl. Akad. Wet., 1968.
[19] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2):199–213, 2001.
[20] R. R. de Ruyter van Steveninck and W. Bialek. Real-time performance of a movement-sensitive neuron in the blowfly visual system: coding and information transfer in short spike sequences. Proceedings of the Royal Society B: Biological Sciences, 234(1277):379–414, September 1988.
[21] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Spike-triggered neural characterization. Journal of Vision, 6(4):13, February 2006.
[22] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26(3):695–702, June 2000.
[23] O. Schwartz, E. J. Chichilnisky, and E. P. Simoncelli. Characterizing neural gain control using spike-triggered covariance. Advances in Neural Information Processing Systems, 14:269–276, 2002.
[24] J. Touryan, B. Lau, and Y. Dan. Isolation of relevant visual features from random stimuli for cortical complex cells. The Journal of Neuroscience, 22(24):10811–10818, 2002.
[25] T. Sharpee, N. C. Rust, and W. Bialek. Analyzing neural responses to natural signals: maximally informative dimensions. Neural Computation, 16(2):223–250, 2004.
[26] C. Ekanadham, D. Tranchina, and E. P. Simoncelli. Recovery of sparse translation-invariant signals with continuous basis pursuit. IEEE Transactions on Signal Processing, 59(10):4735–4744, October 2011.
[27] M. Ahrens, L. Paninski, and M. Sahani. Inferring input nonlinearities in neural encoding models. Network: Computation in Neural Systems, 19(1):35–67, 2008.
[28] J. D. Victor and R. M. Shapley. The nonlinear pathway of Y ganglion cells in the cat retina. The Journal of General Physiology, 74(6):671–689, December 1979.
[29] M. Eickenberg, R. J. Rowekamp, M. Kouh, and T. O. Sharpee. Characterizing responses of translation-invariant neurons to natural stimuli: maximally informative invariant dimensions. Neural Computation, 24(9):2384–2421, September 2012.
[30] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In Proceedings of the International Conference on Image Processing, vol. 3, pages 444–447, 1995.
[31] J. A. Movshon and W. T. Newsome. Visual response properties of striate cortical neurons projecting to area MT in macaque monkeys. The Journal of Neuroscience, 16(23):7733–7741, 1996.
[32] E. P. Simoncelli and D. J. Heeger. A model of neuronal responses in visual area MT. Vision Research, 38(5):743–761, March 1998.
[33] M. Carandini and D. J. Heeger. Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1):51–62, November 2011.


More information

High-dimensional geometry of cortical population activity. Marius Pachitariu University College London

High-dimensional geometry of cortical population activity. Marius Pachitariu University College London High-dimensional geometry of cortical population activity Marius Pachitariu University College London Part I: introduction to the brave new world of large-scale neuroscience Part II: large-scale data preprocessing

More information

Simple Cell Receptive Fields in V1.

Simple Cell Receptive Fields in V1. Simple Cell Receptive Fields in V1. The receptive field properties of simple cells in V1 were studied by Hubel and Wiesel [65][66] who showed that many cells were tuned to the orientation of edges and

More information

THE retina in general consists of three layers: photoreceptors

THE retina in general consists of three layers: photoreceptors CS229 MACHINE LEARNING, STANFORD UNIVERSITY, DECEMBER 2016 1 Models of Neuron Coding in Retinal Ganglion Cells and Clustering by Receptive Field Kevin Fegelis, SUID: 005996192, Claire Hebert, SUID: 006122438,

More information

The homogeneous Poisson process

The homogeneous Poisson process The homogeneous Poisson process during very short time interval Δt there is a fixed probability of an event (spike) occurring independent of what happened previously if r is the rate of the Poisson process,

More information

Finding a Basis for the Neural State

Finding a Basis for the Neural State Finding a Basis for the Neural State Chris Cueva ccueva@stanford.edu I. INTRODUCTION How is information represented in the brain? For example, consider arm movement. Neurons in dorsal premotor cortex (PMd)

More information

Modelling and Analysis of Retinal Ganglion Cells Through System Identification

Modelling and Analysis of Retinal Ganglion Cells Through System Identification Modelling and Analysis of Retinal Ganglion Cells Through System Identification Dermot Kerr 1, Martin McGinnity 2 and Sonya Coleman 1 1 School of Computing and Intelligent Systems, University of Ulster,

More information

Adaptation to a 'spatial-frequency doubled' stimulus

Adaptation to a 'spatial-frequency doubled' stimulus Perception, 1980, volume 9, pages 523-528 Adaptation to a 'spatial-frequency doubled' stimulus Peter Thompson^!, Brian J Murphy Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania

More information

What do V1 neurons like about natural signals

What do V1 neurons like about natural signals What do V1 neurons like about natural signals Yuguo Yu Richard Romero Tai Sing Lee Center for the Neural Computer Science Computer Science Basis of Cognition Department Department Carnegie Mellon University

More information

Synaptic Input. Linear Model of Synaptic Transmission. Professor David Heeger. September 5, 2000

Synaptic Input. Linear Model of Synaptic Transmission. Professor David Heeger. September 5, 2000 Synaptic Input Professor David Heeger September 5, 2000 The purpose of this handout is to go a bit beyond the discussion in Ch. 6 of The Book of Genesis on synaptic input, and give some examples of how

More information

Bayesian probability theory and generative models

Bayesian probability theory and generative models Bayesian probability theory and generative models Bruno A. Olshausen November 8, 2006 Abstract Bayesian probability theory provides a mathematical framework for peforming inference, or reasoning, using

More information

Mathematical Tools for Neuroscience (NEU 314) Princeton University, Spring 2016 Jonathan Pillow. Homework 8: Logistic Regression & Information Theory

Mathematical Tools for Neuroscience (NEU 314) Princeton University, Spring 2016 Jonathan Pillow. Homework 8: Logistic Regression & Information Theory Mathematical Tools for Neuroscience (NEU 34) Princeton University, Spring 206 Jonathan Pillow Homework 8: Logistic Regression & Information Theory Due: Tuesday, April 26, 9:59am Optimization Toolbox One

More information

Inferring input nonlinearities in neural encoding models

Inferring input nonlinearities in neural encoding models Inferring input nonlinearities in neural encoding models Misha B. Ahrens 1, Liam Paninski 2 and Maneesh Sahani 1 1 Gatsby Computational Neuroscience Unit, University College London, London, UK 2 Dept.

More information

Natural Image Statistics

Natural Image Statistics Natural Image Statistics A probabilistic approach to modelling early visual processing in the cortex Dept of Computer Science Early visual processing LGN V1 retina From the eye to the primary visual cortex

More information

Normative Theory of Visual Receptive Fields

Normative Theory of Visual Receptive Fields Normative Theory of Visual Receptive Fields Tony Lindeberg Computational Brain Science Lab, Department of Computational Science and Technology, KTH Royal Institute of Technology, SE-00 44 Stockholm, Sweden.

More information

Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video

Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video LETTER Communicated by Bruno Olshausen Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video Jarmo Hurri jarmo.hurri@hut.fi Aapo Hyvärinen aapo.hyvarinen@hut.fi Neural Networks

More information

Nonlinear computations shaping temporal processing of pre-cortical vision

Nonlinear computations shaping temporal processing of pre-cortical vision Articles in PresS. J Neurophysiol (June 22, 216). doi:1.1152/jn.878.215 1 2 3 4 5 6 7 8 9 1 Nonlinear computations shaping temporal processing of pre-cortical vision Authors: Daniel A. Butts 1, Yuwei Cui

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear

More information

Bayesian active learning with localized priors for fast receptive field characterization

Bayesian active learning with localized priors for fast receptive field characterization Published in: Advances in Neural Information Processing Systems 25 (202) Bayesian active learning with localized priors for fast receptive field characterization Mijung Park Electrical and Computer Engineering

More information

THE functional role of simple and complex cells has

THE functional role of simple and complex cells has 37 A Novel Temporal Generative Model of Natural Video as an Internal Model in Early Vision Jarmo Hurri and Aapo Hyvärinen Neural Networks Research Centre Helsinki University of Technology P.O.Box 9800,

More information

A biologically plausible network for the computation of orientation dominance

A biologically plausible network for the computation of orientation dominance A biologically plausible network for the computation of orientation dominance Kritika Muralidharan Statistical Visual Computing Laboratory University of California San Diego La Jolla, CA 9239 krmurali@ucsd.edu

More information

Notes on anisotropic multichannel representations for early vision

Notes on anisotropic multichannel representations for early vision Notes on anisotropic multichannel representations for early vision Silvio P. Sabatini Department of Biophysical and Electronic Engineering, University of Genoa 1 Introduction Although the basic ideas underlying

More information

Robust regression and non-linear kernel methods for characterization of neuronal response functions from limited data

Robust regression and non-linear kernel methods for characterization of neuronal response functions from limited data Robust regression and non-linear kernel methods for characterization of neuronal response functions from limited data Maneesh Sahani Gatsby Computational Neuroscience Unit University College, London Jennifer

More information

Several ways to solve the MSO problem

Several ways to solve the MSO problem Several ways to solve the MSO problem J. J. Steil - Bielefeld University - Neuroinformatics Group P.O.-Box 0 0 3, D-3350 Bielefeld - Germany Abstract. The so called MSO-problem, a simple superposition

More information

Effects of Betaxolol on Hodgkin-Huxley Model of Tiger Salamander Retinal Ganglion Cell

Effects of Betaxolol on Hodgkin-Huxley Model of Tiger Salamander Retinal Ganglion Cell Effects of Betaxolol on Hodgkin-Huxley Model of Tiger Salamander Retinal Ganglion Cell 1. Abstract Matthew Dunlevie Clement Lee Indrani Mikkilineni mdunlevi@ucsd.edu cll008@ucsd.edu imikkili@ucsd.edu Isolated

More information

A General Mechanism for Tuning: Gain Control Circuits and Synapses Underlie Tuning of Cortical Neurons

A General Mechanism for Tuning: Gain Control Circuits and Synapses Underlie Tuning of Cortical Neurons massachusetts institute of technology computer science and artificial intelligence laboratory A General Mechanism for Tuning: Gain Control Circuits and Synapses Underlie Tuning of Cortical Neurons Minjoon

More information

STA 414/2104: Lecture 8

STA 414/2104: Lecture 8 STA 414/2104: Lecture 8 6-7 March 2017: Continuous Latent Variable Models, Neural networks With thanks to Russ Salakhutdinov, Jimmy Ba and others Outline Continuous latent variable models Background PCA

More information

EXTENSIONS OF ICA AS MODELS OF NATURAL IMAGES AND VISUAL PROCESSING. Aapo Hyvärinen, Patrik O. Hoyer and Jarmo Hurri

EXTENSIONS OF ICA AS MODELS OF NATURAL IMAGES AND VISUAL PROCESSING. Aapo Hyvärinen, Patrik O. Hoyer and Jarmo Hurri EXTENSIONS OF ICA AS MODELS OF NATURAL IMAGES AND VISUAL PROCESSING Aapo Hyvärinen, Patrik O. Hoyer and Jarmo Hurri Neural Networks Research Centre Helsinki University of Technology P.O. Box 5400, FIN-02015

More information

3.3 Population Decoding

3.3 Population Decoding 3.3 Population Decoding 97 We have thus far considered discriminating between two quite distinct stimulus values, plus and minus. Often we are interested in discriminating between two stimulus values s

More information

Linear and Non-Linear Responses to Dynamic Broad-Band Spectra in Primary Auditory Cortex

Linear and Non-Linear Responses to Dynamic Broad-Band Spectra in Primary Auditory Cortex Linear and Non-Linear Responses to Dynamic Broad-Band Spectra in Primary Auditory Cortex D. J. Klein S. A. Shamma J. Z. Simon D. A. Depireux,2,2 2 Department of Electrical Engineering Supported in part

More information