CHARACTERIZATION OF NONLINEAR NEURON RESPONSES

CHARACTERIZATION OF NONLINEAR NEURON RESPONSES
Matt Whiteway (whit8022@umd.edu)
Dr. Daniel A. Butts (dab@umd.edu)
Neuroscience and Cognitive Science (NACS)
Applied Mathematics and Scientific Computation (AMSC)
Biological Sciences Graduate Program (BISI)
AMSC 663 Project Proposal Presentation

Introduction | Approach | Evaluation | Schedule | Conclusion

Background
The human brain contains roughly 100 billion neurons. These neurons process information nonlinearly, which makes them difficult to study. [1] Given the inputs and the outputs, how can we model the neuron's computation?
[1] http://msjensen.cehd.umn.edu/webanatomy_archive/images/histology/

The Models
Many models of increasing complexity have been developed; the models I will be implementing are statistical.
Linear models (the Linear-Nonlinear-Poisson (LNP) model):
1. LNP using the Spike-Triggered Average (STA)
2. LNP using Maximum Likelihood Estimates: the Generalized Linear Model (GLM)
3. Spike-Triggered Covariance (STC)
Nonlinear models:
4. Generalized Quadratic Model (GQM)
5. Nonlinear Input Model (NIM)

The Models - LNP [2]
Knowns: the stimulus vector s; the spike times.
Unknowns: the linear filter k, which defines the neuron's stimulus selectivity; the nonlinear function F; the instantaneous rate parameter r(t) = F(k . s) of an inhomogeneous Poisson process.
[2] http://www.sciencedirect.com/science/article/pii/s0079612306650310

The Models - LNP-STA [1]
The STA is the average stimulus preceding a spike:
STA = (1/N) * sum_{n=1}^{N} s_n,
where N is the number of spikes and {s_n} is the set of stimuli preceding a spike. [3]
1. Chichilnisky, E.J. (2001) A simple white noise analysis of neuronal light responses.
[3] http://en.wikipedia.org/wiki/spike-triggered_average

The Models - LNP-STA
It can be shown that the STA is proportional to the linear filter k (for spherically symmetric stimulus distributions such as Gaussian white noise). Though it is possible to fit a parametric form of the nonlinear function F, the more common approach is to bin the values of k . s and plot the average spike count for each bin based on the stimulus and spike data (the histogram method).
The Algorithm:
- Calculate the STA to find k
- Use the stimulus and the filter with the histogram method to estimate F discretely
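To make the two algorithm steps concrete, here is a minimal sketch in Python/NumPy (the project code itself will be MATLAB). The simulated neuron, the decaying filter, the exponential nonlinearity, and all sizes are hypothetical choices for the demonstration, not part of the proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an LNP neuron: white-noise stimulus, known filter k_true,
# exponential nonlinearity, Poisson spike counts per time bin.
T, D = 20000, 12                      # time bins, filter length
stim = rng.standard_normal((T, D))    # each row: stimulus history vector
k_true = np.exp(-np.arange(D) / 3.0)  # hypothetical decaying filter
k_true /= np.linalg.norm(k_true)
rate = np.exp(stim @ k_true - 1.0)    # instantaneous rate per bin
spikes = rng.poisson(rate)            # observed spike counts

# Step 1 -- the STA: spike-count-weighted average stimulus.
n_sp = spikes.sum()
sta = (spikes @ stim) / n_sp

# Step 2 -- histogram method for F: bin the filtered stimulus values
# and average the observed spike count within each bin.
proj = stim @ (sta / np.linalg.norm(sta))
edges = np.quantile(proj, np.linspace(0, 1, 11))
which = np.clip(np.searchsorted(edges, proj) - 1, 0, 9)
F_hat = np.array([spikes[which == b].mean() for b in range(10)])

# For white noise the STA should align with the true filter.
cos_sim = sta @ k_true / np.linalg.norm(sta)
```

With enough spikes, `cos_sim` approaches 1 and the binned nonlinearity `F_hat` rises across bins, mirroring the exponential used to generate the data.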

The Models - LNP-GLM [2]
Now we will approximate the linear filter using the Maximum Likelihood Estimate (MLE). A likelihood function is the probability of an observed outcome given a probability density function with parameter theta. The LNP model uses the Poisson distribution,
p(y | theta) = prod_t (r_t * dt)^(y_t) * exp(-r_t * dt) / y_t!,
where y is the vector of spike counts binned at a resolution dt. We want to maximize the corresponding log-likelihood function.
2. Paninski, L. (2004) Maximum likelihood estimation of cascade point-process neural encoding models.

The Models - LNP-GLM
The Maximum Likelihood Estimate maximizes the probability of spiking as a function of the model parameters when the actual data show a spike, and minimizes it otherwise. We can employ likelihood optimization methods to obtain maximum likelihood estimates for the linear filter. If we make some assumptions about the form of the nonlinearity F, the likelihood function has no non-global local maxima, so gradient ascent suffices:
- F(u) is convex in u
- log F(u) is concave in u
Regularizing the estimated linear filter can reduce the effects of noise by applying a penalty for complexity.

The Models - LNP-GLM
The Algorithm:
- Assume a form of F and derive the corresponding log-likelihood function
- Depending on the resulting function, find the MLE by analytical or numerical means
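A sketch of this MLE procedure in Python/NumPy (the project itself will use MATLAB), assuming F(u) = exp(u) so that the log-likelihood is concave and plain gradient ascent finds the global maximum. The simulated data, step size, and iteration count are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated LNP data with F(u) = exp(u) and dt = 1.
T, D = 5000, 8
stim = rng.standard_normal((T, D))           # white-noise stimulus rows
k_true = rng.standard_normal(D) / np.sqrt(D)
spikes = rng.poisson(np.exp(stim @ k_true))  # binned spike counts

def log_lik(k):
    # Poisson log-likelihood, dropping the log(y!) constant.
    u = stim @ k
    return spikes @ u - np.exp(u).sum()

def grad(k):
    # Gradient of the log-likelihood: sum_t (y_t - exp(k . s_t)) s_t.
    return stim.T @ (spikes - np.exp(stim @ k))

# Gradient ascent with a small, fixed step size (hypothetical settings).
k_hat = np.zeros(D)
for _ in range(3000):
    k_hat += 5e-5 * grad(k_hat)

rel_err = np.linalg.norm(k_hat - k_true) / np.linalg.norm(k_true)
```

Because log F(u) = u is concave and F is convex, there are no non-global local maxima to trap the ascent, which is exactly why the Paninski (2004) conditions matter in practice.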

The Models - STC [3]
Spike-Triggered Covariance (STC) analysis is a way to identify a feature subspace that affects a neuron's response: find excitatory and suppressive directions. [4]
3. Schwartz, O. et al. (2006) Spike-triggered neural characterization.
[4] http://books.nips.cc/papers/files/nips14/ns14.pdf

The Models - STC
The smallest eigenvalues of the spike-triggered covariance matrix correspond to inhibitory (suppressive) eigenvectors in the stimulus space; we will just use the smallest one for a suppressive filter.

The Models - STC
The model then becomes r(t) = F(k_e . s, k_s . s).
Knowns: the stimulus vector s; the spike times.
Unknowns: the excitatory filter k_e; the suppressive filter k_s; the nonlinear function F; the instantaneous rate parameter r(t) of an inhomogeneous Poisson process.

The Models - STC
Again, we could try to fit the nonlinearity parametrically, but the histogram method is fine if we are just using two filters, with bin values coming from k_e . s and k_s . s.
The Algorithm:
- Calculate the STA to find k_e
- Calculate the eigenvectors/eigenvalues of the spike-triggered covariance matrix
- Use the eigenvector associated with the smallest eigenvalue to find k_s
- Use the stimulus and the filters with the histogram method to estimate F discretely
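The algorithm steps above can be sketched as follows (Python/NumPy rather than the project's MATLAB; the ground-truth filters and coefficients are invented for the demonstration). The suppressive direction shows up as reduced variance in the spike-triggered ensemble:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated neuron with one excitatory and one suppressive direction
# (hypothetical ground truth), driven by Gaussian white noise.
T, D = 50000, 10
stim = rng.standard_normal((T, D))
k_exc = np.eye(D)[0]                 # excitatory filter (first axis)
k_sup = np.eye(D)[1]                 # suppressive filter (second axis)
rate = np.exp(0.8 * stim @ k_exc - 0.5 * (stim @ k_sup) ** 2)
spikes = rng.poisson(rate)

# STA, then the spike-triggered covariance about the STA.
n_sp = spikes.sum()
sta = (spikes @ stim) / n_sp
centered = stim - sta
stc = (centered.T * spikes) @ centered / n_sp

# Directions with eigenvalues well below the raw stimulus variance (1)
# are suppressive; take the eigenvector with the smallest eigenvalue.
evals, evecs = np.linalg.eigh(stc)   # eigenvalues in ascending order
v_sup = evecs[:, 0]

overlap = abs(v_sup @ k_sup)         # alignment with the true k_sup
```

Here `overlap` approaches 1 and the smallest eigenvalue sits well below 1, since spiking is suppressed by stimulus energy along `k_sup`.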

The Models - Nonlinear Models
We can extend these models even further by introducing nonlinearities in the input. The general form is r(t) = F(sum_i f_i(s)), where the f_i can be any nonlinear functions. Perhaps we should start with something easier first: assuming that the input is a sum of one-dimensional nonlinearities f_i(k_i . s) makes parameter fitting much nicer.

The Models - GQM [4]
A particular choice of this sum of nonlinearities gives us the Generalized Quadratic Model:
r(t) = F(s' C s + b' s + a)
Knowns: the stimulus vector s; the spike times.
Unknowns: the matrix C, the vector b, and the scalar a; the nonlinear function F; the instantaneous rate parameter r(t) of an inhomogeneous Poisson process.
4. Park, I., and Pillow, J. (2011) Bayesian spike-triggered covariance analysis.

The Models - GQM
Even though we are no longer dealing with the GLM, in practice if we choose a form of F that preserves concavity of the log-likelihood, estimating the parameters using MLE will be tractable. Commonly used functions are exp(u) and log(1 + exp(u)).
The Algorithm:
- Write down the log-likelihood function
- Use optimization to find the MLE, assuming one of the above forms for F
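As an illustration of the GQM rate and its Poisson log-likelihood (a Python/NumPy sketch, not the project's MATLAB implementation), using the softplus choice F(u) = log(1 + exp(u)); the parameters C, b, a and all sizes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def softplus(u):
    # F(u) = log(1 + exp(u)), computed stably; one of the two
    # commonly used choices that keep the log-likelihood tractable.
    return np.logaddexp(0.0, u)

def gqm_rate(stim, C, b, a):
    # Quadratic generator s' C s + b' s + a for each stimulus row,
    # passed through the spiking nonlinearity F.
    quad = np.einsum('ti,ij,tj->t', stim, C, stim)
    return softplus(quad + stim @ b + a)

def poisson_log_lik(spikes, rate, dt=1.0):
    # Poisson log-likelihood up to the log(y!) constant.
    return spikes @ np.log(rate * dt) - (rate * dt).sum()

# Toy evaluation with made-up parameters.
T, D = 1000, 6
stim = rng.standard_normal((T, D))
C = 0.1 * np.eye(D)        # hypothetical quadratic term
b = np.ones(D) / D         # hypothetical linear term
a = -0.5                   # hypothetical offset
rate = gqm_rate(stim, C, b, a)
spikes = rng.poisson(rate)
ll = poisson_log_lik(spikes, rate)
```

In a real fit, `C`, `b`, and `a` would be the optimization variables and `poisson_log_lik` the objective handed to a gradient-based optimizer.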

The Models - NIM [5]
[Figure: linear-nonlinear model vs. nonlinear input model [5]]
The Nonlinear Input Model (NIM) defines the neuron's processing as a sum of nonlinear inputs. This allows for rectification of inputs due to spike generation in the input neurons.
5. McFarland, J.M., Cui, Y., and Butts, D.A. (2013) Inferring nonlinear neuronal computation based on physiologically plausible inputs.

The Models - NIM
The model is r(t) = F(sum_i w_i f_i(k_i . s)).
Knowns: the stimulus vector s; the spike times.
Assumptions: the upstream functions f_i are rectified linear.
Unknowns: the k_i are the linear filters; the weights w_i will be +/-1; the f_i are nonlinear functions; F is a nonlinear function; r(t) is the instantaneous rate parameter of an inhomogeneous Poisson process.

The Models - NIM
The Algorithm:
- Write down the log-likelihood function
- Use optimization to find the MLE, assuming one of the above forms for F
- Regularization will allow us to incorporate prior knowledge about the parameters
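A minimal sketch of the NIM rate computation under the assumptions above (rectified linear upstream functions, weights of +/-1), again in Python/NumPy for illustration; the filters and sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(u):
    # Rectified linear upstream function, as assumed for the f_i.
    return np.maximum(u, 0.0)

def nim_rate(stim, filters, weights):
    # NIM generator: signed sum of rectified filter outputs w_i f_i(k_i . s),
    # passed through a softplus spiking nonlinearity F(u) = log(1 + exp(u)).
    gen = sum(w * relu(stim @ k) for k, w in zip(filters, weights))
    return np.logaddexp(0.0, gen)

# Toy example: one excitatory (+1) and one suppressive (-1) input
# with hypothetical filters along the first two stimulus dimensions.
T, D = 1000, 8
stim = rng.standard_normal((T, D))
filters = [np.eye(D)[0], np.eye(D)[1]]
weights = [+1.0, -1.0]
rate = nim_rate(stim, filters, weights)
```

Because the excitatory input is rectified, only stimuli driving the first filter positively raise the rate, which is the rectification-from-spike-generation idea the model encodes.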

Implementation and Data
Parameter fitting will be implemented on an Intel Core 2 Duo with 3 GB of RAM. The programming language will be MATLAB.
Data:
- Initial work will be done using data from a lateral geniculate nucleus (LGN) neuron (visual system)
- Simulated data from other neurons in the visual system
Data can be found at http://www.clfs.umd.edu/biology/ntlab/nim/

Validation
- Use stimulus data and response data to construct the model, i.e. find the model parameters
- Use the stimulus data and the fitted model to create new (simulated) response data
- Use the stimulus data and the simulated responses to generate a new model
- Check how closely the parameters of the new model and the original model agree

Testing
- Use k-fold cross-validation on the log-likelihood of the model, LL_x, with the log-likelihood of the null model, LL_0, subtracted
- Measure the predictive power of the model by comparing the predicted model output to the measured output: fraction of variance explained
- Use early models (e.g. the LNP) to compare against results of later models (e.g. the NIM)
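The two test quantities above can be sketched as follows (Python/NumPy for illustration). The `fit`/`predict` interface is a hypothetical stand-in for whichever model is being cross-validated, and the per-spike normalization is one common convention:

```python
import numpy as np

def fraction_variance_explained(y, y_hat):
    # 1 - SS_res / SS_tot: fraction of response variance captured.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def cv_log_lik_per_spike(stim, spikes, fit, predict, k_folds=5):
    # k-fold cross-validated Poisson log-likelihood LL_x, with the
    # null (constant-rate) model's LL_0 subtracted, per spike.
    T = len(spikes)
    folds = np.array_split(np.arange(T), k_folds)
    total, n_sp = 0.0, 0
    for test_idx in folds:
        train = np.setdiff1d(np.arange(T), test_idx)
        params = fit(stim[train], spikes[train])
        rate = predict(stim[test_idx], params)
        null_rate = np.full(len(test_idx), spikes[train].mean())
        ll = spikes[test_idx] @ np.log(rate) - rate.sum()
        ll0 = spikes[test_idx] @ np.log(null_rate) - null_rate.sum()
        total += ll - ll0
        n_sp += spikes[test_idx].sum()
    return total / n_sp

# Sanity check: a model that predicts the response exactly
# explains all of the variance.
y_true = np.array([2.0, 0.0, 3.0, 1.0])
fve_perfect = fraction_variance_explained(y_true, y_true)
```

A model no better than the null gives a cross-validated value near zero, so this score directly answers whether the stimulus filters add predictive power.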

Project Schedule and Milestones
PHASE I (September through December):
- Write project proposal (September)
- Implement and validate the LNP model (October)
- Develop software to validate models (October)
- Implement and validate the GLM with regularization (November-December)
- Complete mid-year progress report (December)
PHASE II (January through May):
- Implement and validate the STC and GQM (January-February)
- Implement and validate the NIM with rectified linear upstream functions (March)
- Develop software to test all models (April)
- Complete final report and presentation
If time permits:
- Fit the nonlinear upstream functions of the NIM using basis functions
- Extend the models to more diverse stimulus types
- Use the NIM to model small networks of neurons

Deliverables
- Implemented MATLAB code for all of the models (LNP-STA, LNP-GLM, STC, GQM, NIM)
- Documentation of code
- Results of validation and testing for all of the models
- Mid-year presentation
- Final report
- Final presentation

References
Chichilnisky, E.J. (2001). A simple white noise analysis of neuronal light responses. Network: Comput. Neural Syst., 12, 199-213.
Schwartz, O., Chichilnisky, E.J., & Simoncelli, E.P. (2002). Characterizing neural gain control using spike-triggered covariance. Advances in Neural Information Processing Systems, 1, 269-276.
Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding models. Network: Comput. Neural Syst., 15, 243-262.
Schwartz, O. et al. (2006). Spike-triggered neural characterization. Journal of Vision, 6, 484-507.
Paninski, L., Pillow, J., & Lewi, J. (2006). Statistical models for neural encoding, decoding, and optimal stimulus design.
Park, I., & Pillow, J. (2011). Bayesian spike-triggered covariance analysis. Advances in Neural Information Processing Systems, 24, 1692-1700.
Butts, D.A., Weng, C., Jin, J., Alonso, J.M., & Paninski, L. (2011). Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression. The Journal of Neuroscience, 31(31), 11313-11327.
McFarland, J.M., Cui, Y., & Butts, D.A. (2013). Inferring nonlinear neuronal computation based on physiologically plausible inputs. PLoS Computational Biology.

Figures
1. http://msjensen.cehd.umn.edu/webanatomy_archive/images/histology/
2. http://www.sciencedirect.com/science/article/pii/s0079612306650310
3. http://en.wikipedia.org/wiki/spike-triggered_average
4. http://books.nips.cc/papers/files/nips14/ns14.pdf
5. McFarland, J.M., Cui, Y., & Butts, D.A. (2013). Inferring nonlinear neuronal computation based on physiologically plausible inputs. PLoS Computational Biology.