LNP cascade model
- Simplest successful descriptive spiking model
- Easily fit to (extracellular) data
- Descriptive and interpretable (although not mechanistic)

For a Poisson model, the response is captured by the relationship between the distribution of red points (spike-triggered stimuli) and blue points (raw stimuli), expressed in terms of Bayes' rule: P(spike | stim) = P(spike, stim) / P(stim). This cannot be estimated directly...

ML estimation of LNP [on board]

ML estimation of LNP
If f_θ(k·x) is convex in its argument k·x (and in θ), and log f_θ(k·x) is concave, then the log-likelihood of the LNP model is concave for all observed data {n(t), x(t)} [Paninski, 04]
Examples: f(u) = e^u, or f(u) = u^α with 1 < α < 2
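
A minimal sketch of this fit, assuming discrete time bins, an exponential nonlinearity f(u) = e^u, and illustrative variable names (X for the stimulus matrix, n for spike counts); with these choices the negative log-likelihood is convex, so a generic optimizer finds the global optimum:

# Sketch (assumed setup): ML estimation of the LNP filter k with exponential nonlinearity.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, d, dt = 20000, 20, 0.01            # time bins, stimulus dimension, bin size (s)
X = rng.standard_normal((T, d))       # white-noise stimulus, one row per bin
k_true = np.exp(-np.arange(d) / 5.0)  # "true" filter used only to simulate data
n = rng.poisson(np.exp(X @ k_true) * dt)   # observed spike counts per bin

def neg_log_lik(k):
    lam = np.exp(X @ k) * dt          # Poisson rate per bin under filter k
    # negative Poisson log-likelihood, dropping the constant log(n!) term
    return lam.sum() - n @ np.log(lam + 1e-12)

k_hat = minimize(neg_log_lik, np.zeros(d), method="L-BFGS-B").x
print("corr(k_hat, k_true) =", np.corrcoef(k_hat, k_true)[0, 1])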

Simple LNP fitting
Assuming:
- stochastic stimuli, spherically distributed
- mean of the spike-triggered ensemble is shifted from that of the raw ensemble
Reverse correlation gives an unbiased estimate of k (for any f). For exponential f, this is the ML estimate! [Bussgang 52; de Boer & Kuyper 68]

Computing the STA [figure: raw stimuli and spiking (spike-triggered) stimuli plotted in the (s1, s2) stimulus plane]
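
A minimal sketch of the STA computed by reverse correlation, using a simulated LNP neuron driven by spherical Gaussian noise (the filter, nonlinearity, and variable names are illustrative assumptions):

# Sketch: spike-triggered average via reverse correlation on simulated data.
import numpy as np

rng = np.random.default_rng(1)
T, d = 50000, 25
X = rng.standard_normal((T, d))             # spherically distributed stimuli
k_true = np.sin(np.linspace(0, np.pi, d))   # "true" filter, for illustration only
k_true /= np.linalg.norm(k_true)
spikes = rng.poisson(np.exp(1.5 * (X @ k_true) - 2.0))   # Poisson counts per bin

# STA = spike-weighted mean stimulus (raw-ensemble mean is ~0 for this stimulus)
sta = (spikes @ X) / spikes.sum()
sta /= np.linalg.norm(sta)
print("cosine(STA, true filter) =", sta @ k_true)

With the exponential nonlinearity used here, the STA also points in the direction of the ML estimate, consistent with the slide above.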

STA corresponds to a direction in stimulus space

Projecting onto the STA: P(spike(t) | k·s(t)) = P(spike(t) & k·s(t)) / P(k·s(t))

Solving for the nonlinearity nonparametrically [figure: response as a function of the projection onto the STA]
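
A minimal sketch of this nonparametric estimate: bin the projections of all stimuli onto the STA and take the ratio of spike-triggered to raw counts in each bin (the simulation and all names are illustrative, not from the slides):

# Sketch: histogram-ratio estimate of the nonlinearity along the STA axis.
import numpy as np

rng = np.random.default_rng(2)
T, d = 50000, 25
X = rng.standard_normal((T, d))
k_true = np.sin(np.linspace(0, np.pi, d)); k_true /= np.linalg.norm(k_true)
spikes = rng.poisson(0.1 * np.log1p(np.exp(3.0 * (X @ k_true) - 1.0)))  # soft-rectified rate

sta = (spikes @ X) / spikes.sum()
proj = X @ (sta / np.linalg.norm(sta))      # 1-D projection of every stimulus

edges = np.linspace(proj.min(), proj.max(), 21)
raw_counts, _ = np.histogram(proj, bins=edges)
spk_counts, _ = np.histogram(proj, bins=edges, weights=spikes)

# estimated f(projection) ~ spikes per bin given that projection value
f_hat = spk_counts / np.maximum(raw_counts, 1)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.column_stack([centers, f_hat]).round(3))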

Projecting onto an axis orthogonal to the STA

[Figure 3: Characterization of light response in one ON cell (A, B) and one OFF cell (C, D) simultaneously recorded in salamander retina. A, C: the spike-triggered average L-cone contrast during random flicker stimulation; B, D: response as a function of input strength. - Chander & Chichilnisky 01]

[Figure: time course, 0-1 s - Pillow et al., 2004]

V1 simple cell [Figure: space-time receptive field, space X (deg) vs. time T (msec), shown at slices T = 50, 100, 150, 200 ms - Ohzawa et al.]

[Figure: three cells' space-time receptive fields (X in deg vs. T in ms) and response tuning curves (spikes/sec) as a function of spatial frequency (SF, cyc/deg) and temporal frequency (TF, Hz)]

LNP summary
LNP is the de facto standard descriptive model, and is implicit in much of the experimental literature:
- Accounts for basic RF properties
- Accounts for basic spiking properties (rate code)
- Easily fit to data
- Easily interpreted
BUT: non-mechanistic, and exhibits striking failures (esp. beyond early sensory/motor areas)...

STA estimation errors
Convergence rate [Paninski, 03]: e(k̂) ≈ (σ / E(k·s)) √(d / N), where σ = stim s.d., d = stim dim., N = number of spikes
Non-spherical stimuli can cause biases
Model failures:
- symmetric nonlinearity (causes no change in STE mean)
- response not captured by a 1-D projection
- non-Poisson spiking behaviors
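
To illustrate the √(d/N) scaling, a small simulation (an illustrative setup, not from the slides) can track how the STA error shrinks as the number of spikes grows:

# Sketch: STA estimation error shrinking roughly like sqrt(d / N).
import numpy as np

rng = np.random.default_rng(3)
d = 30
k_true = np.ones(d) / np.sqrt(d)

def sta_error(n_bins):
    X = rng.standard_normal((n_bins, d))
    spikes = rng.poisson(np.exp(2.0 * (X @ k_true) - 1.0))
    sta = (spikes @ X) / max(spikes.sum(), 1)
    sta /= np.linalg.norm(sta)
    return np.linalg.norm(sta - k_true), spikes.sum()

for n_bins in (500, 2000, 8000, 32000):
    err, n_spk = sta_error(n_bins)
    print(f"N = {n_spk:6d} spikes   error = {err:.3f}   sqrt(d/N) = {np.sqrt(d / n_spk):.3f}")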

[Figure: eigenvalue spectra (variance vs. eigenvalue number) for three stimulus conditions, and STA error (bootstrap vs. asymptotic) as a function of number of spikes / stimulus dimensions]

Example 1: sparse noise [figure: true filter and STA]

Example 2: uniform noise [figure: true filter and estimate]

LNP limitations
- Symmetric nonlinearities and/or multidimensional front-end (e.g., V1 complex cells) → Subspace LNP

Classic V1 models [diagram: simple cell and complex cell architectures]

V1 simple cell [Figure: STA in space-time and in spatial/temporal frequency, and STC eigenvalue spectrum (variance vs. eigenvalue number) - Rust, Schwartz, Movshon, Simoncelli, 05]
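
A minimal sketch of spike-triggered covariance (STC) analysis for a symmetric, complex-cell-like response, where the STA is near zero but the STC eigenvalue spectrum exposes a two-dimensional relevant subspace (the energy model and all names are illustrative assumptions):

# Sketch: spike-triggered covariance for a symmetric (energy-like) model.
import numpy as np

rng = np.random.default_rng(4)
T, d = 60000, 24
X = rng.standard_normal((T, d))
k1 = np.sin(np.linspace(0, 2 * np.pi, d)); k1 /= np.linalg.norm(k1)
k2 = np.cos(np.linspace(0, 2 * np.pi, d)); k2 /= np.linalg.norm(k2)

# Energy model: rate depends on squared projections, so the STA is ~0
rate = 0.05 * ((X @ k1) ** 2 + (X @ k2) ** 2)
spikes = rng.poisson(rate)

mu = (spikes @ X) / spikes.sum()             # STA (near zero here)
Xc = X - mu
stc = (Xc.T * spikes) @ Xc / spikes.sum()    # spike-weighted stimulus covariance
eigvals, eigvecs = np.linalg.eigh(stc)
print("largest STC eigenvalues:", np.round(eigvals[-4:], 2))
# The top two eigenvectors should (approximately) span the subspace of k1 and k2.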

LNP limitations
- Symmetric nonlinearities and/or multidimensional front-end (e.g., V1 complex cells) → Subspace LNP
- Linear front end insufficient for non-peripheral neurons → Cascades / fixed front-end nonlinearities
- Responses depend on spike history and on other cells → Recursive models (GLM)

Linear-Nonlinear-Poisson (LNP): stimulus filter → point nonlinearity → probabilistic spiking

Recursive LNP: stimulus filter → exponential nonlinearity → probabilistic spiking, with a post-spike waveform fed back into the input [Truccolo et al 05; Pillow et al 05]
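
A minimal simulation sketch of this recursive structure: the conditional intensity in each bin is an exponential function of the filtered stimulus plus the recent spike history passed through a post-spike waveform (the refractory-shaped waveform and all names are illustrative assumptions):

# Sketch: simulate a recursive LNP (GLM) with a post-spike history filter.
import numpy as np

rng = np.random.default_rng(5)
T, d, dt = 20000, 20, 0.001
X = rng.standard_normal((T, d))
k = np.exp(-np.arange(d) / 4.0)            # stimulus filter
h = -5.0 * np.exp(-np.arange(30) / 5.0)    # post-spike waveform (refractory-like)
drive = X @ k + 1.5                        # stimulus drive plus baseline

spikes = np.zeros(T, dtype=int)
for t in range(T):
    # feedback from spikes in the last len(h) bins
    hist = sum(h[j - 1] * spikes[t - j] for j in range(1, len(h) + 1) if t - j >= 0)
    lam = np.exp(drive[t] + hist) * dt     # conditional intensity for this bin
    spikes[t] = rng.poisson(lam)

print("total spikes:", spikes.sum(), "  rate (Hz):", spikes.sum() / (T * dt))

Fitting works exactly as before: stack k, h, and the baseline into one parameter vector and maximize the Poisson log-likelihood, which with an exponential nonlinearity again has no local maxima [Paninski 04].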

stimulus & spike train → model parameters: maximize likelihood. Critical: the likelihood function has no local maxima [Paninski 04]


rLNP model: stimulus → stimulus filter → exponential nonlinearity → probabilistic spiking, with a post-spike waveform fed back

Multi-cell rLNP model, with spike coupling: stimulus → stimulus filter → exponential nonlinearity → probabilistic spiking, with post-spike waveforms and coupling waveforms from other cells fed back

Equivalent model diagram [figure: stimulus x(t) passed through filter K; spike train y_1(t) coupled to its own history via h_11 and to other cells' trains y_n(t) via h_1n]

stimulus & spike trains → model parameters: maximize likelihood. Again: the likelihood function has no local maxima [Paninski 04]

Methods
- spatiotemporal binary white noise (8 minutes at 120 Hz)
- macaque retinal ganglion cell (RGC) spike responses (ON and OFF parasol)
[Pillow, Shlens, Paninski, Chichilnisky, Simoncelli - unpublished]

[Figure: example ON cell and example OFF cell - Pillow, Shlens, Paninski, Chichilnisky, Simoncelli, unpublished]

Cross-correlations [Figure: firing rate (sp/s) vs. time lag (ms, -50 to 50) for ON cells and OFF cells, comparing RGC data with the coupled rLNP (GLM) and the uncoupled rLNP model]

Single-trial spike prediction [Figure: RGC responses and single-trial predictions (neighbor spikes frozen); spike prediction (bits/sp) of the full coupled model and the uncoupled model plotted against the LNP model]

Decoding [Figure: stimulus information (bits/s) recovered by an optimal linear decoder, LNP, rLNP, and coupled rLNP]