
Neuron, Volume 71

Supplemental Information

Hippocampal Time Cells Bridge the Gap in Memory for Discontiguous Events

Christopher J. MacDonald, Kyle Q. Lepage, Uri T. Eden, and Howard Eichenbaum

Inventory of Supplemental Information

1. Figure S1 is related to pg. 5, where we discuss behavior during the delay period.
2. Figure S2 is related to pg. 5, where we discuss how head direction- and speed-related neural activity depends on time.
3. Figure S3 is related to pg. 6 of the manuscript, where we discuss the distribution of STIC values across a subpopulation of neurons.
4. Figure S4 is related to pg. 9 of the manuscript, where we discuss the time course of retiming.
5. Figure S5 is related to pg. 14 and the discussion of our LFP analysis.
6. Figure S6 is related to pg. 20, where we discuss isolation of pyramidal neurons.
7. Supplemental Experimental Procedures gives extensive details on the methods that could not be included in the main text due to space constraints.
8. Supplemental References gives references cited only in the Supplemental Material.

Supplemental Figures

Figure S1: The movement pattern for each rat during the delay period. (a) Data from a single recording session. The grey line indicates the rat's X-Y position as a function of time elapsed since the beginning of each trial's delay period (moving upward on the z-axis). The movement pattern for trials starting with object 1 is on the left, while the right plot shows object 2 trials. (b-f) Movement patterns for rats are shown in the same format as in (a) but for the remaining five recording sessions.


Figure S2: Head direction- and speed-related neural activity depend on the passage of time during the delay period. (a) The head direction (left column) and speed firing rate plots (right column) are shown for one neuron. At the top of the left column is a polar plot (e.g., Knierim et al., 1995) in which normalized neural activity (blue line) is shown as a function of the rat's head direction (0-360 degrees) throughout the entire delay period (0-6 s). Below it are six additional polar plots that show head direction activity during consecutive 1-s time segments starting from the beginning of the delay period. Similarly, the top plot in the right column shows normalized neural activity (y-axis; blue line) as a function of the rat's speed (x-axis) during the full delay period. The six plots below it are speed firing rate plots for successive 1-s time segments during the delay period. The heat plot below the left and right columns shows the neuron's activity across the whole delay period (i.e., irrespective of head direction and speed). Note that grey-shaded areas in the plots correspond to head directions (left) or speeds (right) that were observed. For the plots at the top of each column, only head directions (left) or speeds (right) observed at least once in each of the 1-s time segments are shaded in grey. (b-l) The same format as in (a) but for 11 additional neurons. Note that all of the neurons (a-l) are from the same recording session.

Figure S3: The relative degree of spatial and temporal information encoded during the delay period varies across a large population of neurons that are active during the delay period. The frequency histogram shows the distribution of STIC values for all neurons in which space and time contributed to activity during the delay period.

Figure S4: (a-c) Each panel shows normalized activity inside a neuron's time field for each trial during the recording session. Grey lines indicate the delay-length transitions. Retiming occurs abruptly, more than 10 trials after the length of the delay period changes. However, the trial at which a neuron retimes varies across the population. Furthermore, once a neuron retimes, it may continue to change its firing rate in a continuous manner over the session (panel b). The neurons are the same as those shown in panels 1, 2, and 4 (left to right) of Figure 7b.

Figure S5: Throughout the object-odor sequence, the local field potential (LFP) showed a prominent band of power within the theta range (4-12 Hz). (a) Trial-averaged time-frequency spectrograms of the LFP recorded from a single tetrode during a recording session. The spectrograms are separated according to the trial period (object = left; delay = center; odor = right) as well as the object that was presented (top = object 1; bottom = object 2). For the object and odor periods, the spectral density was estimated using a single 1.2-s window of activity from each trial. The spectral density during the delay period was estimated using a 1-s window that was slid in 100-ms increments across the delay period in each trial. In all cases, the bandwidth was 1-40 Hz and the frequency resolution was 1 Hz. (b-f) The same format as in (a) but for five additional recording sessions.
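As a rough illustration of how a trial-averaged delay-period spectrogram with these window settings might be computed, the sketch below uses SciPy's short-time Fourier routines. The sampling rate, the layout of the LFP array, and the helper name are assumptions for illustration only; this is not the pipeline actually used to produce Figure S5.

```python
import numpy as np
from scipy.signal import spectrogram

def trial_averaged_spectrogram(lfp_trials, fs=1000.0):
    """Average spectrograms over trials (hypothetical helper).

    lfp_trials : array, shape (n_trials, n_samples)
        Delay-period LFP for each trial (assumed layout).
    fs : float
        Sampling rate in Hz (assumed value).
    """
    win = int(1.0 * fs)          # 1-s window, as in the caption
    step = int(0.1 * fs)         # slid in 100-ms increments
    spectra = []
    for trial in lfp_trials:
        f, t, Sxx = spectrogram(trial, fs=fs, window="hann",
                                nperseg=win, noverlap=win - step,
                                nfft=int(fs))    # ~1-Hz frequency resolution
        keep = (f >= 1) & (f <= 40)              # restrict to the 1-40 Hz band
        spectra.append(Sxx[keep])
    return f[keep], t, np.mean(spectra, axis=0)
```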

Figure S6: The average waveform from a putative pyramidal neuron (blue; spike width = 0.27 ms). The average spike width of pyramidal neurons was 0.27 ms ± (range = 0.20 ms to 0.60 ms) and their average firing rate across the recording session was 0.44 Hz ± 0.02 (range = Hz to 2.6 Hz). Though we avoided processing putative interneurons offline (i.e., cluster cutting) and including them in our data set, we provide here an average waveform for an interneuron that we isolated (red; spike width = 0.15 ms). The average spike width of the putative interneurons that were isolated was 0.18 ms ± 0.03 (range = 0.15 ms to 0.21 ms) and their average firing rate was 17.8 Hz ± 2.3 (range = 15.5 Hz to 20.1 Hz). Interneurons could be distinguished from pyramidal neurons offline by various features, including their firing rate over the course of the recording session, their autocorrelograms, and their relatively narrow spike widths (e.g., Csicsvari et al., 1999). Spike widths were computed at 25% of maximum amplitude from spikes filtered between 154 Hz and 8.8 kHz.

Supplemental Experimental Procedures

Details of the behavioral task

The task was performed in the central stem of a modified T-maze that allowed the rat to move along a track as it experienced each event sequentially during a trial. The rat could then progress down a return arm and begin the next trial (Figure 1 in main text).

Each trial began with manual removal of a wooden panel that allowed access to either a rectangular block of wood with grated rails or half of a textured, green rubber ball mounted on another wood panel. The rat investigated the object for s, during which it typically made several lateralized head movements over the object. Then the object panel was removed and the rat progressed towards another wooden panel, at which point an additional blank wooden panel was placed behind the rat, confining it to an area that was 6 cm wide and 22 cm long. At the end of the delay period, the wooden panel was removed and the odor pot was placed on a small platform (7 cm by 7 cm) that extended from the maze to the rat's right side. The rats never walked past the odor pots during a given trial and, per our definition, always sampled the odor. Because the wooden panel was removed manually, there was slight variability from trial to trial in when the delay was marked, through video scoring, as having ended. For one rat, the delay period was 6.73 ± 0.41 s and 6.0 ± 0.44 s (mean ± s.d.) during its two recording sessions. For the second rat, the mean delay period was 9.97 ± 0.61 s and 9.86 ± 0.62 s (mean ± s.d.) during its two recording sessions. For the third and fourth rats, the delay period was 9.0 ± 0.56 s and 9.27 ± 0.71 s, respectively.

The odor pots were 10-cm tall, 11-cm wide (10 cm inner diameter) terra cotta pots filled with playground sand (QUIKRETE Premium Play Sand). The odor pots were kept together next to the experimenter until the end of the delay period. The sand was scented with only one of two common household spices, cinnamon or basil (1% concentration by weight in sand). On go trials, the rat could dig in the sand to find a third of a Fruit Loop (Kellogg's) reward buried in the center of the pot at a depth of cm. On nogo trials, the correct response was to refrain from digging and move forward on the central stem, then turn left, where the rat was rewarded with two 1/3 Fruit Loops. Incorrect go and nogo responses were not rewarded.

Statistical modeling of the neural spike trains

Temporal modulation, independent of space and behavior

In this analysis, spiking activity was modeled through a conditional intensity function that expresses the spiking probability as a multiplicatively separable function of various covariates that modulate spiking activity.

During the delay period, spiking activity was modeled as:

λ(t) = λ_time(t) · λ_space(t) · λ_speed(t) · λ_head angle(t) · λ_interactions(t)    (1)

Here λ_time is an exponential fifth-order polynomial of time relative to the start of each trial's delay period (i.e., five parameters; see Eq. 2), λ_space is a Gaussian-shaped place field composed of five parameters (see Eq. 3), λ_speed is an exponential second-order polynomial, and λ_head angle is defined in Eqn. 4 and Eqn. 5.

λ_time(t) = exp( Σ_{i=1}^{5} α_i t^i )    (2)

λ_space(t) = exp( Σ_{i=1}^{2} β_i x(t)^i + Σ_{j=1}^{2} δ_j y(t)^j + λ x(t) y(t) )    (3)

λ_head angle(t) = exp( κ cos(θ(t) − θ_p) )    (4)

λ_head angle(t) = exp( a cos(θ(t)) + b sin(θ(t)) )    (5)

In Eqn. 2, t refers to the time elapsed from the beginning of the delay period, and the five α's are parameters controlling the degree to which the spike rate is modulated by time during the delay period. In Eqn. 3, x(t) and y(t) refer to the x and y coordinates of the rat at time t during the delay period, and x(t)y(t) is a cross-term; the terms β, δ, and λ denote the five parameters specifying the degree to which each of these covariates modulated the spiking rate. In Eqn. 4 and Eqn. 5, θ(t) is the head angle recorded at time t into the delay period, κ is the parameter that controls the degree to which the rate is modulated by head angle, and θ_p is the preferred angle of spiking. The form of the model described by Eqn. 4 specifies the head angle influence on firing rate to be proportional to a von Mises distribution for a circular random variable. The a and b parameters in Eqn. 5 can be related to the modulation κ and the preferred angle θ_p by

κ = sqrt(a^2 + b^2) and tan(θ_p) = b / a    (6)
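Because the multiplicative form in Eqns. 1-5 is log-linear, each component contributes a set of columns to a design matrix that can be fit as a Poisson regression on binned spike counts. The sketch below is a minimal illustration of that structure using statsmodels; the bin width, the covariate arrays, and the helper name are assumptions for illustration, not the authors' actual fitting code (the interaction terms of Eqn. 1 are omitted here for brevity).

```python
import numpy as np
import statsmodels.api as sm

def fit_delay_period_glm(counts, t, x, y, speed, theta, dt=0.001):
    """Fit a log-linear conditional-intensity model to binned spikes.

    counts : spike count per time bin; t, x, y, speed, theta : covariates
    sampled at the same bins (hypothetical arrays); dt : bin width in s.
    """
    H = np.column_stack([
        np.column_stack([t ** i for i in range(1, 6)]),  # Eq. 2: 5th-order time polynomial
        x, x ** 2, y, y ** 2, x * y,                     # Eq. 3: Gaussian-shaped place field
        speed, speed ** 2,                               # exponential 2nd-order speed polynomial
        np.cos(theta), np.sin(theta),                    # Eq. 5: von Mises head-direction terms
    ])                                                   # second-order interactions omitted
    H = sm.add_constant(H)                               # constant (baseline) term
    # Poisson GLM with log link: E[counts] = exp(H @ beta) * dt
    model = sm.GLM(counts, H, family=sm.families.Poisson(),
                   offset=np.log(dt) * np.ones(len(counts)))
    fit = model.fit()
    a, b = fit.params[-2], fit.params[-1]                # head-direction coefficients
    kappa, theta_p = np.hypot(a, b), np.arctan2(b, a)    # Eq. 6
    return fit, kappa, theta_p
```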

For brevity, λ_interactions(t) refers to all the interactions in the model as well as a constant term. However, λ_interactions(t) is itself multiplicatively separable into functions that specify the modulation of the neuron's firing rate by all second-order interactions among the rat's position, speed, head angle, and velocity. We chose an exhaustive list of interaction terms so that we could explicitly account for the effect that the time covariate has on spiking probability. Eq. 1 can be expressed in a more compact form:

λ(t) = exp(H β)    (7)

where the design matrix H specifies the covariates described above and β is a vector of parameters. The parameters for this model are estimated using maximum likelihood methods. For a small proportion of the fits we performed, the maximum likelihood estimates were not obtained, and these cases were excluded. In practice, many of the parameters obtained by this procedure are correlated. What follows is a description of a procedure used to obtain time parameters that are independent of the other parameters in the model (e.g., McCullagh and Nelder, 1989). In this way, the temporal modulation of a neuron's firing rate during the delay period can be evaluated free of any confounds with the other covariates specified in Eqn. 7. Note that the design matrix in Eqn. 7 can be expressed as:

H = [H_time  H_other]    (8)

where H_time refers to the time part of the design matrix and H_other refers to the columns of the design matrix associated with the remaining covariates (space, head direction, speed, and interactions). Here, H_time is potentially confounded with H_other. Therefore, a design matrix is needed whereby time's influence on neural activity is made orthogonal to H_other. That is:

H_⊥ = [H_⊥time  H_other]    (9)

In order to obtain H_⊥time, H_time is projected so that it is orthogonal to H_other:

H_⊥time = P H_time    (10)

The projection matrix P, which is orthogonal to the span of the columns in H_other, is defined by:

P = I − H_other (H_other^T W H_other)^{-1} H_other^T W    (11)

where W is a diagonal matrix with an estimate of the rate λ(t), in vector form, down the diagonal. After obtaining H_⊥time, the temporal modulation is specified by λ_time(t) in Eqn. 1. Importantly, the estimated firing rate (λ(t) in Eqn. 1) and the time parameters are identical whether H or H_⊥ is used. The temporal modulation values were down-weighted by their 95% confidence intervals divided by 4 and then averaged across trials. The down-weighting was implemented so that highly uncertain values of temporal modulation were discounted. The results of this procedure are presented in Figure 3e-h.
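A compact way to see what Eqns. 10-11 do is to apply the rate-weighted projection directly to the design-matrix columns. The snippet below is a sketch under the assumption that H_time, H_other, and an estimated rate vector are already available as NumPy arrays; the variable names are illustrative stand-ins, not taken from the original analysis code.

```python
import numpy as np

def orthogonalize_time_columns(H_time, H_other, rate_hat):
    """Project the time columns orthogonally to the other covariates (Eqns. 10-11).

    H_time   : (n_bins, n_time_params) time portion of the design matrix
    H_other  : (n_bins, n_other_params) remaining covariates and interactions
    rate_hat : (n_bins,) estimated conditional intensity used as the weight
    """
    W = np.diag(rate_hat)                                   # diagonal weight matrix (Eq. 11)
    G = H_other.T @ W @ H_other                             # weighted Gram matrix
    P = np.eye(len(rate_hat)) - H_other @ np.linalg.solve(G, H_other.T @ W)
    H_perp_time = P @ H_time                                # Eq. 10
    return np.hstack([H_perp_time, H_other])                # Eq. 9: orthogonalized design matrix
```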

Computing the spatiotemporal information index

Due to the well-documented coding of spatial information by hippocampal neurons, an analysis was developed to compare the information encoded by time and space in single neurons. A model of neural activity was first specified as:

λ^o(t) = λ^o_time(t) · λ^o_space(t) · λ^o_history(t)    (12)

where λ^o(t) is the spiking intensity for object o at time t, λ^o_time(t) is the modulation of the conditional intensity due to time, λ^o_space(t) is the modulation due to space, and λ^o_history(t) is the modulation due to spiking history. We refer to Eqn. 12 as the space + time (S+T) model to highlight that it includes both space and time as covariates. Here, λ^o_time is an exponential fifth-order polynomial in time, λ^o_space is a Gaussian-shaped place field, and λ^o_history is the exponentiated sum of weighted linear combinations of past spiking increments, in 1-ms bins going back 5 ms and in 25-ms bins thereafter going back an additional 150 ms.

To compare the contribution of space and time to spiking activity, a nested modeling approach was used along with formal hypothesis testing. A space model was defined solely in terms of λ^o_space(t) and λ^o_history(t), omitting λ^o_time(t). That is,

λ^o(t) = λ^o_space(t) · λ^o_history(t)    (13)

Similarly, a time model was defined as

λ^o(t) = λ^o_time(t) · λ^o_history(t)    (14)

In Eqns. 12, 13, and 14, we allowed the parameters to differ depending on the object presented. We fitted these models to the appropriate binned spike trains and computed the likelihood of the data for each model. Note that the space and time models (specified in Eqns. 13 and 14) are both nested within the S+T model (specified in Eqn. 12). In order to evaluate whether time provides more information about spiking than space alone, a likelihood ratio test was conducted based on the test statistic

W = −2 [ ln(Γ_space) − ln(Γ_S+T) ]    (15)

which, under the null hypothesis of no additional contribution of time, has a chi-square distribution with 5 degrees of freedom. The same approach was used to assess whether the addition of space to a model already including time increases the likelihood of the data. That is,

W = −2 [ ln(Γ_time) − ln(Γ_S+T) ]    (16)

The STIC was computed from the difference between the increase in the likelihood of the data due to adding time (Eqn. 15) and the increase in the likelihood of the data due to adding space (Eqn. 16). That is,

−2 [ ln(Γ_space) − ln(Γ_S+T) ] + 2 [ ln(Γ_time) − ln(Γ_S+T) ]    (17)

Dropping the common factor of 2, this expression simplifies to the STIC:

STIC = ln(Γ_time) − ln(Γ_space)    (18)

A STIC value of x can be interpreted to indicate that time contributes e^x times more to the spiking likelihood than space. It is important to emphasize that correlated space and time parameters do not pose an interpretational problem for the STIC. The STIC relies on comparisons between the S+T model and its nested submodels. The submodels are formulated so that their only difference from the S+T model is the independent part of space or time (depending on the comparison). For example, the S+T model (Eqn. 12) and the space model (Eqn. 13) differ only by that part of the time covariate which is not confounded with space.
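Since GLM fits in statsmodels expose the log-likelihood directly (result.llf), the nested comparisons in Eqns. 15-18 reduce to a few lines. The sketch below assumes three already-fitted Poisson GLM results for the S+T, space-only, and time-only models; the variable names and example numbers are hypothetical, and this is an illustration of the statistics rather than the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def likelihood_ratio_stats(ll_st, ll_space, ll_time, df_time=5, df_space=5):
    """Nested likelihood-ratio tests and STIC from model log-likelihoods.

    ll_st, ll_space, ll_time : log-likelihoods (e.g., result.llf from
    statsmodels GLM fits) of the S+T, space, and time models.
    """
    W_time_added = -2.0 * (ll_space - ll_st)     # Eq. 15: does time add information?
    W_space_added = -2.0 * (ll_time - ll_st)     # Eq. 16: does space add information?
    p_time = chi2.sf(W_time_added, df_time)      # chi-square with 5 df under the null
    p_space = chi2.sf(W_space_added, df_space)
    stic = ll_time - ll_space                    # Eq. 18
    return W_time_added, p_time, W_space_added, p_space, stic

# Example with hypothetical log-likelihood values:
# W_t, p_t, W_s, p_s, stic = likelihood_ratio_stats(-1200.5, -1225.0, -1210.3)
# stic > 0 means time contributes exp(stic) times more to the likelihood than space.
```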

Testing for a neuron's object selectivity during different trial periods

In order to test for object selectivity, a likelihood ratio test using nested models was employed. This test is carried out by fitting the S+T model (Eqn. 12) to the data obtained from all correct trials. For the null hypothesis, the model is fit using the same parameters for both objects and the likelihood of the data given the model is computed. For the alternate hypothesis, the model parameters are allowed to change depending on the object. (That is, because there are two objects, the alternate model has twice the number of parameters of the null model.) A likelihood ratio statistic, W, was computed as:

W = −2 [ ln(Γ_null) − ln(Γ_alternate) ]    (19)

Here Γ refers to the likelihood of the null or alternate model obtained by the fitting procedure. Under the null hypothesis, the W statistic has a chi-square distribution with degrees of freedom equal to the number of additional parameters (i.e., 26) in the alternate model. In order for a neuron to be considered object-selective, we used P < 0.05 as evidence to reject the null hypothesis.

Testing for go/nogo selectivity during the odor period

To determine whether a neuron differentiated between go and nogo trials during the odor period, we used the same approach described in the previous section for object selectivity. However, in this case the spike trains during the odor period from correct trials were further divided into go and nogo trial types, and the likelihood ratio test was carried out on that division.

Population analysis

Similar to Manns et al. (2007), a Mahalanobis distance measure was used to compare the activity of the neural population observed at two different moments of the delay. The delay period was divided into N 1-s bins, and only neurons that fired > 0.1 Hz during the whole delay period were used. During each of the N bins, the firing rate of each of the i neurons was estimated for each of the J total trials and normalized to activity from all trials and bins. The normalized firing rate of the population during bin n on trial j can be represented as an i-element row vector, x_j^n; similarly, the population's activity in bin n+1 on trial j can be expressed as x_j^{n+1}. In order to determine the Mahalanobis distance between the population vectors observed in bins n and n+1, the mean of each set of vectors being compared is first computed:

μ_n = (1/J) Σ_j x_j^n   and   μ_{n+1} = (1/J) Σ_j x_j^{n+1}

Finally, the covariance matrix Σ is estimated with reference to μ_n and μ_{n+1}. The Mahalanobis distance is:

D = (μ_n − μ_{n+1})^T Σ^{-1} (μ_n − μ_{n+1})

Note that in this example the distance was computed for a bin n and n+1 comparison, which we refer to as a lag 1 comparison. We computed the distance between population vectors at all possible lags and found the average D for each lag, which is shown for each recording session separately in Figure 4. This analysis was also repeated during the object and odor periods, as well as during the first and last 1.2 s of the delay period, as shown in Figure 4b. The bin size used to compute the various lags in all four trial periods was 0.2 s.
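The lagged population comparison can be sketched directly from the definitions above. The function below assumes a trials x neurons matrix of normalized firing rates for each 1-s bin (hypothetical arrays and names) and uses a pooled covariance estimate; it illustrates the distance computation, not the exact implementation used for Figure 4.

```python
import numpy as np

def mahalanobis_between_bins(X_n, X_m):
    """Mahalanobis distance between population vectors from two delay bins.

    X_n, X_m : arrays of shape (n_trials, n_neurons) holding the normalized
    firing rates of the population in bins n and m across trials.
    """
    mu_n, mu_m = X_n.mean(axis=0), X_m.mean(axis=0)          # per-bin mean vectors
    # Pooled covariance of the trial-by-trial deviations around each mean
    deviations = np.vstack([X_n - mu_n, X_m - mu_m])
    cov = np.cov(deviations, rowvar=False)
    diff = mu_n - mu_m
    return float(diff @ np.linalg.pinv(cov) @ diff)          # D = diff' * inv(cov) * diff

def average_distance_at_lag(bins, lag):
    """Average D over all bin pairs separated by `lag` (bins: list of trial x neuron arrays)."""
    pairs = [(i, i + lag) for i in range(len(bins) - lag)]
    return np.mean([mahalanobis_between_bins(bins[i], bins[j]) for i, j in pairs])
```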

Retiming comparison of firing patterns observed during different blocks of trials

For all three blocks, we constructed PSTHs using 500-ms time-bins starting from the beginning of the delay period. For two rats we used only the first 9 s of data across all three blocks, and for a third rat we used only the first 6 s (this rat had a shorter delay period than the other two). Our method begins by computing a Kendall rank correlation coefficient between the PSTHs from the two blocks of interest. For example, suppose we do this for block 1 and block 2, which gives us a test value (τ_B1,B2) that reflects the degree of similarity between the temporal patterns of activity during each block's delay period. Next, we ask whether the test value τ_B1,B2 is lower than would be expected if the neuron's firing pattern were the same in block 1 and block 2. Continuing with this example, the trials from both blocks were pooled under the null hypothesis that all of the trials from blocks 1 and 2 were derived from the same population. All of these trials were then randomly reassigned (without replacement) to one or the other block, two PSTHs were constructed from the randomly assigned trials, and the Kendall rank correlation coefficient was computed to give the first resampled τ value of the empirical distribution. The random reassignment of trials to blocks 1 and 2, and subsequent calculation of the rank correlation coefficient, was repeated 999 more times to obtain an empirical distribution of τ under the null hypothesis.
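A minimal version of this trial-shuffling test is sketched below, assuming each block is represented as a trials x time-bins array of spike counts (hypothetical inputs). The PSTH is taken as the across-trial mean per bin, and the observed Kendall τ is compared with 999 label-shuffled values; this is an illustration of the procedure rather than the original code.

```python
import numpy as np
from scipy.stats import kendalltau

def retiming_permutation_test(block1, block2, n_resamples=999, seed=0):
    """Trial-shuffle test for a change in delay-period firing pattern.

    block1, block2 : arrays of shape (n_trials, n_time_bins) of binned spikes.
    Returns the observed tau and the proportion of resampled taus exceeding it.
    """
    rng = np.random.default_rng(seed)
    tau_obs, _ = kendalltau(block1.mean(axis=0), block2.mean(axis=0))
    pooled = np.vstack([block1, block2])
    n1 = len(block1)
    taus = np.empty(n_resamples)
    for k in range(n_resamples):
        order = rng.permutation(len(pooled))        # reassign trials without replacement
        psth_a = pooled[order[:n1]].mean(axis=0)
        psth_b = pooled[order[n1:]].mean(axis=0)
        taus[k], _ = kendalltau(psth_a, psth_b)
    prop_exceeding = np.mean(taus > tau_obs)        # criterion: > 0.99 suggests retiming
    return tau_obs, prop_exceeding
```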

This resampling approach was taken for all comparisons of a neuron's firing pattern between trial blocks (i.e., block 1 versus block 2, block 1 versus block 3, and block 2 versus block 3). In order to determine whether a neuron retimed when transitioning to a longer delay period, we applied two specific criteria. The first criterion involved asking what proportion of the τ values obtained from the appropriate empirical distribution (i.e., block 1 = block 2) exceeded the test value τ_B1,B2. If the proportion was higher than 99%, the neuron was considered to have retimed. If a neuron did not meet the first criterion, we asked what proportion of the τ values obtained from the block 1 versus block 3 empirical distribution (i.e., block 1 = block 3) exceeded the test value τ_B1,B3. If this proportion exceeded 99% and, importantly, there was no difference between the block 2 and block 3 delay-period firing patterns (assessed with the same approach), then the neuron was also considered to have retimed. We adopted this additional criterion to account for neurons whose retiming developed slowly after the longer delay period was introduced during block 2. Furthermore, if a neuron retimed after transitioning from block 1 to block 2 per our first criterion, we checked during block 3 whether it returned to the firing pattern observed during block 1 by comparing the test value τ_B1,B3 to the empirical distribution of τ values under the null hypothesis that block 1 and block 3 were the same (i.e., P < 0.01).

Finally, we asked whether a neuron that retimed in fact scaled its temporal pattern of activity to accommodate the longer delay period. For two rats, the delay period was approximately doubled from 10 to 20 s between block 1 and block 2. The PSTH from block 1 was made as described above (i.e., 9 s of time and 500-ms time-bins), but for block 2 it was made using 18 s of the delay period and 1-s time-bins. The τ_B1,rescaled B2 was computed and compared to the appropriate empirical distribution generated under the null hypothesis. For the third rat, in which the delay period was approximately doubled twice during the recording session, we compared the delay-period PSTH from block 1, using 500-ms time-bins and 6 s of time, to the PSTH from block 3, using 1.5-s time-bins and 18 s of time. A τ_B1,rescaled B3 was computed, and the resampling analysis was carried out under the null hypothesis that activity in block 1 and the rescaled block 3 was the same. Note that, for our purposes, increasing the duration of the time-bin effectively compresses a neuron's time-scale of activity.
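The rescaled comparison amounts to binning the two blocks at different widths so that each PSTH has the same number of bins before computing τ. The helper below illustrates that step under assumed spike-time inputs; the names, shapes, and defaults are hypothetical.

```python
import numpy as np
from scipy.stats import kendalltau

def rescaled_block_tau(spikes_b1, spikes_b2, t_max_b1=9.0, t_max_b2=18.0,
                       bin_b1=0.5, bin_b2=1.0):
    """Kendall tau between a block-1 PSTH and a time-compressed block-2 PSTH.

    spikes_b1, spikes_b2 : lists of spike-time arrays (s), one per trial,
    aligned to delay onset. Bin widths are chosen so both PSTHs have the
    same number of bins (e.g., 9 s / 0.5 s = 18 s / 1.0 s = 18 bins).
    """
    edges_b1 = np.arange(0.0, t_max_b1 + bin_b1, bin_b1)
    edges_b2 = np.arange(0.0, t_max_b2 + bin_b2, bin_b2)
    psth_b1 = np.mean([np.histogram(tr, edges_b1)[0] for tr in spikes_b1], axis=0)
    psth_b2 = np.mean([np.histogram(tr, edges_b2)[0] for tr in spikes_b2], axis=0)
    tau, _ = kendalltau(psth_b1, psth_b2)
    return tau
```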

Characterizing the time course of retiming during the recording session

For two rats, the first 10 s of the delay period on each trial was divided into 250-ms bins; the spike counts in these bins were smoothed with a 5-bin moving average and then normalized with respect to the mean and standard deviation of the binned spike counts across the session. (For the third rat, which had shorter delay periods, only the first 6 s were used for this procedure.) In each block of trials, bins with counts greater than the mean were identified as the neuron's time field. The average spike count during each block's time field was computed, and the time field from the block with the highest average spike count was used to recompute time-field spike counts on all trials. These values were smoothed with a 5-trial running median and are plotted in Figure S4 for three representative neurons.

Using ANOVAs to assess the effect of time on a neuron's firing rate with respect to the rat's position, head direction, and speed

In order to generate spatial firing rate maps for the whole delay period, the area occupied by the rat in the delay zone was divided into 1 cm by 1 cm bins. The spikes from the neuron were assigned to the appropriate X-Y bin depending on the rat's position. The total number of spikes within each bin across all trials was divided by the total time the rat spent in that bin across all trials. The values shown in the figures were smoothed using a two-dimensional Gaussian kernel (σ = 2 cm).

In each video frame, the angle of the rat's head during the delay period was assigned to one of sixty non-overlapping 6° bins spanning the full circle around the rat, and each spike was assigned to the bin corresponding to the direction the rat was facing. To create the head direction plots for the whole delay period, the total number of spikes from all trials in each 6° bin was divided by the total amount of time the rat spent facing within that bin. For Figure S2, the values in the polar plot were smoothed using a Gaussian kernel (σ = 12°) and activity was normalized to the bin with the highest firing rate, so that the radius of the polar plot equals 1.
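Occupancy normalization of this kind divides spike counts per bin by the time spent in that bin. The sketch below illustrates it for the head-direction case with 6° bins, assuming arrays of per-frame head angles and per-spike head angles plus a fixed frame duration; these inputs, the 30-Hz frame rate, and the smoothing call are illustrative stand-ins rather than the original implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def head_direction_tuning(frame_angles_deg, spike_angles_deg,
                          frame_dt=1.0 / 30.0, bin_deg=6.0, smooth_deg=12.0):
    """Occupancy-normalized head-direction firing rate in 6-degree bins.

    frame_angles_deg : head angle on every video frame (degrees, 0-360)
    spike_angles_deg : head angle at the time of every spike (degrees)
    frame_dt         : duration of one video frame in seconds (assumed 30 Hz)
    """
    edges = np.arange(0.0, 360.0 + bin_deg, bin_deg)               # sixty 6-degree bins
    occupancy_s = np.histogram(frame_angles_deg, edges)[0] * frame_dt
    spike_counts = np.histogram(spike_angles_deg, edges)[0]
    with np.errstate(invalid="ignore", divide="ignore"):
        rate = spike_counts / occupancy_s                          # Hz; NaN where never sampled
    # Circular smoothing with a Gaussian kernel (sigma = 12 degrees => 2 bins)
    rate_smooth = gaussian_filter1d(np.nan_to_num(rate),
                                    sigma=smooth_deg / bin_deg, mode="wrap")
    return rate_smooth / np.nanmax(rate_smooth)                    # normalize peak to 1
```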

The speed firing rate plots for the whole delay period were made by first computing the difference in X-Y position between successive frames during the delay period. This created a list of speeds throughout the delay period for each trial. Each speed observation was assigned to one of thirty bins spanning 0-30 cm/s, and each spike was assigned to a bin depending on the speed of the rat at that moment. In this way, the neuron's firing rate was computed so that it was normalized by the total amount of time the rat spent at a particular speed. In Figure S2, the values in the speed firing rate plot were smoothed with a Gaussian kernel (σ = 2 cm/s) and activity was normalized to the bin with the highest firing rate.

We also divided the delay period from each trial into 1-s segments and analyzed the data from these segments independently. The number of 1-s segments used for a session varied from 6 to 9 because the delay length varied slightly across rats (see pg. 9 above). By using data taken only from a specific 1-s segment, we could visualize how the neural firing rate plots for each behavioral variable changed with time. The figures from this analysis (Figures 5-6, Figure S2) use the same smoothing procedures as above.

Note that our analysis of firing rate with respect to position, head direction, and speed requires binning. For each behavioral variable, the firing rate in each bin within each 1-s segment was computed on a trial-by-trial basis. These firing rates were not smoothed. Next, for a given behavioral variable we identified only those bins whose firing rate could be estimated in all of the 1-s segments. For these bins, we obtained trial-by-trial estimates of firing rate, indexed according to the 1-s segment from which they came. (It is worth noting that, in all of our recording sessions, the bins occupied by the rat during the first 1-s segment were not occupied at least once in all subsequent 1-s segments; as a result, none of the data from the first second of the delay period was used in the analysis described next.) With these trial-by-trial data, we conducted a repeated-measures analysis of variance (ANOVA) with factors time and bin to test whether time modulated neural activity. The ANOVA was run on each behavioral variable separately, and P < 0.05 was considered statistically significant.
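One way to run such a time x bin repeated-measures ANOVA in Python is with statsmodels' AnovaRM, treating trials as the repeated-measures subjects. The long-format column names and the toy data frame below are hypothetical; this is a sketch of the test, not the original analysis script.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per (trial, 1-s segment, behavioral bin) holding the
# unsmoothed firing-rate estimate for that combination (hypothetical values).
df = pd.DataFrame({
    "trial":   [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "segment": [2, 2, 3, 3, 2, 2, 3, 3, 2, 2, 3, 3],   # 1-s segments (first second excluded)
    "bin":     [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2],   # behavioral-variable bin index
    "rate":    [3.0, 1.5, 5.0, 2.0, 2.5, 1.0, 6.0, 2.5, 3.5, 1.2, 5.5, 1.8],
})

# Repeated-measures ANOVA with within-trial factors time (segment) and bin;
# a significant segment effect (P < 0.05) indicates time-modulated activity.
res = AnovaRM(df, depvar="rate", subject="trial",
              within=["segment", "bin"]).fit()
print(res)
```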

Details of ANOVAs on LFPs

For the object and odor periods, we conducted a three-factor repeated-measures analysis of variance (ANOVA) to test the null hypothesis that trial-averaged power did not change across the 4-12 Hz band, across the 100-ms sliding step (the window size was 1.2 s and was slid by only 100 ms), or according to which object was presented during the trial. For the delay period, we computed the trial-averaged spectrogram using a sliding 1-s window (100-ms increments) across the delay period, again using the 4-12 Hz frequency range. A three-way ANOVA was conducted to test the null hypotheses that the trial-averaged power did not differ across the theta band, across the delay period, or depending on which object was presented. For these ANOVAs, we used P < 0.05 as evidence to reject the null hypothesis.

After conducting the ANOVA for a tetrode, the p-values obtained for the factor object were recorded. For a given tetrode during a trial period, we determined the proportion of active neurons (> 0.1 Hz) considered object-selective according to the GLM analysis. The Kendall rank correlation coefficient was then computed between each tetrode's p-value and the number of object-selective neurons recorded from the same tetrode. We reasoned that if object-related differences in trial-averaged theta power explained the tendency to detect object selectivity in single neurons, then lower p-values should be associated with a larger number of object-selective neurons.

Supplemental References

Knierim, J.J., Kudrimoti, H.S., and McNaughton, B.L. (1995). Place cells, head direction cells, and the learning of landmark stability. J. Neurosci. 15,


More information

DM-Group Meeting. Subhodip Biswas 10/16/2014

DM-Group Meeting. Subhodip Biswas 10/16/2014 DM-Group Meeting Subhodip Biswas 10/16/2014 Papers to be discussed 1. Crowdsourcing Land Use Maps via Twitter Vanessa Frias-Martinez and Enrique Frias-Martinez in KDD 2014 2. Tracking Climate Change Opinions

More information

Spike Count Correlation Increases with Length of Time Interval in the Presence of Trial-to-Trial Variation

Spike Count Correlation Increases with Length of Time Interval in the Presence of Trial-to-Trial Variation NOTE Communicated by Jonathan Victor Spike Count Correlation Increases with Length of Time Interval in the Presence of Trial-to-Trial Variation Robert E. Kass kass@stat.cmu.edu Valérie Ventura vventura@stat.cmu.edu

More information

Online Supplement to Creating Work Breaks From Available Idleness

Online Supplement to Creating Work Breaks From Available Idleness Online Supplement to Creating Work Breaks From Available Idleness Xu Sun and Ward Whitt Department of Industrial Engineering and Operations Research, Columbia University New York, NY, 127 June 3, 217 1

More information

Every animal is represented by a blue circle. Correlation was measured by Spearman s rank correlation coefficient (ρ).

Every animal is represented by a blue circle. Correlation was measured by Spearman s rank correlation coefficient (ρ). Supplementary Figure 1 Correlations between tone and context freezing by animal in each of the four groups in experiment 1. Every animal is represented by a blue circle. Correlation was measured by Spearman

More information

arxiv: v4 [stat.me] 27 Nov 2017

arxiv: v4 [stat.me] 27 Nov 2017 CLASSIFICATION OF LOCAL FIELD POTENTIALS USING GAUSSIAN SEQUENCE MODEL Taposh Banerjee John Choi Bijan Pesaran Demba Ba and Vahid Tarokh School of Engineering and Applied Sciences, Harvard University Center

More information

PSY 307 Statistics for the Behavioral Sciences. Chapter 20 Tests for Ranked Data, Choosing Statistical Tests

PSY 307 Statistics for the Behavioral Sciences. Chapter 20 Tests for Ranked Data, Choosing Statistical Tests PSY 307 Statistics for the Behavioral Sciences Chapter 20 Tests for Ranked Data, Choosing Statistical Tests What To Do with Non-normal Distributions Tranformations (pg 382): The shape of the distribution

More information

Online Supplement to Creating Work Breaks From Available Idleness

Online Supplement to Creating Work Breaks From Available Idleness Online Supplement to Creating Work Breaks From Available Idleness Xu Sun and Ward Whitt Department of Industrial Engineering and Operations Research, Columbia University New York, NY, 127 September 7,

More information

Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model

Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model Linear Combinations of Optic Flow Vectors for Estimating Self-Motion a Real-World Test of a Neural Model Matthias O. Franz MPI für biologische Kybernetik Spemannstr. 38 D-72076 Tübingen, Germany mof@tuebingen.mpg.de

More information

Temporal context calibrates interval timing

Temporal context calibrates interval timing Temporal context calibrates interval timing, Mehrdad Jazayeri & Michael N. Shadlen Helen Hay Whitney Foundation HHMI, NPRC, Department of Physiology and Biophysics, University of Washington, Seattle, Washington

More information

A Three-dimensional Physiologically Realistic Model of the Retina

A Three-dimensional Physiologically Realistic Model of the Retina A Three-dimensional Physiologically Realistic Model of the Retina Michael Tadross, Cameron Whitehouse, Melissa Hornstein, Vicky Eng and Evangelia Micheli-Tzanakou Department of Biomedical Engineering 617

More information