SUPPLEMENTARY INFORMATION
doi: /nature14467

Supplementary Discussion

Effects of environmental enrichment on spine turnover in rats and mice and relationship to LTP

The existing scientific literature contains conflicting results regarding changes in mouse CA1 spine density upon environmental enrichment or learning; some groups report an increase in density 18,19 and others a decrease 20,35. These differences highlight the difficulties of measuring spine density and estimating differences at the population level. The modest (~10%) increase in basal spine density seen in rats 18 might not occur in mice, owing to the multiple documented differences in hippocampal anatomy, physiology, and cognition between rats and mice 36. Another possibility, as suggested previously 20,35, is that there are distinct subpopulations of CA1 pyramidal neurons with different spine dynamics. Our time-lapse imaging methods offer the advantage of examining spines longitudinally, thereby increasing statistical power. However, we might fail to detect changes in one subpopulation if other subpopulations average out these effects to below the limits of our statistical sensitivity to spine changes. A third possibility is suggested by data 21 on the structural effects of LTP and is consistent with our finding that continuous environmental enrichment did not alter CA1 spine densities. In that work, changes in CA1 spine density were transient following LTP induction, and spine density returned to its original value after two hours 21. These results stress the role of homeostatic effects that may maintain constant spine properties in the steady state. In our study, we provided continual enrichment to the mice and thus examined steady-state spine densities and dynamics, rather than transient changes occurring within an hour or two after a stimulus. Overall, we consider it likely that both homeostatic effects and distinct spine subpopulations exist. Further imaging experiments in the live adult hippocampus, perhaps using genetic markers to distinguish different categories of activated neurons, will be required to resolve these issues and to distinguish between the different possibilities considered here.
Summary of mathematical notation used in the text

Minimal spine length, d: Spines are scored if the length of their projection in the optical plane exceeds d.
Filling fraction, f: The expected fraction of potential synapses that are occupied by a spine.
Counting resolution, L: Shortest resolvable inter-spine interval.
Probability mass function for merged spine order, P_Ω: P_Ω(Ω = n) is the probability that a random merged spine is nth-order.
Growth rate, r_g: The rate at which a potential synapse in the off state transitions to the labile state.
Loss rate, r_l: The rate at which a potential synapse in the labile state transitions to the off state.
Merged spine transition rate, r_{m,n}: The rate at which a merged spine in state n transitions to state m.
Surviving fraction of spines, S: The probability that a spine present at time 0 is present at time t.
Surviving fraction of merged spines, S_m: The probability that a merged spine present at time 0 is present at time t.
Experimental surviving fraction of merged spines, S_m: The fraction of merged spines at time 0 that were present at time t in the data.
Time, t: Time elapsed since the reference day.
Fraction of undetectable spines, β: The expected fraction of spines whose projected length is too short to be scored.
Fraction of stable spines, γ: The expected fraction of spines that are in the stable state.
Density of potential synapses, ρ: The expected number of potential synapses per unit length of dendrite.
Effective density of potential synapses, ρ^e: The expected number of detectable potential synapses on either side of the dendrite per unit length of dendrite.
Density of spines, ρ_s: The expected number of spines per unit length of dendrite.
Effective density of spines, ρ_s^e: The expected number of detectable spines on either side of the dendrite per unit length of dendrite.
Degree of merging, ρ_s^e L: The mean merged spine order minus one.
Density of merged spines, ρ_m: The expected number of merged spines per unit length of dendrite.
Experimental density of merged spines, ρ_m: The measured number of merged spines divided by the total length of dendrite.
Merged spine order, Ω: Number of spines comprising a merged spine.
Supplementary Methods

I. Potential synapses and spine dynamics

In this section, we define our kinetic model for spine dynamics and provide a formula for the surviving fraction of spines. Since a spine is normally associated with a synapse 34, we assume that spines may only occur at dendritic locations where they can make a synapse with a pre-synaptic axon. Such locations are presumed to be fixed, since neocortical axonal 37, neocortical dendritic 38-40, and hippocampal 4 dendritic arbors appear to be stable. We call these locations potential synapses and term the fraction of potential synapses that are filled by a spine the filling fraction 41. We denote the density of potential synapses as ρ and the filling fraction as f, hence the expected spine density is ρ_s = ρf. (See the Summary of mathematical notation used in the text for a list of mathematical variables and their definitions.) Previous work estimated the filling fraction of CA3-to-CA1 synapses to be 0.22 and that of neocortical synapses to vary over a range (Ref. 41). Throughout we assume f = 0.2.

Potential synapses can be in three states: OFF, corresponding to the absence of a spine; ON LABILE; or ON STABLE 7. Spines in the labile state decay to the OFF state with loss rate r_l, and potential synapses in the OFF state return to the ON LABILE state with growth rate r_g. Stable spines remain permanently stable. We denote the fraction of spines that are stable as γ. Because spine density was nearly constant over time (Fig. 3b), we set the loss rate to balance the growth rate (Appendix I). Potential synapses undergo transitions in which a labile spine decays and then subsequently recurs. Since both spines fill the same potential synapse, we label them with the same index. We estimate the average recurrence time of CA1 spines to be 48 days (Appendix I).

We characterize the dynamics of spine turnover via the surviving fraction, S(t), the probability that a spine observed on a reference day is also present after an intervening interval of duration t. Note that we do not require a spine to be present throughout this interval. For the kinetic scheme above, the surviving fraction exhibits a mono-exponential decay:

S(t) = S_∞ + (1 − S_∞) e^{−t/τ},

in which the offset,

S_∞ = 1 − (1 − f)(1 − γ)/(1 − fγ),

encodes the surviving fraction's asymptotic value, and

τ = f(1 − γ)/[(1 − fγ) r_g]

sets the timescale for spine survival (Extended Fig. 4a and Appendix I). S_∞ depends on the fraction of stable spines (Extended Fig. 4b). Note that S_∞ = f when γ = 0: if there are no stable spines, the surviving fraction at long time intervals is simply the probability that a randomly chosen potential synapse is in the ON LABILE state.

II. Computational methods used to generate the simulated datasets

Our paper used three simulated datasets: (i) a variable-density dataset that lacked spine dynamics (Fig. 2e and Extended Fig. 5a); (ii) a dynamic dataset in which the spine density and kinetics were heuristically chosen to resemble the in vivo data (Extended Fig. 5b-c); and (iii) a dynamic dataset in which the spine density and kinetics matched the best-fit model of the in vivo data (Fig. 2d,f and Extended Fig. 5d-e). We used the first
two datasets to benchmark our phenomenological model for optical spine merging (see §III-V) before fitting it to experimental data (see §VI). We used the third dataset to illustrate the effects of optical merging and validate the model under experimentally realistic conditions. Here we detail the computational methods that we used to generate these simulated datasets.

Throughout this section, the notation μ ± σ signifies a normal distribution whose mean is μ and whose standard deviation is σ. The horizontal length of each dendrite was 30 ± 1 μm. Across this length, the diameter and vertical location of the dendrite underwent random walks. For dataset (i), the mean diameter of the dendrite, and the fluctuations in the diameter, were proportional to the expected spine density 34; a spine density of 1.5 μm^−1 corresponded to a mean diameter of 700 nm and a diameter change of 0 ± 3 nm between adjacent columns. For datasets (ii) and (iii), we used a mean diameter of 700 nm and a diameter change of 0 ± 3 nm between adjacent columns. The vertical shift of the dendrite's lower boundary was 0 ± 50 nm between adjacent columns. We placed twenty points evenly along the horizontal length of the dendrite to trace the dendrite and estimate its length.

We associated each dendrite with a fixed set of potential synapses and set the filling fraction at f = 0.2 (Ref. 41). We sampled the number of potential synapses on the dendrite from a Poisson distribution whose mean was the product of the horizontal length and the expected potential synapse density. We associated each potential synapse with a fixed 3D position and spine size. We assigned positions such that potential synapses were uniformly distributed along the dendrite and uniformly oriented around the dendrite. The distance from the center of the spine head to the edge of the dendrite was 600 ± 120 nm, and the radius of the spine head was 210 ± 21 nm. We added potential synapses sequentially and re-sampled the position of a potential synapse if its spine head overlapped with another spine head.

Each image was 512 pixels per side, and each pixel was 72 nm per side. Images contained one or two dendrites in the optical plane. We assigned pixel intensities within the dendrite to be 1000 ± 600 and within the spines to be 2000 ± 300. We applied an intensity floor at 0 and an intensity ceiling. Finally, we blurred the image with a Gaussian point spread function (FWHM = 600 nm) and added Poisson noise to the image.

Dataset (i) consisted of 30-μm-long dendritic segments. We divided these dendrites equally across eight expected spine density values, which ranged from 0.3 μm^−1 to 2.4 μm^−1 in 0.3 μm^−1 steps. We generated this dataset to study the effects of optical merging on the measured spine density, so we did not simulate spine turnover on these dendrites. Extended Fig. 5a shows examples of simulated dendrites having low, medium, and high spine density.

Dataset (ii) consisted of 30-μm-long dendritic segments with an expected spine density of 1.5 μm^−1. Each potential synapse was labile, and we evolved potential synapse states for 400 discrete time steps. During each time step, off potential synapses turned on with probability 3 × 10^−4 and on potential synapses turned off with probability 12 × 10^−4. We generated images every 25 time steps. If one conceptualizes this sampling rate as once per day, then these transition rates imply a spine survival timescale of ~27 days, and the dataset is 16 days in duration. Whenever we generated an image, the location of each spine was jittered with a standard deviation of 10 nm in the x and y directions. Extended Fig. 5b compares the ground truth and manual scoring of a dendrite from this time-lapse dataset.

Dataset (iii) consisted of 30-μm-long dendritic segments with an expected spine density of 2.56 μm^−1. Each potential synapse was labile, and we evolved potential synapse states for 2000 discrete time steps. During each time step, off potential synapses turned on with probability 1 × 10^−3 and on potential synapses turned off with probability … . We generated images every 75 time steps. If one conceptualizes this sampling rate as once every third day, then these transition rates imply a spine survival timescale of ~7.6 days, and the dataset is 80 days in duration. Whenever we generated an image, the location of each spine was jittered with a standard deviation of 10 nm in the x and y directions. Fig. 2d and Extended Fig. 5d show example dendrites from this dataset. Since the generation and analysis of dataset (iii) were performed post hoc, the data analyst was not blind to the computational methods.

III. Optical merging of spines

Here we define our model for optical merging of spines and discuss the likelihood that a merged spine actually comprises n real spines. Similarly to prior work 32, we scored only spines that project from the dendrite by at least d = 0.4 μm in the optical specimen plane. Thus, some spines have projections that are too short to be scored (Extended Fig. 6a-d). We denote the fraction of spines with projections that are too short as β and estimate β ≈ 0.4 (§IV). For projections that are scored, the spine is equally likely to be on either side of the dendrite. Thus, an effective density of detectable spines, ρ_s^e = (1 − β) ρ_s / 2, appears on either side of the dendrite.
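The potential-synapse placement step of §II can be sketched in simplified form. This is a 1-D stand-in, assuming only a Poisson count of synapses and a minimum-separation rule in place of the 3-D spine-head overlap test; the function name and the 0.05-μm separation are our illustrative choices, not values from the paper:

```python
import math
import random

def place_potential_synapses(length_um, density_per_um, min_sep_um=0.05, seed=0):
    """Draw a Poisson number of potential synapses (mean = length * density)
    and place them uniformly along the dendrite, resampling any position
    that lands within min_sep_um of an already-placed synapse."""
    rng = random.Random(seed)
    mean = length_um * density_per_um
    # Poisson sample by inversion of the cumulative distribution
    n, term, cum, u = 0, math.exp(-mean), math.exp(-mean), rng.random()
    while u > cum:
        n += 1
        term *= mean / n
        cum += term
    positions = []
    for _ in range(n):
        for _attempt in range(1000):
            x = rng.uniform(0.0, length_um)
            if all(abs(x - y) > min_sep_um for y in positions):
                positions.append(x)
                break
    return sorted(positions)
```

With a 30-μm dendrite and f = 0.2, a potential synapse density of several per μm yields spine densities in the simulated range.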
Due to the optical resolution limits of light microscopy, spines are optically merged in the resulting images when they are sufficiently close in space (Fig. 2a). For spines on the same side of the dendrite, we model optical merging in terms of the length of dendritic shaft separating the spines. We refer to the dendritic length between a pair of spines separated by n − 1 intervening spines as the nth inter-spine interval. We posit that the first spine on either side of the dendrite is part of an nth-order merged spine if the (n − 1)st inter-spine interval is ≤ L and the nth inter-spine interval is > L, where L is a phenomenological constant that we call the counting resolution. The counting resolution is the minimal inter-spine interval that the experimenter can resolve; the value of L should be comparable to the optical resolution. Assuming that spines are uniformly distributed, the probability that the merged spine is nth-order is

P_Ω(Ω = n) = (ρ_s^e L)^{n−1} e^{−ρ_s^e L} / (n − 1)!,

where Ω = 1, 2, 3, … denotes the merged spine order and P_Ω is its probability mass function. This equation has a simple interpretation: the first spine is part of an nth-order merged spine if exactly n − 1 spines follow it within the length L, and the number of spines in (0, L) is Poisson distributed with mean ρ_s^e L. Note that the average merged spine order is 1 + ρ_s^e L, so we refer to the quantity ρ_s^e L as the degree of merging. The degree of merging is zero when every spine is resolvable and increases with the spine density and counting resolution.

Conceptually, this scheme could be iterated, beginning with the (n + 1)st spine, to classify all spines into merged spines of varying orders. However, this is unnecessary because our methods only require P_Ω, which we assume is the same for all merged spines. Furthermore, this convention for grouping spines into merged spines is not the only option. For example, ambiguities can arise when a central spine is close enough to be merged with either neighbor, but the first and last spines are too distant for all three spines to comprise a single merged spine. Such ambiguities are common, but our methods only require P_Ω, which depends less on the classification scheme.

IV. Optical merging affects the measurement of spine density

In this section we derive a simple formula that relates the density of merged spines to the density of spines. Measurable merged spine densities underestimate true spine densities for two reasons. First, a spine might be too short to be detected when projected into the optical plane. This happens when the angle θ between a spine and the normal vector to the optical plane in which the dendrite lies (Extended Fig. 6) is small (Extended Fig. 6c-d). We follow the established neocortical protocol and score dendritic kinks as spines when they protrude by at least d = 0.4 μm 32. Denote the dendrite radius as r_d, the spine head radius as r_s, and the spine neck length as l_s (Extended Fig. 6a). Then the minimal spine angle that is scored, θ_c, satisfies

r_d + d = (r_d + l_s) sin θ_c + r_s

(Extended Fig. 6a). In our simulated datasets (§II), we used r_d = 0.35 μm, l_s = 0.6 μm, and r_s = 0.21 μm to represent a typical spine's geometry, such that θ_c ≈ 35°. Spines are not scored when their angle is in either the interval (−θ_c, θ_c) or (180° − θ_c, 180° + θ_c). Thus, the fraction of spines whose projection is too short is β ≈ 4θ_c/360°.
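The threshold geometry can be checked numerically, using the simulated-dataset values quoted above (the function names are ours):

```python
import math

def critical_angle_deg(r_d, d, r_s, l_s):
    """Smallest scorable spine angle theta_c, solving
    r_d + d = (r_d + l_s) * sin(theta_c) + r_s."""
    return math.degrees(math.asin((r_d + d - r_s) / (r_d + l_s)))

def unscorable_fraction(theta_c_deg):
    """Spines with angles in (-theta_c, theta_c) or (180 - theta_c,
    180 + theta_c) are unscored: 4 * theta_c out of 360 degrees."""
    return 4.0 * theta_c_deg / 360.0

theta_c = critical_angle_deg(r_d=0.35, d=0.4, r_s=0.21, l_s=0.6)
beta = unscorable_fraction(theta_c)
```

With the typical geometry used in the simulations, this gives theta_c near 35 degrees and beta near 0.4, matching the estimates in the text.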
This estimate neglects geometric heterogeneity across spines. Electron microscopy measurements 17,34 suggest means ± s.d. of the dendrite radius, spine length, and spine radius in the hippocampus of … ± … μm, … ± … μm, and … ± … μm, respectively. Assuming that each geometric parameter is independent and normally distributed with these means and standard deviations, we numerically evaluated the probability that a spine is too short to be scored, β. The precise value of β affects only the estimated spine density; it does not affect our assessment of kinetic models (§VI).

Second, the merged spine density underestimates the actual spine density because several spines can comprise a single merged spine. Suppose that the expected number of spines on a dendrite is N_s and the expected number of merged spines is N_m. Then the expected number of nth-order merged spines is N_m P_Ω(n), and the expected number of detectable spines is

N_s (1 − β) = Σ_{n≥1} n N_m P_Ω(n) = N_m (1 + ρ_s^e L).

Converting these numbers to densities, we find

ρ_m = (1 − β) ρ_s / (1 + ρ_s^e L),

where ρ_m is the merged spine density. Note that the ratio of the merged spine and spine densities is a function of the degree of merging. We use this observation to estimate the spine density from the measured merged spine density and survival (§VI).

V. Optical merging affects the measurement of spine survival

In this section we define our kinetic model for the dynamics of merged spines and provide an approximate formula for the surviving fraction of merged spines. The counting resolution, L, also affects the dynamics of merged spines. A merged spine is present whenever any of its component spines is present. Therefore, the typical time it takes a merged spine to decay is longer than that for any single component spine, because all component spines must disappear for the merged spine to disappear. Further, if a merged spine is present on the first day of imaging, then the probability that it is present after a long time interval exceeds the probability that any given component spine is present. Thus, the finite counting resolution imparts an inflated degree of stability upon the experimentally measured surviving fraction.

To consider these effects quantitatively, we construct and analyze a kinetic model that describes how the state of a merged spine is altered by the addition or deletion of a spine. A merged spine is in the stable state if any of its component spines is stable; all other merged spines are labile. The state of a labile merged spine is equated with its merged spine order, and we denote the rate at which an nth-order merged spine becomes an mth-order merged spine as r_{m,n}. The order of a merged spine increases by one when a spine is born within the counting resolution of each component spine, and decreases by one when a component spine decays. Consequently, a kinetic ladder describes merged spine dynamics, and r_{m,n} = 0 when |n − m| > 1 (Extended Fig. 7a). A labile merged spine transitions to the OFF state when all component spines decay, and we sometimes refer to merged spines in the OFF state as zeroth-order merged spines.

Spine kinetics determine the transition rates between adjacent rungs of the ladder. A merged spine decrements its order when any one of its component spines decays. Assuming that each spine decays independently, this means that

r_{n−1,n} = n r_l.

The merged spine order increments when a potential synapse within the counting resolution of each component spine fills. Thus, the length of dendrite within the counting resolution of each component spine is a critical parameter for merged spine kinetics. We denote its expected value for an nth-order merged spine as L_n. Because component spines become more spread out within high-order merged spines, L_n decreases with order:

L_n = (1 + 1/n) L

(Appendix II). For instance, L_1 = 2L, because a spine can be added within L to the left or right of the single component spine, but lim_{n→∞} L_n = L, because a spine must be added between the first and last component spines when the merged spine is wide. On one side of the dendrite, the expected number of potential synapses in L_n is ρ^e L_n, where ρ^e = (1 − β) ρ / 2. Thus, the expected number of potential synapses in the OFF state is ρ^e L_n (1 − f). Again assuming that the dynamics of potential synapses are independent,

r_{n+1,n} = ρ^e L_n (1 − f) r_g

(Appendix II). Finally, consider the rate at which a zeroth-order merged spine turns on. When the degree of merging is small, merged spines correspond to spines, and r_{1,0} should approach r_g. When the degree of merging is large, the merged spine state increments if any of the many potential synapses in (−L, L) fills, and r_{1,0} should approach 2ρ^e L (1 − f) r_g. In between,

1/r_{1,0} = [1 − e^{−2ρ^e L (1 − f)}] / [2ρ^e L (1 − f) r_g]

(Appendix II).
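The order distribution of §III, the density relation of §IV, and the ladder rates just derived can be sketched together. A minimal Python sketch (function names are ours; parameter values in the checks are illustrative):

```python
import math

def p_order(n, lam):
    """P_Omega(Omega = n): the merged spine order is 1 + Poisson(lam),
    where lam = rho_s_e * L is the degree of merging."""
    return lam ** (n - 1) * math.exp(-lam) / math.factorial(n - 1)

def merged_density(rho_s, beta, L):
    """Section IV relation: rho_m = (1 - beta) * rho_s / (1 + rho_s_e * L),
    with the one-sided effective spine density rho_s_e = (1 - beta) * rho_s / 2."""
    rho_s_e = (1.0 - beta) * rho_s / 2.0
    return (1.0 - beta) * rho_s / (1.0 + rho_s_e * L)

def rate_down(n, r_l):
    """r_{n-1,n} = n * r_l: any of n independent component spines decays."""
    return n * r_l

def rate_up(n, rho_e, L, f, r_g):
    """r_{n+1,n} = rho_e * L_n * (1 - f) * r_g, with L_n = (1 + 1/n) * L."""
    return rho_e * (1.0 + 1.0 / n) * L * (1.0 - f) * r_g

def rate_on(rho_e, L, f, r_g):
    """r_{1,0}: interpolates between r_g (little merging) and
    2 * rho_e * L * (1 - f) * r_g (heavy merging)."""
    x = 2.0 * rho_e * L * (1.0 - f)
    return r_g if x < 1e-12 else x * r_g / (1.0 - math.exp(-x))
```

As consistency checks, the mean of p_order is 1 plus the degree of merging, rate_up at n = 1 uses L_1 = 2L, and rate_on approaches its two limits.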
All transition rates up the kinetic ladder increase with the degree of merging (Extended Fig. 7b), whereas all transitions down the ladder are independent of the degree of merging (inset, Extended Fig. 7b). This drives occupancy of high-order merged spine states when the degree of merging is large. On the other hand, the up rates decrease as a function of merged spine order (Extended Fig. 7b), whereas the down rates increase (inset, Extended Fig. 7b). This favors low-order merged spine states. These factors balance at an equilibrium point that recapitulates the merged spine order distribution derived from static considerations (Appendix III).

An approximation to this model builds intuition. Assume that all spines are labile, and consider a reduced model in which a single state, denoted Ψ, combines all merged spine orders above zero (Extended Fig. 7c). Because only first-order merged spines can disappear,

r_{0,Ψ} ≈ P_Ω(Ω = 1) r_{0,1}.

Similarly, since a merged spine is always born as a first-order merged spine,

r_{Ψ,0} = r_{1,0}.

This defines a two-state system that can be solved mathematically (Appendix I) and that approximates the complete model with reasonable accuracy (Extended Fig. 7d). According to this approximation, the surviving fraction of merged spines is

S_m(t) ≈ S_∞^m + (1 − S_∞^m) e^{−t/T},

where

S_∞^m = r_{Ψ,0} / (r_{Ψ,0} + r_{0,Ψ})

is the asymptotic value of the merged surviving fraction (Extended Fig. 7e), and

1/T = r_{Ψ,0} + r_{0,Ψ}

is the time constant of merged spine survival (Extended Fig. 7f). The formula for S_∞^m follows without approximation from a general treatment of the complete model (Appendix III), but T is only an approximation. Even without stable spines, the degree of merging present in the hippocampal data implies that S_∞^m ≈ 0.75 (blue circle, Extended Fig. 7e) and that merged spines live approximately twice as long as their component spines (blue circle, Extended Fig. 7f). On the other hand, the degree of merging present in the neocortical data cannot explain the measured stability (pink circle, Extended Fig. 7e).

VI. Statistical assessment of model parameters

In this section, we describe how we use the model of §III-V to relate the experimentally measured merged spine surviving fraction to the underlying kinetic parameters of spine turnover. In Fig. 2 we validated the kinetic model using simulated datasets. In this context, the only unknown model parameter was the experimenter's counting resolution, and we used the final formula of §IV to estimate this parameter from the ratio between the average experimentally scored spine density and the true spine density. We then predicted the dynamics of the experimenter's merged spine scoring using the model described in §V. Fig. 2f and Extended Fig. 5c show that the agreement between the model's predictions and the experimenter's scoring was strong.

In Fig. 4, we fit the model to the measured merged spine surviving fraction curve. In this context, we did not know a priori the degree of merging, the spine growth rate, or the fraction of stable spines. We fit the model parameters in two stages. First, we identified the relevant portion of parameter space by using the reduced chi-squared statistic
(Appendix IV) to rapidly assess the goodness-of-fit of many densely sampled candidate models. Given the measured merged spine density, ρ_m, and a model's specified value for the degree of merging, ρ_s^e L, we estimated the spine density to be

ρ_s ≈ (1 + ρ_s^e L) ρ_m / (1 − β).

Accordingly, this allowed us to estimate the counting resolution as

L ≈ 2 ρ_s^e L / [(1 + ρ_s^e L) ρ_m].

We only considered a model to provide a suitable description of the data if L ∈ [0.5, 1] μm. The best-fit model for the hippocampus (Fig. 4b) had a reduced chi-squared of 1.3, and the best-fit model for the neocortex (Fig. 4d) had a reduced chi-squared of 0.9. This indicates that the model provided a good fit to the surviving fraction curves.

To refine our model assessments and compute confidence intervals for the model parameters, we then evaluated the log-likelihood of the measured surviving fraction for subsets of model parameters that produced low values of the reduced chi-squared statistic (Appendix IV). In particular, we computed the log-likelihood for hippocampal model parameters that produced a reduced chi-squared less than 2.4 and for 6099 neocortical model parameters that produced a reduced chi-squared less than 6.0 (all had L ∈ [0.5, 1] μm), where we empirically determined these cutoffs to ensure that poor reduced chi-squared values did not exclude models with high likelihood. Interestingly, the hippocampal model that minimized the reduced chi-squared also maximized the log-likelihood function. We then computed the 95% confidence regions for the spine lifetime and the fraction of stable spines using profile likelihoods 42 (Appendix IV). The confidence regions for hippocampal and neocortical model parameters are plotted in Fig. 4a, where the color code indicates the significance level at which a particular set of model parameters can be rejected. We do not plot model parameters that can be rejected at the p = 0.05 confidence level.

VII. Effects of dynamic spine geometries on determinations of spine turnover

The size and shape of individual dendritic spines are dynamic 23,43 (Extended Figs. 9, 10). These dynamics will lead to apparent spine turnover, because a spine that is scorable at one point in time need not be scorable at other times. Here we investigate the effect of this artifactual turnover on the hippocampal spine dynamics measured in vivo. Imaging variability in the angle between the dendrite and the optical plane can also generate apparent spine turnover, but the magnitude of such turnover is negligible in our data (Extended Fig. 6).

We define the surviving fraction of scorable spines, f(t), as the fraction of spines scorable at time 0 that are also scorable at time t. Mathematically, this function is

f(t) = P[ sin θ(t) ≥ (r_d(t) + d − r_s(t)) / (r_d(t) + l_s(t)) | sin θ(0) ≥ (r_d(0) + d − r_s(0)) / (r_d(0) + l_s(0)) ],

where the criterion for spine scorability is discussed in §IV and Extended Fig. 6a. This function depends on the joint distribution of each geometric parameter at two points in time. It is not feasible to measure such probability distributions directly, because a high-resolution technique is needed to accurately measure a spine's geometric parameters (e.g., spine radii are often below 0.1 μm), yet such techniques are not currently compatible with time-lapse imaging over multiple weeks in living animals. We circumvented this difficulty by assuming that each probability distribution is a stationary bivariate Gaussian, which is specified by a mean, a variance, and a time-correlation function. We specified the means and variances for the dendrite radius, spine length, and spine radius
using the electron microscopy measurements cited in §IV. We associated each spine with a mean angle that was uniformly distributed across spines and assumed that the location of each spine head fluctuates isotropically, which implies that the magnitude of angular fluctuations is

σ_θ ≈ tan^{−1}[ (σ_{r_d}^2 + σ_{l_s}^2)^{1/2} / (μ_{r_d} + μ_{l_s}) ].

Finally, we stochastically specified the temporal evolution of each geometric parameter by a time-correlation function:

c_x(t) = ⟨δx(t) δx(0)⟩ / σ_x^2,

where x denotes any of the geometric parameters, δx = x − μ_x, μ_x is the parameter's mean, and σ_x^2 is the parameter's variance.

We numerically evaluated the surviving fraction of scorable spines as a function of the values of the time-correlation functions between any two time points. Isolated fluctuations in the spine length or spine angle generate more turnover than isolated fluctuations in the dendrite radius or spine radius (Extended Fig. 8a). More substantial turnover occurs when all four geometric parameters fluctuate (Extended Fig. 8a), here according to a common time-correlation function. In particular, once all four geometric parameters fully decorrelate from their initial values, 27% of spines will become unscorable (Extended Fig. 8a). As discussed later, this level of turnover is not expected in experimental data, because optical merging will impart additional stability on the spines.
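Both ingredients reduce to one-line functions. A sketch (the means and standard deviations in the check below are placeholders, not the electron microscopy values; the 24-d time constant anticipates the correlation fit reported later in this section):

```python
import math

def angular_sd(sigma_rd, sigma_ls, mu_rd, mu_ls):
    """sigma_theta = atan(sqrt(sigma_rd^2 + sigma_ls^2) / (mu_rd + mu_ls)),
    for isotropic fluctuations of the spine-head position."""
    return math.atan(math.hypot(sigma_rd, sigma_ls) / (mu_rd + mu_ls))

def corr_exponential(t, tau=24.0):
    """Exponential time-correlation model c_x(t) = exp(-t / tau)."""
    return math.exp(-t / tau)
```

The exponential form satisfies c_x(0) = 1 and decays monotonically, as a time-correlation function normalized by the variance must.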
Using the dataset of Fig. 3a, we measured the local dendrite radius, spine length, and spine radius of 43 persistent spines. All three parameters fluctuated over time (Extended Fig. 9a–c). Since spine length fluctuations induce the most severe turnover (Extended Fig. 8a), we focused on spine length fluctuations and assumed that all geometric parameters share the same time-correlation function. The experimentally estimated spine length time-correlation function exhibited a rapid initial decline, which we interpret as temporally white measurement noise (s.d. ~10 nm), followed by an approximately exponential decay (time constant: 24 d) (Extended Fig. 9d). We thus modeled the time-correlation functions as exponential functions with this time constant, thereby specifying the time course of the surviving fraction of scorable spines (Extended Fig. 8b). Although this estimate of the time-correlation function is coarse, its errors are attenuated by the weak dependence of the surviving fraction of scorable spines on the value of the time-correlation function (Extended Fig. 8a).

The apparent surviving fraction is the product of the true surviving fraction and the surviving fraction of scorable spines. Given the kinetic parameters that best fit the merged-spine surviving fraction as measured in vivo (Fig. 4a), differences between the true and apparent surviving fractions were mild (Extended Fig. 8c). To make this point quantitative, we fitted the apparent surviving fraction with an exponential function across a grid of kinetic parameters that were consistent with the measured merged-spine surviving fraction. The magnitude of the difference between the apparent and true timescales was at most 2.0 days (Extended Fig. 8d). Given that the best-fit model had a survival timescale of 7.7 days, this analysis suggests that the true survival timescale might have been approximately 8.4 days. This corresponds to a spine lifetime of … days, which is within the error bars defined by optical merging (Fig. 4a). Overall, the timescale uncertainties emerging from optical merging (Fig. 4a) are much larger than those emerging from unscorable spines (Extended Fig. 8d).

It is nontrivial to calculate the surviving fraction of scorable spines in the presence of optical merging. However, we can place a lower bound on the surviving fraction of scorable merged spines by noting that a merged spine is only unscorable when all of its initial component spines are unscorable. This scenario is unlikely for higher-order merged spines (Extended Fig. 8e). Given the merged spine order distribution expected for hippocampal spine densities (Fig. 4c), the experimentally measured surviving fraction of merged spines is below this bound (Extended Fig. 8f). Thus, dynamic spine geometries cannot account for the experimentally measured spine turnover.
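The interpretation above of the measured time-correlation function, a white-noise nugget on top of a slow exponential decay, can be illustrated with a short simulation. The time constant (24 d) and the ~10 nm measurement noise follow the estimates above; the process standard deviation is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(0)

tau = 24.0            # correlation time constant from the fit above (days)
sigma_x = 0.02        # assumed process s.d. of spine length (microns), illustrative
sigma_noise = 0.01    # ~10 nm measurement noise, expressed in microns
n_steps = 200_000     # daily samples

# Ornstein-Uhlenbeck process sampled once per day: the exact update below
# yields an exponential autocorrelation exp(-lag/tau).
a = np.exp(-1.0 / tau)
x = np.empty(n_steps)
x[0] = 0.0
for i in range(1, n_steps):
    x[i] = a * x[i - 1] + sigma_x * np.sqrt(1.0 - a**2) * rng.standard_normal()

# Temporally white measurement noise on top of the slow process.
y = x + sigma_noise * rng.standard_normal(n_steps)

def autocorr(z, lag):
    z = z - z.mean()
    return np.dot(z[:len(z) - lag], z[lag:]) / ((len(z) - lag) * z.var())

# The drop from lag 0 to lag 1 is pulled well below the smooth exponential
# decay because the measurement noise decorrelates instantly, producing the
# rapid initial decline described above.
c1 = autocorr(y, 1)
smooth = np.exp(-1.0 / tau)   # ~0.96 expected without measurement noise
print(c1, smooth)
```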
Supplementary Appendices

Appendix I

Kinetic model of spine dynamics

We provide here the mathematical formulation of our model for spine kinetics, derive a formula for the surviving fraction of spines, and discuss the relationships amongst the several timescales of spine dynamics.

Consider an ensemble of potential synapses that evolve according to the assumptions described in I of the Supplementary Methods. Denote the time-dependent probability that a randomly chosen potential synapse occupies the stable state as $g_S(t)$, the off state as $g_O(t)$, and the labile state as $g_L(t)$. We can formalize the assumptions of I with three equations. The assumption that a stable spine remains in the stable state forever means that

\[ \frac{dg_S}{dt} = 0. \]

By stating that a spine in the labile state decays to the off state with loss rate $r_l$ and that a potential synapse in the off state transitions to the labile state with growth rate $r_g$, we assume that

\[ \frac{dg_O}{dt} = -r_g\, g_O + r_l\, g_L, \qquad \frac{dg_L}{dt} = -r_l\, g_L + r_g\, g_O. \]

Altogether, we condense these formulae into a single vector equation,

\[ \frac{d\mathbf{g}}{dt} = R\,\mathbf{g}, \]

where the three-dimensional occupancy vector is

\[ \mathbf{g} = \begin{pmatrix} g_S \\ g_O \\ g_L \end{pmatrix}, \]

and the $3 \times 3$ transition rate matrix is

\[ R = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -r_g & r_l \\ 0 & r_g & -r_l \end{pmatrix}. \]

Equations of this form frequently arise in kinetic theory and are collectively termed master equations. The formal solution to a master equation is

\[ \mathbf{g}(t) = e^{Rt}\,\mathbf{g}(0), \]

where $\mathbf{g}(0)$ is the initial occupancy vector and the matrix exponential is defined by using the power series representation of the exponential function:

\[ e^{Rt} = \sum_{n=0}^{\infty} \frac{(Rt)^n}{n!}. \]

An occupancy vector that is stationary under potential synapse dynamics is called an equilibrium point. If $\mathbf{G}$ is an equilibrium point, then $R\mathbf{G} = 0$. By definition, the filling fraction, $f$, is the fraction of potential synapses that are filled by a spine at equilibrium, and $\gamma$ is the fraction of spines that are stable at equilibrium. Thus,

\[ \mathbf{G} = \begin{pmatrix} f\gamma \\ 1-f \\ f(1-\gamma) \end{pmatrix} \]

must be an equilibrium point. In order to ensure this is the case, we choose the loss rate to balance the growth rate:

\[ r_l = \frac{1-f}{f(1-\gamma)}\, r_g. \]
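As a numerical sanity check on the master equation, the sketch below builds $R$ for illustrative (assumed, not fitted) values of $f$, $\gamma$, and $r_g$, verifies that $\mathbf{G}$ is stationary, and propagates an initial occupancy vector through the matrix exponential.

```python
import numpy as np

f, gamma = 0.2, 0.3   # illustrative filling fraction and stable fraction (assumed)
r_g = 0.1             # illustrative growth rate per day (assumed)
r_l = (1 - f) / (f * (1 - gamma)) * r_g   # loss rate chosen to balance growth

# Transition-rate matrix over the states (stable, off, labile).
R = np.array([[0.0,  0.0,  0.0],
              [0.0, -r_g,  r_l],
              [0.0,  r_g, -r_l]])

# The equilibrium occupancy G = (f*gamma, 1-f, f*(1-gamma)) satisfies R G = 0.
G = np.array([f * gamma, 1 - f, f * (1 - gamma)])
assert np.allclose(R @ G, 0.0)

# Evaluate g(t) = exp(R t) g(0) via the eigendecomposition of R, mirroring
# the eigenvalue analysis in the text.
evals, V = np.linalg.eig(R)
def propagate(g0, t):
    return (V @ np.diag(np.exp(evals * t)) @ np.linalg.inv(V) @ g0).real

g0 = np.array([gamma, 0.0, 1.0 - gamma])  # ensemble of spines present at t = 0
g30 = propagate(g0, 30.0)                 # occupancy after 30 days
print(g30, g30.sum())                     # total probability remains 1
```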
However, an arbitrarily chosen ensemble of potential synapses might not approach this equilibrium point. For example, the ensemble of potential synapses in which every potential synapse is in the stable state represents another equilibrium point. The temporal evolution of an arbitrary occupancy vector can be most easily understood in terms of the eigenvalues and eigenvectors of $R$. Recall that a nonzero vector, $\xi$, is called an eigenvector of $R$ if there exists a scalar, $\lambda$, such that $R\xi = \lambda\xi$. When this is the case, $\lambda$ is called an eigenvalue of $R$. The three eigenvalues of $R$ are

\[ \lambda_1 = 0, \qquad \lambda_2 = 0, \qquad \lambda_3 = -r_g - r_l. \]

Because two of the eigenvalues are equal to zero, there exists a two-dimensional space of eigenvectors with vanishing eigenvalue. We use the eigenvectors

\[ \xi_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \quad \text{and} \quad \xi_2 = \begin{pmatrix} 0 \\ 1-f \\ f(1-\gamma) \end{pmatrix} \]

as an orthogonal basis for this space and choose the eigenvector whose eigenvalue is $\lambda_3$ to be

\[ \xi_3 = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}. \]

Taken together, $\xi_1$, $\xi_2$, and $\xi_3$ provide a basis for the space of occupancy vectors.

With the eigenvalues and eigenvectors determined, we can easily calculate the surviving fraction of spines as a function of the model parameters. Consider the ensemble of potential synapses that were filled at time $t = 0$. The probability that a spine is in the stable state is $\gamma$, and the probability that the spine is in the labile state is $1-\gamma$. Therefore, this ensemble of potential synapses corresponds to the initial occupancy vector

\[ \mathbf{g}(0) = \begin{pmatrix} \gamma \\ 0 \\ 1-\gamma \end{pmatrix} = \gamma\,\xi_1 + \frac{1-\gamma}{1-f\gamma}\,\xi_2 - \frac{(1-f)(1-\gamma)}{1-f\gamma}\,\xi_3. \]

Because we have written $\mathbf{g}(0)$ in the basis of eigenvectors of $R$, it is easy to evaluate the temporal evolution of the occupancy by noting that $e^{Rt}\xi = e^{\lambda t}\xi$ for any eigenvalue–eigenvector pair. In particular,

\[ \mathbf{g}(t) = \gamma\,\xi_1 + \frac{1-\gamma}{1-f\gamma}\,\xi_2 - \frac{(1-f)(1-\gamma)}{1-f\gamma}\,e^{-t/\tau}\,\xi_3, \]

where we have replaced the third eigenvalue with a time scale characterizing spine survival:

\[ \tau = \frac{1}{|\lambda_3|} = \frac{1}{r_g + r_l} = \frac{f(1-\gamma)}{1-f\gamma}\cdot\frac{1}{r_g}. \]

The time-dependent occupancy vector can be rewritten as

\[ \mathbf{g}(t) = \begin{pmatrix} g_S \\ g_O \\ g_L \end{pmatrix} = \begin{pmatrix} \gamma \\[4pt] \dfrac{(1-f)(1-\gamma)}{1-f\gamma}\left(1 - e^{-t/\tau}\right) \\[8pt] \dfrac{f(1-\gamma)^2}{1-f\gamma} + \dfrac{(1-f)(1-\gamma)}{1-f\gamma}\,e^{-t/\tau} \end{pmatrix}. \]

The surviving fraction, $S(t)$, is the probability that the potential synapse is not in the off state at time $t$:

\[ S(t) = 1 - g_O(t) = 1 - \frac{(1-f)(1-\gamma)}{1-f\gamma}\left(1 - e^{-t/\tau}\right). \]

Note that the surviving fraction takes the form

\[ S(t) = S_\infty + \left(1 - S_\infty\right)e^{-t/\tau}, \]

where

\[ S_\infty = 1 - \frac{(1-f)(1-\gamma)}{1-f\gamma} \]

represents the surviving fraction's asymptotic value, and $\tau$ has been defined previously. This formula is discussed further in I of the Supplementary Methods.

In addition to the timescale of spine survival, there are other interesting timescales. The length of time that a labile spine persists before it first decays is exponentially distributed, and we refer to the timescale of this exponential distribution as the spine lifetime:

\[ \tau_l = \frac{1}{r_l} = \frac{1-f\gamma}{1-f}\,\tau. \]

Similarly, the length of time that a potential synapse in the off state takes to first transition to the labile state is also exponentially distributed, and we refer to the timescale of this exponential distribution as the spine birth time:

\[ \tau_b = \frac{1}{r_g} = \frac{1-f\gamma}{f(1-\gamma)}\,\tau = \frac{1-f}{f(1-\gamma)}\,\tau_l. \]

Finally, the recurrence time is the length of time that it takes for a labile spine to decay and then recur. A spine recurs in time $t$ if it decays at some time $t' < t$ and reappears in the remaining time $t - t'$. Thus, the probability density of recurrence times is given by

\[ p_r(t) = \int_0^t dt'\; \frac{e^{-t'/\tau_l}}{\tau_l}\cdot\frac{e^{-(t-t')/\tau_b}}{\tau_b} = \frac{e^{-t/\tau_b} - e^{-t/\tau_l}}{\tau_b - \tau_l}, \]

such that the average recurrence time is

\[ \tau_r = \langle t \rangle_r = \tau_b + \tau_l = \frac{1-f\gamma}{f(1-\gamma)}\,\tau_l. \]
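The relationships amongst these timescales can be checked numerically. In the sketch below, $f$, $\gamma$, and $\tau$ are illustrative assumed values, not the fitted ones.

```python
import numpy as np

f, gamma = 0.2, 0.3   # illustrative filling fraction and stable fraction (assumed)
tau = 10.0            # survival timescale in days (assumed)

tau_l = (1 - f * gamma) / (1 - f) * tau              # spine lifetime
tau_b = (1 - f) / (f * (1 - gamma)) * tau_l          # spine birth time
tau_r = (1 - f * gamma) / (f * (1 - gamma)) * tau_l  # mean recurrence time

# The mean recurrence time is the sum of the birth time and the lifetime.
assert np.isclose(tau_r, tau_b + tau_l)

# Check the recurrence-time density by numerical integration: it should
# normalize to 1 and have mean tau_r.
t = np.linspace(0.0, 50 * tau_r, 1_000_000)
p = (np.exp(-t / tau_b) - np.exp(-t / tau_l)) / (tau_b - tau_l)
dt = t[1] - t[0]
norm = p.sum() * dt
mean_t = (t * p).sum() * dt
print(norm, mean_t, tau_r)   # ~1, ~tau_r, tau_r
```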
Since the hippocampal spine survival data presented in the paper are best fit by $\gamma = $ … and $\tau_l = 9.5$ days, we estimate the average recurrence time to be 48 days.

Appendix II

Derivation of transition rate matrix between merged spine states

Here we derive formulae for the transition rates between merged spine states. We first assume that the merged spine order is nonzero. Consider a merged spine in which the first component spine is located at $x = 0$ and the last component spine is located at $x = \omega$. We call the quantity $\omega$ the merged spine width. The order of the merged spine will increase by one if any neighboring potential synapse, on the same side of the dendrite and in the interval $(\omega - L, L)$, transitions from the unfilled to the filled state, because this newborn spine will be within a distance $L$ of all component spines of the merged spine. Denote the number of neighboring potential synapses as $b$, such that the number in the filled state is $n$, and the number in the unfilled state is $b - n$. If we assume that the dynamics of each potential synapse is independent from its neighbors, we find that

\[ r^{(b)}_{n+1,n} = (b - n)\, r_g. \]

The superscript of $r^{(b)}_{n+1,n}$ emphasizes that upward transition rates depend on the number of neighboring potential synapses, which varies across merged spines. More simply, the order of the merged spine will decrease by one if any one of the component spines transitions from its filled to unfilled state:

\[ r_{n-1,n} = n\, r_l, \]

where we again assume that each potential synapse is independent from its neighbors.

Since the number of neighboring potential synapses is unobservable in our experiment, we must average over its possible values to obtain effective up rates. We first compute the average for fixed merged spine order and width. Since the merged spine order and width are assumed known, the number of neighboring potential synapses is determined by the number of potential synapses in $(\omega - L, L)$. Since the density of off potential synapses is $\rho_e(1-f)$ and the length of $(\omega - L, L)$ is $2L - \omega$, the uniform distribution of potential synapses implies that

\[ P^{b|\Omega,\omega}(b \mid \Omega = n > 0, \omega) = \begin{cases} 0, & b < n \\[4pt] \dfrac{\left[\rho_e (2L-\omega)(1-f)\right]^{b-n} e^{-\rho_e (2L-\omega)(1-f)}}{(b-n)!}, & b \ge n. \end{cases} \]

Therefore, the effective up rate becomes

\[ r^{(\omega)}_{n+1,n} = \sum_{b} P^{b|\Omega,\omega}(b \mid \Omega = n > 0, \omega)\, r^{(b)}_{n+1,n} = \rho_e (2L - \omega)(1-f)\, r_g, \]

where the superscript of $r^{(\omega)}_{n+1,n}$ emphasizes that upward transition rates still depend on the merged spine width, which varies across merged spines.

The merged spine width is also experimentally unknown, so we must average over its possible values. Note that the probability density of $\omega$ depends on the merged spine order. For example, $\omega = 0$ for every first-order merged spine, so

\[ p^{\omega|\Omega}(\omega \mid \Omega = 1) = \delta(\omega), \]

where $\delta$ is the Dirac delta-function. When $\Omega = n > 1$, the probability that the merged spine width is less than or equal to $\ell$ is the probability that each of the $n - 1$ component spines that follow the one at $x = 0$ are in $(0, \ell)$. Assuming a uniform spine distribution, we find

\[ P(\omega \le \ell \mid \Omega = n > 1) = \left(\frac{\ell}{L}\right)^{n-1}, \]

which implies a probability density equal to

\[ p^{\omega|\Omega}(\omega = \ell \mid \Omega = n > 1) = \begin{cases} \dfrac{(n-1)\,\ell^{\,n-2}}{L^{\,n-1}}, & \ell \le L \\[4pt] 0, & \ell > L. \end{cases} \]

From these formulae, we deduce that

\[ \langle \omega \rangle_n = \int_0^L d\omega\; p^{\omega|\Omega}(\omega \mid \Omega = n)\;\omega = \frac{(n-1)\,L}{n} \]

for every nonzero merged spine order. Therefore, we finally arrive at our desired formula

\[ r_{n+1,n} = \int_0^L d\omega\; p^{\omega|\Omega}(\omega \mid \Omega = n)\, r^{(\omega)}_{n+1,n} = \rho_e L_n (1-f)\, r_g, \]

where $L_n = \frac{n+1}{n}\,L$ represents the dendrite length onto which spine addition increases the merged spine order.

Finally we consider the transition rate from the off state to the first-order labile state. Suppose that a merged spine in the off state is located at $x = 0$. The merged spine will become first-order if any neighboring potential synapse in $(-L, L)$ fills. As before, we denote the number of neighboring potential synapses as $b$, such that

\[ r^{(b)}_{1,0} = b\, r_g. \]

Since the number of neighboring potential synapses differs across merged spines and is experimentally unknown, we must average over its possible values. When the merged spine order was nonzero, we assumed that any number of unfilled potential synapses could neighbor the merged spine. However, every zeroth-order merged spine gets its label from a merged spine that was present during the experiment. Thus, every zeroth-order merged spine must neighbor a potential synapse, and $b$ is necessarily nonzero. We model this situation by asserting that an off merged spine does not occur at an arbitrary position on the dendrite, but rather at some position within a region of interest. If we define regions of interest as stretches of dendrite that contain at least one potential synapse, then it follows that

\[ P^{b|ROI}(b \mid ROI = 1) = \begin{cases} 0, & b = 0 \\[4pt] \dfrac{(2\rho_e L)^b\, e^{-2\rho_e L}}{b!\left(1 - e^{-2\rho_e L}\right)}, & b > 0. \end{cases} \]

Regions of interest were previously irrelevant because every merged spine with $n \ge 1$ is automatically within a region of interest. We must also account for the fact that the merged spine is in the $n = 0$ state. Bayes' theorem implies that

\[ P^{b|ROI,\Omega}(b \mid ROI = 1, \Omega = 0) = \frac{P(\Omega = 0 \mid ROI = 1, b)\, P^{b|ROI}(b \mid ROI = 1)}{P^{\Omega|ROI}(\Omega = 0 \mid ROI = 1)}. \]

The merged spine is in the off state if all $b$ potential synapses are off:

\[ P(\Omega = 0 \mid ROI = 1, b) = (1-f)^b. \]

The probability $P^{\Omega|ROI}$ is determined by the normalization of $P^{b|ROI,\Omega}$. In particular,

\[ P^{\Omega|ROI}(\Omega = 0 \mid ROI = 1) = \frac{e^{-2\rho_e L f} - e^{-2\rho_e L}}{1 - e^{-2\rho_e L}}, \]

such that

\[ P^{b|ROI,\Omega}(b \mid ROI = 1, \Omega = 0) = \begin{cases} 0, & b = 0 \\[4pt] \dfrac{\left[2\rho_e L (1-f)\right]^b\, e^{-2\rho_e L (1-f)}}{b!\left(1 - e^{-2\rho_e L (1-f)}\right)}, & b > 0. \end{cases} \]

This intuitive expression is analogous to $P^{b|ROI}$, but for off potential synapses. Thus,

\[ r_{1,0} = \sum_{b} r^{(b)}_{1,0}\, P^{b|ROI,\Omega}(b \mid ROI = 1, \Omega = 0) = \frac{2\rho_e L (1-f)}{1 - e^{-2\rho_e L (1-f)}}\, r_g, \]

as claimed in IV.
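The effective up rates above admit a quick numerical check: the closed-form $r_{1,0}$ should equal the mean of $b\,r_g$ under the zero-truncated Poisson distribution just derived. Here $\rho_e$, $L$, $f$, and $r_g$ are assumed illustrative values.

```python
from math import exp, factorial

rho_e = 2.0   # potential-synapse density per micron (assumed, illustrative)
L = 0.5       # merging length scale in microns (assumed, illustrative)
f = 0.2       # filling fraction (assumed)
r_g = 0.1     # growth rate per day (assumed)

lam = 2 * rho_e * L * (1 - f)   # mean number of unfilled neighbours in (-L, L)

# Closed-form effective rate from the derivation above:
r_10 = lam * r_g / (1 - exp(-lam))

# The same rate computed directly as the mean of b * r_g over the
# zero-truncated Poisson distribution P(b | ROI = 1, Omega = 0):
probs = [lam**b * exp(-lam) / (factorial(b) * (1 - exp(-lam)))
         for b in range(1, 60)]
r_10_direct = r_g * sum(b * p for b, p in zip(range(1, 60), probs))
print(r_10, r_10_direct)   # the two agree

# Up rates for nonzero order n: r_{n+1,n} = rho_e * L_n * (1 - f) * r_g,
# where L_n = (n + 1) * L / n shrinks toward L as the order grows.
for n in (1, 2, 5):
    print(n, rho_e * ((n + 1) * L / n) * (1 - f) * r_g)
```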
Appendix III

Asymptotic survival of merged spines

We derive here a formula for the asymptotic value of the surviving fraction of merged spines. At every point in time, each merged spine is either in the stable state, in the off state, or in the $n$th-order labile state for some $n$. We summarize the probability that a merged spine is in each of these states with an infinite-dimensional occupancy vector, denoted $\mathbf{g}$. In particular, the component $g_\Gamma$ denotes the occupancy of the stable state, the component $g_0$ denotes the occupancy of the off state, and the component $g_n$ denotes the occupancy of the $n$th-order labile state. These components evolve according to

\[ \frac{dg_\Gamma}{dt} = 0, \qquad \frac{dg_0}{dt} = -r_{1,0}\, g_0 + r_{0,1}\, g_1, \]

and

\[ \frac{dg_n}{dt} = -\left(r_{n+1,n} + r_{n-1,n}\right) g_n + r_{n,n-1}\, g_{n-1} + r_{n,n+1}\, g_{n+1}, \]

which we summarize compactly as

\[ \frac{d\mathbf{g}}{dt} = R\,\mathbf{g}. \]

It follows from these equations that

\[ \frac{d}{dt}\left( g_\Gamma + g_0 + \sum_{n=1}^{\infty} g_n \right) = 0, \]

so these dynamics properly conserve probability.

There exists a two-dimensional subspace of occupancy vectors that are stationary under merged spine dynamics. To provide a basis for this space, we choose the two orthogonal vectors

\[ \zeta^1_j = \begin{cases} 1, & j = \Gamma \\ 0, & j \ne \Gamma, \end{cases} \qquad \zeta^2_j = \begin{cases} 0, & j = \Gamma \\ 1 - f_m, & j = 0 \\ f_m\, P^\Omega(\Omega = j)\,(1-\gamma)^j, & j > 0, \end{cases} \]

where $f_m$ is a constant that will play the role of the merged spine filling fraction. We determine $f_m$ by the requirement that $\zeta^2$ be stationary under merged spine dynamics. For $j > 0$, note that

\[ \frac{\zeta^2_{j+1}}{\zeta^2_j} = \frac{\rho_s L\,(1-\gamma)}{j} = \frac{r_{j+1,j}}{r_{j,j+1}}, \]

where $\rho_s = f\rho_e$ denotes the spine density, such that

\[ r_{j+1,j}\, \zeta^2_j = r_{j,j+1}\, \zeta^2_{j+1}. \]

This condition is known as detailed balance and implies that

\[ \left(R\zeta^2\right)_n = -\left(r_{n+1,n} + r_{n-1,n}\right)\zeta^2_n + r_{n,n-1}\,\zeta^2_{n-1} + r_{n,n+1}\,\zeta^2_{n+1} = 0 \]

for $n > 1$. In order for the $n = 0, 1$ components of $\zeta^2$ to be stationary, detailed balance must also hold between the off state and the first-order labile state:

\[ 0 = -r_{1,0}\,\zeta^2_0 + r_{0,1}\,\zeta^2_1 = -r_{1,0}\left(1 - f_m\right) + r_{0,1}\, f_m\, e^{-\rho_s L}\,(1-\gamma), \]

which implies that

\[ f_m = \frac{r_{1,0}}{r_{1,0} + r_{0,1}\, e^{-\rho_s L}\,(1-\gamma)} = \frac{2\rho_e L f}{2\rho_e L f + e^{-\rho_e L f}\left(1 - e^{-2\rho_e L (1-f)}\right)}. \]

Note that when the degree of merging is small, $f_m$ approaches $f$.

Consider a randomly chosen merged spine. This merged spine will either be in the stable state (abstractly denoted as $\Gamma = 1$) or in some labile state (abstractly denoted as $\Gamma = 0$). As discussed previously, the probability that the merged spine is in the $n$th-order labile state is the probability that it is an $n$th-order merged spine and that all $n$ component spines are labile. From this it follows that the fraction of merged spines that are labile is

\[ P^\Gamma(\Gamma = 0) = \sum_{n=1}^{\infty} \frac{(\rho_s L)^{n-1}\, e^{-\rho_s L}}{(n-1)!}\,(1-\gamma)^n = (1-\gamma)\, e^{-\rho_s L \gamma}. \]

Therefore, the fraction of merged spines that are stable is

\[ \gamma_m \equiv P^\Gamma(\Gamma = 1) = 1 - (1-\gamma)\, e^{-\rho_s L \gamma}. \]

When the degree of merging is small, $\gamma_m \approx \gamma$. Also note that the occupancy vector

\[ g_j = \begin{cases} f_m \gamma_m, & j = \Gamma \\ 1 - f_m, & j = 0 \\ f_m\, P^\Omega(\Omega = j)\,(1-\gamma)^j, & j > 0 \end{cases} \]

is stationary and satisfies detailed balance.

To calculate the asymptote of the surviving fraction, we study the long-term behavior of the initial occupancy vector

\[ g_j(0) = \begin{cases} \gamma_m, & j = \Gamma \\ 0, & j = 0 \\ P^\Omega(\Omega = j)\,(1-\gamma)^j, & j > 0. \end{cases} \]

Generally, we can decompose this initial occupancy vector in the basis of the eigenvectors of $R$:

\[ \mathbf{g}(0) = c_1\, \zeta^1 + c_2\, \zeta^2 + \sum_{k > 2} c_k\, \zeta^k, \]

where each mode with $k > 2$ is dynamic and decays with some timescale $\tau_k$. We have $\zeta^k_\Gamma = 0$ for every dynamic mode, from which it follows that $c_1 = \gamma_m$. After long time intervals, all dynamic modes will have decayed, and

\[ g^\infty_j = \lim_{t\to\infty} g_j(t) = \gamma_m\, \zeta^1_j + c_2\, \zeta^2_j = \begin{cases} \gamma_m, & j = \Gamma \\ c_2\left(1 - f_m\right), & j = 0 \\ c_2\, f_m\, P^\Omega(\Omega = j)\,(1-\gamma)^j, & j > 0. \end{cases} \]

Because merged spine dynamics conserve probability, we have

\[ 1 = \sum_j g^\infty_j = \gamma_m + c_2\left[1 - f_m + f_m\left(1 - \gamma_m\right)\right], \qquad c_2 = \frac{1 - \gamma_m}{1 - f_m \gamma_m}. \]

Finally, this allows us to conclude that the asymptotic surviving fraction of merged spines is

\[ S^\infty_m = 1 - g^\infty_0 = 1 - \frac{\left(1 - f_m\right)\left(1 - \gamma_m\right)}{1 - f_m \gamma_m}. \]

This formula is directly analogous to the formula for the asymptotic spine survival with $f_m$ and $\gamma_m$ replacing $f$ and $\gamma$.

Appendix IV

Mathematical details regarding statistical model assessment

Here we derive a formula for the expected value of the surviving fraction of merged spines at each time point, a formula for the covariance of the surviving fraction of merged spines between each pair of time points, and a formula for the log-likelihood function. We also discuss how we use profile likelihoods to convert these quantities into confidence intervals.

As also described in VI, we first assessed the goodness-of-fit for each set of model parameters using the reduced chi-squared statistic,

\[ \chi^2_{\mathrm{red}} = \frac{1}{\nu}\, \delta^{T} C^{-1} \delta, \]

where $\nu$ is the number of degrees of freedom (the number of observations minus three), $\delta = S_m - \langle S_m \rangle$
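The reduced chi-squared statistic just defined can be sketched in a few lines. The surviving fractions and covariance matrix below are toy stand-ins for the measured and modeled quantities, not the paper's data.

```python
import numpy as np

# Toy stand-ins for the real quantities: S_obs plays the measured merged-spine
# surviving fraction, S_model its expected value under one parameter set, and
# C the covariance between time points (all values assumed for illustration).
S_obs   = np.array([1.00, 0.62, 0.45, 0.38, 0.35])
S_model = np.array([1.00, 0.60, 0.47, 0.39, 0.34])
C = 0.02**2 * (0.5 * np.eye(5) + 0.5 * np.ones((5, 5)))  # correlated errors

delta = S_obs - S_model          # measured minus expected surviving fraction
nu = len(S_obs) - 3              # degrees of freedom: observations minus three
chi2_red = (delta @ np.linalg.solve(C, delta)) / nu
print(chi2_red)                  # 2.5 for these toy numbers
```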
More informationExpectation Propagation performs smooth gradient descent GUILLAUME DEHAENE
Expectation Propagation performs smooth gradient descent 1 GUILLAUME DEHAENE In a nutshell Problem: posteriors are uncomputable Solution: parametric approximations 2 But which one should we choose? Laplace?
More information(2) Orbital angular momentum
(2) Orbital angular momentum Consider SS = 0 and LL = rr pp, where pp is the canonical momentum Note: SS and LL are generators for different parts of the wave function. Note: from AA BB ii = εε iiiiii
More information(1) Correspondence of the density matrix to traditional method
(1) Correspondence of the density matrix to traditional method New method (with the density matrix) Traditional method (from thermal physics courses) ZZ = TTTT ρρ = EE ρρ EE = dddd xx ρρ xx ii FF = UU
More informationData Preprocessing. Jilles Vreeken IRDM 15/ Oct 2015
Data Preprocessing Jilles Vreeken 22 Oct 2015 So, how do you pronounce Jilles Yill-less Vreeken Fray-can Okay, now we can talk. Questions of the day How do we preprocess data before we can extract anything
More informationReview of Last Class 1
Review of Last Class 1 X-Ray diffraction of crystals: the Bragg formulation Condition of diffraction peak: 2dd sin θθ = nnλλ Review of Last Class 2 X-Ray diffraction of crystals: the Von Laue formulation
More informationThe Bose Einstein quantum statistics
Page 1 The Bose Einstein quantum statistics 1. Introduction Quantized lattice vibrations Thermal lattice vibrations in a solid are sorted in classical mechanics in normal modes, special oscillation patterns
More informationWorksheets for GCSE Mathematics. Algebraic Expressions. Mr Black 's Maths Resources for Teachers GCSE 1-9. Algebra
Worksheets for GCSE Mathematics Algebraic Expressions Mr Black 's Maths Resources for Teachers GCSE 1-9 Algebra Algebraic Expressions Worksheets Contents Differentiated Independent Learning Worksheets
More informationProtean Instrument Dutchtown Road, Knoxville, TN TEL/FAX:
Application Note AN-0210-1 Tracking Instrument Behavior A frequently asked question is How can I be sure that my instrument is performing normally? Before we can answer this question, we must define what
More information80% of all excitatory synapses - at the dendritic spines.
Dendritic Modelling Dendrites (from Greek dendron, tree ) are the branched projections of a neuron that act to conduct the electrical stimulation received from other cells to and from the cell body, or
More informationWork, Energy, and Power. Chapter 6 of Essential University Physics, Richard Wolfson, 3 rd Edition
Work, Energy, and Power Chapter 6 of Essential University Physics, Richard Wolfson, 3 rd Edition 1 With the knowledge we got so far, we can handle the situation on the left but not the one on the right.
More informationSAMPLE COURSE OUTLINE MATHEMATICS METHODS ATAR YEAR 11
SAMPLE COURSE OUTLINE MATHEMATICS METHODS ATAR YEAR 11 Copyright School Curriculum and Standards Authority, 2017 This document apart from any third party copyright material contained in it may be freely
More informationProblem 3.1 (Verdeyen 5.13) First, I calculate the ABCD matrix for beam traveling through the lens and space.
Problem 3. (Verdeyen 5.3) First, I calculate the ABCD matrix for beam traveling through the lens and space. T = dd 0 0 dd 2 ff 0 = dd 2 dd ff 2 + dd ( dd 2 ff ) dd ff ff Aording to ABCD law, we can have
More informationINTRODUCTION TO PATTERN RECOGNITION
INTRODUCTION TO PATTERN RECOGNITION INSTRUCTOR: WEI DING 1 Pattern Recognition Automatic discovery of regularities in data through the use of computer algorithms With the use of these regularities to take
More informationThe Framework of Quantum Mechanics
The Framework of Quantum Mechanics We now use the mathematical formalism covered in the last lecture to describe the theory of quantum mechanics. In the first section we outline four axioms that lie at
More informationAnalysis and Design of Control Dynamics of Manipulator Robot s Joint Drive
Journal of Mechanics Engineering and Automation 8 (2018) 205-213 doi: 10.17265/2159-5275/2018.05.003 D DAVID PUBLISHING Analysis and Design of Control Dynamics of Manipulator Robot s Joint Drive Bukhar
More informationEstimate by the L 2 Norm of a Parameter Poisson Intensity Discontinuous
Research Journal of Mathematics and Statistics 6: -5, 24 ISSN: 242-224, e-issn: 24-755 Maxwell Scientific Organization, 24 Submied: September 8, 23 Accepted: November 23, 23 Published: February 25, 24
More informationCHAPTER 4 Structure of the Atom
CHAPTER 4 Structure of the Atom Fall 2018 Prof. Sergio B. Mendes 1 Topics 4.1 The Atomic Models of Thomson and Rutherford 4.2 Rutherford Scattering 4.3 The Classic Atomic Model 4.4 The Bohr Model of the
More informationMultiple Regression Analysis: Inference ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD
Multiple Regression Analysis: Inference ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD Introduction When you perform statistical inference, you are primarily doing one of two things: Estimating the boundaries
More informationThe impact of hot charge carrier mobility on photocurrent losses
Supplementary Information for: The impact of hot charge carrier mobility on photocurrent losses in polymer-based solar cells Bronson Philippa 1, Martin Stolterfoht 2, Paul L. Burn 2, Gytis Juška 3, Paul
More informationExtreme value statistics: from one dimension to many. Lecture 1: one dimension Lecture 2: many dimensions
Extreme value statistics: from one dimension to many Lecture 1: one dimension Lecture 2: many dimensions The challenge for extreme value statistics right now: to go from 1 or 2 dimensions to 50 or more
More informationPCA & ICA. CE-717: Machine Learning Sharif University of Technology Spring Soleymani
PCA & ICA CE-717: Machine Learning Sharif University of Technology Spring 2015 Soleymani Dimensionality Reduction: Feature Selection vs. Feature Extraction Feature selection Select a subset of a given
More informationCOMPRESSION FOR QUANTUM POPULATION CODING
COMPRESSION FOR QUANTUM POPULATION CODING Ge Bai, The University of Hong Kong Collaborative work with: Yuxiang Yang, Giulio Chiribella, Masahito Hayashi INTRODUCTION Population: A group of identical states
More informationAtomic fluorescence. The intensity of a transition line can be described with a transition probability inversely
Atomic fluorescence 1. Introduction Transitions in multi-electron atoms Energy levels of the single-electron hydrogen atom are well-described by EE nn = RR nn2, where RR = 13.6 eeee is the Rydberg constant.
More informationPHY103A: Lecture # 4
Semester II, 2017-18 Department of Physics, IIT Kanpur PHY103A: Lecture # 4 (Text Book: Intro to Electrodynamics by Griffiths, 3 rd Ed.) Anand Kumar Jha 10-Jan-2018 Notes The Solutions to HW # 1 have been
More informationQuantum Mechanics. An essential theory to understand properties of matter and light. Chemical Electronic Magnetic Thermal Optical Etc.
Quantum Mechanics An essential theory to understand properties of matter and light. Chemical Electronic Magnetic Thermal Optical Etc. Fall 2018 Prof. Sergio B. Mendes 1 CHAPTER 3 Experimental Basis of
More informationReliability Theory of Dynamically Loaded Structures (cont.)
Outline of Reliability Theory of Dynamically Loaded Structures (cont.) Probability Density Function of Local Maxima in a Stationary Gaussian Process. Distribution of Extreme Values. Monte Carlo Simulation
More informationThick Shell Element Form 5 in LS-DYNA
Thick Shell Element Form 5 in LS-DYNA Lee P. Bindeman Livermore Software Technology Corporation Thick shell form 5 in LS-DYNA is a layered node brick element, with nodes defining the boom surface and defining
More informationLecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes
Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities
More informationThe spectrogram in acoustics
Measuring the power spectrum at various delays gives the spectrogram 2 S ω, τ = dd E t g t τ e iii The spectrogram in acoustics E ssssss t, τ = E t g t τ where g t is a variable gating function Frequency
More informationKinetic Model Parameter Estimation for Product Stability: Non-uniform Finite Elements and Convexity Analysis
Kinetic Model Parameter Estimation for Product Stability: Non-uniform Finite Elements and Convexity Analysis Mark Daichendt, Lorenz Biegler Carnegie Mellon University Pittsburgh, PA 15217 Ben Weinstein,
More informationSUPPLEMENTARY INFORMATION
doi:10.1038/nature11226 Supplementary Discussion D1 Endemics-area relationship (EAR) and its relation to the SAR The EAR comprises the relationship between study area and the number of species that are
More informationMachine Learning 2017
Machine Learning 2017 Volker Roth Department of Mathematics & Computer Science University of Basel 21st March 2017 Volker Roth (University of Basel) Machine Learning 2017 21st March 2017 1 / 41 Section
More informationLecture 3. Linear Regression
Lecture 3. Linear Regression COMP90051 Statistical Machine Learning Semester 2, 2017 Lecturer: Andrey Kan Copyright: University of Melbourne Weeks 2 to 8 inclusive Lecturer: Andrey Kan MS: Moscow, PhD:
More informationRadiative Effects of Contrails and Contrail Cirrus
Radiative Effects of Contrails and Contrail Cirrus Klaus Gierens, DLR Oberpfaffenhofen, Germany Contrail-Cirrus, other Non-CO2 Effects and Smart Flying Workshop, London, 22 Oktober 2015 Two closely related
More informationUnit II. Page 1 of 12
Unit II (1) Basic Terminology: (i) Exhaustive Events: A set of events is said to be exhaustive, if it includes all the possible events. For example, in tossing a coin there are two exhaustive cases either
More informationSECTION 5: POWER FLOW. ESE 470 Energy Distribution Systems
SECTION 5: POWER FLOW ESE 470 Energy Distribution Systems 2 Introduction Nodal Analysis 3 Consider the following circuit Three voltage sources VV sss, VV sss, VV sss Generic branch impedances Could be
More informationOptical pumping and the Zeeman Effect
1. Introduction Optical pumping and the Zeeman Effect The Hamiltonian of an atom with a single electron outside filled shells (as for rubidium) in a magnetic field is HH = HH 0 + ηηii JJ μμ JJ BB JJ μμ
More informationSTAT 501 EXAM I NAME Spring 1999
STAT 501 EXAM I NAME Spring 1999 Instructions: You may use only your calculator and the attached tables and formula sheet. You can detach the tables and formula sheet from the rest of this exam. Show your
More informationModule 7 (Lecture 25) RETAINING WALLS
Module 7 (Lecture 25) RETAINING WALLS Topics Check for Bearing Capacity Failure Example Factor of Safety Against Overturning Factor of Safety Against Sliding Factor of Safety Against Bearing Capacity Failure
More informationInternational Journal of Scientific & Engineering Research, Volume 7, Issue 8, August ISSN MM Double Exponential Distribution
International Journal of Scientific & Engineering Research, Volume 7, Issue 8, August-216 72 MM Double Exponential Distribution Zahida Perveen, Mubbasher Munir Abstract: The exponential distribution is
More informationEdge conduction in monolayer WTe 2
In the format provided by the authors and unedited. DOI: 1.138/NPHYS491 Edge conduction in monolayer WTe 2 Contents SI-1. Characterizations of monolayer WTe2 devices SI-2. Magnetoresistance and temperature
More informationSUPPLEMENTARY INFORMATION
doi:10.1038/nature12742 1. Subjects Two adult male rhesus monkeys, A and F (14 and 12 kg) were trained on a two- alternative, forced- choice, visual discrimination task. Before training, both monkeys were
More information14- Hardening Soil Model with Small Strain Stiffness - PLAXIS
14- Hardening Soil Model with Small Strain Stiffness - PLAXIS This model is the Hardening Soil Model with Small Strain Stiffness as presented in PLAXIS. The model is developed using the user-defined material
More information