Supplementary Material for: Correction to Kyllingsbæk, Markussen, and Bundesen (2012)

Unfortunately, the computational shortcut used in Kyllingsbæk et al. (2012) (henceforth, the article) to fit the Poisson counter model to experimental data was built on an approximation that is not well-founded. In this supplement to the correction of the article we show why this is so, refit both models to get a sense of the actual deviation, and offer a well-founded computational shortcut that strongly reduces the time needed to fit the data. Fortunately, the Poisson counter model fits did not deviate noticeably from those of the computational shortcut, nor did they invalidate any conclusions derived in the article. We nevertheless recommend that the Poisson counter model, or the well-founded computational shortcut, be used in any fitting routine.

Why the Computational Shortcut is Not Well-Founded

Consider the Poisson counter model proposed in the article (i.e., Equations A1-A4). Assume that R is the set of possible categorization reports (identifications) j of a stimulus i. For each j ∈ R there exists a counter X_j that during time (t - t_0) independently accumulates categorizations for response j at a Poisson processing rate of v(i, j). The Poisson counter model proposes that the probability of categorizing stimulus i as belonging to j consists of (1) the probability that counter j alone has the maximum number of counts and (2) the probability that counter j together with one or more of the other counters has the maximum number of counts and the participant chooses category j when guessing randomly among these. The computational shortcut suggested in the article (i.e., Equation A5) for approximating this joint probability is

$$
P^{L}_{\mathrm{Approx}}(i,j) = \frac{1}{Z_L}\sum_{n=1}^{\infty}\frac{v(i,j)^n (t-t_0)^n}{n!}\,e^{-v(i,j)(t-t_0)}\sum_{m=0}^{\infty}\frac{\bigl(\sum_{k\in R\setminus\{j\}} v(i,k)\bigr)^m (t-t_0)^m}{m!}\,e^{-\sum_{k\in R\setminus\{j\}} v(i,k)(t-t_0)}\left(\frac{n}{n+m}\right)^{\!L},
$$

where L is a large number and the factor Z_L is a normalization constant ensuring that

$$
\sum_{j\in R} P^{L}_{\mathrm{Approx}}(i,j) = 1 - e^{-\sum_{k\in R} v(i,k)(t-t_0)},
$$
which is the probability that at least one counter is greater than zero. If there are no counts, then the participant is assumed either to guess at category j with probability P_g(j), or to report nothing with probability 1 - Σ_{k∈R} P_g(k). Observe now that if m > 0, then (n/(n+m))^L → 0 as L → ∞, and

$$
P^{\infty}_{\mathrm{Approx}}(i,j) = \frac{1}{Z_\infty}\bigl(1 - e^{-v(i,j)(t-t_0)}\bigr)\,e^{-\sum_{k\in R\setminus\{j\}} v(i,k)(t-t_0)} = \frac{1}{Z_\infty}\,P(X_j > 0,\; X_k = 0 \text{ for } k \neq j),
$$

where Z_∞ is the normalization factor in the limit L → ∞. This means that the computational shortcut evaluates the probability of the jth counter being larger than all the other counters conditional on the event that only one counter is non-zero. The computational shortcut therefore does not capture the intuition of the Poisson counter model, and its approximation is thus not well-founded. To make this explicit, let Y = #{k ∈ R : X_k > 0} denote the number of counters with non-zero counts, such that

$$
P^{\infty}_{\mathrm{Approx}}(i,j) = P(Y \geq 1)\,P(X_j > 0 \mid Y = 1) = P(Y \geq 1)\,P(X_j > X_k \text{ for } k \neq j \mid Y = 1).
$$

Even though the computational shortcut is not well-founded, we still expect its parameter estimates to be close to those of the Poisson counter model, and the differences in fit to be visible only for intermediate t. To see this, consider the following: when t is long, the event Y = 1 will in general have a low probability according to the Poisson counter model. However, in this case it is very likely that the counter with the highest Poisson processing rate will be reported, and the computational shortcut provides a good approximation. When t is short, the event Y ≤ 1 will have probability close to 1, and the computational shortcut again provides a good approximation. For intermediate t the computational shortcut exhibits the correct qualitative behavior, with a non-monotonic probability of erroneous report. In addition, the observations with short t are the most informative for the fits, and these are the observations for which the computational shortcut provides a good approximation.
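Both the shortcut sum of Equation A5 and its large-L collapse onto the one-nonzero-counter event can be checked numerically. The sketch below is our own illustration (hypothetical rates, truncated sums, our variable names), not the article's fitting code:

```python
import math

def shortcut_raw(v_j, v_rest, tau, L, n_max=50, m_max=50):
    """Unnormalized shortcut sum for one category: v_rest is the summed rate
    of the other counters, tau = t - t0. The (n / (n + m))**L weight is the
    article's approximation device; infinite sums are truncated at n_max, m_max."""
    s = 0.0
    for n in range(1, n_max + 1):
        p_n = (v_j * tau) ** n / math.factorial(n) * math.exp(-v_j * tau)
        for m in range(m_max + 1):
            p_m = (v_rest * tau) ** m / math.factorial(m) * math.exp(-v_rest * tau)
            s += p_n * p_m * (n / (n + m)) ** L
    return s

rates, tau = [2.0, 3.0], 0.5                   # hypothetical rates and exposure
target = 1.0 - math.exp(-sum(rates) * tau)     # P(at least one count)
raw = [shortcut_raw(v, sum(rates) - v, tau, L=20) for v in rates]
probs = [r * target / sum(raw) for r in raw]   # division by Z_L
print(probs, sum(probs))

# As L grows, every m > 0 term vanishes and the unnormalized sum collapses to
# P(X_j > 0, all other counters = 0), as derived above:
limit = (1.0 - math.exp(-rates[0] * tau)) * math.exp(-rates[1] * tau)
for L in (1, 10, 100, 1000):
    print(L, abs(shortcut_raw(rates[0], rates[1], tau, L) - limit))
```

The shrinking gap in the second loop is the whole complaint in miniature: for large L the shortcut retains only the mass of the event that a single counter is non-zero.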
The Effect on Reported Results

We refitted the computational shortcut (with L = 20, as in the article) and the Poisson counter model to the data using ADMB (see Fournier et al., 2012).¹ In the following, we replicate the likelihood fits and goodness-of-fit tests for Experiments 1 and 2 reported in the article, and compare these results to those of the Poisson counter model. We refer to the article for more information on the experiments.

¹ The infinite sums of Equations A2, A3, and A5 in the article are infeasible to compute, and we limited them to N = 18 (instead of N = ∞). The ADMB template code can be given upon request.
Experiment 1: Four participants were asked to identify briefly presented single digits between 1 and 9. By briefly presented, we mean that the shown digit was masked at eight systematically varied exposure durations lying between 10 and 100 ms. In every experimental block, each of the nine digits was presented five times at each of the eight exposure durations, yielding a total of 360 trials per block. The order of presentation was randomized within each block of 360 trials. The participants ran a total of 20 blocks, resulting in 100 repetitions of each of the eight exposure durations for each of the nine digits and a total of 7,200 trials.

Figure 1: Maximum likelihood fits for Participant MF in Experiment 1. The graphs show the observed proportions of correct and erroneous reports for stimulus digits 2, 6, and 8 as functions of exposure duration. Left panels: correct reports. Right panels: false reports. The error bars show 95% confidence intervals of the proportions. The continuous (resp. dotted) curves show the predictions generated by the overall maximum likelihood fit of the Poisson counter model (resp. the computational shortcut) to the results.

Figure 1 shows the observed proportions of correct reports (left panels) and erroneous reports (right panels), together with the likelihood fits, for three representative stimulus digits for the representative Participant MF.
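The trial counts above follow directly from the design; a quick arithmetic check:

```python
# Experiment 1 design: 9 digits x 5 presentations x 8 exposure durations per block.
digits, reps_per_duration, durations = 9, 5, 8
trials_per_block = digits * reps_per_duration * durations
blocks = 20
print(trials_per_block,            # 360 trials per block
      reps_per_duration * blocks,  # 100 repetitions per digit-by-duration cell
      trials_per_block * blocks)   # 7200 trials in total
```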
[Table 1: maximum likelihood estimates for Experiment 1 (Participants KK, MA, MF, MR): Min/Mean/Max v(i, i), Min/Mean/Max v(i, -i), Min/Mean/Max P_g, t_0, and negative log likelihood, for the Poisson counter model and the computational shortcut.]

Table 1: Model = Poisson counter model; Shortcut = computational shortcut; v(i, i) = Poisson processing rate for a correct report; v(i, -i) = Poisson processing rate for an erroneous report; P_g = guessing probability; Min = minimum; Mean = mean; Max = maximum; t_0 = threshold for processing in seconds; Neg. log lik. = negative log likelihood value (-ln L(P) > 0 because P ∈ [0, 1]). ADMB minimizes the negative log likelihood function.

As can be seen from Table 1, the Poisson counter model and the computational shortcut give the same parameter estimates for the guessing probability P_g and the slack of the Poisson process t_0. Compared to the Poisson counter model, the likelihood fits of the computational shortcut have a tendency to underestimate the correct Poisson processing rates v(i, i) and overestimate the erroneous Poisson processing rates v(i, -i). We use -i to denote any categorization report that is not i.

For each participant, we recalculated the Monte Carlo tests of goodness of fit and the information theoretic measures for both the computational shortcut and the Poisson counter model. The left panel of Figure 2 shows a Q-Q plot of estimated versus simulated p values for both fits to Participant MF. The deviations between estimated and simulated p values were in general similar across the two fits (see the last row of Table 2). Only the Kolmogorov-Smirnov two-sample test for Participant KK differed.
Table 2 shows the range and median of the information theoretic measures, the intercept and slope of the Kullback-Leibler divergence regressed on the Shannon entropy, and the p value of the Kolmogorov-Smirnov goodness-of-fit test for the Poisson counter model and the computational shortcut fits to each of the four participants. Even though both the Poisson counter model and the computational shortcut are by and large rejected by the goodness-of-fit test for all participants, we see that the relative information loss (given by the slope) is quite low (below 4%). Thus, despite the significant deviations between data and fits, both models yielded a fairly accurate account of the data from Experiment 1, including an account of the non-monotonic relationship between the proportion of erroneous reports and the exposure durations exemplified in Figure 1.
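The Monte Carlo goodness-of-fit machinery can be sketched as follows. This is a simplified stand-in with made-up response distributions: we compute one Monte Carlo chi-square p value per condition and then summarize with a one-sample Kolmogorov-Smirnov distance against the uniform distribution (the article uses a two-sample test against p values simulated under the null):

```python
import math
import random

random.seed(1)

def sample_counts(n, probs):
    """Draw one multinomial sample of size n by inverse transform."""
    counts = [0] * len(probs)
    for _ in range(n):
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return counts

def chi2(counts, probs, n):
    return sum((c - n * p) ** 2 / (n * p) for c, p in zip(counts, probs))

def mc_pvalue(obs, probs, n_sim=199):
    """Monte Carlo p value for one condition from the chi-square statistic."""
    n = sum(obs)
    stat = chi2(obs, probs, n)
    hits = sum(chi2(sample_counts(n, probs), probs, n) >= stat
               for _ in range(n_sim))
    return (1 + hits) / (1 + n_sim)        # add-one keeps the p value above zero

def ks_uniform(pvals):
    """One-sample KS distance of condition-wise p values from Uniform(0, 1)."""
    x = sorted(pvals)
    m = len(x)
    return max(max((i + 1) / m - v, v - i / m) for i, v in enumerate(x))

# 72 conditions simulated under the model itself: the p values should look
# roughly uniform, so the KS distance should be small.
probs = [0.7, 0.1, 0.1, 0.1]
pvals = [mc_pvalue(sample_counts(100, probs), probs) for _ in range(72)]
print(round(ks_uniform(pvals), 3))
```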
Figure 2: Evaluations of the Poisson counter model (black) and the computational shortcut (gray) for Participant MF in Experiment 1. Left panel: Q-Q plot of estimated p values against p values simulated under the null hypothesis for all 72 experimental conditions with Participant MF in Experiment 1. A y = x reference line is also plotted. If the estimated p values came from a population with the same distribution as the p values simulated under the null hypothesis, the points should fall approximately along this reference line. Right panel: Kullback-Leibler divergence D of the theoretical response distribution from the empirical distribution, plotted against the Shannon entropy H of the empirical distribution, for all 72 experimental conditions with Participant MF in Experiment 1.

[Table 2: information theoretic measures and p values for Experiment 1 (Participants KK, MA, MF, MR): Min/Med/Max H, Min/Med/Max D, D-H intercept, D-H slope, and p, for the Poisson counter model and the computational shortcut.]

Table 2: Model = Poisson counter model; Shortcut = computational shortcut; H = Shannon entropy of the empirical response distribution; D = Kullback-Leibler divergence of the theoretical response distribution from the empirical distribution; Min = minimum; Med = median; Max = maximum; D-H intercept = intercept from linear regression of D on H; D-H slope = slope from linear regression of D on H. Each p value was obtained by a Kolmogorov-Smirnov test summarizing the results of Monte Carlo tests based on the χ² test statistic.
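The information theoretic measures can also be sketched directly. The per-condition distributions below are made up for illustration; the point is the pipeline: compute H of the empirical distribution, D of the theoretical distribution from the empirical one, and read the slope of D regressed on H as the relative information loss:

```python
import math
import random

random.seed(0)

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def kl(p, q):
    """D(p || q): divergence of the theoretical distribution q from the
    empirical distribution p."""
    return sum(x * math.log(x / y) for x, y in zip(p, q) if x > 0)

def fit_line(xs, ys):
    """Least-squares slope and intercept of y regressed on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical per-condition pairs: q is a theoretical response distribution
# over 8 categories, p the empirical proportions from 100 simulated trials.
H, D = [], []
for _ in range(64):
    w = [random.expovariate(1.0) for _ in range(8)]
    q = [x / sum(w) for x in w]
    counts = [0] * 8
    for _ in range(100):
        r, acc = random.random(), 0.0
        for i, qq in enumerate(q):
            acc += qq
            if r < acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    p = [c / 100 for c in counts]
    H.append(entropy(p))
    D.append(kl(p, q))

slope, intercept = fit_line(H, D)
print(round(slope, 3), round(intercept, 3))
```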
Experiment 2: Because Experiment 1 showed considerable variation in Poisson processing rates across digit stimuli, the article also reported a second experiment investigating a more homogeneous stimulus material. To this end, the stimulus material consisted of otherwise identical Landolt rings with eight different gap orientations, evenly spread around the circle. In Experiment 2, four participants were thus asked to identify a briefly presented Landolt ring with possible gap orientations E, SE, S, SW, W, NW, N, or NE (according to a compass). Again, the shown Landolt ring was masked at eight systematically
varied exposure durations lying between 10 and 100 ms. In every experimental block, each of the eight stimuli was presented five times at each of the eight exposure durations, yielding a total of 320 trials per block. The order of presentation was randomized within each block of 320 trials. The participants ran a total of 20 blocks, resulting in 100 repetitions of each of the eight exposure durations for each of the eight stimuli and a total of 6,400 trials.

Figure 3: Observed proportions of correct and erroneous reports for Landolt rings with gaps centered at E, SE, and S, respectively, as functions of exposure duration for Participant MF in Experiment 2. Left panels: correct reports. Right panels: erroneous reports. The error bars show 95% confidence intervals of the proportions. The continuous (resp. dotted) curves show the predictions generated by the overall maximum likelihood fit of the Poisson counter model (resp. the computational shortcut) to the results of Participant MF.

Figure 3 shows the observed proportions of correct reports (left panels) and erroneous reports (right panels), together with the likelihood fits, for three representative stimuli for the representative Participant MF. As can be seen from Table 3, the Poisson counter model and the computational shortcut give
nearly the same parameter estimates for the guessing probability P_g and the slack of the Poisson process t_0. Compared to the Poisson counter model, the likelihood fits of the computational shortcut still have a tendency to underestimate the Poisson processing rates v(i, i) for the correct categorization, while overestimating the erroneous Poisson processing rates v(i, -i).

[Table 3: maximum likelihood estimates for Experiment 2 (Participants KK, SK, MF, MR): Min/Mean/Max v(i, i), Min/Mean/Max v(i, -i), Min/Mean/Max P_g, t_0, and negative log likelihood, for the Poisson counter model and the computational shortcut.]

Table 3: Model = Poisson counter model; Shortcut = computational shortcut; v(i, i) = Poisson processing rate for a correct report; v(i, -i) = Poisson processing rate for an erroneous report; P_g = guessing probability; Min = minimum; Mean = mean; Max = maximum; t_0 = threshold for processing in seconds; Neg. log lik. = negative log likelihood value (-ln L(P) > 0 because P ∈ [0, 1]). ADMB minimizes the negative log likelihood function.

For each participant in Experiment 2, we again recalculated the Monte Carlo tests of goodness of fit and the information theoretic measures for both the Poisson counter model and the computational shortcut. Figure 4 shows the Q-Q plot of estimated versus simulated p values for

Figure 4: Evaluations of the Poisson counter model (black) and the computational shortcut (gray) for Participant MF in Experiment 2. Left panel: Q-Q plot of estimated p values against p values simulated under the null hypothesis for all 64 experimental conditions with Participant MF in Experiment 2. A y = x reference line is also plotted. If the estimated p values came from a population with the same distribution as the p values simulated under the null hypothesis, the points should fall approximately along this reference line.
Right panel: Kullback-Leibler divergence D of the theoretical response distribution from the empirical distribution, plotted against the Shannon entropy H of the empirical distribution, for all 64 experimental conditions with Participant MF in Experiment 2.
Participant MF. By the Kolmogorov-Smirnov two-sample test of the estimated p values against the simulated p values, the deviations between estimated and simulated p values were not significant, except for the fit by the computational shortcut to the data of Participant KK (see Table 4). Thus, by the Kolmogorov-Smirnov two-sample test, we found no signs of systematic deviations between data and fits.

[Table 4: information theoretic measures and p values for Experiment 2 (Participants KK, SK, MF, MR): Min/Med/Max H, Min/Med/Max D, D-H intercept, D-H slope, and p, for the Poisson counter model and the computational shortcut.]

Table 4: Model = Poisson counter model; Shortcut = computational shortcut; H = Shannon entropy of the empirical response distribution; D = Kullback-Leibler divergence of the theoretical response distribution from the empirical distribution; Min = minimum; Med = median; Max = maximum; D-H intercept = intercept from linear regression of D on H; D-H slope = slope from linear regression of D on H. Each p value was obtained by a Kolmogorov-Smirnov test summarizing the results of Monte Carlo tests based on the χ² test statistic.

A Well-Founded Computational Shortcut

The probability P(i, j) of reporting category j in a given trial is given by the Poisson counter model in the article (i.e., Equations A1-A4). However, the infinite sums are infeasible to compute, and the summation over power sets in Equation A3 may be very time-consuming. As a well-founded
computational shortcut, we suggest that the probability P_1(i, j) + P_2(i, j) may be approximated by P_1^N(i, j) + P_{2,λ}^N(i, j) and a normalization factor.

First, P_1^N(i, j) is the probability that count j is higher than any other count (Equation A2):

$$
P_1^N(i,j) = e^{-(t-t_0)\sum_{k\in R} v(i,k)} \sum_{n=1}^{N} \frac{v(i,j)^n (t-t_0)^n}{n!} \prod_{k\in R\setminus\{j\}} \sum_{m=0}^{n-1} \frac{v(i,k)^m (t-t_0)^m}{m!},
$$

where N is a finite positive integer. Second, P_{2,λ}^N(i, j) is the probability that at most λ counters in addition to counter j have maximum counts, and the participant hits category j when guessing among the counters with maximum counts (Equation A3):

$$
P_{2,\lambda}^N(i,j) = e^{-(t-t_0)\sum_{k\in R} v(i,k)} \sum_{J \in \mathcal{P}_\lambda(R\setminus\{j\}) \setminus \{\emptyset\}} \sum_{n=1}^{N} \frac{1}{|J|+1}\, \frac{v(i,j)^n (t-t_0)^n}{n!} \prod_{k\in J} \frac{v(i,k)^n (t-t_0)^n}{n!} \prod_{k\in R\setminus (J\cup\{j\})} \sum_{m=0}^{n-1} \frac{v(i,k)^m (t-t_0)^m}{m!}.
$$
P_λ(R∖{j}) is the limited power set (i.e., the set of all subsets with cardinality at most λ) of the set of categorizations other than j, and ∅ is the empty set. This limiting of the power set implies that the probability of having more than λ maximum counters, in addition to counter j, is assumed to be zero. Third, the probability that all counters are zero and the participant guesses at category j is

$$
P_3(i,j) = e^{-(t-t_0)\sum_{k\in R} v(i,k)}\, P_g(j).
$$

Conversely, the probability that all counters are zero and the participant does not guess is

$$
P_4(i) = e^{-(t-t_0)\sum_{k\in R} v(i,k)} \Bigl(1 - \sum_{k\in R} P_g(k)\Bigr).
$$

Next we quantify the probability mass lost by the approximation of P_1(i, j) and P_2(i, j).

Approximation Difference: It can be shown that the stimulus i trial-by-trial report accuracy implied by the Poisson counter model, plus the probability P_4(i) that a participant reports nothing, sums to unity, that is,

$$
\sum_{j\in R} \bigl(P_1(i,j) + P_2(i,j) + P_3(i,j)\bigr) + P_4(i) = 1
\quad\Longleftrightarrow\quad
\sum_{j\in R} \bigl(P_1(i,j) + P_2(i,j)\bigr) + e^{-(t-t_0)\sum_{k\in R} v(i,k)} = 1.
$$

Notice that our well-founded computational shortcut underestimates the probability masses of P_1(i, j) and P_2(i, j), respectively, such that

$$
\sum_{j\in R} \bigl(P_1^N(i,j) + P_{2,\lambda}^N(i,j)\bigr) + e^{-(t-t_0)\sum_{k\in R} v(i,k)} < 1.
$$

We can quantify this probability mass difference Δ by

$$
\sum_{j\in R} \bigl(P_1^N(i,j) + P_{2,\lambda}^N(i,j)\bigr) + \Delta + e^{-(t-t_0)\sum_{k\in R} v(i,k)} = 1,
\quad\text{i.e.,}\quad
\Delta = 1 - \sum_{j\in R} \bigl(P_1^N(i,j) + P_{2,\lambda}^N(i,j)\bigr) - e^{-(t-t_0)\sum_{k\in R} v(i,k)}.
$$

Notice that the difference Δ increases as the underestimation becomes more severe.

Normalization Constant: To ensure the well-founded computational shortcut yields probabilities that sum to unity, we rescale
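The truncated quantities are straightforward to implement directly. The sketch below is our own code, not the ADMB template: the rates are hypothetical, and the uniform tie-breaking factor 1/(|J|+1) is our reading of the guessing rule. With λ covering all other counters, the lost mass Δ reduces to the (here negligible) tail beyond N:

```python
import math
from itertools import combinations

def pois(v, tau, n):
    """(v*tau)^n / n!  (the common exponential prefactor is applied later)."""
    return (v * tau) ** n / math.factorial(n)

def p1(rates, j, tau, N):
    """P_1^N(i, j): counter j alone holds the strict maximum, truncated at N."""
    pref = math.exp(-sum(rates) * tau)
    total = 0.0
    for n in range(1, N + 1):
        term = pois(rates[j], tau, n)
        for k, vk in enumerate(rates):
            if k != j:
                term *= sum(pois(vk, tau, m) for m in range(n))
        total += term
    return pref * total

def p2(rates, j, tau, N, lam):
    """P_{2,lam}^N(i, j): at most lam other counters tie j for the maximum,
    and the tie is broken by a uniform guess (the 1/(|J|+1) factor)."""
    others = [k for k in range(len(rates)) if k != j]
    pref = math.exp(-sum(rates) * tau)
    total = 0.0
    for size in range(1, lam + 1):
        for J in combinations(others, size):
            rest = [k for k in others if k not in J]
            for n in range(1, N + 1):
                term = pois(rates[j], tau, n) / (size + 1)
                for k in J:
                    term *= pois(rates[k], tau, n)
                for k in rest:
                    term *= sum(pois(rates[k], tau, m) for m in range(n))
                total += term
    return pref * total

rates, tau, N, lam = [2.0, 1.0, 0.5, 0.25], 0.05, 18, 3
mass = sum(p1(rates, j, tau, N) + p2(rates, j, tau, N, lam) for j in range(4))
delta = 1 - mass - math.exp(-sum(rates) * tau)
print(delta)   # probability mass lost by the truncation; essentially zero here
```

Shrinking λ below the full set makes Δ strictly positive, which is exactly the tradeoff quantified below.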
P_1^N and P_{2,λ}^N by defining

$$
P_{\mathrm{Approx}}^N(i,j) := \frac{1}{Z}\bigl(P_1^N(i,j) + P_{2,\lambda}^N(i,j)\bigr),
$$

where Z is a normalization constant chosen such that Δ = 0. That is,

$$
Z = \frac{\sum_{j\in R}\bigl(P_1^N(i,j) + P_{2,\lambda}^N(i,j)\bigr)}{1 - e^{-(t-t_0)\sum_{k\in R} v(i,k)}}.
$$

The normalization factor can be understood as follows. First, the Poisson counter model implies that

$$
\sum_{j\in R}\bigl(P_1(i,j) + P_2(i,j)\bigr) = 1 - e^{-(t-t_0)\sum_{k\in R} v(i,k)},
$$

in which case no rescaling is necessary (Z = 1). Second, when we use the well-founded computational shortcut, the probability masses of P_1(i, j) and P_2(i, j) are underestimated and rescaling is necessary (Z < 1). The more we underestimate, the more we need to rescale.

Report Accuracy: We thus propose that the well-founded computational shortcut is given by the stimulus i trial-by-trial report accuracy probabilities P_Approx^N(i, j), P_3(i, j), and P_4(i), with the property that

$$
\sum_{j\in R}\bigl(P_{\mathrm{Approx}}^N(i,j) + P_3(i,j)\bigr) + P_4(i) = 1.
$$

Example of Tradeoff: As an example of the tradeoff between saved computational time and approximation accuracy, we estimate (with N = 18) an ad hoc measure of computational time for Participant MF's performance in Experiment 2 in the article, and compare it to the approximation difference for each limited power set. As summarized by Table 5, for Participant MF in Experiment 2, the well-founded computational shortcut perfectly approximates the Poisson counter model when the power set is restricted to three other categorizations (λ = 3). The probability that at least four counters have maximum counts is thus zero. Applying the new shortcut with λ = 3 therefore does not cost anything in accuracy, but the computational time is reduced by 10 hours and 44 minutes. If we are willing to accept a small underestimation, then we can restrict the power set to two other categorizations (λ = 2).
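The rescaling step is simple enough to show in full; the masses below are hypothetical placeholders for the truncated sums:

```python
import math

# Hypothetical truncated masses m_j ~ P_1^N(i, j) + P_2,lambda^N(i, j), which
# slightly underestimate the true report-accuracy mass.
masses = [0.10, 0.04, 0.02]
total_rate, tau = 3.5, 0.05                 # stand-ins for sum_k v(i, k) and t - t0

target = 1.0 - math.exp(-total_rate * tau)  # mass the full model assigns to reports
Z = sum(masses) / target                    # Z < 1 when mass was underestimated
approx = [m / Z for m in masses]            # P^N_Approx(i, j)

p_none = math.exp(-total_rate * tau)        # all counters zero
print(Z, sum(approx) + p_none)              # rescaled masses + no-count mass sum to 1
```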
This approximation will underestimate the probability mass by at most 0.001, but the computational time will now be reduced by 17 hours and 34 minutes. These observations are of course ad hoc and not necessarily robust to changes in, for example, participants and paradigms. However, we believe that it is fair to approximate the Poisson counter model by assuming zero probability of having more than two or three counters with maximal counts. Our recommendation is that the approximation difference be reported every time the well-founded
computational shortcut is applied.

[Table 5: well-founded computational shortcut estimates for Participant MF in Experiment 2, by number of other counters (λ): Min/Mean/Max v(i, i), Min/Mean/Max v(i, -i), Min/Mean/Max P_g, t_0, negative log likelihood, Min/Mean/Max Δ, and time ratio.]

Table 5: Min = minimum; Mean = mean; Max = maximum; v(i, i) = Poisson processing rate for a correct categorization; v(i, -i) = Poisson processing rate for a false categorization; P_g = guessing probability; t_0 = threshold for processing in seconds; Neg. log lik. = negative log likelihood value; Δ = difference between the model and the new shortcut in units of probability mass; Time Ratio = ratio between the computational time of the new shortcut and the computational time of the model, when using ADMB on an Intel Xeon 2.40 GHz with 11 GiB system memory.

Conclusion

We conclude by restating our main points. When proposing their Poisson counter model, Kyllingsbæk et al. (2012) used a computational shortcut, which strongly reduced the time needed to fit the Poisson counter model to experimental data. Unfortunately, the computational shortcut built on the assumption that only one of the counters is non-zero, and therefore suggests an approximation that is not well-founded in the Poisson counter model. To get a sense of how much the computational shortcut actually misfitted, we refitted it and the Poisson counter model to the experimental data reported in the article. Fortunately, the Poisson counter model fits did not deviate noticeably from those produced by the computational shortcut, nor did they invalidate any conclusions derived in the article. This is because the computational shortcut exhibits the correct qualitative behavior, with a non-monotonic probability of erroneous reports. Finally, we proposed a well-founded computational shortcut that is consistent with the Poisson counter model and showed, by example, how much computational time can be saved by using it.
References

Fournier, D. A., Skaug, H. J., Ancheta, J., Ianelli, J., Magnusson, A., Maunder, M. N., Nielsen, A., & Sibert, J. (2012). AD Model Builder: Using automatic differentiation for statistical inference of highly parameterized complex nonlinear models. Optimization Methods and Software, 27(2), 233-249.

Kyllingsbæk, S., Markussen, B., & Bundesen, C. (2012). Testing a Poisson counter model for visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. Journal of Experimental Psychology: Human Perception and Performance, 38(3).
More informationDeciding, Estimating, Computing, Checking. How are Bayesian posteriors used, computed and validated?
Deciding, Estimating, Computing, Checking How are Bayesian posteriors used, computed and validated? Fundamentalist Bayes: The posterior is ALL knowledge you have about the state Use in decision making:
More informationChapter 1 Statistical Inference
Chapter 1 Statistical Inference causal inference To infer causality, you need a randomized experiment (or a huge observational study and lots of outside information). inference to populations Generalizations
More informationStructure learning in human causal induction
Structure learning in human causal induction Joshua B. Tenenbaum & Thomas L. Griffiths Department of Psychology Stanford University, Stanford, CA 94305 jbt,gruffydd @psych.stanford.edu Abstract We use
More informationQuantifying Weather Risk Analysis
Quantifying Weather Risk Analysis Now that an index has been selected and calibrated, it can be used to conduct a more thorough risk analysis. The objective of such a risk analysis is to gain a better
More informationIf we want to analyze experimental or simulated data we might encounter the following tasks:
Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction
More informationEE/CpE 345. Modeling and Simulation. Fall Class 10 November 18, 2002
EE/CpE 345 Modeling and Simulation Class 0 November 8, 2002 Input Modeling Inputs(t) Actual System Outputs(t) Parameters? Simulated System Outputs(t) The input data is the driving force for the simulation
More informationSignal Detection Theory With Finite Mixture Distributions: Theoretical Developments With Applications to Recognition Memory
Psychological Review Copyright 2002 by the American Psychological Association, Inc. 2002, Vol. 109, No. 4, 710 721 0033-295X/02/$5.00 DOI: 10.1037//0033-295X.109.4.710 Signal Detection Theory With Finite
More informationTables Table A Table B Table C Table D Table E 675
BMTables.indd Page 675 11/15/11 4:25:16 PM user-s163 Tables Table A Standard Normal Probabilities Table B Random Digits Table C t Distribution Critical Values Table D Chi-square Distribution Critical Values
More informationPractice Problems Section Problems
Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,
More informationWatershed Modeling With DEMs
Watershed Modeling With DEMs Lesson 6 6-1 Objectives Use DEMs for watershed delineation. Explain the relationship between DEMs and feature objects. Use WMS to compute geometric basin data from a delineated
More informationLinear Regression. In this lecture we will study a particular type of regression model: the linear regression model
1 Linear Regression 2 Linear Regression In this lecture we will study a particular type of regression model: the linear regression model We will first consider the case of the model with one predictor
More informationStatistics 572 Semester Review
Statistics 572 Semester Review Final Exam Information: The final exam is Friday, May 16, 10:05-12:05, in Social Science 6104. The format will be 8 True/False and explains questions (3 pts. each/ 24 pts.
More informationExercise I.1 I.2 I.3 I.4 II.1 II.2 III.1 III.2 III.3 IV.1 Question (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) Answer
Solutions to Exam in 02402 December 2012 Exercise I.1 I.2 I.3 I.4 II.1 II.2 III.1 III.2 III.3 IV.1 Question (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) Answer 3 1 5 2 5 2 3 5 1 3 Exercise IV.2 IV.3 IV.4 V.1
More information1 Using standard errors when comparing estimated values
MLPR Assignment Part : General comments Below are comments on some recurring issues I came across when marking the second part of the assignment, which I thought it would help to explain in more detail
More informationarxiv:astro-ph/ v1 14 Sep 2005
For publication in Bayesian Inference and Maximum Entropy Methods, San Jose 25, K. H. Knuth, A. E. Abbas, R. D. Morris, J. P. Castle (eds.), AIP Conference Proceeding A Bayesian Analysis of Extrasolar
More informationBinary choice 3.3 Maximum likelihood estimation
Binary choice 3.3 Maximum likelihood estimation Michel Bierlaire Output of the estimation We explain here the various outputs from the maximum likelihood estimation procedure. Solution of the maximum likelihood
More informationDRAFT: A2.1 Activity rate Loppersum
DRAFT: A.1 Activity rate Loppersum Rakesh Paleja, Matthew Jones, David Randell, Stijn Bierman April 3, 15 1 Summary On 1st Jan 14, the rate of production in the area of Loppersum was reduced. Here we seek
More informationLeast Absolute Value vs. Least Squares Estimation and Inference Procedures in Regression Models with Asymmetric Error Distributions
Journal of Modern Applied Statistical Methods Volume 8 Issue 1 Article 13 5-1-2009 Least Absolute Value vs. Least Squares Estimation and Inference Procedures in Regression Models with Asymmetric Error
More informationDo students sleep the recommended 8 hours a night on average?
BIEB100. Professor Rifkin. Notes on Section 2.2, lecture of 27 January 2014. Do students sleep the recommended 8 hours a night on average? We first set up our null and alternative hypotheses: H0: μ= 8
More informationStat 516, Homework 1
Stat 516, Homework 1 Due date: October 7 1. Consider an urn with n distinct balls numbered 1,..., n. We sample balls from the urn with replacement. Let N be the number of draws until we encounter a ball
More informationFinding an upper limit in the presence of an unknown background
PHYSICAL REVIEW D 66, 032005 2002 Finding an upper limit in the presence of an unnown bacground S. Yellin* Department of Physics, University of California, Santa Barbara, Santa Barbara, California 93106
More informationUNIVERSITY OF TORONTO Faculty of Arts and Science
UNIVERSITY OF TORONTO Faculty of Arts and Science December 2013 Final Examination STA442H1F/2101HF Methods of Applied Statistics Jerry Brunner Duration - 3 hours Aids: Calculator Model(s): Any calculator
More informationLecture Outline. Biost 518 Applied Biostatistics II. Choice of Model for Analysis. Choice of Model. Choice of Model. Lecture 10: Multiple Regression:
Biost 518 Applied Biostatistics II Scott S. Emerson, M.D., Ph.D. Professor of Biostatistics University of Washington Lecture utline Choice of Model Alternative Models Effect of data driven selection of
More information9/2/2010. Wildlife Management is a very quantitative field of study. throughout this course and throughout your career.
Introduction to Data and Analysis Wildlife Management is a very quantitative field of study Results from studies will be used throughout this course and throughout your career. Sampling design influences
More information!) + log(t) # n i. The last two terms on the right hand side (RHS) are clearly independent of θ and can be
Supplementary Materials General case: computing log likelihood We first describe the general case of computing the log likelihood of a sensory parameter θ that is encoded by the activity of neurons. Each
More informationQuantitative Genomics and Genetics BTRY 4830/6830; PBSB
Quantitative Genomics and Genetics BTRY 4830/6830; PBSB.5201.01 Lecture 20: Epistasis and Alternative Tests in GWAS Jason Mezey jgm45@cornell.edu April 16, 2016 (Th) 8:40-9:55 None Announcements Summary
More informationGlossary for the Triola Statistics Series
Glossary for the Triola Statistics Series Absolute deviation The measure of variation equal to the sum of the deviations of each value from the mean, divided by the number of values Acceptance sampling
More information79 Wyner Math Academy I Spring 2016
79 Wyner Math Academy I Spring 2016 CHAPTER NINE: HYPOTHESIS TESTING Review May 11 Test May 17 Research requires an understanding of underlying mathematical distributions as well as of the research methods
More informationALGEBRA 1 CURRICULUM COMMON CORE BASED
ALGEBRA 1 CURRICULUM COMMON CORE BASED (Supplemented with 8th grade PSSA anchors ) UPPER MERION AREA SCHOOL DISTRICT 435 CROSSFIELD ROAD KING OF PRUSSIA, PA 19406 8/20/2012 PA COMMON CORE ALIGNED MATHEMATICS
More informationLearning Objectives for Stat 225
Learning Objectives for Stat 225 08/20/12 Introduction to Probability: Get some general ideas about probability, and learn how to use sample space to compute the probability of a specific event. Set Theory:
More informationQuestions 3.83, 6.11, 6.12, 6.17, 6.25, 6.29, 6.33, 6.35, 6.50, 6.51, 6.53, 6.55, 6.59, 6.60, 6.65, 6.69, 6.70, 6.77, 6.79, 6.89, 6.
Chapter 7 Reading 7.1, 7.2 Questions 3.83, 6.11, 6.12, 6.17, 6.25, 6.29, 6.33, 6.35, 6.50, 6.51, 6.53, 6.55, 6.59, 6.60, 6.65, 6.69, 6.70, 6.77, 6.79, 6.89, 6.112 Introduction In Chapter 5 and 6, we emphasized
More informationExperimental Design and Data Analysis for Biologists
Experimental Design and Data Analysis for Biologists Gerry P. Quinn Monash University Michael J. Keough University of Melbourne CAMBRIDGE UNIVERSITY PRESS Contents Preface page xv I I Introduction 1 1.1
More informationChapte The McGraw-Hill Companies, Inc. All rights reserved.
er15 Chapte Chi-Square Tests d Chi-Square Tests for -Fit Uniform Goodness- Poisson Goodness- Goodness- ECDF Tests (Optional) Contingency Tables A contingency table is a cross-tabulation of n paired observations
More informationQuantitative Introduction ro Risk and Uncertainty in Business Module 5: Hypothesis Testing
Quantitative Introduction ro Risk and Uncertainty in Business Module 5: Hypothesis Testing M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu October
More informationTrendlines Simple Linear Regression Multiple Linear Regression Systematic Model Building Practical Issues
Trendlines Simple Linear Regression Multiple Linear Regression Systematic Model Building Practical Issues Overfitting Categorical Variables Interaction Terms Non-linear Terms Linear Logarithmic y = a +
More informationFinQuiz Notes
Reading 10 Multiple Regression and Issues in Regression Analysis 2. MULTIPLE LINEAR REGRESSION Multiple linear regression is a method used to model the linear relationship between a dependent variable
More informationECE521 Lectures 9 Fully Connected Neural Networks
ECE521 Lectures 9 Fully Connected Neural Networks Outline Multi-class classification Learning multi-layer neural networks 2 Measuring distance in probability space We learnt that the squared L2 distance
More informationLogistic regression: Miscellaneous topics
Logistic regression: Miscellaneous topics April 11 Introduction We have covered two approaches to inference for GLMs: the Wald approach and the likelihood ratio approach I claimed that the likelihood ratio
More informationAll models are wrong but some are useful. George Box (1979)
All models are wrong but some are useful. George Box (1979) The problem of model selection is overrun by a serious difficulty: even if a criterion could be settled on to determine optimality, it is hard
More informationDistribution Fitting (Censored Data)
Distribution Fitting (Censored Data) Summary... 1 Data Input... 2 Analysis Summary... 3 Analysis Options... 4 Goodness-of-Fit Tests... 6 Frequency Histogram... 8 Comparison of Alternative Distributions...
More informationChapter 26: Comparing Counts (Chi Square)
Chapter 6: Comparing Counts (Chi Square) We ve seen that you can turn a qualitative variable into a quantitative one (by counting the number of successes and failures), but that s a compromise it forces
More informationACTEX CAS EXAM 3 STUDY GUIDE FOR MATHEMATICAL STATISTICS
ACTEX CAS EXAM 3 STUDY GUIDE FOR MATHEMATICAL STATISTICS TABLE OF CONTENTS INTRODUCTORY NOTE NOTES AND PROBLEM SETS Section 1 - Point Estimation 1 Problem Set 1 15 Section 2 - Confidence Intervals and
More informationMath 562 Homework 1 August 29, 2006 Dr. Ron Sahoo
Math 56 Homework August 9, 006 Dr. Ron Sahoo He who labors diligently need never despair; for all things are accomplished by diligence and labor. Menander of Athens Direction: This homework worths 60 points
More informationIntroduction to Bayesian Statistics
Bayesian Parameter Estimation Introduction to Bayesian Statistics Harvey Thornburg Center for Computer Research in Music and Acoustics (CCRMA) Department of Music, Stanford University Stanford, California
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY PHYSICS DEPARTMENT
G. Clark 7oct96 1 MASSACHUSETTS INSTITUTE OF TECHNOLOGY PHYSICS DEPARTMENT 8.13/8.14 Junior Laboratory STATISTICS AND ERROR ESTIMATION The purpose of this note is to explain the application of statistics
More information3.3 Population Decoding
3.3 Population Decoding 97 We have thus far considered discriminating between two quite distinct stimulus values, plus and minus. Often we are interested in discriminating between two stimulus values s
More informationIndex I-1. in one variable, solution set of, 474 solving by factoring, 473 cubic function definition, 394 graphs of, 394 x-intercepts on, 474
Index A Absolute value explanation of, 40, 81 82 of slope of lines, 453 addition applications involving, 43 associative law for, 506 508, 570 commutative law for, 238, 505 509, 570 English phrases for,
More informationHypothesis testing (cont d)
Hypothesis testing (cont d) Ulrich Heintz Brown University 4/12/2016 Ulrich Heintz - PHYS 1560 Lecture 11 1 Hypothesis testing Is our hypothesis about the fundamental physics correct? We will not be able
More informationEncoding or decoding
Encoding or decoding Decoding How well can we learn what the stimulus is by looking at the neural responses? We will discuss two approaches: devise and evaluate explicit algorithms for extracting a stimulus
More informationSummary of Chapters 7-9
Summary of Chapters 7-9 Chapter 7. Interval Estimation 7.2. Confidence Intervals for Difference of Two Means Let X 1,, X n and Y 1, Y 2,, Y m be two independent random samples of sizes n and m from two
More informationFundamentals to Biostatistics. Prof. Chandan Chakraborty Associate Professor School of Medical Science & Technology IIT Kharagpur
Fundamentals to Biostatistics Prof. Chandan Chakraborty Associate Professor School of Medical Science & Technology IIT Kharagpur Statistics collection, analysis, interpretation of data development of new
More informationMachine Learning. VC Dimension and Model Complexity. Eric Xing , Fall 2015
Machine Learning 10-701, Fall 2015 VC Dimension and Model Complexity Eric Xing Lecture 16, November 3, 2015 Reading: Chap. 7 T.M book, and outline material Eric Xing @ CMU, 2006-2015 1 Last time: PAC and
More informationTheorem 1.7 [Bayes' Law]: Assume that,,, are mutually disjoint events in the sample space s.t.. Then Pr( )
Theorem 1.7 [Bayes' Law]: Assume that,,, are mutually disjoint events in the sample space s.t.. Then Pr Pr = Pr Pr Pr() Pr Pr. We are given three coins and are told that two of the coins are fair and the
More informationSum-free sets. Peter J. Cameron University of St Andrews
Sum-free sets Peter J. Cameron University of St Andrews Topological dynamics, functional equations, infinite combinatorics and probability LSE, June 2017 Three theorems A set of natural numbers is k-ap-free
More informationSum-free sets. Peter J. Cameron University of St Andrews
Sum-free sets Peter J. Cameron University of St Andrews Topological dynamics, functional equations, infinite combinatorics and probability LSE, June 2017 Three theorems The missing fourth? A set of natural
More informationMultiple Sample Categorical Data
Multiple Sample Categorical Data paired and unpaired data, goodness-of-fit testing, testing for independence University of California, San Diego Instructor: Ery Arias-Castro http://math.ucsd.edu/~eariasca/teaching.html
More information22s:152 Applied Linear Regression. Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA)
22s:152 Applied Linear Regression Chapter 8: 1-Way Analysis of Variance (ANOVA) 2-Way Analysis of Variance (ANOVA) We now consider an analysis with only categorical predictors (i.e. all predictors are
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 7 Approximate
More informationMedian Statistics Analysis of Non- Gaussian Astrophysical and Cosmological Data Compilations
Median Statistics Analysis of Non- Gaussian Astrophysical and Cosmological Data Compilations Amber Thompson Mentor: Dr. Bharat Ratra Graduate Student: Tia Camarillo Background Motivation Scientific integrity
More informationUsing SPSS for One Way Analysis of Variance
Using SPSS for One Way Analysis of Variance This tutorial will show you how to use SPSS version 12 to perform a one-way, between- subjects analysis of variance and related post-hoc tests. This tutorial
More informationMATH 118 FINAL EXAM STUDY GUIDE
MATH 118 FINAL EXAM STUDY GUIDE Recommendations: 1. Take the Final Practice Exam and take note of questions 2. Use this study guide as you take the tests and cross off what you know well 3. Take the Practice
More informationROBERTO BATTITI, MAURO BRUNATO. The LION Way: Machine Learning plus Intelligent Optimization. LIONlab, University of Trento, Italy, Apr 2015
ROBERTO BATTITI, MAURO BRUNATO. The LION Way: Machine Learning plus Intelligent Optimization. LIONlab, University of Trento, Italy, Apr 2015 http://intelligentoptimization.org/lionbook Roberto Battiti
More informationConfidence Intervals and Hypothesis Tests
Confidence Intervals and Hypothesis Tests STA 281 Fall 2011 1 Background The central limit theorem provides a very powerful tool for determining the distribution of sample means for large sample sizes.
More informationStat 231 Exam 2 Fall 2013
Stat 231 Exam 2 Fall 2013 I have neither given nor received unauthorized assistance on this exam. Name Signed Date Name Printed 1 1. Some IE 361 students worked with a manufacturer on quantifying the capability
More informationProbability Distributions
CONDENSED LESSON 13.1 Probability Distributions In this lesson, you Sketch the graph of the probability distribution for a continuous random variable Find probabilities by finding or approximating areas
More informationStatistical methods and data analysis
Statistical methods and data analysis Teacher Stefano Siboni Aim The aim of the course is to illustrate the basic mathematical tools for the analysis and modelling of experimental data, particularly concerning
More informationINFORMATION PROCESSING ABILITY OF BINARY DETECTORS AND BLOCK DECODERS. Michael A. Lexa and Don H. Johnson
INFORMATION PROCESSING ABILITY OF BINARY DETECTORS AND BLOCK DECODERS Michael A. Lexa and Don H. Johnson Rice University Department of Electrical and Computer Engineering Houston, TX 775-892 amlexa@rice.edu,
More information