BIOS 312: Precision of Statistical Inference
1 BIOS 312: Precision of Statistical Inference
Chris Slaughter
Department of Biostatistics, Vanderbilt University School of Medicine
January 3, 2013
2 Outline
1. Overview
2. Precision
3. Hypothesis Testing
4. Power and Sample Size
5. Precision and Standard Errors
6. Summary
3 Bias and precision
The goal of statistical inference is to estimate parameters accurately (unbiased) and with high precision.
Measures of precision:
- Standard error (not standard deviation)
- Width of confidence intervals
- Power (equivalently, the type II error rate)
4 Summary measures
Scientific hypotheses are typically refined into statistical hypotheses by identifying some parameter, θ, measuring differences in the distribution of the response variable.
Often we are interested in whether θ differs across levels of categorical (e.g., treatment/control) or continuous (e.g., age) predictor variables.
θ could be any summary measure, such as:
- Difference/ratio of means
- Difference/ratio of medians
- Ratio of geometric means
- Difference/ratio of proportions
- Odds ratio, relative risk, risk difference
- Hazard ratio
5 Choosing a summary measure
How to select θ? In order of importance:
1. Scientific (clinical) importance. May be based on the current state of knowledge.
2. Is θ likely to vary across the predictor of interest? This impacts the ability to detect a difference, if it exists.
3. Statistical precision. Only relevant if all other factors are equal.
6 Statistical inference
Statistics is concerned with making inference about population parameters (θ) based on a sample of data.
- Frequentist estimation includes both point estimates (θ̂) and interval estimates (confidence intervals).
- Bayesian analysis estimates the posterior distribution of θ given the sampled data, p(θ | data). The posterior distribution can then be summarized by quantities like the posterior mean and 95% credible interval.
- Likelihood analysis focuses on using the likelihood function to obtain maximum likelihood estimates. The likelihood function can be used directly to obtain upper and lower confidence-type intervals for estimates.
7 Example
Consider the following results from 5 clinical trials of three drugs (A, B, C) designed to lower cholesterol compared to baseline. Assume a 10 unit drop in cholesterol (relative to baseline) is clinically meaningful.

Trial  Drug  Pts  Mean diff  Std dev  Std error  95% CI for diff  p-value
1      A                                         [-129, 69]
2      A                                         [-49.6, -10.4]
3      B                                         [-85, 45]
4      B                                         [-8.5, 4.5]
5      C                                         [-9.9, -2.1]     0.002

Which drug is effective at reducing cholesterol? Why is study 4 more informative than study 3 (even though the p-values are similar)?

Moral: Hypothesis tests and p-values can often be insufficient to make proper decisions. The confidence interval provides more useful information.
11 Sampling distribution defined
The sampling distribution is the probability distribution of a statistic; e.g., the sampling distribution of the sample mean is N(µ, σ²/n).
Most often we choose estimators that are asymptotically Normally distributed: for large n, θ̂ ~ N(θ, V/n).
- θ̂ is our estimate of θ. The hat indicates it is an estimate.
- Mean: θ
- Variance: V/n, where V is related to the average amount of statistical information available from each observation. Often V depends on θ.
- How large n must be depends on the distribution of the underlying data. If n is large enough, approximate Normality of θ̂ will hold.
12 Confidence intervals when n is large
Calculating 100(1 − α)% confidence intervals (θ_L, θ_U) using approximate Normality:
θ_L = θ̂ − z_{1−α/2} √(V/n)
θ_U = θ̂ + z_{1−α/2} √(V/n)
i.e., (estimate) ± (critical value) × (standard error of estimate)
We can similarly calculate approximate two-sided p-values:
Z = ((estimate) − (hypothesized value)) / (standard error of estimate)
p-value = 2(1 − Φ(|Z|))
In Stata, use norm; in R, use the pnorm() function.
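This calculation can be sketched in R; the numbers here are illustrative, not taken from the trials above:

```r
# Large-sample CI and two-sided p-value from an estimate, its standard
# error, and a hypothesized value (all numbers illustrative)
est <- -6; se <- 2; theta0 <- 0; alpha <- 0.05
crit <- qnorm(1 - alpha/2)                 # critical value z_{1-alpha/2}
ci <- c(est - crit * se, est + crit * se)  # (estimate) +/- (crit val)(std err)
z <- (est - theta0) / se
p <- 2 * (1 - pnorm(abs(z)))               # two-sided p-value
ci   # roughly (-9.92, -2.08)
p    # roughly 0.0027
```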
13 Comparing independent estimates
If the estimates are independent and Normally distributed,
θ̂_1 ~ N(θ_1, se_1²) and θ̂_2 ~ N(θ_2, se_2²)
then
θ̂_1 − θ̂_2 ~ N(θ_1 − θ_2, se_1² + se_2²)
θ̂_1 + θ̂_2 ~ N(θ_1 + θ_2, se_1² + se_2²)
θ̂_1 / θ̂_2 ≈ N(θ_1/θ_2, se_1²/θ_2² + θ_1² se_2²/θ_2⁴)
14 Comparing correlated estimates
If the estimates are correlated and Normally distributed,
θ̂_1 ~ N(θ_1, se_1²) and θ̂_2 ~ N(θ_2, se_2²), with ρ = corr(θ̂_1, θ̂_2)
then
θ̂_1 − θ̂_2 ~ N(θ_1 − θ_2, se_1² + se_2² − 2 ρ se_1 se_2)
θ̂_1 + θ̂_2 ~ N(θ_1 + θ_2, se_1² + se_2² + 2 ρ se_1 se_2)
Example: comparing results from the same study.
- The paper may not give the interesting results (from your point of view)
- The comparison can be difficult because the correlation is usually not reported
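As a sketch (with made-up estimates and a made-up correlation), the difference of two correlated estimates and its standard error can be computed as:

```r
# se of a difference of correlated estimates:
# se(diff) = sqrt(se1^2 + se2^2 - 2*rho*se1*se2); all numbers hypothetical
compare_correlated <- function(est1, se1, est2, se2, rho) {
  diff <- est1 - est2
  se_diff <- sqrt(se1^2 + se2^2 - 2 * rho * se1 * se2)
  c(diff = diff, se = se_diff, z = diff / se_diff)
}
compare_correlated(est1 = -12, se1 = 3, est2 = -5, se2 = 3, rho = 0.5)
# Treating the estimates as independent (rho = 0) would overstate the se here.
```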
16 Classical hypothesis testing
Classical hypothesis testing is stated in terms of the null hypothesis (H0). The alternative hypothesis (H1) is the complement of H0.
- Two-sided: H0: θ = θ0 vs H1: θ ≠ θ0
- One-sided: H0: θ ≥ θ0 vs H1: θ < θ0
- One-sided: H0: θ ≤ θ0 vs H1: θ > θ0
Inference is based on either rejecting or failing to reject the null hypothesis.
Typically, the null hypothesis is stated in some form so as to indicate no association.
17 Classical hypothesis testing thought process
- Assumes H0 is true
- Conceives of data as one of many datasets that might have happened
- See if the data are consistent with H0: are the data extreme or unlikely if H0 is really true?
- Proof by contradiction: if assuming H0 is true leads to results that are bizarre or unlikely to have been observed, this casts doubt on the premise
- Evidence is summarized through a single statistic capturing a tendency of the data, e.g., x̄
- Look at the probability of getting a statistic as or more extreme than the calculated one (results as or more impressive than ours) if H0 is true (the p-value)
24 Classical hypothesis testing thought process cont.
- If the statistic has a low probability of being observed to be this extreme, we say that if H0 is true we have acquired data that are very improbable, i.e., we have witnessed a low-probability event
- Then evidence mounts against H0 and we might reject it
- A failure to reject does not imply that we have gathered evidence in favor of H0; there are many reasons for studies to not be impressive, including small sample size (n)
Key limitation: classical hypothesis testing ignores clinical significance. An approach that allows us to make informed decisions is preferable.
28 Decision theoretic approach
Stated in terms of the null hypothesis and a suitably chosen design alternative.
Summarize the design alternative through θ1 (θ1 > 0):
- Two-sided: H0: θ = θ0 vs H1: θ ≤ θ0 − θ1 or θ ≥ θ0 + θ1
- One-sided: H0: θ ≤ θ0 vs H1: θ ≥ θ0 + θ1
- One-sided: H0: θ ≥ θ0 vs H1: θ ≤ θ0 − θ1
Using the decision theoretic approach, we can conclude:
- Reject the null hypothesis: the data are atypical of what we would expect if the null hypothesis were true
- Reject the alternative hypothesis: the data are atypical of what we would expect if the alternative hypothesis were true
31 Decision theoretic approach cont.
Key difference from the classical approach: the design alternative (θ1) is ideally chosen to be the minimal important difference to detect, based on scientific or clinical criteria.
- Clinical significance: in the cholesterol example, the important difference was assumed to be 10 mg/dl
- Economic impact: a new drug is not marketable unless it has a large effect
- Feasibility of study: limited availability of subjects may limit investigators to searching for interventions with large impact
Remember the cholesterol example: studies 2, 4, and 5 follow the decision theoretic approach because they allow us to discriminate between scientifically meaningful hypotheses.
34 Measures of high precision
What are the measures of (high) precision?
- Estimators are less variable across studies, which is often measured by a decreased standard error
- Narrower confidence intervals: estimators are consistent with fewer hypotheses if the CIs are narrow
- Able to reject false hypotheses: the Z statistic is higher when the alternative hypothesis is true
Translation into sample size:
- Based on the width of the confidence interval: choose a sample size such that a 95% CI will not contain both the null and the design alternative. If θ0 and θ1 cannot both be in the CI, we have discriminated between those hypotheses.
- Based on statistical power: when the alternative is true, have a high probability of rejecting the null. In other words, minimize the type II error rate.
36 Statistical power: quick review
Power is the probability of rejecting the null hypothesis when the alternative is true: Pr(reject H0 | θ = θ1).
Most often θ̂ ~ N(θ, V/n), so the test statistic Z = (θ̂ − θ0)/√(V/n) will follow a Normal distribution.
- Under H0, Z ~ N(0, 1), so we reject H0 if |Z| > z_{1−α/2}
- Under H1, Z ~ N((θ1 − θ0)/√(V/n), 1)
Power curves:
- The power function (power curve) is a function of the true value of θ; we can compute power for every value of θ
- As θ moves away from θ0, power increases (for two-sided alternatives)
- For any choice of desired power, there is always some θ such that the study has that power
- Pwr(θ0) = α, the type I error rate
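The Normal-approximation power function implied by these two distributions can be written directly. A sketch in R, using the same two-sample setup as the power-curve example (sd = 1, n = 100 per group) and an assumed true difference of 0.4:

```r
# Two-sided power under the Normal approximation:
# Pwr(theta1) = Phi(-z + shift) + Phi(-z - shift),
# where shift = (theta1 - theta0) / sqrt(V/n) and z = z_{1-alpha/2}
approx_power <- function(theta1, theta0 = 0, se, alpha = 0.05) {
  z <- qnorm(1 - alpha/2)
  shift <- (theta1 - theta0) / se
  pnorm(-z + shift) + pnorm(-z - shift)
}
se <- sqrt(1/100 + 1/100)    # se of a difference of two means, sd = 1, n = 100 per group
approx_power(0.4, se = se)   # close to power.t.test(n = 100, sd = 1, delta = 0.4)$power
approx_power(0, se = se)     # Pwr(theta0) = alpha = 0.05
```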
38 [Figure: Power curves for a two-sample, equal-variance t-test with n = 100; power plotted against the true difference in means (θ) for σ = 1 and σ = 1.2.]
39 Code for generating example power curve

mydiffs <- seq(-0.8, 0.8, 0.05)
mypower <- vector("numeric", length(mydiffs))
mypower2 <- vector("numeric", length(mydiffs))
for (i in 1:length(mydiffs)) {
    mypower[i] <- power.t.test(n = 100, sd = 1, delta = mydiffs[i])$power
    mypower2[i] <- power.t.test(n = 100, sd = 1.2, delta = mydiffs[i])$power
}
plot(mydiffs, mypower, xlab = "True difference in means (theta)",
     ylab = "Power", type = "l", main = "")
lines(mydiffs, mypower2, lty = 2)
legend("top", c(expression(sigma == 1), expression(sigma == 1.2)),
       lty = 1:2, inset = 0.05)
41 Precision and standard errors
Standard errors are the key to precision:
- Greater precision is achieved with smaller standard errors
- Standard errors are decreased by either decreasing V or increasing n
Typically:
se(θ̂) = √(V/n)
Width of CI: 2 × (critical value) × se(θ̂)
Test statistic: Z = (θ̂ − θ0)/se(θ̂)
42 Example: One sample mean
Observations are independent and identically distributed (iid):
Y_i iid~ (µ, σ²), i = 1, …, n
θ = µ, θ̂ = (1/n) Σ_{i=1}^n Y_i = Ȳ
V = σ², se(θ̂) = √(σ²/n)
- Note that we are not assuming a specific distribution for Y_i, just that the distribution has a mean and variance
- We are assuming that n is large, so asymptotic results are applicable
- The distribution of Y_i could then be binary, Poisson, exponential, Normal, etc., and the results will hold
There are ways to decrease V, including:
- Restrict the sample by age, gender, etc.
- Take repeated measures on each subject, summarize, and perform the test on the summary measures
- Better ideas (this course): adjust for age and gender; use all the data while modeling the correlation
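A quick simulation illustrates the distribution-free claim: with exponential data (mean 1, variance 1), the standard error of the sample mean is still √(σ²/n). This is a sketch with arbitrary simulation settings:

```r
# Empirical se of the sample mean for non-Normal (exponential) data
set.seed(42)
n <- 200
means <- replicate(5000, mean(rexp(n, rate = 1)))  # 5000 sample means
sd(means)      # empirical se, close to the theoretical value
sqrt(1 / n)    # theoretical se = sqrt(sigma^2 / n), with sigma^2 = 1
```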
45 Example: Two sample mean
Difference of independent means. Observations are no longer identically distributed, just independent: group 1 has a different mean and variance than group 2.
Y_ij ind~ (µ_j, σ_j²), j = 1, 2; i = 1, …, n_j
n = n1 + n2; r = n1/n2
θ = µ1 − µ2, θ̂ = Ȳ1 − Ȳ2
V = (r + 1)(σ1²/r + σ2²)
se(θ̂) = √(V/n) = √(σ1²/n1 + σ2²/n2)
46 Comments on the optimal ratio of sample sizes (r)
If we are constrained by the maximal sample size n = n1 + n2:
- V is smallest when r = n1/n2 = σ1/σ2
- In other words, V is smaller if we sample more subjects from the more variable group
If we are unconstrained by the maximal sample size, there is a point of diminishing returns.
- Example: a case-control study where finding cases is difficult/expensive but finding controls is easy/cheap
- An often-quoted choice is r = 5
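A one-line numerical check of the r = σ1/σ2 claim, using the V formula from the two-sample-mean slide (the values s1 = 3, s2 = 1 are chosen arbitrarily):

```r
# V(r) for fixed total n = n1 + n2; minimized at r = s1/s2
V <- function(r, s1, s2) (r + 1) * (s1^2 / r + s2^2)
opt <- optimize(V, interval = c(0.01, 50), s1 = 3, s2 = 1)
opt$minimum   # close to 3 = s1/s2
```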
47 [Figure: Optimal r for fixed n1 + n2. Standard error plotted against the sample size ratio r = n1/n2 for s1 = s2, s1 = 2*s2, and s1 = 3*s2; the minima occur at r = 1, r = 2, and r = 3 respectively, i.e., at r = s1/s2.]
48 [Figure: Diminishing returns for increasing the sample size ratio r beyond about 5. Standard error plotted against r = n1/n2 for s1 = s2, s1 = 2*s2, and s1 = 3*s2.]
49 Code for optimal sample size ratio for fixed sample size

var.fn <- function(r, s1, s2) {
    (r + 1) * (s1^2/r + s2^2)
}
n <- 100
s2 <- 10
plot(function(r) sqrt(var.fn(r, s1 = s2, s2 = s2)/n), 0, 20,
     ylim = c(1, 6), xlim = c(0, 25), ylab = "Standard Error",
     xlab = "Sample Size Ratio r = n1/n2",
     main = "Optimal r for Fixed (n1 + n2): r = s1 / s2")
plot(function(r) sqrt(var.fn(r, s1 = 2 * s2, s2 = s2)/n), 0, 20, add = TRUE, lty = 2)
plot(function(r) sqrt(var.fn(r, s1 = 3 * s2, s2 = s2)/n), 0, 20, add = TRUE, lty = 3)
text(20, 4.7, "s1 = s2", pos = 4)
text(20, 5.1, "s1 = 2*s2", pos = 4)
text(20, 5.5, "s1 = 3*s2", pos = 4)
points(c(1, 2, 3), sqrt(var.fn(c(1, 2, 3), s1 = c(1, 2, 3) * s2, s2 = s2)/n), pch = 2)
text(1, 1.8, "r = 1")
text(2, 2.8, "r = 2")
text(3, 3.8, "r = 3")
50 Code for diminishing returns for increasing sample size ratio

n1 <- 200
plot(function(r) sqrt(var.fn(r, s1 = s2, s2 = s2)/(n1 + r * n1)), 0, 20,
     ylim = c(0.5, 3), xlim = c(0, 25), ylab = "Standard Error",
     xlab = "Sample Size Ratio r = n1/n2",
     main = "Diminishing returns for r > 5")
plot(function(r) sqrt(var.fn(r, s1 = 2 * s2, s2 = s2)/(n1 + r * n1)), 0, 20, add = TRUE, lty = 2)
plot(function(r) sqrt(var.fn(r, s1 = 3 * s2, s2 = s2)/(n1 + r * n1)), 0, 20, add = TRUE, lty = 3)
text(20, 0.7, "s1 = s2", pos = 4)
text(20, 0.8, "s1 = 2*s2", pos = 4)
text(20, 0.9, "s1 = 3*s2", pos = 4)
51 Example: Paired means
Difference of paired means. No longer iid: group 1 has a different mean and variance than group 2, and observations are paired (correlated).
Y_ij ~ (µ_j, σ_j²), j = 1, 2; i = 1, …, n
corr(Y_i1, Y_i2) = ρ; corr(Y_ij, Y_mk) = 0 if i ≠ m
θ = µ1 − µ2, θ̂ = Ȳ1 − Ȳ2
V = σ1² + σ2² − 2ρσ1σ2, se(θ̂) = √(V/n)
Precision gains are made when matched observations are positively correlated (ρ > 0). This is usually the case, but possible exceptions include:
- Sleep on successive nights
- Intrauterine growth of litter-mates
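A sketch of the paired-means standard error with hypothetical values, showing the gain from positive correlation:

```r
# se of a paired difference: V = s1^2 + s2^2 - 2*rho*s1*s2, se = sqrt(V/n)
paired_se <- function(s1, s2, rho, n) sqrt((s1^2 + s2^2 - 2 * rho * s1 * s2) / n)
paired_se(s1 = 10, s2 = 10, rho = 0.6, n = 50)  # about 1.26 with pairing
paired_se(s1 = 10, s2 = 10, rho = 0.0, n = 50)  # 2.0 if the pairs were independent
```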
52 Example: Clustered data
Clustered data: experiments where treatments/interventions are assigned on the basis of households, schools, clinics, cities, etc.
Mean of clustered data:
Y_ij ~ (µ, σ²), i = 1, …, n; j = 1, …, m
Up to n clusters, each of which has m subjects
corr(Y_ij, Y_ik) = ρ if j ≠ k; corr(Y_ij, Y_mk) = 0 if i ≠ m
θ = µ, θ̂ = (1/(nm)) Σ_{i=1}^n Σ_{j=1}^m Y_ij = Ȳ
V = σ² (1 + (m − 1)ρ)/m, se(θ̂) = √(V/n)
What is V if…
- ρ = 0 (independent)?
- m = 1?
- m is large (e.g., m = 1000) and ρ is 0, 1, or 0.01?
54 Clustered data cont.
With clustered data, even small correlations can be very important to consider. Equal precision can be achieved by different combinations of the number of clusters (n), the cluster size (m), and the correlation ρ, with very different total sample sizes N.
Always consider practical issues: is it easier/cheaper to collect 1 observation on 1000 different subjects, or 100 observations on 20 different subjects?
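The "even small correlations matter" point can be made concrete through the inflation factor 1 + (m − 1)ρ implied by the V formula for clustered data; a sketch with illustrative values:

```r
# Design effect: how much V is inflated relative to independent observations
deff <- function(m, rho) 1 + (m - 1) * rho
deff(m = 100,  rho = 0.01)  # 1.99: rho = 0.01 nearly doubles the required total N
deff(m = 2,    rho = 0.01)  # 1.01: small clusters, almost no penalty
deff(m = 1000, rho = 0)     # 1: independent observations
```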
56 Example: Independent odds ratios
Binary outcomes:
Y_ij ind~ B(1, p_j), i = 1, …, n_j; j = 1, 2
n = n1 + n2; r = n1/n2
θ = log( (p1/(1 − p1)) / (p2/(1 − p2)) ); θ̂ = log( (p̂1/(1 − p̂1)) / (p̂2/(1 − p̂2)) )
σ_j² = 1/(p_j (1 − p_j)) = 1/(p_j q_j)
V = (r + 1)(σ1²/r + σ2²)
se(θ̂) = √(V/n) = √( 1/(n1 p̂1 q̂1) + 1/(n2 p̂2 q̂2) )
Notes on maximum precision:
- Maximum precision is achieved when the underlying odds are near 1 (proportions near 0.5)
- If we were considering differences in proportions, maximum precision is achieved when the underlying proportions are near 0 or 1
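As a sketch with made-up counts (40/100 events in group 1 vs 25/100 in group 2), the log odds ratio and its large-sample CI are:

```r
# Log odds ratio with se(log OR) = sqrt(1/(n1*p1*q1) + 1/(n2*p2*q2));
# the counts below are hypothetical
log_or_ci <- function(x1, n1, x2, n2, conf = 0.95) {
  p1 <- x1 / n1; p2 <- x2 / n2
  est <- log((p1 / (1 - p1)) / (p2 / (1 - p2)))
  se  <- sqrt(1 / (n1 * p1 * (1 - p1)) + 1 / (n2 * p2 * (1 - p2)))
  z   <- qnorm(1 - (1 - conf) / 2)
  c(log_or = est, lower = est - z * se, upper = est + z * se)
}
exp(log_or_ci(40, 100, 25, 100))  # odds ratio (here 2.0) and its 95% CI
```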
57 Example: Hazard ratios
Independent censored time-to-event outcomes:
(T_ij, δ_ij), i = 1, …, n_j; j = 1, 2
n = n1 + n2; r = n1/n2
θ = log(HR); θ̂ = β̂ from proportional hazards (PH) regression
V = (r + 1)(1/r + 1) / Pr(δ_ij = 1)
se(θ̂) = √(V/n) = √( (r + 1)(1/r + 1)/d )
- In the PH model, statistical information is roughly proportional to d, the number of observed events
- Papers always report the number of events
- Study design must consider how long it will take to observe events (e.g., deaths) starting from randomization
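A sketch of the events-drive-precision point: with equal allocation (r = 1), the approximation above reduces to se(log HR) = √(4/d):

```r
# se(log HR) from the slides' approximation, driven by the event count d
se_loghr <- function(d, r = 1) sqrt((r + 1) * (1/r + 1) / d)
se_loghr(d = 100)  # 0.2
se_loghr(d = 400)  # 0.1: quadrupling the events halves the standard error
```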
58 Example: Linear regression
Independent continuous outcomes associated with covariates:
Y_i | X_i ind~ (β0 + β1 X_i, σ²_{Y|X}), i = 1, …, n
θ = β1, θ̂ = β̂1 from least squares regression
V = σ²_{Y|X} / Var(X), se(θ̂) = √( σ̂²_{Y|X} / (n Vâr(X)) )
- Var(X) tends to increase as the predictor X is measured over a wider range
- Precision is also related to the within-group variance σ²_{Y|X}
- What happens to the formulas when X is a binary variable? See the two-sample mean.
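A simulated check (arbitrary settings) that the slope formula matches what lm() reports; note that Var(X) here uses the n denominator:

```r
# Compare se(beta1_hat) = sqrt(s2_YX / (n * Var(X))) with lm()'s standard error
set.seed(1)
n <- 500
x <- runif(n, 0, 10)
y <- 2 + 0.5 * x + rnorm(n, sd = 3)
fit <- summary(lm(y ~ x))
s2_yx <- fit$sigma^2                 # residual variance estimate
varx  <- mean((x - mean(x))^2)       # Var(X) with denominator n, not n - 1
sqrt(s2_yx / (n * varx))             # equals ...
fit$coefficients["x", "Std. Error"]  # ... the lm() slope standard error
```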
60 Summary
Options for increasing precision:
- Increase the sample size
- Decrease V
- (Decrease the confidence level)
Criteria for precision:
- Standard error
- Width of confidence intervals
- Statistical power: select a suitable design alternative and a desired power
61 Summary cont.
Sample size calculation: the number of sampling units needed to obtain the desired precision, given
- level of significance α when θ = θ0
- power β when θ = θ1
- variability V within one sampling unit
n = (z_{1−α/2} + z_β)² V / (θ1 − θ0)²
When the sample size is constrained (the usual case), either:
- Compute the power to detect a specified alternative:
  β = Φ( (θ1 − θ0)/√(V/n) − z_{1−α/2} )
  where Φ is the standard Normal cdf (in Stata, use normprob for Φ)
- Compute the alternative that can be detected with high power:
  θ1 = θ0 + (z_{1−α/2} + z_β) √(V/n)
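The sample size formula can be sketched in R and compared with power.t.test for a two-sample comparison of means (per-group V = 2σ² under equal allocation; the effect size and sd are illustrative):

```r
# n per group from n = (z_{1-alpha/2} + z_beta)^2 * V / (theta1 - theta0)^2,
# with V = 2 * sigma^2 for a difference of two means, equal allocation
n_per_group <- function(delta, sigma, alpha = 0.05, power = 0.90) {
  (qnorm(1 - alpha/2) + qnorm(power))^2 * 2 * sigma^2 / delta^2
}
n_per_group(delta = 0.5, sigma = 1)               # about 84
power.t.test(delta = 0.5, sd = 1, power = 0.9)$n  # slightly larger: the t-test costs a bit more
```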
62 General comments
- The required sample size behaves like the square of the width of the CI: to cut the width of the CI in half, you need to quadruple the sample size.
- Positively correlated observations within the same group provide less precision than the same number of independent observations.
- Positively correlated observations across groups provide more precision.
- What power do you use? Most popular is 80% (too low) or 90%. The key is to be able to discriminate between scientifically meaningful hypotheses.
More informationMath 494: Mathematical Statistics
Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/
More informationTMA 4275 Lifetime Analysis June 2004 Solution
TMA 4275 Lifetime Analysis June 2004 Solution Problem 1 a) Observation of the outcome is censored, if the time of the outcome is not known exactly and only the last time when it was observed being intact,
More informationClassification. Chapter Introduction. 6.2 The Bayes classifier
Chapter 6 Classification 6.1 Introduction Often encountered in applications is the situation where the response variable Y takes values in a finite set of labels. For example, the response Y could encode
More informationStatistical Inference
Statistical Inference Bernhard Klingenberg Institute of Statistics Graz University of Technology Steyrergasse 17/IV, 8010 Graz www.statistics.tugraz.at February 12, 2008 Outline Estimation: Review of concepts
More informationTwo-stage Adaptive Randomization for Delayed Response in Clinical Trials
Two-stage Adaptive Randomization for Delayed Response in Clinical Trials Guosheng Yin Department of Statistics and Actuarial Science The University of Hong Kong Joint work with J. Xu PSI and RSS Journal
More informationPreliminary Statistics Lecture 5: Hypothesis Testing (Outline)
1 School of Oriental and African Studies September 2015 Department of Economics Preliminary Statistics Lecture 5: Hypothesis Testing (Outline) Gujarati D. Basic Econometrics, Appendix A.8 Barrow M. Statistics
More informationMath 494: Mathematical Statistics
Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/
More informationIntroduction to Statistical Analysis
Introduction to Statistical Analysis Changyu Shen Richard A. and Susan F. Smith Center for Outcomes Research in Cardiology Beth Israel Deaconess Medical Center Harvard Medical School Objectives Descriptive
More informationMS&E 226: Small Data
MS&E 226: Small Data Lecture 12: Frequentist properties of estimators (v4) Ramesh Johari ramesh.johari@stanford.edu 1 / 39 Frequentist inference 2 / 39 Thinking like a frequentist Suppose that for some
More informationSYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions
SYSM 6303: Quantitative Introduction to Risk and Uncertainty in Business Lecture 4: Fitting Data to Distributions M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu
More informationLecture 5: ANOVA and Correlation
Lecture 5: ANOVA and Correlation Ani Manichaikul amanicha@jhsph.edu 23 April 2007 1 / 62 Comparing Multiple Groups Continous data: comparing means Analysis of variance Binary data: comparing proportions
More informationHypothesis Testing. ECE 3530 Spring Antonio Paiva
Hypothesis Testing ECE 3530 Spring 2010 Antonio Paiva What is hypothesis testing? A statistical hypothesis is an assertion or conjecture concerning one or more populations. To prove that a hypothesis is
More informationReview of Statistics 101
Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods
More informationSTAT 4385 Topic 01: Introduction & Review
STAT 4385 Topic 01: Introduction & Review Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso xsu@utep.edu Spring, 2016 Outline Welcome What is Regression Analysis? Basics
More informationMarginal versus conditional effects: does it make a difference? Mireille Schnitzer, PhD Université de Montréal
Marginal versus conditional effects: does it make a difference? Mireille Schnitzer, PhD Université de Montréal Overview In observational and experimental studies, the goal may be to estimate the effect
More informationFrailty Modeling for Spatially Correlated Survival Data, with Application to Infant Mortality in Minnesota By: Sudipto Banerjee, Mela. P.
Frailty Modeling for Spatially Correlated Survival Data, with Application to Infant Mortality in Minnesota By: Sudipto Banerjee, Melanie M. Wall, Bradley P. Carlin November 24, 2014 Outlines of the talk
More informationInterval estimation. October 3, Basic ideas CLT and CI CI for a population mean CI for a population proportion CI for a Normal mean
Interval estimation October 3, 2018 STAT 151 Class 7 Slide 1 Pandemic data Treatment outcome, X, from n = 100 patients in a pandemic: 1 = recovered and 0 = not recovered 1 1 1 0 0 0 1 1 1 0 0 1 0 1 0 0
More informationOne-stage dose-response meta-analysis
One-stage dose-response meta-analysis Nicola Orsini, Alessio Crippa Biostatistics Team Department of Public Health Sciences Karolinska Institutet http://ki.se/en/phs/biostatistics-team 2017 Nordic and
More informationSPRING 2007 EXAM C SOLUTIONS
SPRING 007 EXAM C SOLUTIONS Question #1 The data are already shifted (have had the policy limit and the deductible of 50 applied). The two 350 payments are censored. Thus the likelihood function is L =
More informationTesting Independence
Testing Independence Dipankar Bandyopadhyay Department of Biostatistics, Virginia Commonwealth University BIOS 625: Categorical Data & GLM 1/50 Testing Independence Previously, we looked at RR = OR = 1
More informationStat 5102 Final Exam May 14, 2015
Stat 5102 Final Exam May 14, 2015 Name Student ID The exam is closed book and closed notes. You may use three 8 1 11 2 sheets of paper with formulas, etc. You may also use the handouts on brand name distributions
More informationSTAT 135 Lab 5 Bootstrapping and Hypothesis Testing
STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,
More informationInference for Single Proportions and Means T.Scofield
Inference for Single Proportions and Means TScofield Confidence Intervals for Single Proportions and Means A CI gives upper and lower bounds between which we hope to capture the (fixed) population parameter
More informationME3620. Theory of Engineering Experimentation. Spring Chapter IV. Decision Making for a Single Sample. Chapter IV
Theory of Engineering Experimentation Chapter IV. Decision Making for a Single Sample Chapter IV 1 4 1 Statistical Inference The field of statistical inference consists of those methods used to make decisions
More informationDecision theory. 1 We may also consider randomized decision rules, where δ maps observed data D to a probability distribution over
Point estimation Suppose we are interested in the value of a parameter θ, for example the unknown bias of a coin. We have already seen how one may use the Bayesian method to reason about θ; namely, we
More informationBeyond GLM and likelihood
Stat 6620: Applied Linear Models Department of Statistics Western Michigan University Statistics curriculum Core knowledge (modeling and estimation) Math stat 1 (probability, distributions, convergence
More informationPsychology 282 Lecture #4 Outline Inferences in SLR
Psychology 282 Lecture #4 Outline Inferences in SLR Assumptions To this point we have not had to make any distributional assumptions. Principle of least squares requires no assumptions. Can use correlations
More informationStatistical Data Analysis Stat 3: p-values, parameter estimation
Statistical Data Analysis Stat 3: p-values, parameter estimation London Postgraduate Lectures on Particle Physics; University of London MSci course PH4515 Glen Cowan Physics Department Royal Holloway,
More informationIntroduction to Statistical Inference
Introduction to Statistical Inference Ping Yu Department of Economics University of Hong Kong Ping Yu (HKU) Statistics 1 / 30 1 Point Estimation 2 Hypothesis Testing Ping Yu (HKU) Statistics 2 / 30 The
More informationGeneral Regression Model
Scott S. Emerson, M.D., Ph.D. Department of Biostatistics, University of Washington, Seattle, WA 98195, USA January 5, 2015 Abstract Regression analysis can be viewed as an extension of two sample statistical
More informationStatistical Inference
Statistical Inference Classical and Bayesian Methods Revision Class for Midterm Exam AMS-UCSC Th Feb 9, 2012 Winter 2012. Session 1 (Revision Class) AMS-132/206 Th Feb 9, 2012 1 / 23 Topics Topics We will
More informationOne-sample categorical data: approximate inference
One-sample categorical data: approximate inference Patrick Breheny October 6 Patrick Breheny Biostatistical Methods I (BIOS 5710) 1/25 Introduction It is relatively easy to think about the distribution
More informationDiscrete Multivariate Statistics
Discrete Multivariate Statistics Univariate Discrete Random variables Let X be a discrete random variable which, in this module, will be assumed to take a finite number of t different values which are
More informationLecture 1: Bayesian Framework Basics
Lecture 1: Bayesian Framework Basics Melih Kandemir melih.kandemir@iwr.uni-heidelberg.de April 21, 2014 What is this course about? Building Bayesian machine learning models Performing the inference of
More informationSwarthmore Honors Exam 2012: Statistics
Swarthmore Honors Exam 2012: Statistics 1 Swarthmore Honors Exam 2012: Statistics John W. Emerson, Yale University NAME: Instructions: This is a closed-book three-hour exam having six questions. You may
More informationPubh 8482: Sequential Analysis
Pubh 8482: Sequential Analysis Joseph S. Koopmeiners Division of Biostatistics University of Minnesota Week 8 P-values When reporting results, we usually report p-values in place of reporting whether or
More informationUNIVERSITY OF MASSACHUSETTS. Department of Mathematics and Statistics. Basic Exam - Applied Statistics. Tuesday, January 17, 2017
UNIVERSITY OF MASSACHUSETTS Department of Mathematics and Statistics Basic Exam - Applied Statistics Tuesday, January 17, 2017 Work all problems 60 points are needed to pass at the Masters Level and 75
More informationReports of the Institute of Biostatistics
Reports of the Institute of Biostatistics No 02 / 2008 Leibniz University of Hannover Natural Sciences Faculty Title: Properties of confidence intervals for the comparison of small binomial proportions
More informationImproving Efficiency of Inferences in Randomized Clinical Trials Using Auxiliary Covariates
Improving Efficiency of Inferences in Randomized Clinical Trials Using Auxiliary Covariates Anastasios (Butch) Tsiatis Department of Statistics North Carolina State University http://www.stat.ncsu.edu/
More informationEconometrics I KS. Module 2: Multivariate Linear Regression. Alexander Ahammer. This version: April 16, 2018
Econometrics I KS Module 2: Multivariate Linear Regression Alexander Ahammer Department of Economics Johannes Kepler University of Linz This version: April 16, 2018 Alexander Ahammer (JKU) Module 2: Multivariate
More informationProblem Selected Scores
Statistics Ph.D. Qualifying Exam: Part II November 20, 2010 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. Problem 1 2 3 4 5 6 7 8 9 10 11 12 Selected
More informationInverse Sampling for McNemar s Test
International Journal of Statistics and Probability; Vol. 6, No. 1; January 27 ISSN 1927-7032 E-ISSN 1927-7040 Published by Canadian Center of Science and Education Inverse Sampling for McNemar s Test
More informationBios 6649: Clinical Trials - Statistical Design and Monitoring
Bios 6649: Clinical Trials - Statistical Design and Monitoring Spring Semester 2015 John M. Kittelson Department of Biostatistics & Informatics Colorado School of Public Health University of Colorado Denver
More informationMachine Learning Linear Classification. Prof. Matteo Matteucci
Machine Learning Linear Classification Prof. Matteo Matteucci Recall from the first lecture 2 X R p Regression Y R Continuous Output X R p Y {Ω 0, Ω 1,, Ω K } Classification Discrete Output X R p Y (X)
More informationGroup Sequential Designs: Theory, Computation and Optimisation
Group Sequential Designs: Theory, Computation and Optimisation Christopher Jennison Department of Mathematical Sciences, University of Bath, UK http://people.bath.ac.uk/mascj 8th International Conference
More informationMultiple Regression Analysis
Multiple Regression Analysis y = β 0 + β 1 x 1 + β 2 x 2 +... β k x k + u 2. Inference 0 Assumptions of the Classical Linear Model (CLM)! So far, we know: 1. The mean and variance of the OLS estimators
More informationIntroductory Econometrics. Review of statistics (Part II: Inference)
Introductory Econometrics Review of statistics (Part II: Inference) Jun Ma School of Economics Renmin University of China October 1, 2018 1/16 Null and alternative hypotheses Usually, we have two competing
More informationHypothesis Testing The basic ingredients of a hypothesis test are
Hypothesis Testing The basic ingredients of a hypothesis test are 1 the null hypothesis, denoted as H o 2 the alternative hypothesis, denoted as H a 3 the test statistic 4 the data 5 the conclusion. The
More informationBias Variance Trade-off
Bias Variance Trade-off The mean squared error of an estimator MSE(ˆθ) = E([ˆθ θ] 2 ) Can be re-expressed MSE(ˆθ) = Var(ˆθ) + (B(ˆθ) 2 ) MSE = VAR + BIAS 2 Proof MSE(ˆθ) = E((ˆθ θ) 2 ) = E(([ˆθ E(ˆθ)]
More informationSTAT420 Midterm Exam. University of Illinois Urbana-Champaign October 19 (Friday), :00 4:15p. SOLUTIONS (Yellow)
STAT40 Midterm Exam University of Illinois Urbana-Champaign October 19 (Friday), 018 3:00 4:15p SOLUTIONS (Yellow) Question 1 (15 points) (10 points) 3 (50 points) extra ( points) Total (77 points) Points
More informationInterim Monitoring of Clinical Trials: Decision Theory, Dynamic Programming. and Optimal Stopping
Interim Monitoring of Clinical Trials: Decision Theory, Dynamic Programming and Optimal Stopping Christopher Jennison Department of Mathematical Sciences, University of Bath, UK http://people.bath.ac.uk/mascj
More informationTopic 12 Overview of Estimation
Topic 12 Overview of Estimation Classical Statistics 1 / 9 Outline Introduction Parameter Estimation Classical Statistics Densities and Likelihoods 2 / 9 Introduction In the simplest possible terms, the
More informationUnobservable Parameter. Observed Random Sample. Calculate Posterior. Choosing Prior. Conjugate prior. population proportion, p prior:
Pi Priors Unobservable Parameter population proportion, p prior: π ( p) Conjugate prior π ( p) ~ Beta( a, b) same PDF family exponential family only Posterior π ( p y) ~ Beta( a + y, b + n y) Observed
More informationMS&E 226: Small Data
MS&E 226: Small Data Lecture 15: Examples of hypothesis tests (v5) Ramesh Johari ramesh.johari@stanford.edu 1 / 32 The recipe 2 / 32 The hypothesis testing recipe In this lecture we repeatedly apply the
More informationBayesian Inference on Joint Mixture Models for Survival-Longitudinal Data with Multiple Features. Yangxin Huang
Bayesian Inference on Joint Mixture Models for Survival-Longitudinal Data with Multiple Features Yangxin Huang Department of Epidemiology and Biostatistics, COPH, USF, Tampa, FL yhuang@health.usf.edu January
More informationStatistical Methods III Statistics 212. Problem Set 2 - Answer Key
Statistical Methods III Statistics 212 Problem Set 2 - Answer Key 1. (Analysis to be turned in and discussed on Tuesday, April 24th) The data for this problem are taken from long-term followup of 1423
More informationVisual interpretation with normal approximation
Visual interpretation with normal approximation H 0 is true: H 1 is true: p =0.06 25 33 Reject H 0 α =0.05 (Type I error rate) Fail to reject H 0 β =0.6468 (Type II error rate) 30 Accept H 1 Visual interpretation
More informationMultivariate Survival Analysis
Multivariate Survival Analysis Previously we have assumed that either (X i, δ i ) or (X i, δ i, Z i ), i = 1,..., n, are i.i.d.. This may not always be the case. Multivariate survival data can arise in
More informationLecture Outline. Biost 518 Applied Biostatistics II. Choice of Model for Analysis. Choice of Model. Choice of Model. Lecture 10: Multiple Regression:
Biost 518 Applied Biostatistics II Scott S. Emerson, M.D., Ph.D. Professor of Biostatistics University of Washington Lecture utline Choice of Model Alternative Models Effect of data driven selection of
More information2.830J / 6.780J / ESD.63J Control of Manufacturing Processes (SMA 6303) Spring 2008
MIT OpenCourseWare http://ocw.mit.edu 2.830J / 6.780J / ESD.63J Control of Processes (SMA 6303) Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
More informationMAXIMUM LIKELIHOOD, SET ESTIMATION, MODEL CRITICISM
Eco517 Fall 2004 C. Sims MAXIMUM LIKELIHOOD, SET ESTIMATION, MODEL CRITICISM 1. SOMETHING WE SHOULD ALREADY HAVE MENTIONED A t n (µ, Σ) distribution converges, as n, to a N(µ, Σ). Consider the univariate
More informationEcon 325: Introduction to Empirical Economics
Econ 325: Introduction to Empirical Economics Chapter 9 Hypothesis Testing: Single Population Ch. 9-1 9.1 What is a Hypothesis? A hypothesis is a claim (assumption) about a population parameter: population
More informationHarvard University. Rigorous Research in Engineering Education
Statistical Inference Kari Lock Harvard University Department of Statistics Rigorous Research in Engineering Education 12/3/09 Statistical Inference You have a sample and want to use the data collected
More informationStat 5101 Lecture Notes
Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random
More informationLecture 2: Statistical Decision Theory (Part I)
Lecture 2: Statistical Decision Theory (Part I) Hao Helen Zhang Hao Helen Zhang Lecture 2: Statistical Decision Theory (Part I) 1 / 35 Outline of This Note Part I: Statistics Decision Theory (from Statistical
More informationSection Comparing Two Proportions
Section 8.2 - Comparing Two Proportions Statistics 104 Autumn 2004 Copyright c 2004 by Mark E. Irwin Comparing Two Proportions Two-sample problems Want to compare the responses in two groups or treatments
More informationSample Size and Power I: Binary Outcomes. James Ware, PhD Harvard School of Public Health Boston, MA
Sample Size and Power I: Binary Outcomes James Ware, PhD Harvard School of Public Health Boston, MA Sample Size and Power Principles: Sample size calculations are an essential part of study design Consider
More informationSample Size and Power Considerations for Longitudinal Studies
Sample Size and Power Considerations for Longitudinal Studies Outline Quantities required to determine the sample size in longitudinal studies Review of type I error, type II error, and power For continuous
More informationAdvanced Herd Management Probabilities and distributions
Advanced Herd Management Probabilities and distributions Anders Ringgaard Kristensen Slide 1 Outline Probabilities Conditional probabilities Bayes theorem Distributions Discrete Continuous Distribution
More informationREGRESSION ANALYSIS FOR TIME-TO-EVENT DATA THE PROPORTIONAL HAZARDS (COX) MODEL ST520
REGRESSION ANALYSIS FOR TIME-TO-EVENT DATA THE PROPORTIONAL HAZARDS (COX) MODEL ST520 Department of Statistics North Carolina State University Presented by: Butch Tsiatis, Department of Statistics, NCSU
More informationApplied Econometrics (QEM)
Applied Econometrics (QEM) based on Prinicples of Econometrics Jakub Mućk Department of Quantitative Economics Jakub Mućk Applied Econometrics (QEM) Meeting #3 1 / 42 Outline 1 2 3 t-test P-value Linear
More informationAccounting for Baseline Observations in Randomized Clinical Trials
Accounting for Baseline Observations in Randomized Clinical Trials Scott S Emerson, MD, PhD Department of Biostatistics, University of Washington, Seattle, WA 9895, USA October 6, 0 Abstract In clinical
More informationSTATS 200: Introduction to Statistical Inference. Lecture 29: Course review
STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout
More informationParameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn
Parameter estimation and forecasting Cristiano Porciani AIfA, Uni-Bonn Questions? C. Porciani Estimation & forecasting 2 Temperature fluctuations Variance at multipole l (angle ~180o/l) C. Porciani Estimation
More information