Example using R: Heart Valves Study

Goal: Show that the thrombogenicity rate (TR) is less than two times the objective performance criterion (OPC).

Data: From both the current study and a previous study on a similar product (St. Jude mechanical valve).

Model: Let $T$ be the total number of patient-years of followup, and $\theta$ be the TR per year. We assume the number of thrombogenicity events $Y \sim \mathrm{Poisson}(\theta T)$:

$$f(y \mid \theta) = \frac{e^{-\theta T}(\theta T)^y}{y!}.$$

Prior: Assume a $\mathrm{Gamma}(\alpha, \beta)$ prior for $\theta$:

$$p(\theta) = \frac{\theta^{\alpha-1} e^{-\theta/\beta}}{\Gamma(\alpha)\beta^\alpha}, \quad \theta > 0.$$

Heart Valves Study

The gamma prior is conjugate with the likelihood, so the posterior emerges in closed form:

$$p(\theta \mid y) \propto \theta^{y+\alpha-1} e^{-\theta(T + 1/\beta)},$$

i.e., $\theta \mid y \sim \mathrm{Gamma}\left(y + \alpha,\ (T + 1/\beta)^{-1}\right)$.

The study objective is met if

$$P(\theta < 2 \cdot \mathrm{OPC} \mid y) \ge 0.95,$$

where $\mathrm{OPC} = \theta_0 = 0.038$.

Prior selection: Our gamma prior has mean $M = \alpha\beta$ and variance $V = \alpha\beta^2$. This means that if we specify $M$ and $V$, we can solve for $\alpha$ and $\beta$ as $\alpha = M^2/V$ and $\beta = V/M$.
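In R, both the $(M, V) \to (\alpha, \beta)$ conversion and the posterior probability are one-liners. A minimal sketch, using the moderate prior and the $Y = 3$, $T = 200$ scenario from the following pages purely as an illustration:

  # Posterior probability that theta < 2*OPC under the conjugate gamma model
  M <- 98/5891; V <- M^2              # moderate prior: mean and variance
  alpha <- M^2/V; beta <- V/M         # solve for the gamma parameters
  OPC <- 0.038
  y <- 3; Tt <- 200                   # event count and patient-years of followup
  # posterior is Gamma(y + alpha) with rate Tt + 1/beta:
  pgamma(2*OPC, shape = y + alpha, rate = Tt + 1/beta)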

Heart Valves Study

A few possibilities for prior parameters:

- Suppose we set $M = \theta_0 = 0.038$ and $V = (2\theta_0)^2$ (so that the prior standard deviation is twice the mean). Then $\alpha = 0.25$ and $\beta = 0.152$, a rather vague prior.

- Suppose we set $M = 98/5891 = 0.0166$, the overall value from the St. Jude studies, and $V = M^2$ (so 0 is one sd below the mean). Then $\alpha = 1$ and $\beta = 0.0166$, a moderate (exponential) prior.

- Suppose we set $M = 98/5891 = 0.0166$ again, but set $V = M^2/2$. This is a rather informative prior.

We also consider event counts that are lower (1), about the same (3), and much higher (20) than for St. Jude.

The study objective is not met with the bad data unless the posterior is rescued by the informative prior (lower right corner, next page).
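A minimal R sketch reproducing the 3 x 3 grid of posterior probabilities behind the figure on the next page (prior settings as in the bullets above; $T = 200$ throughout):

  # P(theta < 2*OPC | y) for each prior (rows) and event count (columns)
  OPC <- 0.038; Tt <- 200
  priors <- list(vague       = c(M = 0.038,     V = (2 * 0.038)^2),
                 moderate    = c(M = 98 / 5891, V = (98 / 5891)^2),
                 informative = c(M = 98 / 5891, V = (98 / 5891)^2 / 2))
  probs <- sapply(c(1, 3, 20), function(y)
    sapply(priors, function(p) {
      alpha <- p["M"]^2 / p["V"]; beta <- p["V"] / p["M"]
      pgamma(2 * OPC, shape = y + alpha, rate = Tt + 1 / beta)
    }))
  colnames(probs) <- paste0("Y=", c(1, 3, 20))
  round(probs, 3)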

Heart Valves Study

[Figure: Priors and posteriors, Heart Valves ADVANTAGE study, Poisson-gamma model for various prior (M, sd) and data (y) values; T = 200, 2*OPC = 0.076. The 3 x 3 grid of panels crosses the vague, moderate, and informative priors (rows) with Y = 1, 3, and 20 (columns); each panel overlays the prior and posterior densities and reports P(theta < 2*OPC | y).]

S code to create this plot is available in brad/hv.s; try it yourself in S-plus or R.
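A minimal R sketch of a single panel of that figure (the vague prior with $Y = 3$), in the spirit of hv.s:

  # One panel: prior and posterior densities, with the 2*OPC cutpoint
  alpha <- 0.25; beta <- 0.152          # vague prior
  y <- 3; Tt <- 200; OPC <- 0.038
  grid <- seq(0.001, 0.15, length = 500)
  post  <- dgamma(grid, shape = y + alpha, rate = Tt + 1/beta)
  prior <- dgamma(grid, shape = alpha, scale = beta)
  plot(grid, post, type = "l", ylim = range(post, prior),
       xlab = "theta", ylab = "density")
  lines(grid, prior, lty = 2)
  abline(v = 2 * OPC, lty = 3)
  legend("topright", c("posterior", "prior"), lty = 1:2, bty = "n")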

Alternate hierarchical models

One might be uncomfortable with our implicit assumption that the TR is the same in both studies. To handle this, extend to a hierarchical model:

$$Y_i \sim \mathrm{Poisson}(\theta_i T_i), \quad i = 1, 2,$$

where $i = 1$ for St. Jude, and $i = 2$ for the new study.

Borrow strength between studies by assuming $\theta_i \overset{iid}{\sim} \mathrm{Gamma}(\alpha, \beta)$, i.e., the two TRs are exchangeable, but not identical.

We now place a third-stage prior on $\alpha$ and $\beta$, say $\alpha \sim \mathrm{Exp}(a)$ and $\beta \sim \mathrm{IG}(c, d)$.

Fit in WinBUGS using the pump example as a guide!
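As a concrete starting point, here is a minimal sketch of that fit driven from R, with the two-study model written in BUGS syntax following the pump example below. The hyperparameter values, the Y = 3, T = 200 data for the new study, and the use of the R2WinBUGS package are illustrative assumptions, and the dgamma hyperprior on beta stands in for the IG(c, d) above, exactly as in the pump code:

  # Hypothetical sketch: two-study heart-valve model, pump example as template.
  # Assumes the R2WinBUGS package and a local WinBUGS installation.
  library(R2WinBUGS)
  hv.model <- "
  model {
    for (i in 1:2) {
      theta[i] ~ dgamma(alpha, beta)   # exchangeable TRs
      lambda[i] <- theta[i] * PY[i]    # PY = patient-years of followup
      Y[i] ~ dpois(lambda[i])
    }
    alpha ~ dexp(1.0)                  # third-stage prior, a = 1 (assumed)
    beta ~ dgamma(0.1, 1.0)            # rate-parameterized stand-in for IG(c,d)
  }"
  writeLines(hv.model, "hv2.txt")
  data  <- list(Y = c(98, 3), PY = c(5891, 200))   # St. Jude, then the new study
  inits <- function() list(theta = c(0.02, 0.02), alpha = 1, beta = 1)
  fit <- bugs(data, inits, parameters.to.save = c("theta", "alpha", "beta"),
              model.file = "hv2.txt", n.chains = 1, n.iter = 11000, n.burnin = 1000)
  print(fit, digits = 3)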

Example 2.5 revisited again! Pump Example

$$Y_i \mid \theta_i \overset{ind}{\sim} \mathrm{Poisson}(\theta_i t_i), \quad \theta_i \overset{ind}{\sim} G(\alpha, \beta), \quad \alpha \sim \mathrm{Exp}(\mu), \quad \beta \sim \mathrm{IG}(c, d), \quad i = 1, \ldots, k,$$

where $\mu, c, d$, and the $t_i$ are known, and Exp denotes the exponential distribution.

We apply this model to a dataset giving the numbers of pump failures, $Y_i$, observed in $t_i$ thousands of hours for $k = 10$ different systems of a certain nuclear power plant.

The observations are listed in increasing order of raw failure rate $r_i = Y_i / t_i$, the classical point estimate of the true failure rate $\theta_i$ for the $i$th system.

Pump Data

 i    Y_i      t_i      r_i
 1     5     94.320    0.053
 2     1     15.720    0.064
 3     5     62.880    0.080
 4    14    125.760    0.111
 5     3      5.240    0.573
 6    19     31.440    0.604
 7     1      1.048    0.954
 8     1      1.048    0.954
 9     4      2.096    1.908
10    22     10.480    2.099

Hyperparameters: We choose the values $\mu = 1$, $c = 0.1$, and $d = 1.0$, resulting in reasonably vague hyperpriors for $\alpha$ and $\beta$.
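The $r_i$ column follows directly from the data; a quick check in R (the value $t_4 = 125.76$ is taken from the standard pump dataset of Gaver and O'Muircheartaigh):

  # Classical rate estimates r_i = Y_i / t_i for the pump data
  Y <- c(5, 1, 5, 14, 3, 19, 1, 1, 4, 22)
  t <- c(94.320, 15.72, 62.88, 125.76, 5.24, 31.44, 1.048, 1.048, 2.096, 10.48)
  r <- Y / t
  round(r, 3)          # 0.053 0.064 0.080 0.111 0.573 0.604 0.954 0.954 1.908 2.099
  all(diff(r) >= 0)    # TRUE: listed in increasing order of raw failure rate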

Pump Example

Recall that the full conditional distributions for the $\theta_i$ and $\beta$ are available in closed form (gamma and inverse gamma, respectively), but that no conjugate prior for $\alpha$ exists.

However, the full conditional for $\alpha$,

$$p(\alpha \mid \beta, \{\theta_i\}, y) \propto \left[\prod_{i=1}^k g(\theta_i \mid \alpha, \beta)\right] h(\alpha) \propto \left[\prod_{i=1}^k \frac{\theta_i^{\alpha-1}}{\Gamma(\alpha)\beta^\alpha}\right] e^{-\alpha/\mu},$$

can be shown to be log-concave in $\alpha$. Thus WinBUGS uses adaptive rejection sampling for this parameter.
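To see why: on the log scale the $\alpha$-dependent terms are $(\alpha - 1)\sum_i \log\theta_i - k\log\Gamma(\alpha) - k\alpha\log\beta - \alpha/\mu$, whose second derivative is $-k\,\psi'(\alpha) < 0$ for all $\alpha > 0$ ($\psi'$ the trigamma function). A small numerical illustration in R, with placeholder values for $\theta$, $\beta$, and $\mu$:

  # Numerical look at the log full conditional for alpha (concave in alpha).
  # theta, beta, mu below are illustrative placeholder values.
  theta <- c(0.06, 0.10, 0.09, 0.11, 0.60); k <- length(theta)
  beta <- 0.5; mu <- 1
  log.fc <- function(a) (a - 1)*sum(log(theta)) - k*lgamma(a) - k*a*log(beta) - a/mu
  a <- seq(0.1, 5, by = 0.01)
  all(-k * trigamma(a) < 0)   # second derivative: negative for every a > 0
  plot(a, log.fc(a), type = "l", ylab = "log full conditional (up to a constant)")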

WinBUGS code to fit this model

model {
  for (i in 1:k) {
    theta[i] ~ dgamma(alpha, beta)
    lambda[i] <- theta[i] * t[i]
    Y[i] ~ dpois(lambda[i])
  }
  alpha ~ dexp(1.0)
  beta ~ dgamma(0.1, 1.0)
}

DATA:
list(k = 10, Y = c(5, 1, 5, 14, 3, 19, 1, 1, 4, 22),
     t = c(94.320, 15.72, 62.88, 125.76, 5.24, 31.44, 1.048, 1.048, 2.096, 10.48))

INITS:
list(theta = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1), alpha = 1, beta = 1)

Pump Example Results

Results from running 1000 burn-in samples, followed by a production run of 10,000 samples (single chain):

[Table: posterior summaries (node, mean, sd, MC error, 2.5%, median, 97.5%) for alpha, beta, theta[1], theta[5], theta[6], and theta[10].]

Note that while $\theta_5$ and $\theta_6$ have very similar posterior means, the latter posterior is much narrower (smaller sd).

This is because, while the crude failure rates for the two pumps are similar, the latter is based on a far greater number of hours of observation ($t_6 = 31.44$, while $t_5 = 5.24$). Hence we know more about pump 6!
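For readers without WinBUGS, here is a minimal all-R sketch of the same sampler. The gamma updates use the BUGS rate parameterization from the code above; the random-walk Metropolis step for $\alpha$ (step size 0.5) is an assumption standing in for WinBUGS's adaptive rejection sampling:

  # Gibbs sampler for the pump model in plain R (sketch)
  Y <- c(5, 1, 5, 14, 3, 19, 1, 1, 4, 22)
  t <- c(94.320, 15.72, 62.88, 125.76, 5.24, 31.44, 1.048, 1.048, 2.096, 10.48)
  k <- length(Y); n.iter <- 11000
  theta <- rep(1, k); alpha <- 1; beta <- 1
  out <- matrix(NA, n.iter, k + 2)
  log.fc.alpha <- function(a, th, b)      # log full conditional, up to a constant
    k*(a*log(b) - lgamma(a)) + (a - 1)*sum(log(th)) - a
  for (s in 1:n.iter) {
    theta <- rgamma(k, shape = Y + alpha, rate = t + beta)              # conjugate
    beta  <- rgamma(1, shape = 0.1 + k*alpha, rate = 1.0 + sum(theta))  # conjugate
    prop  <- rnorm(1, alpha, 0.5)                                       # Metropolis
    if (prop > 0 && log(runif(1)) <
        log.fc.alpha(prop, theta, beta) - log.fc.alpha(alpha, theta, beta))
      alpha <- prop
    out[s, ] <- c(alpha, beta, theta)
  }
  out <- out[-(1:1000), ]                  # discard burn-in
  round(apply(out, 2, quantile, c(.025, .5, .975)), 3)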

PK Example

Wakefield et al. (1994) consider a dataset for which

Y_ij = plasma concentration of the drug Cadralazine,
x_ij = time elapsed since dose given,

where $i = 1, \ldots, 10$ indexes the patient, while $j = 1, \ldots, n_i$ indexes the observations, $5 \le n_i \le 8$.

Attempt to fit the one-compartment nonlinear pharmacokinetic (PK) model

$$\eta_{ij}(x_{ij}) = 30\,\alpha_i^{-1} \exp(-\beta_i x_{ij}/\alpha_i),$$

where $\eta_{ij}(x_{ij})$ is the mean plasma concentration at time $x_{ij}$.

PK Example

This model is best fit on the log scale, i.e.,

$$Z_{ij} \equiv \log Y_{ij} = \log \eta_{ij}(x_{ij}) + \epsilon_{ij},$$

where $\epsilon_{ij} \overset{ind}{\sim} N(0, \tau_i)$.

The mean structure for the $Z_{ij}$s thus emerges as

$$\log \eta_{ij}(x_{ij}) = \log\left[30\,\alpha_i^{-1} \exp(-\beta_i x_{ij}/\alpha_i)\right] = \log 30 - \log\alpha_i - \beta_i x_{ij}/\alpha_i = \log 30 - a_i - \exp(b_i - a_i)\,x_{ij},$$

where $a_i = \log\alpha_i$ and $b_i = \log\beta_i$.
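In code the reparametrized mean function is a one-liner; a small sketch (the parameter values are purely illustrative):

  # Mean log-concentration under the reparametrized one-compartment model
  log.eta <- function(x, a, b) log(30) - a - exp(b - a) * x

  # Illustrative values of a_i = log(alpha_i) and b_i = log(beta_i):
  curve(log.eta(x, a = 3, b = 1), from = 0, to = 30,
        xlab = "hours since dose", ylab = "mean log concentration")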

PK Data

[Table: plasma concentrations for each of the 10 patients at each number of hours following drug administration, x.]

[Figure: PK data plotted on the original scale, Y versus x.]

[Figure: PK data plotted on the log scale, log Y versus x.]

PK Example

For the subject-specific random effects $\theta_i \equiv (a_i, b_i)'$, assume $\theta_i \overset{iid}{\sim} N_2(\mu, \Omega)$, where $\mu = (\mu_a, \mu_b)'$.

Usual conjugate prior specification:

$$\mu \sim N_2(\lambda, C), \quad \tau_i \overset{iid}{\sim} G(\nu_0/2,\ \nu_0\tau_0/2), \quad \Omega \sim \mathrm{Wishart}((\rho R)^{-1}, \rho).$$

Note that the $\theta_i$ full conditional distributions are:
- not simple conjugate forms
- not guaranteed to be log-concave

Thus, the Metropolis capability of WinBUGS is required.
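For intuition about the Wishart piece of this specification, draws are available directly in base R; the $\rho$ and $R$ values below are illustrative placeholders:

  # Draw Omega ~ Wishart((rho*R)^{-1}, rho) using base R's rWishart
  rho <- 2
  R <- diag(c(0.1, 0.1))
  Omega <- rWishart(1, df = rho, Sigma = solve(rho * R))[, , 1]
  Omega          # one simulated 2x2 precision matrix
  solve(Omega)   # its implied covariance matrix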

PK Results (WinBUGS vs. Fortran)

[Table: posterior mean, sd, and lag-1 autocorrelation for a selection of the $a_i$, $b_i$, and $\tau_i$ parameters, plus predictions for two missing observations (patients 2 and 7), as computed by BUGS and by the Fortran sampler of Sargent et al. (2000).]

Lip Cancer Example

Consider the spatial disease mapping model:

$$Y_i \mid \mu_i \overset{ind}{\sim} \mathrm{Po}(E_i e^{\mu_i}),$$

where $Y_i$ = observed disease count, $E_i$ = expected count (known), and

$$\mu_i = x_i'\beta + \theta_i + \phi_i.$$

The $x_i$ are explanatory spatial covariates; typically $\beta$ is assigned a flat prior.

Note the mean structure also contains two sets of random effects! The first, the $\theta_i$, capture heterogeneity among the regions via $\theta_i \overset{iid}{\sim} N(0, 1/\tau_h)$.

Lip Cancer Example

The second set, the $\phi_i$, capture regional clustering via a conditionally autoregressive (CAR) prior,

$$\phi_i \mid \phi_{j \ne i} \sim N\left(\bar\phi_i,\ 1/(\tau_c m_i)\right),$$

where $m_i$ is the number of neighbors of region $i$, and $\bar\phi_i = m_i^{-1} \sum_{j \sim i} \phi_j$, the average over region $i$'s neighbors.

The CAR prior is translation invariant, so typically we insist $\sum_{i=1}^I \phi_i = 0$ (imposed numerically after each MCMC iteration). Still, $Y_i$ cannot inform about $\theta_i$ or $\phi_i$, but only about their sum $\eta_i = \theta_i + \phi_i$.

Making the reparametrization from $(\theta, \phi)$ to $(\theta, \eta)$, we have the joint posterior

$$p(\theta, \eta \mid y) \propto L(\eta; y)\, p(\theta)\, p(\eta \mid \theta).$$
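The CAR full-conditional mean and the sum-to-zero recentering are easy to express directly; a small R sketch on a hypothetical four-region map (the adjacency list is invented for illustration, not the Scotland data):

  # CAR prior mechanics on a toy 4-region map (hypothetical adjacency)
  adj <- list(c(2, 3), c(1, 3, 4), c(1, 2), c(2))   # neighbors of regions 1..4
  m   <- lengths(adj)                               # m_i = number of neighbors
  tau.c <- 1
  phi <- rnorm(4)

  i <- 1                                            # full-conditional draw for phi_1
  phi.bar <- mean(phi[adj[[i]]])
  phi[i] <- rnorm(1, mean = phi.bar, sd = 1/sqrt(tau.c * m[i]))

  phi <- phi - mean(phi)    # sum-to-zero constraint, imposed after each sweep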

Lip Cancer Example

This means that

$$p(\theta_i \mid \theta_{j \ne i}, \eta, y) \propto p(\theta_i)\, p\left(\eta_i - \theta_i \mid \{\eta_j - \theta_j\}_{j \ne i}\right).$$

Since this distribution is free of the data $y$, the $\theta_i$ are Bayesianly unidentified (and so are the $\phi_i$).

BUT this does not preclude Bayesian learning about $\theta_i$; precluding learning would instead require $p(\theta_i \mid y) = p(\theta_i)$. [Stronger condition: the data have no impact on the marginal (not conditional) posterior.]

Lip Cancer Example

Dilemma: Though unidentified, the $\theta_i$ and $\phi_i$ are interesting in their own right, as is

$$\psi = \frac{sd(\phi)}{sd(\theta) + sd(\phi)},$$

where $sd(\cdot)$ is the empirical marginal standard deviation function.

Can we specify vague but proper priors for $\tau_h$ and $\tau_c$ that
- lead to acceptable convergence behavior, and
- still allow Bayesian learning?

It is tricky to specify a fair prior balance between heterogeneity and clustering (e.g., one for which $\psi \approx 1/2$), since the $\theta_i$ prior is specified marginally while the $\phi_i$ prior is specified conditionally.

Dataset: Scottish lip cancer data

[Figure: two maps of Scotland's districts, panels (a) and (b).]

a) $SMR_i = 100 Y_i / E_i$, the standardized mortality ratio for lip cancer in $I = 56$ districts;

b) one covariate, $x_i$ = percentage of the population engaged in agriculture, fishing, or forestry (AFF).

WinBUGS code to fit this model

model {
  for (i in 1:regions) {
    O[i] ~ dpois(mu[i])
    log(mu[i]) <- log(e[i]) + beta*aff[i]/10 + phi[i] + theta[i]
    theta[i] ~ dnorm(0.0, tau.h)
    eta[i] <- theta[i] + phi[i]
  }
  phi[1:regions] ~ car.normal(adj[], weights[], num[], tau.c)
  beta ~ dnorm(0.0, 1.0E-5)       # vague prior on covariate effect
  tau.h ~ dgamma(1.0E-3, 1.0E-3)  # fair prior from Best et al.
  tau.c ~ dgamma(1.0E-1, 1.0E-1)  # (1999, Bayesian Statistics 6)
  sd.h <- sd(theta[])  # marginal SD of heterogeneity effects
  sd.c <- sd(phi[])    # marginal SD of clustering (spatial) effects
  psi <- sd.c / (sd.h + sd.c)
}

(See the WinBUGS Map manual for the DATA and INITS for this example.)

Lip Cancer Results

[Table: posterior mean, sd, and lag-1 autocorrelation for $\psi$, $\beta$, $\eta_1$, and $\eta_{56}$ under three priors for $(\tau_c, \tau_h)$: G(1.0, 1.0) and G(3.2761, 1.81); G(.1, .1) and G(.32761, .181); G(.1, .1) and G(.001, .001).]

- The AFF covariate is significantly greater than 0 under all 3 priors.
- Convergence is slow for $\psi$ and $\beta$, but rapid for the $\eta_i$ (and $\mu_i$).
- Excess variability in the data is mostly due to clustering ($E(\psi \mid y) > .50$), but the posterior distribution for $\psi$ does not seem robust to changes in the prior.

InterStim Example

Device that uses electrical stimulation of the sacral nerve to prevent urinary incontinence. For patients $i = 1, \ldots, 49$:

X_1i = number of incontinence episodes per week at baseline
X_2i = number of incontinence episodes per week at 3 months
X_3i = number of incontinence episodes per week at 6 months
X_4i = number of incontinence episodes per week at 12 months

[Table: the observed $X$ values by patient; several entries are missing (NA).]

InterStim Example

Goal 1: Obtain full predictive inference for all missing $X$ values (point and interval estimates).

Goal 2: Obtain a measure of percent improvement (relative to baseline) due to InterStim at 6 and 12 months.

Model: Let $X_i = (X_{1i}, X_{2i}, X_{3i}, X_{4i})'$ and $\theta = (\theta_1, \theta_2, \theta_3, \theta_4)'$. Clearly the $X_{ij}$s are correlated, but an ordinary longitudinal model does not seem appropriate (we can't just use a linear model here). So instead, maintain the generality:

$$X_i \mid \theta, \Upsilon \overset{iid}{\sim} N_4(\theta, \Upsilon^{-1}), \quad \theta \sim N_4(\mu, \Omega^{-1}), \quad \Upsilon \sim \mathrm{Wishart}_4(R, \rho).$$

InterStim Example

WinBUGS will generate all missing $X$s (the NAs in the dataset) from their full conditional distributions as part of the Gibbs algorithm. Thus we will obtain samples from $p(X_{ij} \mid X_{obs})$ for all missing $X_{ij}$ (achieving Goal 1).

Re: Goal 2 (percent improvement from baseline), let

$$\alpha = \frac{\theta_1 - \theta_3}{\theta_1} \quad \text{and} \quad \beta = \frac{\theta_1 - \theta_4}{\theta_1}.$$

Then $p(\alpha \mid X_{obs})$ and $p(\beta \mid X_{obs})$ address this issue!

Hyperparameters: all vague: $\Omega = \mathrm{Diag}(10^{-6}, \ldots, 10^{-6})$, $\mu = 0$, $\rho = 4$ (the smallest value for which the Wishart prior for $\Upsilon$ is proper), and $R = \mathrm{Diag}(10^{-1}, \ldots, 10^{-1})$.

Code and Data: brad/data/interstim.odc
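Given posterior draws of $\theta$, Goal 2 is a one-line transformation in R; the matrix name and the placeholder draws below are assumptions for illustration only:

  # Posterior percent improvement at 6 and 12 months from MCMC draws of theta.
  # 'theta' is assumed to be an (iterations x 4) matrix of posterior samples.
  theta <- matrix(rgamma(4000, 5), ncol = 4)          # placeholder draws
  alpha <- (theta[, 1] - theta[, 3]) / theta[, 1]     # improvement at 6 months
  beta  <- (theta[, 1] - theta[, 4]) / theta[, 1]     # improvement at 12 months
  quantile(alpha, c(.025, .5, .975))                  # point and interval summaries
  quantile(beta,  c(.025, .5, .975))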
