Metric Predicted Variable With One Nominal Predictor Variable


1 Metric Predicted Variable With One Nominal Predictor Variable Tim Frasier. Copyright Tim Frasier. This work is licensed under the Creative Commons Attribution 4.0 International license.

2 Goals & General Idea

3 Goals When would we use this type of analysis? When we want to know the effect of being in a group on some metric predicted variable. This is a very common type of data set: monetary income (metric) and political party (nominal); drug effect across groups; etc. Frequently analyzed with a single-factor (or one-way) ANOVA.

4 General Idea Trying to quantify the relationship between two different sets of data. One (y) is the metric response (or predicted) variable. The other (x) is the nominal predictor variable that represents the categories to which measurements, samples, or individuals can belong.

5-8 Equation y_i = β0 + Σ_j β_j x_j[i], or equivalently μ = β0 + β_j for an observation in group j (Kruschke 2015, p. 555). Here β0 is the mean value of y across all groupings, and β_j is the degree to which values are deflected above or below that mean value, based on being in group j. By definition the deflections sum to zero: Σ_j β_j = 0. We will add this constraint to our model.
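To make the deflections concrete, here is a small sketch in R with made-up numbers (hypothetical values, not the course data): the grand mean b0 is the mean of the group means, each group's deflection b1 is its group mean minus b0, and by construction the deflections sum to zero.

```r
# Hypothetical inbreeding values for three groups (not the horse data set)
y <- c(0.05, 0.07, 0.15, 0.18, 0.25, 0.30)
group <- factor(c("adult", "adult", "juvenile", "juvenile", "foal", "foal"))

groupmeans <- tapply(y, group, mean)  # mean of y within each group
b0 <- mean(groupmeans)                # grand mean across the groups
b1 <- groupmeans - b0                 # deflection of each group from b0

b1
sum(b1)                               # sums to zero (up to floating-point error)
```

This is the same sum-to-zero constraint that the JAGS model below imposes when it converts the a coefficients to b coefficients.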

9 The Data

10 Data Suppose you're studying a horse population that had a large die-off in one year (from which you've obtained samples). You're interested in the effect of inbreeding on survival in this population. We may expect the following patterns.

11 Data Prediction & logic for each age class:
Adults: lowest degree of inbreeding (have survived previous selection events; inbred individuals have already been "weeded out").
Juveniles: moderate degree of inbreeding (have survived some, but not too many, previous selection events; some inbred individuals have been "weeded out").
Foals: highest degree of inbreeding (have not had to survive any previous selection events; no inbred individuals have been "weeded out" yet, except in this event).

12 Data Metric predicted variable with one nominal predictor:
Inbreeding Coefficient   Age Class
0.12                     adult
0.23                     juvenile
0.06                     adult
0.22                     foal
0.34                     foal
0.17                     juvenile

13 Getting A Feel for the Data Before we can create an appropriate model, we need to get a feel for the data. I think two main plots would be useful here: a box plot grouped by age class, and points grouped by age class.

14 Getting A Feel for the Data Put the simhorsedata.csv file in R's working directory, then load it into R:
horsedata <- read.table("simhorsedata.csv", header = TRUE, sep = ",")

15-16 Getting A Feel for the Data Make a box plot. First make the aclass field a factor:
aclass <- factor(horsedata$aclass, levels = c("adult", "juvenile", "foal"), ordered = TRUE)
Note that we're specifying the order here, so that adult will be 1, juvenile will be 2, and foal will be 3. Otherwise, these would be ordered alphabetically (i.e., adult, foal, juvenile).

17 Getting A Feel for the Data Make the boxplot:
boxplot(horsedata$ic ~ aclass, ylab = "Inbreeding Coefficient", xlab = "Age Class", col = "skyblue")
[Boxplot: Inbreeding Coefficient (y) by Age Class (x): adult, juvenile, foal]

18-26 Getting A Feel for the Data Can also plot the points to see the raw data (requires a few tricks):
plot(as.numeric(aclass), horsedata$ic, xaxt = "n", ylab = "Inbreeding Coefficient", xlab = "Age Class", pch = 16, col = rgb(0, 0, 0, 0.25))
axis(1, at = c(1, 2, 3), labels = c("adult", "juvenile", "foal"))
Notes on this code:
- Have to use the numeric form of aclass; otherwise plot() defaults to a box plot.
- xaxt = "n" tells R not to plot the x-axis labels (we'll add our own later); these would be numbers.
- pch = 16 and col = rgb(0, 0, 0, 0.25) make the points filled black circles that are partially transparent, so that you can see where there is overlap.
- axis() is the function for adding a custom axis. Its first argument is the position of the axis: 1 = bottom, 2 = left, 3 = top, 4 = right.
- at gives the positions at which to add our custom labels (remember that as numeric, our categories are 1, 2, and 3), and labels gives what the labels should be at those positions.
[Scatterplot: Inbreeding Coefficient (y) by Age Class (x): adult, juvenile, foal]

27-28 Getting A Feel for the Data This is a little too clumped to be useful. Can jitter the x-values of the points to make it clearer; this adds random noise to the data on the specified axis:
plot(jitter(as.numeric(aclass)), horsedata$ic, xaxt = "n", ylab = "Inbreeding Coefficient", xlab = "Age Class", pch = 16, col = rgb(0, 0, 1, 0.5))
axis(1, at = c(1, 2, 3), labels = c("adult", "juvenile", "foal"))
[Jittered scatterplot: Inbreeding Coefficient by Age Class]

29 Frequentist Approach

30 Frequentist Approach Again, these types of data are typically analyzed with an ANOVA, which can be run with the aov function:
anovatest <- aov(horsedata$ic ~ horsedata$aclass)

31-34 Frequentist Approach Can get the coefficient estimates by looking at the model tables:
print(model.tables(anovatest))
The output is a table of effects for horsedata$aclass, with one column each for adult, foal, and juvenile, plus a rep row giving the number of subjects in each category. Adults have lower inbreeding coefficients than the population average, those of juveniles are slightly lower than average, and those of foals are substantially above average. Note the lack of confidence intervals on the coefficient estimates.

35 Frequentist Approach Can see if this effect is significant:
summary(anovatest)
The summary table (Df, Sum Sq, Mean Sq, F value) gives Pr(>F) < 2e-16 *** for horsedata$aclass. Yep.

36 Bayesian Approach

37 Load Libraries & Functions
library(runjags)
library(coda)
source("plotpost.r")

38 Prepare the Data Standardize the metric (y) data:
y <- horsedata$ic
ymean <- mean(y)
ysd <- sd(y)
zy <- (y - ymean) / ysd
N <- length(y)

39-42 Prepare the Data Organize the nominal (x) data:
x <- as.numeric(horsedata$aclass)
xnames <- levels(as.factor(horsedata$aclass))
nageclass <- length(unique(horsedata$aclass))
This saves the nominal data as x in numeric form (1, 2, and 3 instead of "adult", "foal", and "juvenile"), makes a vector (xnames) with the names of each nominal category (xnames is now "adult", "foal", "juvenile"), and gets the number of categories in the data set (unique values in our nominal data).

43 Define the Model

44-48 Define the Model [Kruschke-style model diagram, built up over several slides: each y_i is drawn from a normal distribution with mean μ_i and precision τ = 1/σ²; the baseline and the group deflections that make up μ_i each get a normal prior with mean 0 and standard deviation 10; σ gets a gamma(α, β) prior.] Same as before, the data are just coded differently! (Will have some differences in the actual model.)

49-51 Define the Model
modelstring = "
model {
    #--- Likelihood ---#
    for (i in 1:N) {
        y[i] ~ dnorm(mu[i], tau)
        mu[i] <- a0 + a1[x[i]]
    }
    #--- Priors ---#
    a0 ~ dnorm(0, 1/10^2)
    for (j in 1:nageclass) {
        a1[j] ~ dnorm(0, 1/10^2)
    }
    sigma ~ dgamma(1.1, 0.11)
    tau <- 1 / sigma^2
... (there's more)
Notes: indexing a1 by x[i] is how to indicate that the value of x[i] is categorical, rather than a number to be taken at face value. The a (instead of the standard b, for beta) indicates that these coefficients are not yet standardized to sum to zero.

52 Define the Model
...
    # # # Convert a0 and a1[] to sum-to-zero b0 and b1[] # # #
    #--- Create matrix with values for each age class ---#
    for (j in 1:nageclass) {
        m[j] <- a0 + a1[j]
    }
    #--- Make b0 the mean across all age classes ---#
    b0 <- mean(m[1:nageclass])
    #--- Make b1[j] the difference between that category and b0 ---#
    for (j in 1:nageclass) {
        b1[j] <- m[j] - b0
    }
}
"
writeLines(modelstring, con = "model.txt")

53 Prepare Data for JAGS Specify the data as a list for JAGS:
datalist = list(
    y = zy,
    x = x,
    N = N,
    nageclass = nageclass
)

54-55 Specify Initial Values
initslist <- function() {
    list(
        sigma = rgamma(n = 1, shape = 1.1, rate = 0.11),
        a0 = rnorm(n = 1, mean = 0, sd = 10),
        a1 = rnorm(n = nageclass, mean = 0, sd = 10)
    )
}
Note that we need one initial value of a1 for each category!

56-57 Specify MCMC Parameters and Run
library(runjags)
runjagsout <- run.jags(
    method = "simple",
    model = "model.txt",
    monitor = c("b0", "b1", "sigma"),
    data = datalist,
    inits = initslist,
    n.chains = 3,
    adapt = 500,
    burnin = 1000,
    sample = 20000,
    thin = 1,
    summarise = TRUE,
    plots = FALSE)
Monitoring "b1" will keep track of all b1 values (one for each category).

58 Evaluate Performance of the Model

59 Testing Model Performance Retrieve the data and take a peek at the structure:
codasamples = as.mcmc.list(runjagsout)
head(codasamples[[1]])
Markov Chain Monte Carlo (MCMC) output: Start = 1501, End = 1507, Thinning interval = 1, with one column each for b0, b1[1], b1[2], b1[3], and sigma.

60 Testing Model Performance Trace plots par(mfrow = c(2,3)) traceplot(codasamples)

61 Testing Model Performance Autocorrelation plots:
autocorr.plot(codasamples[[1]])
[Autocorrelation vs. lag, one panel each for b0, b1[1], b1[2], b1[3], and sigma]

62 Testing Model Performance Gelman & Rubin diagnostic:
gelman.diag(codasamples)
Potential scale reduction factors:
      Point est. Upper C.I.
b0             1          1
b1[1]          1          1
b1[2]          1          1
b1[3]          1          1
sigma          1          1
Multivariate psrf: 1

63 Testing Model Performance Effective size:
effectiveSize(codasamples)
This returns the effective sample size for each of b0, b1[1], b1[2], b1[3], and sigma.

64 Viewing Results

65 Parsing Data Convert codasamples to a matrix Will concatenate chains into one long one mcmcchain = as.matrix(codasamples)

66-69 Parsing Data Separate out the data for each parameter:
#--- sigma ---#
zsigma = mcmcchain[, "sigma"]
#--- b0 ---#
zb0 = mcmcchain[, "b0"]
#--- b1 ---#
chainlength = length(zb0)
zb1 = matrix(0, ncol = chainlength, nrow = nageclass)
for (j in 1:nageclass) {
    zb1[j, ] = mcmcchain[, paste("b1[", j, "]", sep = "")]
}
Notes: the zb1 matrix holds the posteriors for each category in our Age Class variable, with one column for each step in the chain and one row per category. The loop fills each row (category) with the posteriors from the appropriately-named column of mcmcchain, which will be: b1[1], b1[2], b1[3].

70 Convert Back to Original Scale
#--- sigma ---#
sigma <- zsigma * ysd
#--- b0 ---#
b0 <- zb0 * ysd + ymean
#--- b1 ---#
b1 = matrix(0, ncol = chainlength, nrow = nageclass)
for (j in 1:nageclass) {
    b1[j, ] = zb1[j, ] * ysd
}

71 Plot Posterior Distributions Sigma:
par(mfrow = c(1, 1))
histinfo = plotpost(sigma, xlab = bquote(sigma))
[Histogram of the posterior for σ, annotated with its mean and 95% HDI]

72 Plot Posterior Distributions b0:
histinfo = plotpost(b0, xlab = bquote(beta[0]))
[Histogram of the posterior for β0, annotated with its mean and 95% HDI]

73 Plot Posterior Distributions b1:
par(mfrow = c(1, 3))
for (j in 1:nageclass) {
    histinfo = plotpost(b1[j, ], xlab = bquote(b1[.(j)]), main = paste("b1:", xnames[j]))
}

74 Plot Posterior Distributions b1 Remember, these are effects of being in each category (deflections). To get actual values, add b0 to each b1. [Three histograms: b1: adult, b1: foal, b1: juvenile, each annotated with its mean and 95% HDI]

75 Comparing Groups

76-77 Comparing Groups
eadults <- b1[1, ]
efoals <- b1[2, ]
ejuveniles <- b1[3, ]
The "e" is to indicate we are looking at effects rather than actual values.

78 Comparing Groups Adult vs. foal:
par(mfrow = c(1, 1))
avf <- eadults - efoals
histinfo = plotpost(avf, main = "Adult v Foal", xlab = "")
[Histogram of the Adult v Foal difference, with its mean and 95% HDI]

79 Comparing Groups Adult vs. juvenile:
avj <- eadults - ejuveniles
histinfo = plotpost(avj, main = "Adult v Juvenile", xlab = "")
[Histogram of the Adult v Juvenile difference, with its mean and 95% HDI]

80 Comparing Groups Foal vs. juvenile:
fvj <- efoals - ejuveniles
histinfo = plotpost(fvj, main = "Foal v Juvenile", xlab = "")
[Histogram of the Foal v Juvenile difference, with its mean and 95% HDI]

81 Comparing Groups Adult vs. others:
jpf <- (ejuveniles + efoals) / 2
avo <- eadults - jpf
histinfo = plotpost(avo, main = "Adults v Others", xlab = "")
[Histogram of the Adults v Others difference, with its mean and 95% HDI]

82 Comparing Groups Foals vs. others:
apj <- (eadults + ejuveniles) / 2
fvo <- efoals - apj
histinfo = plotpost(fvo, main = "Foals v Others", xlab = "")
[Histogram of the Foals v Others difference, with its mean and 95% HDI]

83 Comparing Groups Dot charts Nice for comparing the same parameter across groups. Need to assemble the mean and the error-bar limits for the parameter in each group.

84 Comparing Groups Dot charts First, store the mean of each coefficient as a new vector:
b1means <- c(mean(eadults), mean(efoals), mean(ejuveniles))

85-88 Comparing Groups Dot charts Get the highest density interval for each beta, and combine:
source("HDIofMCMC.R")
b1.adultshdi <- HDIofMCMC(eadults)
b1.foalshdi <- HDIofMCMC(efoals)
b1.juvenileshdi <- HDIofMCMC(ejuveniles)
b1hdi <- rbind(b1.adultshdi, b1.foalshdi, b1.juvenileshdi)
HDIofMCMC returns the lower and upper values of the interval. Can change what percentage to use with the credMass argument (e.g., to get the 89% HDI, credMass = 0.89).

89 Comparing Groups Dot charts Plot the means of each group:
dotchart(b1means, pch = 16, labels = c("adult", "foal", "juvenile"), xlim = c(min(b1hdi), max(b1hdi)), xlab = "Beta coefficient")
[Dot chart with rows adult, foal, juvenile along the beta-coefficient axis]

90-94 Comparing Groups Dot charts Add the HDI bars:
segments(b1hdi[, 1], 1:3, b1hdi[, 2], 1:3, lwd = 2)
Notes: the first two arguments are the x- and y-coordinates of the starting positions for the line segments, the next two are the x- and y-coordinates of the ending positions, and lwd = 2 makes the lines twice as thick as the default.
[Dot chart as before, now with horizontal HDI bars through each point]

95 Check Validity of Model: Posterior Predictive Check

96 Posterior Predictive Check Generate new y values for a subset of x values in the data set based on estimates for coefficients Compare these predicted y values to the real ones

97-99 Posterior Predictive Check Select a subset of the data on which to make predictions (let's pick 30):
npred <- 30
newrows <- seq(from = 1, to = NROW(horsedata), length = npred)
These values are evenly spaced but not whole numbers, so round them before using them as row indices:
newrows <- round(newrows)

100 Posterior Predictive Check Parse out these data from the original data frame newdata <- horsedata[newrows, ]

101 Posterior Predictive Check Order based on categorical (predictor) variable to make plots clearer later newdata <- newdata[order(newdata$aclass), ]

102 Posterior Predictive Check Separate out just the x data too, on which we will make predictions xnew <- newdata$aclass xnewnums <- as.numeric(xnew)

103 Posterior Predictive Check Organize the categorical coefficients into one matrix (makes indexing later easier):
b <- rbind(eadults, efoals, ejuveniles)

104 Posterior Predictive Check Next, define a matrix that will hold all of the predicted y values. The number of rows is the number of x values used for prediction; the number of columns is the number of steps in the MCMC chain. We'll start with the matrix filled with zeros, and fill it in later.
postsampsize = chainlength
ynew = matrix(0, nrow = length(xnew), ncol = postsampsize)

105 Posterior Predictive Check Define a matrix for holding the HDI limits of the predicted y values: the same number of rows as above, but only two columns (one for each end of the HDI).
yhdilim = matrix(0, nrow = length(xnew), ncol = 2)

106-110 Posterior Predictive Check Now, populate the ynew matrix by generating one set of predicted y values for each step in the chain:
for (i in 1:postsampsize) {
    ynew[, i] <- rnorm(length(xnew), mean = b0[i] + b[xnewnums, i], sd = sigma[i])
}
For every step in the chain, this fills out a new column (all rows) of the matrix, pulling the same number of values as in our xnew list from a normal distribution, with a mean based on b0 plus the effect of the category each x value is in, and a standard deviation taken from that step of the posterior.

111 Posterior Predictive Check Calculate the mean for each prediction, and the associated low and high 95% HDI estimates:
means <- rowMeans(ynew)
source("HDIofMCMC.R")
for (i in 1:length(xnew)) {
    yhdilim[i, ] <- HDIofMCMC(ynew[i, ])
}

112 Posterior Predictive Check Combine the data predtable <- cbind(xnew, means, yhdilim)

113 Posterior Predictive Check Plot the results:
#--- Plot the predicted values (dot plot) ---#
dotchart(means, labels = 1:npred, xlim = c(min(yhdilim), max(yhdilim)), xlab = "Inbreeding Coefficient", pch = 16)
segments(yhdilim[, 1], 1:npred, yhdilim[, 2], 1:npred, lwd = 2)
#--- Plot the true values ---#
points(x = newdata$ic, y = 1:npred, pch = 16, col = rgb(1, 0, 0, 0.5))

114 Posterior Predictive Check [Dot chart of the predicted inbreeding coefficients with their 95% HDI bars, with the true values overplotted in red]

115 Questions?

116 Homework!

117 Homework The current model assumes an equal standard deviation (and hence precision) for each category. Modify the model to estimate a different standard deviation for each category, and evaluate it in the same ways as we did the mean.
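One possible starting point for the homework is sketched below (untested, and only one way to do it; it reuses the dgamma(1.1, 0.11) prior from the single-sigma model as an assumption, and the variable names follow the model defined earlier):

```r
modelstring = "
model {
    #--- Likelihood: precision now depends on each observation's category ---#
    for (i in 1:N) {
        y[i] ~ dnorm(mu[i], tau[x[i]])
        mu[i] <- a0 + a1[x[i]]
    }
    #--- Priors: one sigma (and tau) per age class ---#
    a0 ~ dnorm(0, 1/10^2)
    for (j in 1:nageclass) {
        a1[j] ~ dnorm(0, 1/10^2)
        sigma[j] ~ dgamma(1.1, 0.11)
        tau[j] <- 1 / sigma[j]^2
    }
    # ... plus the same sum-to-zero conversion of a0 and a1[] as before
}
"
```

Monitoring "sigma" in run.jags would then give one posterior per age class (sigma[1], sigma[2], sigma[3]), which can be parsed and compared across groups in the same way as the b1 deflections.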

118 Creative Commons License Anyone is allowed to distribute, remix, tweak, and build upon this work, even commercially, as long as they credit me for the original creation. See the Creative Commons website for more information.


More information

A Handbook of Statistical Analyses Using R. Brian S. Everitt and Torsten Hothorn

A Handbook of Statistical Analyses Using R. Brian S. Everitt and Torsten Hothorn A Handbook of Statistical Analyses Using R Brian S. Everitt and Torsten Hothorn CHAPTER 6 Logistic Regression and Generalised Linear Models: Blood Screening, Women s Role in Society, and Colonic Polyps

More information

FIN822 project 2 Project 2 contains part I and part II. (Due on November 10, 2008)

FIN822 project 2 Project 2 contains part I and part II. (Due on November 10, 2008) FIN822 project 2 Project 2 contains part I and part II. (Due on November 10, 2008) Part I Logit Model in Bankruptcy Prediction You do not believe in Altman and you decide to estimate the bankruptcy prediction

More information

2015 SISG Bayesian Statistics for Genetics R Notes: Generalized Linear Modeling

2015 SISG Bayesian Statistics for Genetics R Notes: Generalized Linear Modeling 2015 SISG Bayesian Statistics for Genetics R Notes: Generalized Linear Modeling Jon Wakefield Departments of Statistics and Biostatistics, University of Washington 2015-07-24 Case control example We analyze

More information

Interactions and Factorial ANOVA

Interactions and Factorial ANOVA Interactions and Factorial ANOVA STA442/2101 F 2017 See last slide for copyright information 1 Interactions Interaction between explanatory variables means It depends. Relationship between one explanatory

More information

MCMC Methods: Gibbs and Metropolis

MCMC Methods: Gibbs and Metropolis MCMC Methods: Gibbs and Metropolis Patrick Breheny February 28 Patrick Breheny BST 701: Bayesian Modeling in Biostatistics 1/30 Introduction As we have seen, the ability to sample from the posterior distribution

More information

Problems from Chapter 3 of Shumway and Stoffer s Book

Problems from Chapter 3 of Shumway and Stoffer s Book UNIVERSITY OF UTAH GUIDED READING TIME SERIES Problems from Chapter 3 of Shumway and Stoffer s Book Author: Curtis MILLER Supervisor: Prof. Lajos HORVATH November 10, 2015 UNIVERSITY OF UTAH DEPARTMENT

More information

Multiple Regression Introduction to Statistics Using R (Psychology 9041B)

Multiple Regression Introduction to Statistics Using R (Psychology 9041B) Multiple Regression Introduction to Statistics Using R (Psychology 9041B) Paul Gribble Winter, 2016 1 Correlation, Regression & Multiple Regression 1.1 Bivariate correlation The Pearson product-moment

More information

The cts Package. June 23, 2006

The cts Package. June 23, 2006 The cts Package June 23, 2006 Title Continuous Time Autoregressive Models Version 1.0 Author Fortran original by G. Tunnicliffe-Wilson and Zhu Wang, R port by Zhu Wang. Continuous Time Autoregressive Models

More information

BayesSummaryStatLM: An R package for Bayesian Linear Models for

BayesSummaryStatLM: An R package for Bayesian Linear Models for BayesSummaryStatLM: An R package for Bayesian Linear Models for Big Data and Data Science Alexey Miroshnikov 1, Evgeny Savel'ev, Erin M. Conlon 1* 1 Department of Mathematics and Statistics, University

More information

Package spatial.gev.bma

Package spatial.gev.bma Type Package Package spatial.gev.bma February 20, 2015 Title Hierarchical spatial generalized extreme value (GEV) modeling with Bayesian Model Averaging (BMA) Version 1.0 Date 2014-03-11 Author Alex Lenkoski

More information

Terminology for Statistical Data

Terminology for Statistical Data Terminology for Statistical Data variables - features - attributes observations - cases (consist of multiple values) In a standard data matrix, variables or features correspond to columns observations

More information

Package rnmf. February 20, 2015

Package rnmf. February 20, 2015 Type Package Title Robust Nonnegative Matrix Factorization Package rnmf February 20, 2015 An implementation of robust nonnegative matrix factorization (rnmf). The rnmf algorithm decomposes a nonnegative

More information

Analyzing ordinal data with metric models: What could possibly go wrong?

Analyzing ordinal data with metric models: What could possibly go wrong? Draft of November 6, 2017 Analyzing ordinal data with metric models: What could possibly go wrong? Torrin M. Liddell and John K. Kruschke Department of Psychological and Brain Sciences Indiana University,

More information

Package elhmc. R topics documented: July 4, Type Package

Package elhmc. R topics documented: July 4, Type Package Package elhmc July 4, 2017 Type Package Title Sampling from a Empirical Likelihood Bayesian Posterior of Parameters Using Hamiltonian Monte Carlo Version 1.1.0 Date 2017-07-03 Author Dang Trung Kien ,

More information

PARAMETER ESTIMATION: BAYESIAN APPROACH. These notes summarize the lectures on Bayesian parameter estimation.

PARAMETER ESTIMATION: BAYESIAN APPROACH. These notes summarize the lectures on Bayesian parameter estimation. PARAMETER ESTIMATION: BAYESIAN APPROACH. These notes summarize the lectures on Bayesian parameter estimation.. Beta Distribution We ll start by learning about the Beta distribution, since we end up using

More information

Fitting Cox Regression Models

Fitting Cox Regression Models Department of Psychology and Human Development Vanderbilt University GCM, 2010 1 Introduction 2 3 4 Introduction The Partial Likelihood Method Implications and Consequences of the Cox Approach 5 Introduction

More information

ESP 178 Applied Research Methods. 2/23: Quantitative Analysis

ESP 178 Applied Research Methods. 2/23: Quantitative Analysis ESP 178 Applied Research Methods 2/23: Quantitative Analysis Data Preparation Data coding create codebook that defines each variable, its response scale, how it was coded Data entry for mail surveys and

More information

lme4 Luke Chang Last Revised July 16, Fitting Linear Mixed Models with a Varying Intercept

lme4 Luke Chang Last Revised July 16, Fitting Linear Mixed Models with a Varying Intercept lme4 Luke Chang Last Revised July 16, 2010 1 Using lme4 1.1 Fitting Linear Mixed Models with a Varying Intercept We will now work through the same Ultimatum Game example from the regression section and

More information

> nrow(hmwk1) # check that the number of observations is correct [1] 36 > attach(hmwk1) # I like to attach the data to avoid the '$' addressing

> nrow(hmwk1) # check that the number of observations is correct [1] 36 > attach(hmwk1) # I like to attach the data to avoid the '$' addressing Homework #1 Key Spring 2014 Psyx 501, Montana State University Prof. Colleen F Moore Preliminary comments: The design is a 4x3 factorial between-groups. Non-athletes do aerobic training for 6, 4 or 2 weeks,

More information

Unit5: Inferenceforcategoricaldata. 4. MT2 Review. Sta Fall Duke University, Department of Statistical Science

Unit5: Inferenceforcategoricaldata. 4. MT2 Review. Sta Fall Duke University, Department of Statistical Science Unit5: Inferenceforcategoricaldata 4. MT2 Review Sta 101 - Fall 2015 Duke University, Department of Statistical Science Dr. Çetinkaya-Rundel Slides posted at http://bit.ly/sta101_f15 Outline 1. Housekeeping

More information

CHAPTER 9 EXAMPLES: MULTILEVEL MODELING WITH COMPLEX SURVEY DATA

CHAPTER 9 EXAMPLES: MULTILEVEL MODELING WITH COMPLEX SURVEY DATA Examples: Multilevel Modeling With Complex Survey Data CHAPTER 9 EXAMPLES: MULTILEVEL MODELING WITH COMPLEX SURVEY DATA Complex survey data refers to data obtained by stratification, cluster sampling and/or

More information

ASQWorkshoponBayesianStatisticsfor Industry

ASQWorkshoponBayesianStatisticsfor Industry ASQWorkshoponBayesianStatisticsfor Industry March 8, 2006 Prof. Stephen Vardeman Statistics and IMSE Departments Iowa State University vardeman@iastate.edu 1 Module 6: Some One-Sample Normal Examples We

More information

Lecture 2: Linear regression

Lecture 2: Linear regression Lecture 2: Linear regression Roger Grosse 1 Introduction Let s ump right in and look at our first machine learning algorithm, linear regression. In regression, we are interested in predicting a scalar-valued

More information

Hotelling s One- Sample T2

Hotelling s One- Sample T2 Chapter 405 Hotelling s One- Sample T2 Introduction The one-sample Hotelling s T2 is the multivariate extension of the common one-sample or paired Student s t-test. In a one-sample t-test, the mean response

More information

SAMSI Astrostatistics Tutorial. More Markov chain Monte Carlo & Demo of Mathematica software

SAMSI Astrostatistics Tutorial. More Markov chain Monte Carlo & Demo of Mathematica software SAMSI Astrostatistics Tutorial More Markov chain Monte Carlo & Demo of Mathematica software Phil Gregory University of British Columbia 26 Bayesian Logical Data Analysis for the Physical Sciences Contents:

More information

Homework #5 - Answer Key Mark Scheuerell

Homework #5 - Answer Key Mark Scheuerell Homework #5 - Answer Key Mark Scheuerell Background Here are the answers for the homework problems from the sixth week of class on the Dynamic Linear Models (DLMs) material. Begin by getting the data get

More information

11 Factors, ANOVA, and Regression: SAS versus Splus

11 Factors, ANOVA, and Regression: SAS versus Splus Adapted from P. Smith, and expanded 11 Factors, ANOVA, and Regression: SAS versus Splus Factors. A factor is a variable with finitely many values or levels which is treated as a predictor within regression-type

More information

Conformational Analysis of n-butane

Conformational Analysis of n-butane Conformational Analysis of n-butane In this exercise you will calculate the Molecular Mechanics (MM) single point energy of butane in various conformations with respect to internal rotation around the

More information

GENERALIZED ERROR DISTRIBUTION

GENERALIZED ERROR DISTRIBUTION CHAPTER 21 GENERALIZED ERROR DISTRIBUTION 21.1 ASSIGNMENT Write R functions for the Generalized Error Distribution, GED. Nelson [1991] introduced the Generalized Error Distribution for modeling GARCH time

More information

A Handbook of Statistical Analyses Using R 2nd Edition. Brian S. Everitt and Torsten Hothorn

A Handbook of Statistical Analyses Using R 2nd Edition. Brian S. Everitt and Torsten Hothorn A Handbook of Statistical Analyses Using R 2nd Edition Brian S. Everitt and Torsten Hothorn CHAPTER 7 Logistic Regression and Generalised Linear Models: Blood Screening, Women s Role in Society, Colonic

More information

Package plw. R topics documented: May 7, Type Package

Package plw. R topics documented: May 7, Type Package Type Package Package plw May 7, 2018 Title Probe level Locally moderated Weighted t-tests. Version 1.40.0 Date 2009-07-22 Author Magnus Astrand Maintainer Magnus Astrand

More information

Package CEC. R topics documented: August 29, Title Cross-Entropy Clustering Version Date

Package CEC. R topics documented: August 29, Title Cross-Entropy Clustering Version Date Title Cross-Entropy Clustering Version 0.9.4 Date 2016-04-23 Package CEC August 29, 2016 Author Konrad Kamieniecki [aut, cre], Przemyslaw Spurek [ctb] Maintainer Konrad Kamieniecki

More information

(a) The percentage of variation in the response is given by the Multiple R-squared, which is 52.67%.

(a) The percentage of variation in the response is given by the Multiple R-squared, which is 52.67%. STOR 664 Homework 2 Solution Part A Exercise (Faraway book) Ch2 Ex1 > data(teengamb) > attach(teengamb) > tgl summary(tgl) Coefficients: Estimate Std Error t value

More information

Homework 9: Protein Folding & Simulated Annealing : Programming for Scientists Due: Thursday, April 14, 2016 at 11:59 PM

Homework 9: Protein Folding & Simulated Annealing : Programming for Scientists Due: Thursday, April 14, 2016 at 11:59 PM Homework 9: Protein Folding & Simulated Annealing 02-201: Programming for Scientists Due: Thursday, April 14, 2016 at 11:59 PM 1. Set up We re back to Go for this assignment. 1. Inside of your src directory,

More information

Introduction to Analysis of Variance (ANOVA) Part 2

Introduction to Analysis of Variance (ANOVA) Part 2 Introduction to Analysis of Variance (ANOVA) Part 2 Single factor Serpulid recruitment and biofilms Effect of biofilm type on number of recruiting serpulid worms in Port Phillip Bay Response variable:

More information

Bayesian Inference. Chapter 1. Introduction and basic concepts

Bayesian Inference. Chapter 1. Introduction and basic concepts Bayesian Inference Chapter 1. Introduction and basic concepts M. Concepción Ausín Department of Statistics Universidad Carlos III de Madrid Master in Business Administration and Quantitative Methods Master

More information

Multilevel Statistical Models: 3 rd edition, 2003 Contents

Multilevel Statistical Models: 3 rd edition, 2003 Contents Multilevel Statistical Models: 3 rd edition, 2003 Contents Preface Acknowledgements Notation Two and three level models. A general classification notation and diagram Glossary Chapter 1 An introduction

More information

Introduction to Bayesian Methods. Introduction to Bayesian Methods p.1/??

Introduction to Bayesian Methods. Introduction to Bayesian Methods p.1/?? to Bayesian Methods Introduction to Bayesian Methods p.1/?? We develop the Bayesian paradigm for parametric inference. To this end, suppose we conduct (or wish to design) a study, in which the parameter

More information

ANOVA Situation The F Statistic Multiple Comparisons. 1-Way ANOVA MATH 143. Department of Mathematics and Statistics Calvin College

ANOVA Situation The F Statistic Multiple Comparisons. 1-Way ANOVA MATH 143. Department of Mathematics and Statistics Calvin College 1-Way ANOVA MATH 143 Department of Mathematics and Statistics Calvin College An example ANOVA situation Example (Treating Blisters) Subjects: 25 patients with blisters Treatments: Treatment A, Treatment

More information

DOE Wizard Screening Designs

DOE Wizard Screening Designs DOE Wizard Screening Designs Revised: 10/10/2017 Summary... 1 Example... 2 Design Creation... 3 Design Properties... 13 Saving the Design File... 16 Analyzing the Results... 17 Statistical Model... 18

More information

Linear regression models in matrix form

Linear regression models in matrix form 1 Linear regression models in matrix form This chapter shows how to write linear regression models in matrix form. The purpose is to get you comfortable writing multivariate linear models in different

More information

TMA 4265 Stochastic Processes Semester project, fall 2014 Student number and

TMA 4265 Stochastic Processes Semester project, fall 2014 Student number and TMA 4265 Stochastic Processes Semester project, fall 2014 Student number 730631 and 732038 Exercise 1 We shall study a discrete Markov chain (MC) {X n } n=0 with state space S = {0, 1, 2, 3, 4, 5, 6}.

More information

CLUe Training An Introduction to Machine Learning in R with an example from handwritten digit recognition

CLUe Training An Introduction to Machine Learning in R with an example from handwritten digit recognition CLUe Training An Introduction to Machine Learning in R with an example from handwritten digit recognition Ad Feelders Universiteit Utrecht Department of Information and Computing Sciences Algorithmic Data

More information

Stat 20 Midterm 1 Review

Stat 20 Midterm 1 Review Stat 20 Midterm Review February 7, 2007 This handout is intended to be a comprehensive study guide for the first Stat 20 midterm exam. I have tried to cover all the course material in a way that targets

More information

Model Building Chap 5 p251

Model Building Chap 5 p251 Model Building Chap 5 p251 Models with one qualitative variable, 5.7 p277 Example 4 Colours : Blue, Green, Lemon Yellow and white Row Blue Green Lemon Insects trapped 1 0 0 1 45 2 0 0 1 59 3 0 0 1 48 4

More information

MCMC notes by Mark Holder

MCMC notes by Mark Holder MCMC notes by Mark Holder Bayesian inference Ultimately, we want to make probability statements about true values of parameters, given our data. For example P(α 0 < α 1 X). According to Bayes theorem:

More information

R-companion to: Estimation of the Thurstonian model for the 2-AC protocol

R-companion to: Estimation of the Thurstonian model for the 2-AC protocol R-companion to: Estimation of the Thurstonian model for the 2-AC protocol Rune Haubo Bojesen Christensen, Hye-Seong Lee & Per Bruun Brockhoff August 24, 2017 This document describes how the examples in

More information

Robust Inference in Generalized Linear Models

Robust Inference in Generalized Linear Models Robust Inference in Generalized Linear Models Claudio Agostinelli claudio@unive.it Dipartimento di Statistica Università Ca Foscari di Venezia San Giobbe, Cannaregio 873, Venezia Tel. 041 2347446, Fax.

More information

How to work correctly statistically about sex ratio

How to work correctly statistically about sex ratio How to work correctly statistically about sex ratio Marc Girondot Version of 12th April 2014 Contents 1 Load packages 2 2 Introduction 2 3 Confidence interval of a proportion 4 3.1 Pourcentage..............................

More information

Luke B Smith and Brian J Reich North Carolina State University May 21, 2013

Luke B Smith and Brian J Reich North Carolina State University May 21, 2013 BSquare: An R package for Bayesian simultaneous quantile regression Luke B Smith and Brian J Reich North Carolina State University May 21, 2013 BSquare in an R package to conduct Bayesian quantile regression

More information

Using R formulae to test for main effects in the presence of higher-order interactions

Using R formulae to test for main effects in the presence of higher-order interactions Using R formulae to test for main effects in the presence of higher-order interactions Roger Levy arxiv:1405.2094v2 [stat.me] 15 Jan 2018 January 16, 2018 Abstract Traditional analysis of variance (ANOVA)

More information

theta a framework for template-based modeling and inference

theta a framework for template-based modeling and inference theta a framework for template-based modeling and inference Thomas Müller Jochen Ott Jeannine Wagner-Kuhr Institut für Experimentelle Kernphysik, Karlsruhe Institute of Technology (KIT), Germany June 17,

More information

MISCELLANEOUS REGRESSION TOPICS

MISCELLANEOUS REGRESSION TOPICS DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS Posc/Uapp 816 MISCELLANEOUS REGRESSION TOPICS I. AGENDA: A. Example of correcting for autocorrelation. B. Regression with ordinary independent

More information

Package SpatPCA. R topics documented: February 20, Type Package

Package SpatPCA. R topics documented: February 20, Type Package Type Package Package SpatPCA February 20, 2018 Title Regularized Principal Component Analysis for Spatial Data Version 1.2.0.0 Date 2018-02-20 URL https://github.com/egpivo/spatpca BugReports https://github.com/egpivo/spatpca/issues

More information

SAMPLING ALGORITHMS. In general. Inference in Bayesian models

SAMPLING ALGORITHMS. In general. Inference in Bayesian models SAMPLING ALGORITHMS SAMPLING ALGORITHMS In general A sampling algorithm is an algorithm that outputs samples x 1, x 2,... from a given distribution P or density p. Sampling algorithms can for example be

More information

Regression: Main Ideas Setting: Quantitative outcome with a quantitative explanatory variable. Example, cont.

Regression: Main Ideas Setting: Quantitative outcome with a quantitative explanatory variable. Example, cont. TCELL 9/4/205 36-309/749 Experimental Design for Behavioral and Social Sciences Simple Regression Example Male black wheatear birds carry stones to the nest as a form of sexual display. Soler et al. wanted

More information

arxiv: v2 [stat.co] 17 Nov 2016

arxiv: v2 [stat.co] 17 Nov 2016 blavaan: Bayesian Structural Equation Models via Parameter Expansion Edgar C. Merkle University of Missouri Yves Rosseel Ghent University Abstract arxiv:1511.05604v2 [stat.co] 17 Nov 2016 This article

More information

Package FDRSeg. September 20, 2017

Package FDRSeg. September 20, 2017 Type Package Package FDRSeg September 20, 2017 Title FDR-Control in Multiscale Change-Point Segmentation Version 1.0-3 Date 2017-09-20 Author Housen Li [aut], Hannes Sieling [aut], Timo Aspelmeier [cre]

More information

Topic 1. Definitions

Topic 1. Definitions S Topic. Definitions. Scalar A scalar is a number. 2. Vector A vector is a column of numbers. 3. Linear combination A scalar times a vector plus a scalar times a vector, plus a scalar times a vector...

More information

BIOS 312: Precision of Statistical Inference

BIOS 312: Precision of Statistical Inference and Power/Sample Size and Standard Errors BIOS 312: of Statistical Inference Chris Slaughter Department of Biostatistics, Vanderbilt University School of Medicine January 3, 2013 Outline Overview and Power/Sample

More information

Tutorial 8 Raster Data Analysis

Tutorial 8 Raster Data Analysis Objectives Tutorial 8 Raster Data Analysis This tutorial is designed to introduce you to a basic set of raster-based analyses including: 1. Displaying Digital Elevation Model (DEM) 2. Slope calculations

More information

Chapter 2. Mean and Standard Deviation

Chapter 2. Mean and Standard Deviation Chapter 2. Mean and Standard Deviation The median is known as a measure of location; that is, it tells us where the data are. As stated in, we do not need to know all the exact values to calculate the

More information

Online Resource 2: Why Tobit regression?

Online Resource 2: Why Tobit regression? Online Resource 2: Why Tobit regression? March 8, 2017 Contents 1 Introduction 2 2 Inspect data graphically 3 3 Why is linear regression not good enough? 3 3.1 Model assumptions are not fulfilled.................................

More information

Bayesian linear regression

Bayesian linear regression Bayesian linear regression Linear regression is the basis of most statistical modeling. The model is Y i = X T i β + ε i, where Y i is the continuous response X i = (X i1,..., X ip ) T is the corresponding

More information

MCMC Review. MCMC Review. Gibbs Sampling. MCMC Review

MCMC Review. MCMC Review. Gibbs Sampling. MCMC Review MCMC Review http://jackman.stanford.edu/mcmc/icpsr99.pdf http://students.washington.edu/fkrogsta/bayes/stat538.pdf http://www.stat.berkeley.edu/users/terry/classes/s260.1998 /Week9a/week9a/week9a.html

More information

One-Way Analysis of Variance: ANOVA

One-Way Analysis of Variance: ANOVA One-Way Analysis of Variance: ANOVA Dr. J. Kyle Roberts Southern Methodist University Simmons School of Education and Human Development Department of Teaching and Learning Background to ANOVA Recall from

More information

36-309/749 Experimental Design for Behavioral and Social Sciences. Sep. 22, 2015 Lecture 4: Linear Regression

36-309/749 Experimental Design for Behavioral and Social Sciences. Sep. 22, 2015 Lecture 4: Linear Regression 36-309/749 Experimental Design for Behavioral and Social Sciences Sep. 22, 2015 Lecture 4: Linear Regression TCELL Simple Regression Example Male black wheatear birds carry stones to the nest as a form

More information

Introducing GIS analysis

Introducing GIS analysis 1 Introducing GIS analysis GIS analysis lets you see patterns and relationships in your geographic data. The results of your analysis will give you insight into a place, help you focus your actions, or

More information

Chapter 19: Logistic regression

Chapter 19: Logistic regression Chapter 19: Logistic regression Self-test answers SELF-TEST Rerun this analysis using a stepwise method (Forward: LR) entry method of analysis. The main analysis To open the main Logistic Regression dialog

More information

Package mcmcse. July 4, 2017

Package mcmcse. July 4, 2017 Version 1.3-2 Date 2017-07-03 Title Monte Carlo Standard Errors for MCMC Package mcmcse July 4, 2017 Author James M. Flegal , John Hughes , Dootika Vats ,

More information

Stat 451 Lecture Notes Monte Carlo Integration

Stat 451 Lecture Notes Monte Carlo Integration Stat 451 Lecture Notes 06 12 Monte Carlo Integration Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapter 6 in Givens & Hoeting, Chapter 23 in Lange, and Chapters 3 4 in Robert & Casella 2 Updated:

More information