Special Topic: Bayesian Finite Population Survey Sampling
Sudipto Banerjee
Division of Biostatistics, School of Public Health, University of Minnesota
April 2
Overview

- Scientific survey sampling (Neyman, 1934) represents one of statistics' greatest contributions to science.
- It provides the remarkable ability to obtain useful inferences about large populations from modest samples, with measurable uncertainty.
- It forms the backbone of data collection in science and government.
- Traditional (and widely favoured) approach: randomization-based.
- Advocating Bayes for survey sampling is like swimming upstream: modelling assumptions of any kind are anathema here, let alone the priors and further subjectivity that Bayes brings along!
Fundamental Notions

- For a population with $N$ units, let $Y = (Y_1, \ldots, Y_N)$ denote the population variable; $Y_i$ is the value of unit $i$ in the population.
- Let $I = (I_1, \ldots, I_N)$ be the set of inclusion indicator variables: $I_i = 1$ if unit $i$ is selected in the sample; $I_i = 0$ otherwise.
- Let $\bar{I}$ denote the complement of $I$, obtained by switching the 1's to 0's and the 0's to 1's in $I$. Thus $\bar{I}$ represents the exclusion indicator variables, indexing the units not selected in the sample.
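As a concrete illustration, the indicator notation can be set up in a few lines of numpy (a sketch of mine, not from the slides; the population values are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10, 4
Y = rng.normal(size=N)                         # population variable Y = (Y_1, ..., Y_N)

I = np.zeros(N, dtype=int)
I[rng.choice(N, size=n, replace=False)] = 1    # inclusion indicators
I_bar = 1 - I                                  # exclusion indicators (the complement of I)

Y_I = Y[I == 1]          # the n units selected in the sample
Y_Ibar = Y[I_bar == 1]   # the N - n units excluded from the sample
print(I.sum(), I_bar.sum())   # n and N - n
```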
Simple random sampling

- Simple Random Sampling With Replacement (SRSWR): we draw a unit from the population, put it back, and draw again.
- Simple Random Sampling Without Replacement (SRSWOR): we draw a unit from the population, keep it aside, and draw again.
- SRSWR leads to the binomial distribution; the chances of units being drawn remain the same between draws.
- SRSWOR leads to the hypergeometric distribution; the chances of units being drawn change from draw to draw.
- Key questions: What are the random variables? What are the parameters?
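Both facts can be checked by simulation. The numpy sketch below (mine, not from the slides) verifies that under both schemes the per-unit probability works out to $n/N$: under SRSWR the draw count of a given unit has mean $n/N$, and under SRSWOR its inclusion probability is $n/N$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, reps = 20, 5, 20_000

# SRSWR: the number of times unit 0 appears in a sample of size n
# is Binomial(n, 1/N), so its mean is n/N
wr_counts = np.array([(rng.integers(0, N, size=n) == 0).sum()
                      for _ in range(reps)])
print(wr_counts.mean())   # close to n/N = 0.25

# SRSWOR: the inclusion indicator of unit 0 has P(I_0 = 1) = n/N
wor_incl = np.array([0 in rng.choice(N, size=n, replace=False)
                     for _ in range(reps)])
print(wor_incl.mean())    # also close to n/N = 0.25
```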
Estimating the population mean

- Let $Y_I$ be the collection of units selected in the sample; we often write $Y_I$ as $y = (y_1, \ldots, y_n)$.
- IMPORTANT: the lower-case $y_i$'s simply index the sample; thus $y_i$ does not necessarily represent $Y_i$, but the $i$-th member of the sample.
- Let $\bar{Y} = \frac{1}{N}\sum_{i=1}^{N} Y_i$ be the population mean.
- Let $\bar{Y}_I = \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$ be the sample mean, i.e. the mean of $Y_I$.
- Let $Y_{\bar{I}}$ be the collection of units excluded from the sample; let $\bar{Y}_{\bar{I}}$ be the mean of $Y_{\bar{I}}$.
- Then $\bar{Y} = f\bar{y} + (1-f)\bar{Y}_{\bar{I}}$, where $f = n/N$ is the sampling fraction.
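The decomposition of the population mean into sampled and unsampled parts is an exact algebraic identity, as a quick numerical check confirms (a sketch of mine with a simulated population):

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 200, 25
f = n / N                                   # sampling fraction
Y = rng.exponential(2.0, size=N)            # a fixed finite population

mask = np.zeros(N, dtype=bool)
mask[rng.choice(N, size=n, replace=False)] = True
ybar = Y[mask].mean()                       # sample mean
ybar_excl = Y[~mask].mean()                 # mean of the excluded units

# Y-bar = f * ybar + (1 - f) * Ybar_excluded, up to float rounding
print(Y.mean(), f * ybar + (1 - f) * ybar_excl)
```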
Estimating the population mean

- Key questions answered: What are the random variables? Answer: the $I_i$'s. What are the parameters? Answer: the $Y_i$'s.
- For SRSWOR: $E[I_i] = P(I_i = 1) = \binom{N-1}{n-1} / \binom{N}{n} = \frac{n}{N}$ for each $i$.
- Unbiasedness of the sample mean $\bar{y}$: write $\bar{y} = \frac{1}{n}\sum_{i=1}^{N} I_i Y_i$ and see that $E[\bar{y}] = \frac{1}{n}\sum_{i=1}^{N} E[I_i] Y_i = \frac{1}{n}\sum_{i=1}^{N} \frac{n}{N} Y_i = \bar{Y}$.
- For SRSWR: $\bar{y} = \frac{1}{n}\sum_{i=1}^{N} W_i Y_i$, where $W_i$ is the number of times unit $i$ appears in the sample. Then $W_i \sim \mathrm{Bin}(n, \frac{1}{N})$, so $E[W_i] = \frac{n}{N}$, hence $E[\bar{y}] = \frac{1}{n}\sum_{i=1}^{N} E[W_i] Y_i = \frac{1}{n}\sum_{i=1}^{N} \frac{n}{N} Y_i = \bar{Y}$.
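Both ingredients of the unbiasedness argument can be checked directly: the combinatorial identity $\binom{N-1}{n-1}/\binom{N}{n} = n/N$ holds exactly, and a Monte Carlo average of SRSWOR sample means recovers the population mean (a sketch of mine, not from the slides):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
N, n = 50, 10

# Combinatorial identity: C(N-1, n-1) / C(N, n) = n / N
assert abs(math.comb(N - 1, n - 1) / math.comb(N, n) - n / N) < 1e-12

# Monte Carlo check that the SRSWOR sample mean is unbiased for Y-bar
Y = rng.normal(10.0, 3.0, size=N)   # a fixed finite population
means = [Y[rng.choice(N, size=n, replace=False)].mean()
         for _ in range(50_000)]
print(np.mean(means), Y.mean())     # the two should be close
```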
Bayesian modelling

- We need a prior for each population unit $Y_i$.
- Suppose $Y_i \overset{iid}{\sim} N(\mu, \sigma^2)$.
- Let us first assume $\sigma^2$ is known and $P(\mu) \propto 1$.
- What is seen? $D = (Y_I, I)$.
- Note that $P(D \mid \mu, \sigma^2) = P(I)\,P(Y_I \mid \mu, \sigma^2) \propto P(Y_I \mid \mu, \sigma^2)$; so the likelihood is a product of $n$ independent $N(\mu, \sigma^2)$ densities.
- We need the posterior distribution $P(\bar{Y} \mid D, \sigma^2) = \int P(\bar{Y} \mid D, \mu, \sigma^2)\, P(\mu \mid D, \sigma^2)\, d\mu$.
Bayesian modelling

- Let us take a closer look at $P(\bar{Y} \mid D, \sigma^2)$.
- Observe that $\bar{Y} = f\bar{y} + (1-f)\bar{Y}_{\bar{I}}$.
- Given $D$, $\bar{y}$ is fixed; so the posterior distribution of $\bar{Y}_{\bar{I}}$ will determine the posterior distribution of $\bar{Y}$.
- So, what is the posterior distribution $P(\bar{Y}_{\bar{I}} \mid D, \sigma^2)$?
- $P(\bar{Y}_{\bar{I}} \mid D, \sigma^2) = \int P(\bar{Y}_{\bar{I}} \mid \mu, \sigma^2)\, P(\mu \mid D, \sigma^2)\, d\mu$, where we use $P(\bar{Y}_{\bar{I}} \mid D, \mu, \sigma^2) = P(\bar{Y}_{\bar{I}} \mid \mu, \sigma^2)$. Note that $P(\bar{Y}_{\bar{I}} \mid \mu, \sigma^2) = N\left(\mu, \frac{\sigma^2}{N(1-f)}\right)$.
- Can we simplify this posterior without integrating?
Bayesian modelling

- What is the posterior distribution $P(\mu \mid D, \sigma^2)$?
- Note that $P(\mu \mid D, \sigma^2) \propto P(D \mid \mu, \sigma^2)$, which implies $P(\mu \mid D, \sigma^2) = N\left(\bar{y}, \frac{\sigma^2}{n}\right)$.
- Conditional on $D$, $\mu$ and $\sigma^2$, we can write:
  $\bar{Y} = f\bar{y} + (1-f)\bar{Y}_{\bar{I}}$;
  $\bar{Y}_{\bar{I}} = \mu + u_1$, where $u_1 \sim N\left(0, \frac{\sigma^2}{N(1-f)}\right)$;
  hence $\bar{Y} = f\bar{y} + (1-f)\mu + (1-f)u_1$.
- Also, conditional on $D$ and $\sigma^2$, $\mu = \bar{y} + u_2$, where $u_2 \sim N\left(0, \frac{\sigma^2}{n}\right)$.
Bayesian modelling

- Then: $\bar{Y} = \bar{y} + (1-f)u_2 + (1-f)u_1$.
- The posterior $P(\bar{Y} \mid D, \sigma^2)$ is $N\left(\bar{y}, \frac{\sigma^2}{n}(1-f)\right)$.
- Now let $\sigma^2$ be unknown, with prior $P(\mu, \sigma^2) \propto \frac{1}{\sigma^2}$.
- Then we reproduce the classical result: $\dfrac{\bar{Y} - \bar{y}}{s\sqrt{\frac{1-f}{n}}} \sim t_{n-1}$, where $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2$ is the sample variance.
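The posterior variance follows from combining the two independent error terms; as a quick check (using $f = n/N$, so $\frac{1}{N} = \frac{f}{n}$):

```latex
\operatorname{Var}(\bar{Y} \mid D, \sigma^2)
  = (1-f)^2 \frac{\sigma^2}{n} + (1-f)^2 \frac{\sigma^2}{N(1-f)}
  = \frac{\sigma^2}{n}\left[(1-f)^2 + f(1-f)\right]
  = \frac{\sigma^2}{n}(1-f).
```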
Bayesian computation

- Note that the marginal posterior distribution is $P(\sigma^2 \mid D) = IG\left(\frac{n-1}{2}, \frac{(n-1)s^2}{2}\right)$, where $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2$ is the sample variance.
- We can perform exact posterior sampling as follows. For $l = 1, \ldots, L$ do the following:
  - Draw $\sigma^2_{(l)} \sim IG\left(\frac{n-1}{2}, \frac{(n-1)s^2}{2}\right)$
  - Draw $\mu_{(l)} \sim N\left(\bar{y}, \frac{\sigma^2_{(l)}}{n}\right)$
  - Draw $\bar{Y}_{\bar{I}}^{(l)} \sim N\left(\mu_{(l)}, \frac{\sigma^2_{(l)}}{N(1-f)}\right)$
  - Set $\bar{Y}^{(l)} = f\bar{y} + (1-f)\bar{Y}_{\bar{I}}^{(l)}$
- The resulting collection $\{\bar{Y}^{(l)}\}_{l=1}^{L}$ is a sample from the posterior distribution of the population mean.
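The exact sampler above can be sketched in a few lines of numpy. This is my sketch, not code from the slides; the function name and the test data are illustrative, and the inverse-gamma draw uses the fact that an $IG(a, b)$ variate is $b$ divided by a $\mathrm{Gamma}(a, 1)$ variate.

```python
import numpy as np

def posterior_mean_draws(y, N, L, rng):
    """Exact posterior draws of the finite-population mean Y-bar under
    Y_i ~ iid N(mu, sigma^2) with the prior p(mu, sigma^2) prop. to 1/sigma^2."""
    y = np.asarray(y, dtype=float)
    n = y.size
    f = n / N
    ybar, s2 = y.mean(), y.var(ddof=1)
    # sigma^2 ~ IG((n-1)/2, (n-1)s^2/2): an IG(a, b) draw is b / Gamma(a, 1)
    sigma2 = ((n - 1) * s2 / 2) / rng.gamma((n - 1) / 2, 1.0, size=L)
    # mu | sigma^2, D ~ N(ybar, sigma^2 / n)
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # mean of the unsampled units | mu, sigma^2 ~ N(mu, sigma^2 / (N(1-f)))
    ybar_mis = rng.normal(mu, np.sqrt(sigma2 / (N * (1 - f))))
    # combine: Y-bar = f * ybar + (1 - f) * Ybar_mis
    return f * ybar + (1 - f) * ybar_mis

rng = np.random.default_rng(2)
y = rng.normal(10.0, 2.0, size=30)          # hypothetical sample, n = 30
draws = posterior_mean_draws(y, N=500, L=20_000, rng=rng)
print(draws.mean())                          # centred near the sample mean
print(np.quantile(draws, [0.025, 0.975]))    # 95% credible interval for Y-bar
```

The posterior mean of $\bar{Y}$ equals $\bar{y}$, and the sampled interval should closely match the closed-form $t_{n-1}$ interval from the previous slide.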
Homework

1. A simple random sample of 30 households was drawn from a township containing 576 households. The numbers of persons per household in the sample were as follows: 5, 6, 3, 3, 2, 3, 3, 3, 4, 4, 3, 2, 7, 4, 3, 5, 4, 4, 3, 3, 4, 3, 3, 1, 2, 4, 3, 4, 2, 4. Use a non-informative Bayesian analysis to estimate the total number of people in the area, and compute the posterior probability that the population total lies within 10% of the sample estimate.

2. From a list of 468 small 2-year colleges in the North-Eastern United States, a simple random sample of 100 colleges was drawn. Data for the number of students ($y$) and the number of teachers ($x$) for these colleges were summarized as follows: the total number of students in the sample was 44,987 and the total number of teachers in the sample was 3,079. Also given are the sample sums of squares: $\sum_{i=1}^{n} y_i^2 = 29{,}881{,}219$ and $\sum_{i=1}^{n} x_i^2 = 111{,}090$. Assuming a non-informative Bayesian setting in which the populations of students and teachers are independent, find the posterior mean and a 95% credible interval for the student-teacher ratio in the population (i.e. all 468 colleges combined).
Missing Data Inference

- A general theory for Bayesian missing data analysis.
- Let $y = (y_1, \ldots, y_N)$ be the units in the population. We assume a likelihood for this population, say $p(y \mid \theta)$, where $\theta$ are the parameters in the model.
- Let $I = (I_1, \ldots, I_N)$ be the set of inclusion indicators.
- Let us denote by "obs" the index set $\{i : I_i = 1\}$ and by "mis" the index set $\{i : I_i = 0\}$.
- So $y_{obs}$ is the set of observed data points and $y_{mis}$ is the unobserved data. We will often write $y = (y_{obs}, y_{mis})$.
Building the models

- We will break the joint probability model into two parts:
  - the model for the complete data, $p(y \mid \theta)$, including observed and missing data;
  - the model for the inclusion vector $I$, say $p(I \mid y, \phi)$.
- We define the complete-data likelihood as $p(y, I \mid \theta, \phi) = p(y \mid \theta)\, p(I \mid y, \phi)$.
- Parameters: scientific interest usually revolves around estimation of $\theta$ (sometimes called the super-population parameters).
- The parameters $\phi$ index the missingness model (they are often called design parameters).
Building the models

- The complete-data likelihood is not the data likelihood, as it involves the missing data component as well.
- The actual information available is $(y_{obs}, I)$. So the data likelihood is $p(y_{obs}, I \mid \theta, \phi) = \int p(y, I \mid \theta, \phi)\, dy_{mis}$.
- The joint posterior distribution is $p(\theta, \phi \mid y_{obs}, I) \propto p(\theta, \phi)\, p(y_{obs}, I \mid \theta, \phi) \propto p(\theta, \phi) \int p(y \mid \theta)\, p(I \mid y, \phi)\, dy_{mis}$.
Building the models

- These integrals are evaluated by drawing samples $(y_{mis}, \theta, \phi)$ from $p(y_{mis}, \theta, \phi \mid y_{obs}, I)$.
- Typically, we first compute samples $(\theta^{(l)}, \phi^{(l)})$ from $p(\theta, \phi \mid y_{obs}, I)$.
- Then we draw $y_{mis}^{(l)} \sim p(y_{mis} \mid y_{obs}, I, \theta^{(l)}, \phi^{(l)})$.
- Computations often simplify under assumptions such as:
  - $p(I \mid y, \phi) = p(I \mid y_{obs}, \phi)$ (the missingness does not depend on the unobserved values);
  - $p(I \mid y, \phi) = p(I \mid \phi) = p(I)$;
  - $p(\phi \mid \theta) = p(\phi)$;
  - $p(y_{mis} \mid \theta^{(l)}, y_{obs}) = p(y_{mis} \mid \theta^{(l)})$.
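Under these simplifying assumptions, the two-stage recipe can be sketched for the normal model from the earlier slides (my sketch under an ignorable design, not code from the slides; `impute_population` and the data are illustrative): draw $\theta = (\mu, \sigma^2)$ from its posterior, then impute each missing unit from $p(y_{mis} \mid \theta)$, e.g. to get the posterior of the population total.

```python
import numpy as np

def impute_population(y_obs, N, L, rng):
    """Two-stage draws under an ignorable design, p(I | y, phi) = p(I):
    Stage 1 draws theta from p(theta | y_obs); Stage 2 draws y_mis | theta."""
    y_obs = np.asarray(y_obs, dtype=float)
    n = y_obs.size
    n_mis = N - n
    ybar, s2 = y_obs.mean(), y_obs.var(ddof=1)
    totals = np.empty(L)
    for l in range(L):
        # Stage 1: theta = (mu, sigma^2) from p(theta | y_obs)
        sigma2 = ((n - 1) * s2 / 2) / rng.gamma((n - 1) / 2, 1.0)
        mu = rng.normal(ybar, np.sqrt(sigma2 / n))
        # Stage 2: impute the N - n missing units from p(y_mis | theta)
        y_mis = rng.normal(mu, np.sqrt(sigma2), size=n_mis)
        totals[l] = y_obs.sum() + y_mis.sum()   # population total for draw l
    return totals

rng = np.random.default_rng(3)
y_obs = rng.normal(3.5, 1.2, size=30)           # hypothetical sample, n = 30
totals = impute_population(y_obs, N=576, L=5_000, rng=rng)
print(totals.mean())    # posterior mean of the population total
```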
More informationSession 2.1: Terminology, Concepts and Definitions
Second Regional Training Course on Sampling Methods for Producing Core Data Items for Agricultural and Rural Statistics Module 2: Review of Basics of Sampling Methods Session 2.1: Terminology, Concepts
More informationIntroduction to Bayesian inference
Introduction to Bayesian inference Thomas Alexander Brouwer University of Cambridge tab43@cam.ac.uk 17 November 2015 Probabilistic models Describe how data was generated using probability distributions
More informationSTAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01
STAT 499/962 Topics in Statistics Bayesian Inference and Decision Theory Jan 2018, Handout 01 Nasser Sadeghkhani a.sadeghkhani@queensu.ca There are two main schools to statistical inference: 1-frequentist
More informationUSEFUL PROPERTIES OF THE MULTIVARIATE NORMAL*
USEFUL PROPERTIES OF THE MULTIVARIATE NORMAL* 3 Conditionals and marginals For Bayesian analysis it is very useful to understand how to write joint, marginal, and conditional distributions for the multivariate
More informationSAMPLING BIOS 662. Michael G. Hudgens, Ph.D. mhudgens :55. BIOS Sampling
SAMPLIG BIOS 662 Michael G. Hudgens, Ph.D. mhudgens@bios.unc.edu http://www.bios.unc.edu/ mhudgens 2008-11-14 15:55 BIOS 662 1 Sampling Outline Preliminaries Simple random sampling Population mean Population
More informationBAYESIAN ESTIMATION OF LINEAR STATISTICAL MODEL BIAS
BAYESIAN ESTIMATION OF LINEAR STATISTICAL MODEL BIAS Andrew A. Neath 1 and Joseph E. Cavanaugh 1 Department of Mathematics and Statistics, Southern Illinois University, Edwardsville, Illinois 606, USA
More informationLecture 4: Probabilistic Learning. Estimation Theory. Classification with Probability Distributions
DD2431 Autumn, 2014 1 2 3 Classification with Probability Distributions Estimation Theory Classification in the last lecture we assumed we new: P(y) Prior P(x y) Lielihood x2 x features y {ω 1,..., ω K
More informationAges of stellar populations from color-magnitude diagrams. Paul Baines. September 30, 2008
Ages of stellar populations from color-magnitude diagrams Paul Baines Department of Statistics Harvard University September 30, 2008 Context & Example Welcome! Today we will look at using hierarchical
More informationPart 2: One-parameter models
Part 2: One-parameter models 1 Bernoulli/binomial models Return to iid Y 1,...,Y n Bin(1, ). The sampling model/likelihood is p(y 1,...,y n ) = P y i (1 ) n P y i When combined with a prior p( ), Bayes
More informationGibbs Sampling in Endogenous Variables Models
Gibbs Sampling in Endogenous Variables Models Econ 690 Purdue University Outline 1 Motivation 2 Identification Issues 3 Posterior Simulation #1 4 Posterior Simulation #2 Motivation In this lecture we take
More informationLecture 4: Probabilistic Learning
DD2431 Autumn, 2015 1 Maximum Likelihood Methods Maximum A Posteriori Methods Bayesian methods 2 Classification vs Clustering Heuristic Example: K-means Expectation Maximization 3 Maximum Likelihood Methods
More informationIntroduction to Bayesian Inference
University of Pennsylvania EABCN Training School May 10, 2016 Bayesian Inference Ingredients of Bayesian Analysis: Likelihood function p(y φ) Prior density p(φ) Marginal data density p(y ) = p(y φ)p(φ)dφ
More informationA primer on Bayesian statistics, with an application to mortality rate estimation
A primer on Bayesian statistics, with an application to mortality rate estimation Peter off University of Washington Outline Subjective probability Practical aspects Application to mortality rate estimation
More informationMCMC algorithms for fitting Bayesian models
MCMC algorithms for fitting Bayesian models p. 1/1 MCMC algorithms for fitting Bayesian models Sudipto Banerjee sudiptob@biostat.umn.edu University of Minnesota MCMC algorithms for fitting Bayesian models
More informationA note on multiple imputation for general purpose estimation
A note on multiple imputation for general purpose estimation Shu Yang Jae Kwang Kim SSC meeting June 16, 2015 Shu Yang, Jae Kwang Kim Multiple Imputation June 16, 2015 1 / 32 Introduction Basic Setup Assume
More informationChapter 5. Bayesian Statistics
Chapter 5. Bayesian Statistics Principles of Bayesian Statistics Anything unknown is given a probability distribution, representing degrees of belief [subjective probability]. Degrees of belief [subjective
More informationSTA216: Generalized Linear Models. Lecture 1. Review and Introduction
STA216: Generalized Linear Models Lecture 1. Review and Introduction Let y 1,..., y n denote n independent observations on a response Treat y i as a realization of a random variable Y i In the general
More informationMarkov chain Monte Carlo
1 / 26 Markov chain Monte Carlo Timothy Hanson 1 and Alejandro Jara 2 1 Division of Biostatistics, University of Minnesota, USA 2 Department of Statistics, Universidad de Concepción, Chile IAP-Workshop
More informationBayesian Estimation of Prediction Error and Variable Selection in Linear Regression
Bayesian Estimation of Prediction Error and Variable Selection in Linear Regression Andrew A. Neath Department of Mathematics and Statistics; Southern Illinois University Edwardsville; Edwardsville, IL,
More informationProbability Theory. Introduction to Probability Theory. Principles of Counting Examples. Principles of Counting. Probability spaces.
Probability Theory To start out the course, we need to know something about statistics and probability Introduction to Probability Theory L645 Advanced NLP Autumn 2009 This is only an introduction; for
More informationClass 26: review for final exam 18.05, Spring 2014
Probability Class 26: review for final eam 8.05, Spring 204 Counting Sets Inclusion-eclusion principle Rule of product (multiplication rule) Permutation and combinations Basics Outcome, sample space, event
More informationBayesian Learning. HT2015: SC4 Statistical Data Mining and Machine Learning. Maximum Likelihood Principle. The Bayesian Learning Framework
HT5: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford http://www.stats.ox.ac.uk/~sejdinov/sdmml.html Maximum Likelihood Principle A generative model for
More informationBayesian Statistics Part III: Building Bayes Theorem Part IV: Prior Specification
Bayesian Statistics Part III: Building Bayes Theorem Part IV: Prior Specification Michael Anderson, PhD Hélène Carabin, DVM, PhD Department of Biostatistics and Epidemiology The University of Oklahoma
More informationCSE 312 Final Review: Section AA
CSE 312 TAs December 8, 2011 General Information General Information Comprehensive Midterm General Information Comprehensive Midterm Heavily weighted toward material after the midterm Pre-Midterm Material
More informationICES REPORT Model Misspecification and Plausibility
ICES REPORT 14-21 August 2014 Model Misspecification and Plausibility by Kathryn Farrell and J. Tinsley Odena The Institute for Computational Engineering and Sciences The University of Texas at Austin
More informationAn Introduction to Bayesian Linear Regression
An Introduction to Bayesian Linear Regression APPM 5720: Bayesian Computation Fall 2018 A SIMPLE LINEAR MODEL Suppose that we observe explanatory variables x 1, x 2,..., x n and dependent variables y 1,
More informationSparse Linear Models (10/7/13)
STA56: Probabilistic machine learning Sparse Linear Models (0/7/) Lecturer: Barbara Engelhardt Scribes: Jiaji Huang, Xin Jiang, Albert Oh Sparsity Sparsity has been a hot topic in statistics and machine
More informationIntroduction to Bayesian Methods
Introduction to Bayesian Methods Jessi Cisewski Department of Statistics Yale University Sagan Summer Workshop 2016 Our goal: introduction to Bayesian methods Likelihoods Priors: conjugate priors, non-informative
More informationHypothesis Testing. Econ 690. Purdue University. Justin L. Tobias (Purdue) Testing 1 / 33
Hypothesis Testing Econ 690 Purdue University Justin L. Tobias (Purdue) Testing 1 / 33 Outline 1 Basic Testing Framework 2 Testing with HPD intervals 3 Example 4 Savage Dickey Density Ratio 5 Bartlett
More informationGeneralized Linear Models Introduction
Generalized Linear Models Introduction Statistics 135 Autumn 2005 Copyright c 2005 by Mark E. Irwin Generalized Linear Models For many problems, standard linear regression approaches don t work. Sometimes,
More informationPreliminary Statistics Lecture 2: Probability Theory (Outline) prelimsoas.webs.com
1 School of Oriental and African Studies September 2015 Department of Economics Preliminary Statistics Lecture 2: Probability Theory (Outline) prelimsoas.webs.com Gujarati D. Basic Econometrics, Appendix
More informationBayesian inference for sample surveys. Roderick Little Module 2: Bayesian models for simple random samples
Bayesian inference for sample surveys Roderick Little Module : Bayesian models for simple random samples Superpopulation Modeling: Estimating parameters Various principles: least squares, method of moments,
More informationCLASS NOTES Models, Algorithms and Data: Introduction to computing 2018
CLASS NOTES Models, Algorithms and Data: Introduction to computing 208 Petros Koumoutsakos, Jens Honore Walther (Last update: June, 208) IMPORTANT DISCLAIMERS. REFERENCES: Much of the material (ideas,
More informationA Bayesian Nonparametric Approach to Monotone Missing Data in Longitudinal Studies with Informative Missingness
A Bayesian Nonparametric Approach to Monotone Missing Data in Longitudinal Studies with Informative Missingness A. Linero and M. Daniels UF, UT-Austin SRC 2014, Galveston, TX 1 Background 2 Working model
More informationStatistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach
Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Jae-Kwang Kim Department of Statistics, Iowa State University Outline 1 Introduction 2 Observed likelihood 3 Mean Score
More informationAccounting for Complex Sample Designs via Mixture Models
Accounting for Complex Sample Designs via Finite Normal Mixture Models 1 1 University of Michigan School of Public Health August 2009 Talk Outline 1 2 Accommodating Sampling Weights in Mixture Models 3
More informationMinimum Message Length Analysis of the Behrens Fisher Problem
Analysis of the Behrens Fisher Problem Enes Makalic and Daniel F Schmidt Centre for MEGA Epidemiology The University of Melbourne Solomonoff 85th Memorial Conference, 2011 Outline Introduction 1 Introduction
More information10 : HMM and CRF. 1 Case Study: Supervised Part-of-Speech Tagging
10-708: Probabilistic Graphical Models 10-708, Spring 2018 10 : HMM and CRF Lecturer: Kayhan Batmanghelich Scribes: Ben Lengerich, Michael Kleyman 1 Case Study: Supervised Part-of-Speech Tagging We will
More informationBayesian Model Diagnostics and Checking
Earvin Balderama Quantitative Ecology Lab Department of Forestry and Environmental Resources North Carolina State University April 12, 2013 1 / 34 Introduction MCMCMC 2 / 34 Introduction MCMCMC Steps in
More informationParameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn
Parameter estimation and forecasting Cristiano Porciani AIfA, Uni-Bonn Questions? C. Porciani Estimation & forecasting 2 Temperature fluctuations Variance at multipole l (angle ~180o/l) C. Porciani Estimation
More informationMore Spectral Clustering and an Introduction to Conjugacy
CS8B/Stat4B: Advanced Topics in Learning & Decision Making More Spectral Clustering and an Introduction to Conjugacy Lecturer: Michael I. Jordan Scribe: Marco Barreno Monday, April 5, 004. Back to spectral
More informationBayesian Approaches Data Mining Selected Technique
Bayesian Approaches Data Mining Selected Technique Henry Xiao xiao@cs.queensu.ca School of Computing Queen s University Henry Xiao CISC 873 Data Mining p. 1/17 Probabilistic Bases Review the fundamentals
More informationMultivariate Normal & Wishart
Multivariate Normal & Wishart Hoff Chapter 7 October 21, 2010 Reading Comprehesion Example Twenty-two children are given a reading comprehsion test before and after receiving a particular instruction method.
More informationRemarks on Improper Ignorance Priors
As a limit of proper priors Remarks on Improper Ignorance Priors Two caveats relating to computations with improper priors, based on their relationship with finitely-additive, but not countably-additive
More informationOne Parameter Models
One Parameter Models p. 1/2 One Parameter Models September 22, 2010 Reading: Hoff Chapter 3 One Parameter Models p. 2/2 Highest Posterior Density Regions Find Θ 1 α = {θ : p(θ Y ) h α } such that P (θ
More informationParametric Models. Dr. Shuang LIANG. School of Software Engineering TongJi University Fall, 2012
Parametric Models Dr. Shuang LIANG School of Software Engineering TongJi University Fall, 2012 Today s Topics Maximum Likelihood Estimation Bayesian Density Estimation Today s Topics Maximum Likelihood
More informationSTATISTICS ANCILLARY SYLLABUS. (W.E.F. the session ) Semester Paper Code Marks Credits Topic
STATISTICS ANCILLARY SYLLABUS (W.E.F. the session 2014-15) Semester Paper Code Marks Credits Topic 1 ST21012T 70 4 Descriptive Statistics 1 & Probability Theory 1 ST21012P 30 1 Practical- Using Minitab
More informationNaïve Bayes classification
Naïve Bayes classification 1 Probability theory Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. Examples: A person s height, the outcome of a coin toss
More informationvariability of the model, represented by σ 2 and not accounted for by Xβ
Posterior Predictive Distribution Suppose we have observed a new set of explanatory variables X and we want to predict the outcomes ỹ using the regression model. Components of uncertainty in p(ỹ y) variability
More informationStatistics for Managers Using Microsoft Excel/SPSS Chapter 4 Basic Probability And Discrete Probability Distributions
Statistics for Managers Using Microsoft Excel/SPSS Chapter 4 Basic Probability And Discrete Probability Distributions 1999 Prentice-Hall, Inc. Chap. 4-1 Chapter Topics Basic Probability Concepts: Sample
More informationIntroduction to Bayesian Statistics
School of Computing & Communication, UTS January, 207 Random variables Pre-university: A number is just a fixed value. When we talk about probabilities: When X is a continuous random variable, it has a
More informationBasics of Modern Missing Data Analysis
Basics of Modern Missing Data Analysis Kyle M. Lang Center for Research Methods and Data Analysis University of Kansas March 8, 2013 Topics to be Covered An introduction to the missing data problem Missing
More informationQuiz 1. Name: Instructions: Closed book, notes, and no electronic devices.
Quiz 1. Name: Instructions: Closed book, notes, and no electronic devices. 1. What is the difference between a deterministic model and a probabilistic model? (Two or three sentences only). 2. What is the
More informationBayesian Linear Models
Eric F. Lock UMN Division of Biostatistics, SPH elock@umn.edu 03/07/2018 Linear model For observations y 1,..., y n, the basic linear model is y i = x 1i β 1 +... + x pi β p + ɛ i, x 1i,..., x pi are predictors
More informationSubject CS1 Actuarial Statistics 1 Core Principles
Institute of Actuaries of India Subject CS1 Actuarial Statistics 1 Core Principles For 2019 Examinations Aim The aim of the Actuarial Statistics 1 subject is to provide a grounding in mathematical and
More informationStatistics for Managers Using Microsoft Excel (3 rd Edition)
Statistics for Managers Using Microsoft Excel (3 rd Edition) Chapter 4 Basic Probability and Discrete Probability Distributions 2002 Prentice-Hall, Inc. Chap 4-1 Chapter Topics Basic probability concepts
More informationBayesian Methods: Naïve Bayes
Bayesian Methods: aïve Bayes icholas Ruozzi University of Texas at Dallas based on the slides of Vibhav Gogate Last Time Parameter learning Learning the parameter of a simple coin flipping model Prior
More information(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis
Summarizing a posterior Given the data and prior the posterior is determined Summarizing the posterior gives parameter estimates, intervals, and hypothesis tests Most of these computations are integrals
More informationBayesian Inference. Chapter 9. Linear models and regression
Bayesian Inference Chapter 9. Linear models and regression M. Concepcion Ausin Universidad Carlos III de Madrid Master in Business Administration and Quantitative Methods Master in Mathematical Engineering
More informationBayesian inference. Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark. April 10, 2017
Bayesian inference Rasmus Waagepetersen Department of Mathematics Aalborg University Denmark April 10, 2017 1 / 22 Outline for today A genetic example Bayes theorem Examples Priors Posterior summaries
More information2 Belief, probability and exchangeability
2 Belief, probability and exchangeability We first discuss what properties a reasonable belief function should have, and show that probabilities have these properties. Then, we review the basic machinery
More information