Foundations of Statistical Inference
Foundations of Statistical Inference
Julien Berestycki, Department of Statistics, University of Oxford
SB2a, MT 2015
Lecture 16: Bayesian analysis of contingency tables. Bayesian linear regression.
Example: 2 × 2 table

From the Wikipedia article on contingency tables (the right-handed counts and the totals were lost in transcription):

              Left-handed   Right-handed   Total
  Male        y_1 = 9       n_1 − y_1      n_1
  Female      y_2 = 4       n_2 − y_2      n_2

Hypothesis: θ_1 = proportion of left-handed men > θ_2 = proportion of left-handed women.

Model: y_1 ~ Binom(n_1, θ_1), y_2 ~ Binom(n_2, θ_2), independently.

Use uniform priors θ_i ~ U[0, 1] = Beta(1, 1).

Posteriors: p(θ_1 | y_1, n_1) = Beta(y_1 + 1, n_1 − y_1 + 1), p(θ_2 | y_2, n_2) = Beta(y_2 + 1, n_2 − y_2 + 1).

Then compute the posterior probability P(θ_1 > θ_2), either by computing an integral or by simulation.
Example 2 × 2: simulations

See the accompanying R code. Generate M samples from the joint posterior

  p(θ_1, θ_2 | y_1, n_1, y_2, n_2) = p(θ_1 | y_1, n_1) p(θ_2 | y_2, n_2)

and then use the Monte Carlo approximation

  P[θ_1 > θ_2] ≈ (1/M) Σ_{i=1}^{M} I(θ_1^(i) > θ_2^(i)).

Output with M = 10000: print(mean(theta1 > theta2)) gives the posterior probability estimate. [Figure: "Posterior Simulation of Male - Female Lefties", the posterior density p(θ_1 − θ_2 | y, n) plotted against θ_1 − θ_2, with the 2.5%, 50% and 97.5% quantiles marked; the printed numeric value was lost in transcription.]
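The lecture's simulation is in R; an equivalent sketch in Python is below. The group sizes n1 and n2 are illustrative placeholders, since the table's totals were lost in transcription (only the left-handed counts 9 and 4 survive).

```python
import numpy as np

rng = np.random.default_rng(0)

# Counts from the slide; n1 and n2 are assumed values for illustration only.
y1, n1 = 9, 50   # left-handed men out of n1
y2, n2 = 4, 50   # left-handed women out of n2

M = 10_000
# Independent Beta posteriors under uniform Beta(1, 1) priors
theta1 = rng.beta(y1 + 1, n1 - y1 + 1, size=M)
theta2 = rng.beta(y2 + 1, n2 - y2 + 1, size=M)

# Monte Carlo estimate of P(theta1 > theta2 | data)
p_gt = np.mean(theta1 > theta2)
print(p_gt)
```

With these placeholder group sizes the estimate lands around 0.9; the exact value depends on the true totals.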
Contingency table analysis

North Carolina State University data. EC: extra-curricular activities, in hours per week.

                EC < 2   2 to 12   > 12
  C or better     11        68       3
  D or F           9        23       5

Let y = (y_ij) be the matrix of counts.
Frequentist analysis

The usual χ² test from R: Pearson's chi-squared test. Here y_ij is the count of cell (i, j).

Sum rows and columns:

                EC < 2   2 to 12   > 12   total
  C or better     11        68       3      82
  D or F           9        23       5      37
  total           20        91       8     119

The expected counts under independence are E_ij = r_i c_j / N, where r_i and c_j are the row and column totals and N = 119.

  χ² = Σ_{i,j} (y_ij − E_ij)² / E_ij = 6.92

X-squared = 6.92, df = (3 − 1)(2 − 1) = 2, p-value ≈ 0.031: evidence that grades are related to time spent on extra-curricular activities.
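The lecture runs this test in R; the same computation can be sketched in Python from first principles (for df = 2 the chi-squared upper-tail probability is exp(−x/2), so no statistics library is needed):

```python
import numpy as np

# Observed counts: rows = grade (C or better, D or F),
# columns = extra-curricular hours (<2, 2 to 12, >12)
y = np.array([[11, 68, 3],
              [9, 23, 5]], dtype=float)

r = y.sum(axis=1, keepdims=True)   # row totals: 82, 37
c = y.sum(axis=0, keepdims=True)   # column totals: 20, 91, 8
N = y.sum()                        # 119

E = r @ c / N                      # expected counts E_ij = r_i c_j / N
stat = ((y - E) ** 2 / E).sum()    # Pearson chi-squared statistic

df = (y.shape[0] - 1) * (y.shape[1] - 1)
p_value = np.exp(-stat / 2)        # survival function of chi-squared with df = 2

print(round(stat, 2), df, round(p_value, 4))  # 6.93 2 0.0313
```

This agrees with the slide's value of 6.92 up to rounding.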
Bayesian analysis

                EC < 2   2 to 12   > 12
  C or better    p_11     p_12     p_13
  D or F         p_21     p_22     p_23

Let p = {p_11, ..., p_23}. The model is that Y = (y_11, ..., y_23) is multinomial(N, p), i.e. N trials with P(X_k = (i, j)) = p_ij and y_ij = #{k : X_k = (i, j)}.

Bayesian method: make p a random variable. Consider two models:

  M_I : the two categorical variables are independent, so the rows (p_11, p_12, p_13) and (p_21, p_22, p_23) are proportional;
  M_D : the two categorical variables are dependent.

The Bayes factor is

  BF = P(y | M_D) / P(y | M_I).
The Dirichlet distribution

Dirichlet integral:

  ∫_{z_1 + ⋯ + z_k = 1} z_1^{ν_1 − 1} ⋯ z_k^{ν_k − 1} dz_1 ⋯ dz_k = Γ(ν_1) ⋯ Γ(ν_k) / Γ(Σ_i ν_i)

Dirichlet distribution:

  f(z_1, ..., z_k) = [Γ(Σ_i ν_i) / (Γ(ν_1) ⋯ Γ(ν_k))] z_1^{ν_1 − 1} ⋯ z_k^{ν_k − 1},   z_1 + ⋯ + z_k = 1

The means are E[Z_i] = ν_i / Σ_j ν_j, i = 1, ..., k.

A representation that makes the Dirichlet easy to simulate from is the following. Let W_1, ..., W_k be independent Gamma(ν_1, θ), ..., Gamma(ν_k, θ) random variables, let W = Σ_i W_i, and set Z_i = W_i / W, i = 1, ..., k. Then (Z_1, ..., Z_k) ~ Dirichlet(ν_1, ..., ν_k) (the result does not depend on θ).
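The Gamma representation gives an immediate sampler; a sketch in Python (the parameter choice ν = (1, 2, 3) is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def dirichlet_via_gamma(nu, size, rng):
    """Sample Dirichlet(nu) by normalizing independent Gamma(nu_i, theta) draws.

    The scale theta cancels in the normalization, so we fix theta = 1.
    """
    W = rng.gamma(shape=np.asarray(nu), scale=1.0, size=(size, len(nu)))
    return W / W.sum(axis=1, keepdims=True)

nu = [1.0, 2.0, 3.0]
Z = dirichlet_via_gamma(nu, size=100_000, rng=rng)

# Each row sums to 1, and the sample means approach nu_i / sum(nu) = (1/6, 2/6, 3/6)
print(Z.mean(axis=0))
```

NumPy also provides rng.dirichlet directly; the point here is that the Gamma construction is all it takes.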
Examples of 3D Dirichlet distributions. [Figure: density plots on the simplex.]
Calculating marginal likelihoods

The model is that f(y | p) is multinomial, so

  P(y | M_D) = ∫_p P(y | p) π(p) dp
             = ∫_{p_11 + ⋯ + p_23 = 1} C(y) Π_ij p_ij^{y_ij} π(p) dp_11 ⋯ dp_23,

where C(y) = (Σ_ij y_ij)! / Π_ij y_ij! is the multinomial coefficient.

Under M_D the rows (p_11, p_12, p_13) and (p_21, p_22, p_23) are not constrained to be proportional, so choose a uniform distribution for p, i.e. Dirichlet(1, ..., 1):

  π(p) = Γ(RC),   p_11 + ⋯ + p_23 = 1.
Calculating marginal likelihoods

  P(y | M_D) = C(y) Γ(RC) ∫_{p_11 + ⋯ + p_23 = 1} Π_ij p_ij^{y_ij} dp_11 ⋯ dp_23
             = C(y) Γ(RC) Π_ij Γ(y_ij + 1) / Γ(Σ_ij y_ij + RC)
             = C(y) D(y + 1) / D(1_RC),

where C(y) is the multinomial coefficient,

  D(ν) = Π_i Γ(ν_i) / Γ(Σ_i ν_i),

y + 1 denotes the matrix of counts with 1 added to all entries, and 1_RC denotes a vector of length RC with all entries equal to 1.
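The closed form is easy to evaluate in log space (math.lgamma avoids the overflow that the raw factorials would cause); a sketch using the EC table counts:

```python
import math
import numpy as np

def log_D(nu):
    """log D(nu) = sum_i log Gamma(nu_i) - log Gamma(sum_i nu_i)."""
    nu = np.asarray(nu, dtype=float).ravel()
    return sum(math.lgamma(v) for v in nu) - math.lgamma(nu.sum())

def log_multinomial_coeff(y):
    """log C(y) = log (sum y_ij)! - sum log y_ij!."""
    y = np.asarray(y, dtype=float).ravel()
    return math.lgamma(y.sum() + 1) - sum(math.lgamma(v + 1) for v in y)

y = np.array([[11, 68, 3],
              [9, 23, 5]])
R, C = y.shape

# log P(y | M_D) = log C(y) + log D(y + 1) - log D(1_RC)
log_py_MD = log_multinomial_coeff(y) + log_D(y + 1) - log_D(np.ones(R * C))
print(log_py_MD)
```

The result is a (very small) probability on the log scale, as expected for a specific table among many.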
Calculating marginal likelihoods

Under M_I the probabilities are determined by the marginal probabilities p_r = {p_1·, p_2·} and p_c = {p_·1, p_·2, p_·3}:

                EC < 2   2 to 12   > 12
  C or better    p_11     p_12     p_13    p_1·
  D or F         p_21     p_22     p_23    p_2·
                 p_·1     p_·2     p_·3

Under M_I we have a table where p_ij = p_i· p_·j. Under independence the priors for the row and column probabilities are independent uniform priors, i.e. Dirichlet distributions (with R = 2 and C = 3 categories respectively):

  π(p_r) = [Γ(R) / Γ(1)^R] p_1·^{1−1} ⋯ p_R·^{1−1} = Γ(R),   π(p_c) = [Γ(C) / Γ(1)^C] p_·1^{1−1} ⋯ p_·C^{1−1} = Γ(C)
The marginal likelihood under M_I is therefore

  P(y | M_I) = C(y) ∫_{p_r} ∫_{p_c} Π_ij (p_i· p_·j)^{y_ij} π(p_r) π(p_c) dp_r dp_c
             = C(y) Γ(R) Γ(C) ∫_{p_r} Π_i p_i·^{y_i·} dp_r ∫_{p_c} Π_j p_·j^{y_·j} dp_c
             = C(y) Γ(R) Γ(C) [Π_i Γ(y_i· + 1) / Γ(Σ y + R)] [Π_j Γ(y_·j + 1) / Γ(Σ y + C)]
             = C(y) D(y_R + 1) D(y_C + 1) / (D(1_R) D(1_C)),

where C(y) is the multinomial coefficient, y_R = (y_1·, ..., y_R·) and y_C = (y_·1, ..., y_·C) are the vectors of row and column totals, and Σ y = Σ_ij y_ij.
Bayes factor

Combining the two marginal likelihoods we get the Bayes factor

  BF = P(y | M_D) / P(y | M_I) = D(y + 1) D(1_R) D(1_C) / (D(1_RC) D(y_R + 1) D(y_C + 1)).

Our data are

                EC < 2   2 to 12   > 12   total
  C or better     11        68       3      82
  D or F           9        23       5      37
  total           20        91       8     119

The Bayes factor is

  BF = (11! 68! 3! 9! 23! 5! · 5! · 120! · 121!) / (124! · 1! · 2! · 82! 37! · 20! 91! 8!) = 1.66,

which gives modest support against independence.
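The Bayes factor can be checked numerically (the multinomial coefficient cancels between numerator and denominator, so only the D terms remain); a sketch in log space:

```python
import math
import numpy as np

def log_D(nu):
    """log D(nu) = sum_i log Gamma(nu_i) - log Gamma(sum_i nu_i)."""
    nu = np.asarray(nu, dtype=float).ravel()
    return sum(math.lgamma(v) for v in nu) - math.lgamma(nu.sum())

y = np.array([[11, 68, 3],
              [9, 23, 5]])
R, C = y.shape
yR = y.sum(axis=1)   # row totals: 82, 37
yC = y.sum(axis=0)   # column totals: 20, 91, 8

# log BF = log D(y+1) + log D(1_R) + log D(1_C)
#          - log D(1_RC) - log D(y_R+1) - log D(y_C+1)
log_BF = (log_D(y + 1) + log_D(np.ones(R)) + log_D(np.ones(C))
          - log_D(np.ones(R * C)) - log_D(yR + 1) - log_D(yC + 1))

BF = math.exp(log_BF)
print(round(BF, 2))  # 1.66
```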
Normal linear regression model

Model: the response variable is an n × 1 vector Y = (y_1, ..., y_n) and the predictor variables form an n × p matrix X = (x_1, ..., x_p).

  Y = Xβ + ε,   ε ~ N(0, σ² I)

Recall that the classical unbiased estimates are

  β̂ = (Xᵀ X)⁻¹ Xᵀ Y,   σ̂² = (Y − X β̂)ᵀ (Y − X β̂) / (n − p),

and the predicted Y is Ŷ = X β̂ = P_X Y, where P_X = X (Xᵀ X)⁻¹ Xᵀ.
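These classical quantities can be sketched with NumPy on simulated data (the design, true coefficients, and noise level below are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: intercept plus two Gaussian predictors (illustrative)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
Y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Classical unbiased estimates
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)      # divide by n - p for unbiasedness

# Hat matrix: projects Y onto the column space of X
P_X = X @ XtX_inv @ X.T
Y_hat = P_X @ Y

print(beta_hat, sigma2_hat)
```

(In practice one would use np.linalg.lstsq or a QR decomposition rather than forming (XᵀX)⁻¹ explicitly; the inverse is shown to match the slide's formulas.)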
Normal linear regression model

To sum up:

  Y | β, σ², X ~ N_n(Xβ, σ² I)

Bayesian formulation: assume that (β, σ²) has the non-informative prior

  g(β, σ²) ∝ 1/σ².
Posterior distribution

  q(β, σ² | Y) = q(β | Y, σ²) q(σ² | Y)

Here

  q(σ² | Y) ∝ (σ²)^{−(n−p)/2 − 1} exp{ −(n − p) σ̂² / (2σ²) },

i.e. σ² | Y ~ IG((n − p)/2, (n − p) σ̂²/2). (Recall that the Inverse-Gamma(a, b) density is proportional to y^{−a−1} exp{−b/y}.) And

  q(β | Y, σ²) = N(β̂, V_β σ²),   β̂ = (Xᵀ X)⁻¹ Xᵀ Y,   V_β = (Xᵀ X)⁻¹.
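The factorization gives a direct (non-MCMC) sampler: draw σ² from its inverse-gamma marginal, then β from the conditional normal. A sketch on simulated data (all data-generating choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data (illustrative)
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y
resid = Y - X @ beta_hat
s2 = resid @ resid / (n - p)

M = 5000
# sigma^2 | Y ~ IG((n-p)/2, (n-p) s^2 / 2): draw a Gamma and invert
sigma2 = 1.0 / rng.gamma(shape=(n - p) / 2,
                         scale=2.0 / ((n - p) * s2), size=M)

# beta | Y, sigma^2 ~ N(beta_hat, sigma^2 (X^T X)^{-1}), via Cholesky of (X^T X)^{-1}
L = np.linalg.cholesky(XtX_inv)
beta = beta_hat + np.sqrt(sigma2)[:, None] * (rng.normal(size=(M, p)) @ L.T)

print(beta.mean(axis=0), sigma2.mean())
```

The posterior means of β concentrate on β̂, as the conditional normal dictates.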
Posterior

The posterior density comes from a classical factorization of the likelihood

  (2π σ²)^{−n/2} exp{ −(1/(2σ²)) (y − Xβ)ᵀ (y − Xβ) },

using the identity

  (y − Xβ)ᵀ (y − Xβ) = (y − X β̂)ᵀ (y − X β̂) + (β − β̂)ᵀ Xᵀ X (β − β̂).

Marginally, P(β | Y) is a non-central multivariate t_{n−p} distribution. For each j,

  (β_j − β̂_j) / (σ̂ √((Xᵀ X)⁻¹_jj)) ~ t_{n−p}.
Prediction

Given a new covariate matrix X̃, predict Ỹ:

  p(Ỹ | Y) = ∫ p(Ỹ | β, σ²) p(β, σ² | Y) dβ dσ².

Either simulate, or use the closed form: p(Ỹ | Y) is a multivariate t distribution,

  t_{n−p}( X̃ β̂, σ̂² (I + X̃ (Xᵀ X)⁻¹ X̃ᵀ) ).
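Prediction by simulation just pushes each posterior draw (β, σ²) through the model, adding fresh observation noise. A self-contained sketch (training data and prediction points are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated training data (illustrative)
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y
s2 = (Y - X @ beta_hat) @ (Y - X @ beta_hat) / (n - p)

# New covariate matrix X_tilde: predict at x = (1, 0) and (1, 1)
X_new = np.array([[1.0, 0.0],
                  [1.0, 1.0]])

M = 5000
# Draw (beta, sigma^2) from the posterior...
sigma2 = 1.0 / rng.gamma((n - p) / 2, scale=2.0 / ((n - p) * s2), size=M)
L = np.linalg.cholesky(XtX_inv)
beta = beta_hat + np.sqrt(sigma2)[:, None] * (rng.normal(size=(M, p)) @ L.T)
# ...then Y_tilde | beta, sigma^2 ~ N(X_tilde beta, sigma^2 I)
Y_new = beta @ X_new.T + np.sqrt(sigma2)[:, None] * rng.normal(size=(M, 2))

# Posterior predictive means approximate X_tilde @ beta_hat,
# the center of the multivariate t above
print(Y_new.mean(axis=0), X_new @ beta_hat)
```

The spread of Y_new is wider than that of beta @ X_new.T alone, reflecting the (I + X̃ (XᵀX)⁻¹ X̃ᵀ) scale of the predictive t.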
More information1 Data Arrays and Decompositions
1 Data Arrays and Decompositions 1.1 Variance Matrices and Eigenstructure Consider a p p positive definite and symmetric matrix V - a model parameter or a sample variance matrix. The eigenstructure is
More information5.2 Expounding on the Admissibility of Shrinkage Estimators
STAT 383C: Statistical Modeling I Fall 2015 Lecture 5 September 15 Lecturer: Purnamrita Sarkar Scribe: Ryan O Donnell Disclaimer: These scribe notes have been slightly proofread and may have typos etc
More informationBayesian Regression (1/31/13)
STA613/CBB540: Statistical methods in computational biology Bayesian Regression (1/31/13) Lecturer: Barbara Engelhardt Scribe: Amanda Lea 1 Bayesian Paradigm Bayesian methods ask: given that I have observed
More informationSection 4.6 Simple Linear Regression
Section 4.6 Simple Linear Regression Objectives ˆ Basic philosophy of SLR and the regression assumptions ˆ Point & interval estimation of the model parameters, and how to make predictions ˆ Point and interval
More information2.6.3 Generalized likelihood ratio tests
26 HYPOTHESIS TESTING 113 263 Generalized likelihood ratio tests When a UMP test does not exist, we usually use a generalized likelihood ratio test to verify H 0 : θ Θ against H 1 : θ Θ\Θ It can be used
More informationWeighted Least Squares
Weighted Least Squares The standard linear model assumes that Var(ε i ) = σ 2 for i = 1,..., n. As we have seen, however, there are instances where Var(Y X = x i ) = Var(ε i ) = σ2 w i. Here w 1,..., w
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationHypothesis Testing. Econ 690. Purdue University. Justin L. Tobias (Purdue) Testing 1 / 33
Hypothesis Testing Econ 690 Purdue University Justin L. Tobias (Purdue) Testing 1 / 33 Outline 1 Basic Testing Framework 2 Testing with HPD intervals 3 Example 4 Savage Dickey Density Ratio 5 Bartlett
More informationFractional Imputation in Survey Sampling: A Comparative Review
Fractional Imputation in Survey Sampling: A Comparative Review Shu Yang Jae-Kwang Kim Iowa State University Joint Statistical Meetings, August 2015 Outline Introduction Fractional imputation Features Numerical
More information13.1 Categorical Data and the Multinomial Experiment
Chapter 13 Categorical Data Analysis 13.1 Categorical Data and the Multinomial Experiment Recall Variable: (numerical) variable (i.e. # of students, temperature, height,). (non-numerical, categorical)
More informationIntroduction to Machine Learning
Introduction to Machine Learning Introduction to Probabilistic Methods Varun Chandola Computer Science & Engineering State University of New York at Buffalo Buffalo, NY, USA chandola@buffalo.edu Chandola@UB
More informationBayesian Linear Models
Bayesian Linear Models Sudipto Banerjee September 03 05, 2017 Department of Biostatistics, Fielding School of Public Health, University of California, Los Angeles Linear Regression Linear regression is,
More informationMultinomial Logistic Regression Models
Stat 544, Lecture 19 1 Multinomial Logistic Regression Models Polytomous responses. Logistic regression can be extended to handle responses that are polytomous, i.e. taking r>2 categories. (Note: The word
More informationBayesian Models in Machine Learning
Bayesian Models in Machine Learning Lukáš Burget Escuela de Ciencias Informáticas 2017 Buenos Aires, July 24-29 2017 Frequentist vs. Bayesian Frequentist point of view: Probability is the frequency of
More informationStatistical Data Analysis Stat 3: p-values, parameter estimation
Statistical Data Analysis Stat 3: p-values, parameter estimation London Postgraduate Lectures on Particle Physics; University of London MSci course PH4515 Glen Cowan Physics Department Royal Holloway,
More informationFoundations of Statistical Inference
Foundations of Statistical Inference Jonathan Marchini Department of Statistics University of Oxford MT 2013 Jonathan Marchini (University of Oxford) BS2a MT 2013 1 / 27 Course arrangements Lectures M.2
More informationLog-linear Models for Contingency Tables
Log-linear Models for Contingency Tables Statistics 149 Spring 2006 Copyright 2006 by Mark E. Irwin Log-linear Models for Two-way Contingency Tables Example: Business Administration Majors and Gender A
More informationStatistics 3858 : Contingency Tables
Statistics 3858 : Contingency Tables 1 Introduction Before proceeding with this topic the student should review generalized likelihood ratios ΛX) for multinomial distributions, its relation to Pearson
More informationTopic 12 Overview of Estimation
Topic 12 Overview of Estimation Classical Statistics 1 / 9 Outline Introduction Parameter Estimation Classical Statistics Densities and Likelihoods 2 / 9 Introduction In the simplest possible terms, the
More informationBayesian Interpretations of Regularization
Bayesian Interpretations of Regularization Charlie Frogner 9.50 Class 15 April 1, 009 The Plan Regularized least squares maps {(x i, y i )} n i=1 to a function that minimizes the regularized loss: f S
More informationNon-Parametric Bayes
Non-Parametric Bayes Mark Schmidt UBC Machine Learning Reading Group January 2016 Current Hot Topics in Machine Learning Bayesian learning includes: Gaussian processes. Approximate inference. Bayesian
More informationPMR Learning as Inference
Outline PMR Learning as Inference Probabilistic Modelling and Reasoning Amos Storkey Modelling 2 The Exponential Family 3 Bayesian Sets School of Informatics, University of Edinburgh Amos Storkey PMR Learning
More informationFractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling
Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling Jae-Kwang Kim 1 Iowa State University June 26, 2013 1 Joint work with Shu Yang Introduction 1 Introduction
More informationDeep Poisson Factorization Machines: a factor analysis model for mapping behaviors in journalist ecosystem
000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050
More informationLecture 1 Basic Statistical Machinery
Lecture 1 Basic Statistical Machinery Bruce Walsh. jbwalsh@u.arizona.edu. University of Arizona. ECOL 519A, Jan 2007. University of Arizona Probabilities, Distributions, and Expectations Discrete and Continuous
More informationLecture 6 Multiple Linear Regression, cont.
Lecture 6 Multiple Linear Regression, cont. BIOST 515 January 22, 2004 BIOST 515, Lecture 6 Testing general linear hypotheses Suppose we are interested in testing linear combinations of the regression
More informationINTRODUCTION TO BAYESIAN STATISTICS
INTRODUCTION TO BAYESIAN STATISTICS Sarat C. Dass Department of Statistics & Probability Department of Computer Science & Engineering Michigan State University TOPICS The Bayesian Framework Different Types
More informationGibbs Sampling in Latent Variable Models #1
Gibbs Sampling in Latent Variable Models #1 Econ 690 Purdue University Outline 1 Data augmentation 2 Probit Model Probit Application A Panel Probit Panel Probit 3 The Tobit Model Example: Female Labor
More informationPattern Recognition and Machine Learning. Bishop Chapter 2: Probability Distributions
Pattern Recognition and Machine Learning Chapter 2: Probability Distributions Cécile Amblard Alex Kläser Jakob Verbeek October 11, 27 Probability Distributions: General Density Estimation: given a finite
More informationT Machine Learning: Basic Principles
Machine Learning: Basic Principles Bayesian Networks Laboratory of Computer and Information Science (CIS) Department of Computer Science and Engineering Helsinki University of Technology (TKK) Autumn 2007
More informationBayes methods for categorical data. April 25, 2017
Bayes methods for categorical data April 25, 2017 Motivation for joint probability models Increasing interest in high-dimensional data in broad applications Focus may be on prediction, variable selection,
More informationStat 5101 Lecture Notes
Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random
More informationProblem Selected Scores
Statistics Ph.D. Qualifying Exam: Part II November 20, 2010 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. Problem 1 2 3 4 5 6 7 8 9 10 11 12 Selected
More informationBayesian Inference. Chapter 4: Regression and Hierarchical Models
Bayesian Inference Chapter 4: Regression and Hierarchical Models Conchi Ausín and Mike Wiper Department of Statistics Universidad Carlos III de Madrid Master in Business Administration and Quantitative
More informationProbability. Machine Learning and Pattern Recognition. Chris Williams. School of Informatics, University of Edinburgh. August 2014
Probability Machine Learning and Pattern Recognition Chris Williams School of Informatics, University of Edinburgh August 2014 (All of the slides in this course have been adapted from previous versions
More informationIntroduction to Statistical Data Analysis Lecture 7: The Chi-Square Distribution
Introduction to Statistical Data Analysis Lecture 7: The Chi-Square Distribution James V. Lambers Department of Mathematics The University of Southern Mississippi James V. Lambers Statistical Data Analysis
More informationBayesian Inference. Chapter 4: Regression and Hierarchical Models
Bayesian Inference Chapter 4: Regression and Hierarchical Models Conchi Ausín and Mike Wiper Department of Statistics Universidad Carlos III de Madrid Advanced Statistics and Data Mining Summer School
More informationPrinciples of Bayesian Inference
Principles of Bayesian Inference Sudipto Banerjee and Andrew O. Finley 2 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. 2 Department of Forestry & Department
More information1/15. Over or under dispersion Problem
1/15 Over or under dispersion Problem 2/15 Example 1: dogs and owners data set In the dogs and owners example, we had some concerns about the dependence among the measurements from each individual. Let
More informationSTAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method.
STAT 135 Lab 13 (Review) Linear Regression, Multivariate Random Variables, Prediction, Logistic Regression and the δ-method. Rebecca Barter May 5, 2015 Linear Regression Review Linear Regression Review
More informationTesting Independence
Testing Independence Dipankar Bandyopadhyay Department of Biostatistics, Virginia Commonwealth University BIOS 625: Categorical Data & GLM 1/50 Testing Independence Previously, we looked at RR = OR = 1
More informationBayesian Graphical Models
Graphical Models and Inference, Lecture 16, Michaelmas Term 2009 December 4, 2009 Parameter θ, data X = x, likelihood L(θ x) p(x θ). Express knowledge about θ through prior distribution π on θ. Inference
More informationLecture : Probabilistic Machine Learning
Lecture : Probabilistic Machine Learning Riashat Islam Reasoning and Learning Lab McGill University September 11, 2018 ML : Many Methods with Many Links Modelling Views of Machine Learning Machine Learning
More informationCS Lecture 18. Topic Models and LDA
CS 6347 Lecture 18 Topic Models and LDA (some slides by David Blei) Generative vs. Discriminative Models Recall that, in Bayesian networks, there could be many different, but equivalent models of the same
More information