On Generalized Fiducial Inference
On Generalized Fiducial Inference
Jan Hannig, University of North Carolina at Chapel Hill
Parts of this talk are based on joint work with: Hari Iyer, Thomas C. M. Lee, Paul Patterson, Lidong E, Damian Wandler and Derek Sonderegger (Colorado State University)
Barrett Lectures, 2009
Fiducial?
- Fiducial inference was mentioned only briefly during my graduate studies. I did not remember what it was about. The only thing that stuck in my mind was that it is bad.
- Oxford English Dictionary: adjective, TECHNICAL. (of a point or line) used as a fixed basis of comparison. ORIGIN: from Latin fiducia, "trust, confidence".
- Merriam-Webster dictionary: 1. taken as a standard of reference (a fiducial mark); 2. founded on faith or trust; 3. having the nature of a trust: FIDUCIARY.
Fiducial: A Brief History
- Goal: to construct distributions for model parameters.
- Introduced by Fisher (1930, 1935) in an attempt to overcome what he saw as an issue of the Bayesian approach to inference: the use of a prior distribution when no prior information is available.
- Related work: Fraser (1960), Dawid and Stone (1982), Dempster (1968, 2008).
- It is fair to say that fiducial inference failed to occupy an important place in mainstream statistics.
Recent Developments
- Weerahandi (1989, 1993) proposed the new concept of generalized confidence intervals.
- Hannig, Iyer & Patterson (2006) noted that every published generalized confidence interval was obtainable using the fiducial argument, and proved the asymptotic frequentist correctness of such intervals.
- Hannig (2008) developed and modified these ideas further, and termed the resulting work generalized fiducial inference.
What was on Fisher's mind?
- A switching principle, as in the celebrated maximum likelihood method:
  - the density is f(x, θ), where X is variable and θ is fixed;
  - the likelihood is f(x, θ), where X is fixed and θ is variable.
- As we will see, generalized fiducial inference is also based on this idea: the switching of the roles of X and θ.
Simplistic Example 1
- Consider X = µ + Z, where Z ~ N(0, 1). Observe X = 10. Then we have µ = 10 − Z.
- Though the value of Z is unknown, we know the distribution of Z.
- Fiducial argument: P(µ = 3 ± dx) = P(10 − Z = 3 ± dx) = P(Z = 7 ± dx) ≈ φ(7) dx, where φ is the standard normal density.
- This induces a distribution on µ. We can simulate it using R_µ = 10 − Z*, where Z* ~ N(0, 1) is independent of Z.
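A minimal simulation sketch of this fiducial sample (the observed value X = 10 is from the slide; sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

x_obs = 10.0                            # observed X from the slide
z_star = rng.standard_normal(100_000)   # Z* ~ N(0, 1), an independent copy of Z
r_mu = x_obs - z_star                   # fiducial sample R_mu = 10 - Z*

# The resulting fiducial distribution of mu is N(10, 1):
print(r_mu.mean(), r_mu.std())
```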
Simplistic Example 2
- Consider X_i = µ + Z_i, where the Z_i are i.i.d. N(0, 1). Observe (x_1, ..., x_n).
- We cannot simply follow the previous idea and set µ = x_1 − Z*_1, ..., µ = x_n − Z*_n: each equation would lead to a different µ!
- We need to condition the distribution of (Z*_1, ..., Z*_n) on the event that all the equations yield the same µ: µ = x_1 − Z*_1 with x_2 = x_1 − Z*_1 + Z*_2, ..., x_n = x_1 − Z*_1 + Z*_n.
- After simplification the fiducial distribution is N(x̄, 1/n).
- We have non-uniqueness due to the Borel paradox.
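A sketch of sampling from the simplified fiducial distribution N(x̄, 1/n); the data values below are hypothetical, invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data (illustrative values only):
x = np.array([9.3, 10.7, 10.1, 9.8, 10.4])
n = len(x)

# After conditioning, the fiducial distribution of mu is N(x_bar, 1/n);
# we can sample from it directly:
r_mu = rng.normal(loc=x.mean(), scale=1 / np.sqrt(n), size=100_000)

# A 95% fiducial interval for mu (agrees with the classical z-interval):
lo, hi = np.quantile(r_mu, [0.025, 0.975])
print(lo, hi)
```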
Example 3: Fat data
- The Borel paradox was caused by the fact that the probability of observing our data exactly can be 0.
- Due to instrument limitations we never observe our data exactly. We only know that each observation lies in some interval: we never observe x_i = π itself, only an interval known to contain it.
- Let X = µ + σZ. If we observe a_i < X_i < b_i, we need to generate Z*, keeping only those values that agree with a_i < µ + σZ*_i < b_i for all i.
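The generate-and-keep step can be sketched with a naive rejection sampler (this is not the Gibbs sampler used later in the talk). The interval endpoints reuse the three intervals from the next slide; the σ grid, its range, and the way a single point of Q is picked are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Interval data (a_i, b_i) from the slide's three-observation example:
a = np.array([2.0, 0.6, 0.4])
b = np.array([2.1, 0.7, 0.5])
n = len(a)

def sample_fiducial(n_draws, sigma_grid=np.linspace(0.01, 5.0, 400)):
    """Keep Z* only when Q((a,b),Z*) = {(mu,sigma): a_i < mu + sigma Z*_i < b_i}
    is nonempty (checked approximately on a sigma grid); then return one
    point of Q, chosen in a simple ad-hoc way."""
    draws = []
    while len(draws) < n_draws:
        z = rng.standard_normal(n)
        # For each sigma, mu must lie in (max_i(a_i - sigma z_i), min_i(b_i - sigma z_i)):
        lo = (a[:, None] - np.outer(z, sigma_grid)).max(axis=0)
        hi = (b[:, None] - np.outer(z, sigma_grid)).min(axis=0)
        feasible = np.where(lo < hi)[0]
        if feasible.size > 0:                    # Q is (approximately) nonempty
            j = rng.choice(feasible)             # pick a feasible sigma
            mu = rng.uniform(lo[j], hi[j])       # then a feasible mu
            draws.append((mu, sigma_grid[j]))
    return np.array(draws)

draws = sample_fiducial(50)
print(draws.mean(axis=0))
```

Every accepted (µ, σ) satisfies all interval constraints by construction; the acceptance rate is low because the conditioning event is rare.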
Example 3: Fat data
Say we observe 2.0 < X_1 < 2.1, 0.6 < X_2 < 0.7, 0.4 < X_3 < 0.5.
[Figure: in the (mu, sigma) plane, simulated values Z*_1, Z*_2, Z*_3 each determine a strip of (µ, σ) compatible with the corresponding interval; the three strips overlap in a small polygon.]
Denote the intersection by Q.
Example 3: Fat data
- Set Q((a, b), u) = {(µ, σ) : a_i < µ + σu_i < b_i for all i}.
- The fiducial distribution could be defined as the conditional distribution of Q((a, b), Z*) given {Q((a, b), Z*) ≠ ∅}. (1)
- P(Q ≠ ∅) ≥ P(X ∈ (a, b)) > 0, so there is no Borel paradox in the definition of the fiducial distribution (1).
- Q typically contains more than one element. We can either use the Dempster-Shafer calculus to interpret its meaning, or additionally choose (randomly) an element of Q.
Example 3: Fat data
- Let X = µ + σZ. We observe the intervals (2.0, 2.1), (0.6, 0.7), (0.4, 0.5), (1.4, 1.5), (0.7, 0.8), (0.8, 0.9), (1.2, 1.3), (1.2, 1.3), (1.1, 1.2), (1.5, 1.6), (1.4, 1.5), (0.4, 0.5), (1.2, 1.3), (0.7, 0.8), (0.5, 0.6).
- The exact distribution of Q((a, b), Z*) given {Q((a, b), Z*) ≠ ∅} is complicated. We use a Gibbs sampler to draw a sample from this distribution.
- Each Q is a polygon. When sampling an element of Q we take a random vertex.
- Notice that this approach does not assume that the true value of X is uniform in the observed interval!
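The random-vertex step can be illustrated by enumerating the vertices of Q directly: each vertex is the intersection of two boundary lines µ + σz_i = c_i with c_i ∈ {a_i, b_i} that also satisfies all the remaining constraints. A sketch using the three-interval example and a hypothetical feasible Z* (the real sampler draws Z* by Gibbs sampling):

```python
import numpy as np
from itertools import combinations

def polygon_vertices(a, b, z, eps=1e-9):
    """Vertices of Q = {(mu, sigma): sigma > 0, a_i < mu + sigma z_i < b_i}.
    Each candidate is the intersection of two boundary lines
    mu + sigma z_i = c_i, c_i in {a_i, b_i}; keep those satisfying
    all the other constraints (up to a small tolerance)."""
    verts = []
    for i, j in combinations(range(len(a)), 2):
        if z[i] == z[j]:
            continue                     # parallel boundary lines, no vertex
        for ci in (a[i], b[i]):
            for cj in (a[j], b[j]):
                sigma = (ci - cj) / (z[i] - z[j])
                mu = ci - sigma * z[i]
                fitted = mu + sigma * z
                if sigma > 0 and np.all(fitted >= a - eps) and np.all(fitted <= b + eps):
                    verts.append((mu, sigma))
    return verts

# The three intervals from the earlier slide, with a hypothetical Z*
# constructed so that (mu, sigma) = (1.0, 0.5) lies inside Q:
a = np.array([2.0, 0.6, 0.4])
b = np.array([2.1, 0.7, 0.5])
z = (np.array([2.05, 0.65, 0.45]) - 1.0) / 0.5
verts = polygon_vertices(a, b, z)
vertex = verts[np.random.default_rng(0).integers(len(verts))]  # a random vertex of Q
print(len(verts), vertex)
```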
Example 3: Fat data
A sample from the conditional distribution of Q((a, b), Z*) given {Q((a, b), Z*) ≠ ∅}.
[Figure: scatter plots in the (mu, sigma) plane of the sampled sets and the final sampled values, for 20 observations and for 200 observations; shown in green, a sample from the usual fiducial distribution computed with fully known observations.]
Questions and Issues
- What is the distribution of Q((a, b), Z*) given {Q((a, b), Z*) ≠ ∅}?
- Random environment: the observations X are random, but only partially observed.
- The set we condition on has an extremely small probability.
- The geometry is complicated: Q is an intersection of a large number of random, dependent parallelograms. However, it typically has a surprisingly low number of vertices.
What can we do?
- Let d = (d_1, d_2) on the unit circle and define Q_d((a, b), Z*) as the most extreme point of Q along the direction d. (It is one of the vertices a.s.)
- The distribution of Q_d((a, b), Z*) given {Q((a, b), Z*) ≠ ∅} has density proportional to
  Σ_{i<j} |c_i^{ij} − c_j^{ij}| σ^{−3} φ((c_i^{ij} − µ)/σ) φ((c_j^{ij} − µ)/σ) Π_{k≠i,j} [Φ((b_k − µ)/σ) − Φ((a_k − µ)/σ)],
  where c_i^{ij} is either a_i or b_i, depending on d.
- This allows us to:
  - find the limit as b − a → 0 (more on this later);
  - find the limit as n → ∞ (it is consistent and asymptotically normal).
- We would love to know the limiting behavior of n(Q − Q_d) given {Q ≠ ∅}.
Generalized Fiducial Recipe
- Let X be a random vector with a distribution indexed by a parameter ξ ∈ R^p. Assume that X = G(U, ξ), where U has some known distribution independent of the parameters, e.g., U ~ U(0, 1).
- Define the set-valued function Q(A, u) = {ξ : G(u, ξ) ∈ A}. The function Q(A, u) is an inverse of the function G.
- Assume that for any measurable S ≠ ∅ there is a random variable V(S) with support S.
Generalized Fiducial Recipe
- Based on X = G(U, ξ), define the generalized fiducial distribution as the conditional distribution of
  V(Q(A, U*)) given {Q(A, U*) ≠ ∅}. (1)
  Here X ∈ A is the observed data and U* is an independent copy of U (U* equal in distribution to U, independent of X).
- Let R_ξ(A) be a random variable with distribution (1). If θ = π(ξ) is of interest, use R_θ = π(R_ξ). We will call these generalized fiducial quantities (GFQs).
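A one-parameter sketch of the recipe for a hypothetical Exponential(λ) model, chosen because the inverse Q is a single point, so neither the choice of V(·) nor the conditioning is needed (the observed value is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Structural equation: X = -log(U) / lambda with U ~ Uniform(0, 1).
# Solving G(u, lambda) = x gives Q(x, u) = { -log(u) / x }, a single point,
# so the GFQ is simply R_lambda = -log(U*) / x.
x_obs = 0.8                              # hypothetical observed value
u_star = rng.uniform(size=100_000)       # U*, an independent copy of U
r_lambda = -np.log(u_star) / x_obs       # generalized fiducial sample

# e.g. a 95% fiducial interval for lambda:
lo, hi = np.quantile(r_lambda, [0.025, 0.975])
print(lo, hi)
```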
Remarks
- There are three sources of non-uniqueness in (1):
  - the choice of the structural equation;
  - the choice of V(S), which arises if the inverse Q(A, U*) has more than one element;
  - the conditioning on {Q(A, U*) ≠ ∅}, which arises if P{Q(A, U*) ≠ ∅} = 0; this is caused by the Borel paradox.
- Under suitable conditions the fiducial distribution leads to procedures with asymptotically correct frequentist properties.
Limit as b − a → 0
Assume:
- the model parameter ξ is p-dimensional;
- the structural equation factorizes as X_0 = G_0(ξ, E_0) ∈ R^p and X_c = G_c(ξ, E_c);
- G_0(ξ, ·) and G_c(ξ, ·) are one-to-one and differentiable, with G_0^{−1}(x_0, ξ) denoting the inverse; G_0(·, e_0) is one-to-one and differentiable.
The generalized fiducial distribution is then calculated to be
  r(ξ | x) = f_X(x | ξ) J(x, ξ) / ∫_Ξ f_X(x | ξ') J(x, ξ') dξ',   (2)
where
  J(x, ξ) = (n choose p)^{−1} Σ_i | det( d/dξ G_0^{−1}(x_i, ξ) ) / det( d/dx_i G_0^{−1}(x_i, ξ) ) |,
with the sum running over the p-tuples of indices i = (i_1, ..., i_p).
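As a sanity check of the Jacobian in (2): for the normal structural equation X_i = µ + σZ_i (p = 2), the determinant ratio for a pair (x_i, x_j) collapses algebraically to |x_i − x_j| / σ. A numerical sketch with hypothetical values:

```python
import numpy as np

def jacobian_ratio(xi, xj, mu, sigma):
    """|det(d/d(mu,sigma) G0^{-1}) / det(d/d(xi,xj) G0^{-1})| for
    G0^{-1}((xi, xj), (mu, sigma)) = ((xi - mu)/sigma, (xj - mu)/sigma)."""
    # derivative with respect to the parameter (mu, sigma):
    d_param = np.array([[-1 / sigma, -(xi - mu) / sigma**2],
                        [-1 / sigma, -(xj - mu) / sigma**2]])
    # derivative with respect to the data (xi, xj):
    d_data = np.array([[1 / sigma, 0.0],
                       [0.0, 1 / sigma]])
    return abs(np.linalg.det(d_param) / np.linalg.det(d_data))

# Collapses to |xi - xj| / sigma, here |2.3 - 0.8| / 0.5 = 3.0:
print(jacobian_ratio(2.3, 0.8, 1.0, 0.5))
```

Summing this ratio over pairs gives J(x, ξ) ∝ Σ_{i<j} |x_i − x_j| / σ, which recovers the classical fiducial density for the normal.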
Why does it work asymptotically?
- The reason generalized fiducial inference works asymptotically in the frequentist sense is very similar to the reason Bayesian inference works: a Bernstein-von Mises theorem.
- Roughly speaking, there is a centering T such that, conditionally on the data X = x, the generalized fiducial quantity R_θ ≈ N(T(x), σ_n²). Moreover, unconditionally T(X) ≈ N(θ, σ_n²).
- The lower confidence interval is approximately (−∞, T(x) + z_α σ_n). The coverage of this interval is approximately
  P(θ < T(X) + z_α σ_n) = P(−z_α σ_n < T(X) − θ) = α.
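This coverage argument can be checked by simulation for Example 2 (N(µ, 1) mean, fiducial distribution N(x̄, 1/n), so T(x) = x̄ and σ_n = 1/√n); the true µ, sample size, and replication count below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

mu_true, n, z95 = 1.0, 10, 1.6449        # z95 = 95% standard normal quantile
n_rep = 20_000
covered = 0
for _ in range(n_rep):
    x = rng.normal(mu_true, 1.0, size=n)
    # 95% upper fiducial bound for mu: T(x) + z_alpha * sigma_n
    # with T(x) = x_bar and sigma_n = 1 / sqrt(n)
    if mu_true < x.mean() + z95 / np.sqrt(n):
        covered += 1
print(covered / n_rep)                    # should be close to 0.95
```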
Why does it work asymptotically?
Theorem (Hannig, 2007). Assume that J(x, ·) is continuous in θ, π(θ) = E_{θ_0} J(X, θ) is finite, π(θ_0) > 0, and on some neighborhood of θ_0, E_{θ_0}( sup_{θ ∈ (θ_0 − δ_0, θ_0 + δ_0)} J(X, θ) ) < ∞. Then, under regularity conditions,
  ∫_R | r(θ | x) − e^{−s² / (2/I(θ_0))} / √(2π / I(θ_0)) | dθ → 0 in P_{θ_0}-probability,
where s denotes the appropriately centered and scaled value of θ.
- The rough idea of the proof is to show that J(x, θ) → π(θ) uniformly and then use the Bernstein-von Mises theorem for the Bayesian posterior.
- There is a technical problem caused by the fact that π(θ) is typically improper.
Concluding Remarks
- Generalized fiducial distributions often lead to attractive solutions with asymptotically correct frequentist coverage.
- Many simulation studies show that generalized fiducial solutions have very good small sample properties.
- The current popularity of generalized inference in some applied circles suggests that if computers had been available 70 years ago, fiducial inference might not have been rejected.
Quotes
- Zabell (1992): "Fiducial inference stands as R. A. Fisher's one great failure."
- Efron (1998): "Maybe Fisher's biggest blunder will become a big hit in the 21st century!"
More informationBayesian Inference. STA 121: Regression Analysis Artin Armagan
Bayesian Inference STA 121: Regression Analysis Artin Armagan Bayes Rule...s! Reverend Thomas Bayes Posterior Prior p(θ y) = p(y θ)p(θ)/p(y) Likelihood - Sampling Distribution Normalizing Constant: p(y
More informationPart III. A Decision-Theoretic Approach and Bayesian testing
Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to
More informationGeneralized Fiducial Inference: A Review and New Results
Journal of the American Statistical Association ISSN: 0162-1459 (Print) 1537-274X (Online) Journal homepage: http://www.tandfonline.com/loi/uasa20 Generalized Fiducial Inference: A Review and New Results
More informationThe likelihood principle (quotes from Berger and Wolpert, 1988)
The likelihood principle (quotes from Berger and Wolpert, 1988) Among all prescriptions for statistical behavior, the Likelihood Principle (LP) stands out as the simplest and yet most farreaching. It essentially
More informationSTOR Lecture 16. Properties of Expectation - I
STOR 435.001 Lecture 16 Properties of Expectation - I Jan Hannig UNC Chapel Hill 1 / 22 Motivation Recall we found joint distributions to be pretty complicated objects. Need various tools from combinatorics
More informationLecture 13 Fundamentals of Bayesian Inference
Lecture 13 Fundamentals of Bayesian Inference Dennis Sun Stats 253 August 11, 2014 Outline of Lecture 1 Bayesian Models 2 Modeling Correlations Using Bayes 3 The Universal Algorithm 4 BUGS 5 Wrapping Up
More informationOverall Objective Priors
Overall Objective Priors Jim Berger, Jose Bernardo and Dongchu Sun Duke University, University of Valencia and University of Missouri Recent advances in statistical inference: theory and case studies University
More informationIntroduction to Bayesian Statistics 1
Introduction to Bayesian Statistics 1 STA 442/2101 Fall 2018 1 This slide show is an open-source document. See last slide for copyright information. 1 / 42 Thomas Bayes (1701-1761) Image from the Wikipedia
More informationSTAT 730 Chapter 4: Estimation
STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum
More informationStatistical Inference
Statistical Inference Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Spring, 2006 1. DeGroot 1973 In (DeGroot 1973), Morrie DeGroot considers testing the
More informationPhysics 403. Segev BenZvi. Credible Intervals, Confidence Intervals, and Limits. Department of Physics and Astronomy University of Rochester
Physics 403 Credible Intervals, Confidence Intervals, and Limits Segev BenZvi Department of Physics and Astronomy University of Rochester Table of Contents 1 Summarizing Parameters with a Range Bayesian
More informationStat 535 C - Statistical Computing & Monte Carlo Methods. Arnaud Doucet.
Stat 535 C - Statistical Computing & Monte Carlo Methods Arnaud Doucet Email: arnaud@cs.ubc.ca 1 1.1 Outline Introduction to Markov chain Monte Carlo The Gibbs Sampler Examples Overview of the Lecture
More informationWeak convergence of Markov chain Monte Carlo II
Weak convergence of Markov chain Monte Carlo II KAMATANI, Kengo Mar 2011 at Le Mans Background Markov chain Monte Carlo (MCMC) method is widely used in Statistical Science. It is easy to use, but difficult
More informationClassical and Bayesian inference
Classical and Bayesian inference AMS 132 Claudia Wehrhahn (UCSC) Classical and Bayesian inference January 8 1 / 11 The Prior Distribution Definition Suppose that one has a statistical model with parameter
More informationBayesian Sparse Linear Regression with Unknown Symmetric Error
Bayesian Sparse Linear Regression with Unknown Symmetric Error Minwoo Chae 1 Joint work with Lizhen Lin 2 David B. Dunson 3 1 Department of Mathematics, The University of Texas at Austin 2 Department of
More informationBayesian Inference: Posterior Intervals
Bayesian Inference: Posterior Intervals Simple values like the posterior mean E[θ X] and posterior variance var[θ X] can be useful in learning about θ. Quantiles of π(θ X) (especially the posterior median)
More informationStat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2
Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, 2010 Jeffreys priors Lecturer: Michael I. Jordan Scribe: Timothy Hunter 1 Priors for the multivariate Gaussian Consider a multivariate
More informationSTAT 425: Introduction to Bayesian Analysis
STAT 425: Introduction to Bayesian Analysis Marina Vannucci Rice University, USA Fall 2017 Marina Vannucci (Rice University, USA) Bayesian Analysis (Part 1) Fall 2017 1 / 10 Lecture 7: Prior Types Subjective
More informationFiducial Generalized Confidence Intervals
Jan HANNIG, HariIYER, and Paul PATTERSON Fiducial Generalized Confidence Intervals Generalized pivotal quantities GPQs and generalized confidence intervals GCIs have proven to be useful tools for making
More informationInferential models: A framework for prior-free posterior probabilistic inference
Inferential models: A framework for prior-free posterior probabilistic inference Ryan Martin Department of Mathematics, Statistics, and Computer Science University of Illinois at Chicago rgmartin@uic.edu
More informationNotes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed
18.466 Notes, March 4, 2013, R. Dudley Maximum likelihood estimation: actual or supposed 1. MLEs in exponential families Let f(x,θ) for x X and θ Θ be a likelihood function, that is, for present purposes,
More informationDefinition 3.1 A statistical hypothesis is a statement about the unknown values of the parameters of the population distribution.
Hypothesis Testing Definition 3.1 A statistical hypothesis is a statement about the unknown values of the parameters of the population distribution. Suppose the family of population distributions is indexed
More informationBayesian Econometrics
Bayesian Econometrics Christopher A. Sims Princeton University sims@princeton.edu September 20, 2016 Outline I. The difference between Bayesian and non-bayesian inference. II. Confidence sets and confidence
More informationSAMPLING ALGORITHMS. In general. Inference in Bayesian models
SAMPLING ALGORITHMS SAMPLING ALGORITHMS In general A sampling algorithm is an algorithm that outputs samples x 1, x 2,... from a given distribution P or density p. Sampling algorithms can for example be
More informationStat 451 Lecture Notes Numerical Integration
Stat 451 Lecture Notes 03 12 Numerical Integration Ryan Martin UIC www.math.uic.edu/~rgmartin 1 Based on Chapter 5 in Givens & Hoeting, and Chapters 4 & 18 of Lange 2 Updated: February 11, 2016 1 / 29
More informationDefault priors and model parametrization
1 / 16 Default priors and model parametrization Nancy Reid O-Bayes09, June 6, 2009 Don Fraser, Elisabeta Marras, Grace Yun-Yi 2 / 16 Well-calibrated priors model f (y; θ), F(y; θ); log-likelihood l(θ)
More informationThe binomial model. Assume a uniform prior distribution on p(θ). Write the pdf for this distribution.
The binomial model Example. After suspicious performance in the weekly soccer match, 37 mathematical sciences students, staff, and faculty were tested for the use of performance enhancing analytics. Let
More informationStatistical techniques for data analysis in Cosmology
Statistical techniques for data analysis in Cosmology arxiv:0712.3028; arxiv:0911.3105 Numerical recipes (the bible ) Licia Verde ICREA & ICC UB-IEEC http://icc.ub.edu/~liciaverde outline Lecture 1: Introduction
More informationObjective Bayesian Hypothesis Testing
Objective Bayesian Hypothesis Testing José M. Bernardo Universitat de València, Spain jose.m.bernardo@uv.es Statistical Science and Philosophy of Science London School of Economics (UK), June 21st, 2010
More informationMathematical Statistics
Mathematical Statistics MAS 713 Chapter 8 Previous lecture: 1 Bayesian Inference 2 Decision theory 3 Bayesian Vs. Frequentist 4 Loss functions 5 Conjugate priors Any questions? Mathematical Statistics
More informationPoisson CI s. Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA
Poisson CI s Robert L. Wolpert Department of Statistical Science Duke University, Durham, NC, USA 1 Interval Estimates Point estimates of unknown parameters θ governing the distribution of an observed
More informationK-Means and Gaussian Mixture Models
K-Means and Gaussian Mixture Models David Rosenberg New York University October 29, 2016 David Rosenberg (New York University) DS-GA 1003 October 29, 2016 1 / 42 K-Means Clustering K-Means Clustering David
More informationLecture 1: Introduction
Principles of Statistics Part II - Michaelmas 208 Lecturer: Quentin Berthet Lecture : Introduction This course is concerned with presenting some of the mathematical principles of statistical theory. One
More informationE. Santovetti lesson 4 Maximum likelihood Interval estimation
E. Santovetti lesson 4 Maximum likelihood Interval estimation 1 Extended Maximum Likelihood Sometimes the number of total events measurements of the experiment n is not fixed, but, for example, is a Poisson
More informationMonte Carlo-based statistical methods (MASM11/FMS091)
Monte Carlo-based statistical methods (MASM11/FMS091) Jimmy Olsson Centre for Mathematical Sciences Lund University, Sweden Lecture 12 MCMC for Bayesian computation II March 1, 2013 J. Olsson Monte Carlo-based
More informationGENERAL THEORY OF INFERENTIAL MODELS I. CONDITIONAL INFERENCE
GENERAL THEORY OF INFERENTIAL MODELS I. CONDITIONAL INFERENCE By Ryan Martin, Jing-Shiang Hwang, and Chuanhai Liu Indiana University-Purdue University Indianapolis, Academia Sinica, and Purdue University
More information17 : Markov Chain Monte Carlo
10-708: Probabilistic Graphical Models, Spring 2015 17 : Markov Chain Monte Carlo Lecturer: Eric P. Xing Scribes: Heran Lin, Bin Deng, Yun Huang 1 Review of Monte Carlo Methods 1.1 Overview Monte Carlo
More informationValid Prior-Free Probabilistic Inference and its Applications in Medical Statistics
October 28, 2011 Valid Prior-Free Probabilistic Inference and its Applications in Medical Statistics Duncan Ermini Leaf, Hyokun Yun, and Chuanhai Liu Abstract: Valid, prior-free, and situation-specific
More informationA REVERSE TO THE JEFFREYS LINDLEY PARADOX
PROBABILITY AND MATHEMATICAL STATISTICS Vol. 38, Fasc. 1 (2018), pp. 243 247 doi:10.19195/0208-4147.38.1.13 A REVERSE TO THE JEFFREYS LINDLEY PARADOX BY WIEBE R. P E S T M A N (LEUVEN), FRANCIS T U E R
More informationTheory of Maximum Likelihood Estimation. Konstantin Kashin
Gov 2001 Section 5: Theory of Maximum Likelihood Estimation Konstantin Kashin February 28, 2013 Outline Introduction Likelihood Examples of MLE Variance of MLE Asymptotic Properties What is Statistical
More informationBayesian Regularization
Bayesian Regularization Aad van der Vaart Vrije Universiteit Amsterdam International Congress of Mathematicians Hyderabad, August 2010 Contents Introduction Abstract result Gaussian process priors Co-authors
More informationGaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008
Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:
More informationFrequentist Statistics and Hypothesis Testing Spring
Frequentist Statistics and Hypothesis Testing 18.05 Spring 2018 http://xkcd.com/539/ Agenda Introduction to the frequentist way of life. What is a statistic? NHST ingredients; rejection regions Simple
More informationStatistics for Particle Physics. Kyle Cranmer. New York University. Kyle Cranmer (NYU) CERN Academic Training, Feb 2-5, 2009
Statistics for Particle Physics Kyle Cranmer New York University 91 Remaining Lectures Lecture 3:! Compound hypotheses, nuisance parameters, & similar tests! The Neyman-Construction (illustrated)! Inverted
More informationThe Surprising Conditional Adventures of the Bootstrap
The Surprising Conditional Adventures of the Bootstrap G. Alastair Young Department of Mathematics Imperial College London Inaugural Lecture, 13 March 2006 Acknowledgements Early influences: Eric Renshaw,
More informationParameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn
Parameter estimation and forecasting Cristiano Porciani AIfA, Uni-Bonn Questions? C. Porciani Estimation & forecasting 2 Temperature fluctuations Variance at multipole l (angle ~180o/l) C. Porciani Estimation
More informationStatistical Data Analysis Stat 3: p-values, parameter estimation
Statistical Data Analysis Stat 3: p-values, parameter estimation London Postgraduate Lectures on Particle Physics; University of London MSci course PH4515 Glen Cowan Physics Department Royal Holloway,
More informationA noninformative Bayesian approach to domain estimation
A noninformative Bayesian approach to domain estimation Glen Meeden School of Statistics University of Minnesota Minneapolis, MN 55455 glen@stat.umn.edu August 2002 Revised July 2003 To appear in Journal
More informationStat 535 C - Statistical Computing & Monte Carlo Methods. Arnaud Doucet.
Stat 535 C - Statistical Computing & Monte Carlo Methods Arnaud Doucet Email: arnaud@cs.ubc.ca 1 CS students: don t forget to re-register in CS-535D. Even if you just audit this course, please do register.
More informationAdvanced Statistical Methods. Lecture 6
Advanced Statistical Methods Lecture 6 Convergence distribution of M.-H. MCMC We denote the PDF estimated by the MCMC as. It has the property Convergence distribution After some time, the distribution
More informationIntroduction to Bayesian Methods
Introduction to Bayesian Methods Jessi Cisewski Department of Statistics Yale University Sagan Summer Workshop 2016 Our goal: introduction to Bayesian methods Likelihoods Priors: conjugate priors, non-informative
More informationSpring 2012 Math 541B Exam 1
Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote
More informationF79SM STATISTICAL METHODS
F79SM STATISTICAL METHODS SUMMARY NOTES 9 Hypothesis testing 9.1 Introduction As before we have a random sample x of size n of a population r.v. X with pdf/pf f(x;θ). The distribution we assign to X is
More informationStatistics of Small Signals
Statistics of Small Signals Gary Feldman Harvard University NEPPSR August 17, 2005 Statistics of Small Signals In 1998, Bob Cousins and I were working on the NOMAD neutrino oscillation experiment and we
More informationChapter 4 HOMEWORK ASSIGNMENTS. 4.1 Homework #1
Chapter 4 HOMEWORK ASSIGNMENTS These homeworks may be modified as the semester progresses. It is your responsibility to keep up to date with the correctly assigned homeworks. There may be some errors in
More informationBayesian vs frequentist techniques for the analysis of binary outcome data
1 Bayesian vs frequentist techniques for the analysis of binary outcome data By M. Stapleton Abstract We compare Bayesian and frequentist techniques for analysing binary outcome data. Such data are commonly
More informationBayesian nonparametrics
Bayesian nonparametrics 1 Some preliminaries 1.1 de Finetti s theorem We will start our discussion with this foundational theorem. We will assume throughout all variables are defined on the probability
More informationBFF Four: Are we Converging?
BFF Four: Are we Converging? Nancy Reid May 2, 2017 Classical Approaches: A Look Way Back Nature of Probability BFF one to three: a look back Comparisons Are we getting there? BFF Four Harvard, May 2017
More informationBayesian and frequentist inference
Bayesian and frequentist inference Nancy Reid March 26, 2007 Don Fraser, Ana-Maria Staicu Overview Methods of inference Asymptotic theory Approximate posteriors matching priors Examples Logistic regression
More informationBayesian Aggregation for Extraordinarily Large Dataset
Bayesian Aggregation for Extraordinarily Large Dataset Guang Cheng 1 Department of Statistics Purdue University www.science.purdue.edu/bigdata Department Seminar Statistics@LSE May 19, 2017 1 A Joint Work
More informationLecture 8: Information Theory and Statistics
Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang
More informationObjective Bayesian Statistical Inference
Objective Bayesian Statistical Inference James O. Berger Duke University and the Statistical and Applied Mathematical Sciences Institute London, UK July 6-8, 2005 1 Preliminaries Outline History of objective
More informationGENERALIZED FIDUCIAL INFERENCE FOR GRADED RESPONSE MODELS. Yang Liu
GENERALIZED FIDUCIAL INFERENCE FOR GRADED RESPONSE MODELS Yang Liu A dissertation submitted to the faculty of the University of North Carolina at Chapel Hill in partial fulfillment of the requirements
More informationChapter 8: Sampling distributions of estimators Sections
Chapter 8: Sampling distributions of estimators Sections 8.1 Sampling distribution of a statistic 8.2 The Chi-square distributions 8.3 Joint Distribution of the sample mean and sample variance Skip: p.
More informationElements of statistics (MATH0487-1)
Elements of statistics (MATH0487-1) Prof. Dr. Dr. K. Van Steen University of Liège, Belgium November 12, 2012 Introduction to Statistics Basic Probability Revisited Sampling Exploratory Data Analysis -
More informationModule 22: Bayesian Methods Lecture 9 A: Default prior selection
Module 22: Bayesian Methods Lecture 9 A: Default prior selection Peter Hoff Departments of Statistics and Biostatistics University of Washington Outline Jeffreys prior Unit information priors Empirical
More informationBayesian inference. Fredrik Ronquist and Peter Beerli. October 3, 2007
Bayesian inference Fredrik Ronquist and Peter Beerli October 3, 2007 1 Introduction The last few decades has seen a growing interest in Bayesian inference, an alternative approach to statistical inference.
More informationObjective Bayesian and fiducial inference: some results and comparisons. Piero Veronese and Eugenio Melilli Bocconi University, Milano, Italy
Objective Bayesian and fiducial inference: some results and comparisons Piero Veronese and Eugenio Melilli Bocconi University, Milano, Italy Abstract Objective Bayesian analysis and fiducial inference
More informationPart 4: Multi-parameter and normal models
Part 4: Multi-parameter and normal models 1 The normal model Perhaps the most useful (or utilized) probability model for data analysis is the normal distribution There are several reasons for this, e.g.,
More informationA Very Brief Summary of Statistical Inference, and Examples
A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random
More informationStatistics GIDP Ph.D. Qualifying Exam Theory Jan 11, 2016, 9:00am-1:00pm
Statistics GIDP Ph.D. Qualifying Exam Theory Jan, 06, 9:00am-:00pm Instructions: Provide answers on the supplied pads of paper; write on only one side of each sheet. Complete exactly 5 of the 6 problems.
More informationMarginal inferential models: prior-free probabilistic inference on interest parameters
Marginal inferential models: prior-free probabilistic inference on interest parameters arxiv:1306.3092v4 [math.st] 24 Oct 2014 Ryan Martin Department of Mathematics, Statistics, and Computer Science University
More information' Liberty and Umou Ono and Inseparablo "
3 5? #< q 8 2 / / ) 9 ) 2 ) > < _ / ] > ) 2 ) ) 5 > x > [ < > < ) > _ ] ]? <
More informationData Analysis and Uncertainty Part 1: Random Variables
Data Analysis and Uncertainty Part 1: Random Variables Instructor: Sargur N. University at Buffalo The State University of New York srihari@cedar.buffalo.edu 1 Topics 1. Why uncertainty exists? 2. Dealing
More information