CONVERGENCE ANALYSIS OF THE GIBBS SAMPLER FOR BAYESIAN GENERAL LINEAR MIXED MODELS WITH IMPROPER PRIORS. Vanderbilt University & University of Florida
Submitted to the Annals of Statistics

CONVERGENCE ANALYSIS OF THE GIBBS SAMPLER FOR BAYESIAN GENERAL LINEAR MIXED MODELS WITH IMPROPER PRIORS

By Jorge Carlos Román and James P. Hobert

Vanderbilt University and University of Florida

Bayesian analysis of data from the general linear mixed model is challenging because any nontrivial prior leads to an intractable posterior density. However, if a conditionally conjugate prior density is adopted, then there is a simple Gibbs sampler that can be employed to explore the posterior density. A popular default among the conditionally conjugate priors is an improper prior that takes a product form with a flat prior on the regression parameter, and so-called power priors on each of the variance components. In this paper, a convergence rate analysis of the corresponding Gibbs sampler is undertaken. The main result is a simple, easily-checked sufficient condition for geometric ergodicity of the Gibbs Markov chain. This result is close to the best possible result in the sense that the sufficient condition is only slightly stronger than what is required to ensure posterior propriety. The theory developed in this paper is extremely important from a practical standpoint because it guarantees the existence of central limit theorems that allow for the computation of valid asymptotic standard errors for the estimates computed using the Gibbs sampler.

1. Introduction. The general linear mixed model (GLMM) takes the form

(1) $Y = X\beta + Zu + e$,

where $Y$ is an $N \times 1$ data vector, $X$ and $Z$ are known matrices with dimensions $N \times p$ and $N \times q$, respectively, $\beta$ is an unknown $p \times 1$ vector of regression coefficients, $u$ is a random vector whose elements represent the various levels of the random factors in the model, and $e \sim \mathrm{N}_N(0, \sigma_e^2 I)$. The random vectors $e$ and $u$ are assumed to be independent. Suppose there are $r$ random factors in the model. Then $u$ and $Z$ are partitioned accordingly as

The authors thank three anonymous reviewers for helpful comments and suggestions.
The second author's work was supported by NSF Grants DMS and DMS. AMS 2000 subject classifications: Primary 60J27; secondary 62F15. Keywords and phrases: Convergence rate, geometric drift condition, geometric ergodicity, Markov chain, Monte Carlo, posterior propriety.
$u = (u_1^T, u_2^T, \ldots, u_r^T)^T$ and $Z = (Z_1\ Z_2\ \cdots\ Z_r)$, where $u_i$ is $q_i \times 1$, $Z_i$ is $N \times q_i$, and $q_1 + \cdots + q_r = q$. Then $Zu = \sum_{i=1}^r Z_i u_i$, and it is assumed that $u \sim \mathrm{N}_q(0, D)$, where $D = \oplus_{i=1}^r \sigma_{u_i}^2 I_{q_i}$. Let $\sigma^2$ denote the vector of variance components, i.e., $\sigma^2 = (\sigma_e^2, \sigma_{u_1}^2, \ldots, \sigma_{u_r}^2)^T$. For background on this model, which is sometimes called the variance components model, see Searle, Casella and McCulloch (1992).

A Bayesian version of the GLMM can be assembled by specifying a prior distribution for the unknown parameters, $\beta$ and $\sigma^2$. A popular choice is the proper conditionally conjugate prior that takes $\beta$ to be multivariate normal, and takes each of the variance components to be inverted gamma. One obvious reason for using such a prior is that the resulting posterior has conditional densities with standard forms, and this facilitates the use of the Gibbs sampler. In situations where there is little prior information, the hyperparameters of this proper prior are often set to extreme values, as this is thought to yield a non-informative prior. Unfortunately, these extreme proper priors approximate improper priors that correspond to improper posteriors, and this results in various forms of instability. This problem has led several authors, including Daniels (1999) and Gelman (2006), to discourage the use of such extreme proper priors, and to recommend alternative default priors that are improper, but lead to proper posteriors.

Consider, for example, the one-way random effects model given by $Y_{ij} = \beta + \alpha_i + e_{ij}$, where $i = 1, \ldots, c$, $j = 1, \ldots, n_i$, the $\alpha_i$s are iid $\mathrm{N}(0, \sigma_\alpha^2)$, and the $e_{ij}$s, which are independent of the $\alpha_i$s, are iid $\mathrm{N}(0, \sigma_e^2)$. This is an important special case of model (1). See Section 5 for a detailed explanation of how the GLMM reduces to the one-way model. The standard diffuse prior for this model, which is among those recommended by Gelman (2006), has density $1/(\sigma_e^2 \sigma_\alpha)$.
This prior, like many of the improper priors for the GLMM that have been suggested and studied in the literature, is called a power prior because it is a product of terms, each a variance component raised to a possibly negative power. Of course, like the proper conjugate priors mentioned above, power priors also lead to posteriors whose conditional densities have standard forms.
In this paper, we consider the following parametric family of priors for $(\beta, \sigma^2)$:

(3) $p(\beta, \sigma^2; a, b) = (\sigma_e^2)^{-(a_e+1)} e^{-b_e/\sigma_e^2} \Big[ \prod_{i=1}^r (\sigma_{u_i}^2)^{-(a_i+1)} e^{-b_i/\sigma_{u_i}^2} \Big] I_{\mathbb{R}_+^{r+1}}(\sigma^2),$

where $a = (a_e, a_1, \ldots, a_r)$ and $b = (b_e, b_1, \ldots, b_r)$ are fixed hyperparameters, and $\mathbb{R}_+ := (0, \infty)$. By taking $b$ to be the vector of 0s, we can recover the power priors described above. Note that $\beta$ does not appear on the right-hand side of (3); that is, we are using a so-called flat prior for $\beta$. Consequently, even if all the elements of $a$ and $b$ are strictly positive, so that every variance component gets a proper prior, the overall prior remains improper. There have been several studies concerning posterior propriety in this context, but it is still not known exactly which values of $a$ and $b$ yield proper posteriors. The best known result is due to Sun, Tsutakawa and He (2001), and we state it below so that it can be used in a comparison later in this section. Define $\theta = (\beta^T, u^T)^T$ and $W = (X\ Z)$, so that $W\theta = X\beta + Zu$. Let $y$ denote the observed value of $Y$, and let $\phi_d(x; \mu, \Sigma)$ denote the $\mathrm{N}_d(\mu, \Sigma)$ density evaluated at the vector $x$. By definition, the posterior density is proper if

$m(y) := \int_{\mathbb{R}_+^{r+1}} \int_{\mathbb{R}^{p+q}} \pi^*(\theta, \sigma^2 \mid y)\, d\theta\, d\sigma^2 < \infty,$

where

(4) $\pi^*(\theta, \sigma^2 \mid y) = \phi_N(y; W\theta, \sigma_e^2 I)\, \phi_q(u; 0, D)\, p(\beta, \sigma^2; a, b).$

A routine calculation shows that the posterior is improper if $\mathrm{rank}(X) < p$. The following result provides sufficient and nearly necessary conditions for propriety. Throughout the paper, the symbol $P$ subscripted with a matrix will denote the projection onto the column space of that matrix.

Theorem 1 [Sun, Tsutakawa & He (2001)]. Assume that $\mathrm{rank}(X) = p$ and let $t = \mathrm{rank}\big(Z^T(I - P_X)Z\big)$ and $\mathrm{SSE} = \|(I - P_W)y\|^2$. If the following four conditions hold, then $m(y) < \infty$.

(A) For each $i \in \{1, 2, \ldots, r\}$, one of the following holds: (A1) $a_i < 0$, $b_i = 0$; (A2) $b_i > 0$
(B) For each $i \in \{1, 2, \ldots, r\}$, $q_i + 2a_i > q - t$
(C) $N + 2a_e > p - 2\sum_{i=1}^r a_i I_{(-\infty,0)}(a_i)$
(D) $b_e + \mathrm{SSE} > 0$
If $m(y) < \infty$, then the posterior density is well defined (i.e., proper) and is given by $\pi(\theta, \sigma^2 \mid y) = \pi^*(\theta, \sigma^2 \mid y)/m(y)$, but it is intractable in the sense that posterior expectations cannot be computed in closed form, nor even by classical Monte Carlo methods. However, there is a simple two-step Gibbs sampler that can be used to approximate the intractable posterior expectations. This Gibbs sampler simulates a Markov chain, $\{(\theta_n, \sigma_n^2)\}_{n=0}^\infty$, that lives on $\mathsf{X} = \mathbb{R}^{p+q} \times \mathbb{R}_+^{r+1}$, and has invariant density $\pi(\theta, \sigma^2 \mid y)$. If the current state of the chain is $(\theta_n, \sigma_n^2)$, then the next state, $(\theta_{n+1}, \sigma_{n+1}^2)$, is simulated using the usual two steps. Indeed, we draw $\theta_{n+1}$ from $\pi(\theta \mid \sigma_n^2, y)$, which is a $(p+q)$-dimensional multivariate normal density, and then we draw $\sigma_{n+1}^2$ from $\pi(\sigma^2 \mid \theta_{n+1}, y)$, which is a product of $r+1$ univariate inverted gamma densities. The exact forms of these conditional densities are given in Section 2. Because the Gibbs Markov chain is Harris ergodic (see Section 2), we can use it to construct consistent estimates of intractable posterior expectations. For $k > 0$, let $L^k(\pi)$ denote the set of functions $g: \mathbb{R}^{p+q} \times \mathbb{R}_+^{r+1} \to \mathbb{R}$ such that

$E_\pi |g|^k := \int_{\mathbb{R}_+^{r+1}} \int_{\mathbb{R}^{p+q}} |g(\theta, \sigma^2)|^k\, \pi(\theta, \sigma^2 \mid y)\, d\theta\, d\sigma^2 < \infty.$

If $g \in L^1(\pi)$, then the Ergodic Theorem implies that the average $\bar{g}_m := m^{-1} \sum_{i=0}^{m-1} g(\theta_i, \sigma_i^2)$ is a strongly consistent estimator of $E_\pi g$, no matter how the chain is started. Of course, in practice, an estimator is only useful if it is possible to compute an associated probabilistic bound on the difference between the estimate and the truth. Typically, this bound is based on a standard error. All available methods of computing a valid asymptotic standard error for $\bar{g}_m$ are based on the existence of a central limit theorem (CLT) for $\bar{g}_m$ (see, e.g., Jones et al., 2006; Bednorz and Łatuszyński, 2007; Flegal, Haran and Jones, 2008; Flegal and Jones, 2010).
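When such a CLT does hold, the asymptotic variance can be estimated from the simulation output itself; the batch means method is one standard way to do this. The following sketch is ours, not the paper's: the function name and the AR(1) toy chain are illustrative stand-ins for a real Gibbs sampler output, and the sketch assumes a Markov chain CLT holds for the ergodic average.

```python
import numpy as np

def batch_means_se(x):
    """Batch means estimate of the Monte Carlo standard error of np.mean(x).

    Splits the chain into roughly sqrt(m) batches; valid asymptotically
    when a Markov chain CLT holds for the ergodic average.
    """
    x = np.asarray(x)
    m = len(x)
    n_batches = int(np.floor(np.sqrt(m)))
    batch_len = m // n_batches
    x = x[: n_batches * batch_len]
    means = x.reshape(n_batches, batch_len).mean(axis=1)
    # The sample variance of the batch means estimates sigma^2 / batch_len.
    var_hat = batch_len * means.var(ddof=1)
    return np.sqrt(var_hat / m)

# Toy illustration on an AR(1) chain, a geometrically ergodic Markov chain
# with stationary mean zero (hypothetical data, not from the paper).
rng = np.random.default_rng(42)
m = 20000
x = np.empty(m)
x[0] = 0.0
for t in range(1, m):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()

est, se = x.mean(), batch_means_se(x)
```

An interval such as est ± 2·se then gives the probabilistic bound on the difference between the estimate and the truth that the paragraph above refers to.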
Unfortunately, even if $g \in L^k(\pi)$ for all $k > 0$, Harris ergodicity is not enough to guarantee the existence of a CLT for $\bar{g}_m$ (see, e.g., Roberts and Rosenthal, 1998, 2004). The standard method of establishing the existence of CLTs is to prove that the underlying Markov chain converges at a geometric rate. Let $\mathcal{B}(\mathsf{X})$ denote the Borel sets in $\mathsf{X}$, and let $P^n: \mathsf{X} \times \mathcal{B}(\mathsf{X}) \to [0,1]$ denote the $n$-step Markov transition function of the Gibbs Markov chain. That is, $P^n((\theta, \sigma^2), A)$ is the probability that $(\theta_n, \sigma_n^2) \in A$, given that the chain is started at $(\theta_0, \sigma_0^2) = (\theta, \sigma^2)$. Also, let $\Pi$ denote the posterior distribution. The chain is called geometrically ergodic if there exist a function $M: \mathsf{X} \to [0, \infty)$ and a constant $\varrho \in [0, 1)$ such that, for all $(\theta, \sigma^2) \in \mathsf{X}$ and all $n = 0, 1, \ldots$, we have

$\big\| P^n\big((\theta, \sigma^2), \cdot\big) - \Pi \big\|_{TV} \le M(\theta, \sigma^2)\, \varrho^n,$
where $\|\cdot\|_{TV}$ denotes the total variation norm. The relationship between geometric convergence and CLTs is simple: If the chain is geometrically ergodic and $E_\pi |g|^{2+\delta} < \infty$ for some $\delta > 0$, then there is a CLT for $\bar{g}_m$. Our main result (Theorem 2 in Section 3) provides conditions under which the Gibbs Markov chain is geometrically ergodic. The conditions of Theorem 2 are not easy to interpret, and checking them may require some non-trivial numerical work. On the other hand, the following corollary to Theorem 2 is a slightly weaker result whose conditions are very easy to check and understand.

Corollary 1. Assume that $\mathrm{rank}(X) = p$. If the following four conditions hold, then the Gibbs Markov chain is geometrically ergodic.

(A′) For each $i \in \{1, 2, \ldots, r\}$, one of the following holds: (A1′) $a_i < 0$, $b_i = 0$; (A2′) $b_i > 0$
(B′) For each $i \in \{1, 2, \ldots, r\}$, $q_i + 2a_i > q - t + 2$
(C′) $N + 2a_e > p + t + 2$
(D′) $b_e + \mathrm{SSE} > 0$

As we explain in Section 2, the best result we could possibly hope to obtain is that the Gibbs Markov chain is geometrically ergodic whenever the posterior is proper. With this in mind, note that the conditions of Corollary 1 are very close to the conditions for propriety given in Theorem 1. In fact, the former imply the latter. To see this, assume that A′, B′, C′ and D′ all hold. Then, obviously, B holds, and all that remains is to show that C holds. This would follow immediately if we could establish that

(5) $t \ge -2\sum_{i=1}^r a_i I_{(-\infty,0)}(a_i).$

We consider two cases. First, if $\sum_{i=1}^r I_{(-\infty,0)}(a_i) = 0$, then it follows that $\sum_{i=1}^r a_i I_{(-\infty,0)}(a_i) = 0$ and (5) holds since $t$ is non-negative. On the other hand, if $\sum_{i=1}^r I_{(-\infty,0)}(a_i) > 0$, then there is at least one negative $a_i$, and B′ implies that

$\sum_{i=1}^r (q_i + 2a_i)\, I_{(-\infty,0)}(a_i) > q - t.$

This inequality combined with the fact that $q = q_1 + \cdots + q_r$ yields

$t > q - \sum_{i=1}^r (q_i + 2a_i)\, I_{(-\infty,0)}(a_i) \ge -2\sum_{i=1}^r a_i I_{(-\infty,0)}(a_i),$
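Since the conditions of Corollary 1 involve only known quantities, they can be checked mechanically. The sketch below is ours, not the paper's: the function names are illustrative, and the inequalities coded here are the forms $q_i + 2a_i > q - t + 2$ and $N + 2a_e > p + t + 2$, which is how we read the statement of Corollary 1 above.

```python
def corollary1_holds(N, p, t, q_list, a_e, a_list, b_e, b_list, sse):
    """Check the four sufficient conditions of Corollary 1.

    q_list, a_list, b_list hold (q_i, a_i, b_i) for the r random factors;
    t = rank(Z'(I - P_X)Z) and sse = ||(I - P_W)y||^2.
    """
    q = sum(q_list)
    # A': each factor has either a_i < 0 with b_i = 0, or b_i > 0.
    cond_a = all((a < 0 and b == 0) or b > 0 for a, b in zip(a_list, b_list))
    # B': q_i + 2 a_i > q - t + 2 for every i.
    cond_b = all(qi + 2 * ai > q - t + 2 for qi, ai in zip(q_list, a_list))
    # C': N + 2 a_e > p + t + 2.
    cond_c = N + 2 * a_e > p + t + 2
    # D': b_e + SSE > 0 (here sse is a placeholder positive value).
    cond_d = b_e + sse > 0
    return cond_a and cond_b and cond_c and cond_d

# Two-way example from Section 4 with the standard diffuse prior
# (a_e = 0, a_1 = a_2 = -1/2, b = 0): the corollary applies when
# m, n >= 6 but is not applicable at (m, n) = (5, 6).
def twoway(m, n):
    return corollary1_holds(N=m * n, p=1, t=m + n - 2, q_list=[m, n],
                            a_e=0.0, a_list=[-0.5, -0.5],
                            b_e=0.0, b_list=[0.0, 0.0], sse=1.0)
```

The helper `twoway` reproduces the concrete claim made in Section 4 under these reconstructed condition forms.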
so (5) holds, and this completes the argument. The strong similarity between the conditions of Corollary 1 and those of Theorem 1 might lead the reader to believe that the proofs of our results rely somehow on Theorem 1. This is not the case, however. In fact, we do not even assume posterior propriety before embarking on our convergence rate analysis (see Section 3). The only other existing result on geometric convergence of Gibbs samplers for linear mixed models with improper priors is that of Tan and Hobert (2009), who considered a slightly reparameterized version of the one-way random effects model and priors with $b = (b_e, b_1) = (0, 0)$. We show in Section 5 that our Theorem 2, specialized to the one-way model, improves upon the result of Tan and Hobert (2009) in the sense that our sufficient conditions for geometric convergence are weaker. Moreover, it is known in this case exactly which priors lead to proper posteriors when $\mathrm{SSE} > 0$, and we use this fact to show that our results can be very close to the best possible. For example, if the standard diffuse prior is used, then the posterior is proper if and only if $c \ge 3$. On the other hand, our results imply that the Gibbs Markov chain is geometrically ergodic as long as $c \ge 3$ and the total sample size, $N = n_1 + n_2 + \cdots + n_c$, is at least $c + 2$. The extra condition that $N \ge c + 2$ is extremely weak. Indeed, $\mathrm{SSE} > 0$ implies that $N \ge c + 1$, so, for fixed $c \ge 3$, our condition for geometric ergodicity fails only in the single case where $N = c + 1$. An analogue of Corollary 1 for the GLMM with proper priors can be found in Johnson and Jones (2010). In contrast with our results, one of their sufficient conditions for geometric convergence is that $X^T Z = 0$, which rarely holds in practice. Overall, the proper and improper cases are similar, in the sense that geometric ergodicity is established via geometric drift conditions in both cases.
However, the drift conditions are quite disparate, and the analysis required in the improper case is substantially more demanding. Finally, we note that the linear models considered by Papaspiliopoulos and Roberts (2008) are substantively different from ours because these authors assume that the variance components are known. The remainder of this paper is organized as follows. Section 2 contains a formal definition of the Gibbs Markov chain. The main convergence result is stated and proven in Section 3, and an application involving the two-way random effects model is given in Section 4. In Section 5, we consider the one-way random effects model and compare our conditions for geometric convergence with those of Tan and Hobert (2009). Finally, Section 6 concerns an interesting technical issue related to the use of improper priors.
2. The Gibbs Sampler. In this section, we formally define the Markov chain underlying the Gibbs sampler, and state some of its properties. Recall that $\theta = (\beta^T, u^T)^T$, $\sigma^2 = (\sigma_e^2, \sigma_{u_1}^2, \ldots, \sigma_{u_r}^2)^T$ and $\pi^*(\theta, \sigma^2 \mid y)$ is the (potentially improper) unnormalized posterior density defined at (4). Suppose that

(6) $\int_{\mathbb{R}^{p+q}} \pi^*(\theta, \sigma^2 \mid y)\, d\theta < \infty$

for all $\sigma^2$ outside a set of measure zero in $\mathbb{R}_+^{r+1}$, and that

(7) $\int_{\mathbb{R}_+^{r+1}} \pi^*(\theta, \sigma^2 \mid y)\, d\sigma^2 < \infty$

for all $\theta$ outside a set of measure zero in $\mathbb{R}^{p+q}$. These two integrability conditions are necessary, but not sufficient, for posterior propriety. Keep in mind that it is not known exactly which priors yield proper posteriors. When (6) and (7) hold, we can define conditional densities as follows:

$\pi(\theta \mid \sigma^2, y) = \frac{\pi^*(\theta, \sigma^2 \mid y)}{\int_{\mathbb{R}^{p+q}} \pi^*(\theta, \sigma^2 \mid y)\, d\theta} \quad \text{and} \quad \pi(\sigma^2 \mid \theta, y) = \frac{\pi^*(\theta, \sigma^2 \mid y)}{\int_{\mathbb{R}_+^{r+1}} \pi^*(\theta, \sigma^2 \mid y)\, d\sigma^2}.$

Clearly, when the posterior is proper, these conditionals are the usual ones based on $\pi(\theta, \sigma^2 \mid y)$. When the posterior is improper, they are incompatible conditional densities; i.e., there is no proper joint density that generates them. In either case, we can run the Gibbs sampler as usual by drawing alternately from the two conditionals. However, as we explain below, if the posterior is improper, then the resulting Markov chain cannot be geometrically ergodic. Despite this fact, we do not restrict attention to the cases where the sufficient conditions for propriety in Theorem 1 are satisfied. Indeed, we hope to close the gap that currently exists between the necessary and sufficient conditions for propriety by finding weaker conditions than those in Theorem 1 that imply geometric ergodicity (and hence posterior propriety). We now provide a set of conditions that guarantee that the integrability conditions are satisfied. Define

$\bar{s} = \min\big\{ q_1 + 2a_1,\ q_2 + 2a_2,\ \ldots,\ q_r + 2a_r,\ N + 2a_e \big\}.$

The proof of the following result is straightforward and is left to the reader.

Proposition 1. The following four conditions are sufficient for (6) and (7) to hold.
(S1) $\mathrm{rank}(X) = p$
(S2) $\min\{b_1, b_2, \ldots, b_r\} \ge 0$
(S3) $b_e + \mathrm{SSE} > 0$
(S4) $\bar{s} > 0$

Note that $\mathrm{SSE} = \mathrm{SSE}(X, Z, y) = \|y - W\hat{\theta}\|^2$, where $W = (X\ Z)$ and $\hat{\theta} = (W^T W)^+ W^T y$. Therefore, if condition S3 holds, then for all $\theta \in \mathbb{R}^{p+q}$,

$b_e + \|y - W\theta\|^2 = b_e + \|y - W\hat{\theta}\|^2 + \|W\theta - W\hat{\theta}\|^2 \ge b_e + \mathrm{SSE} > 0.$

Note also that if $N > p + q$, then SSE is strictly positive with probability one under the data generating model. Assume now that S1–S4 hold so that the conditional densities are well defined. Routine manipulation of $\pi^*(\theta, \sigma^2 \mid y)$ shows that $\pi(\theta \mid \sigma^2, y)$ is a multivariate normal density with mean vector

$m = \begin{bmatrix} (X^T X)^{-1} X^T \big[ I - \sigma_e^{-2} Z Q^{-1} Z^T (I - P_X) \big] y \\ \sigma_e^{-2} Q^{-1} Z^T (I - P_X) y \end{bmatrix}$

and covariance matrix

$V = \begin{bmatrix} \sigma_e^2 (X^T X)^{-1} + R Q^{-1} R^T & -R Q^{-1} \\ -Q^{-1} R^T & Q^{-1} \end{bmatrix},$

where $Q = \sigma_e^{-2} Z^T (I - P_X) Z + D^{-1}$ and $R = (X^T X)^{-1} X^T Z$. Things are a bit more complicated for $\pi(\sigma^2 \mid \theta, y)$ due to the possible existence of a bothersome set of measure zero. Define $A = \{ i \in \{1, 2, \ldots, r\} : b_i = 0 \}$. If $A$ is empty, then $\pi(\sigma^2 \mid \theta, y)$ is well defined for every $\theta \in \mathbb{R}^{p+q}$, and it is the following product of $r+1$ inverted gamma densities:

$\pi(\sigma^2 \mid \theta, y) = f_{IG}\Big( \sigma_e^2;\ \frac{N}{2} + a_e,\ b_e + \frac{\|y - W\theta\|^2}{2} \Big) \prod_{i=1}^r f_{IG}\Big( \sigma_{u_i}^2;\ \frac{q_i}{2} + a_i,\ b_i + \frac{\|u_i\|^2}{2} \Big),$

where, for $c, d > 0$,

$f_{IG}(v; c, d) = \frac{d^c}{\Gamma(c)}\, v^{-(c+1)} e^{-d/v}\, I_{(0,\infty)}(v).$

On the other hand, if $A$ is nonempty, then

$\int_{\mathbb{R}_+^{r+1}} \pi^*(\theta, \sigma^2 \mid y)\, d\sigma^2 = \infty$
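For intuition, one full scan of the two-step Gibbs sampler can be coded directly from these conditionals. The sketch below is ours, not the paper's: instead of the partitioned $(m, V)$ form above, it draws $\theta$ from the algebraically equivalent precision form of its full conditional (joint precision $W^T W/\sigma_e^2$ plus the block-diagonal prior precision of $u$), and it assumes conditions S1–S4 hold and $\theta \notin \mathcal{N}$.

```python
import numpy as np

def gibbs_step(theta, X, Z, y, q_list, a_e, a_list, b_e, b_list, rng):
    """One scan of the two-step Gibbs sampler: sigma^2 | theta, then theta | sigma^2.

    Equivalent to the partitioned (m, V) form in the text, but assembled via
    the joint precision of theta = (beta, u). Assumes b_i + ||u_i||^2 > 0 for
    each i (i.e., theta is not in the null set where the conditional is undefined).
    """
    N, p = X.shape
    W = np.hstack([X, Z])
    u = theta[p:]

    # sigma^2 | theta: r + 1 independent inverted gammas (1/Gamma draws).
    resid = y - W @ theta
    sig_e = 1.0 / rng.gamma(N / 2 + a_e, 1.0 / (b_e + resid @ resid / 2))
    sig_u, start = [], 0
    for qi, ai, bi in zip(q_list, a_list, b_list):
        ui = u[start:start + qi]
        sig_u.append(1.0 / rng.gamma(qi / 2 + ai, 1.0 / (bi + ui @ ui / 2)))
        start += qi

    # theta | sigma^2: normal with precision W'W/sig_e + blockdiag(0_p, D^{-1}).
    d_inv = np.concatenate([np.zeros(p)] +
                           [np.full(qi, 1.0 / sg) for qi, sg in zip(q_list, sig_u)])
    prec = W.T @ W / sig_e + np.diag(d_inv)
    cov = np.linalg.inv(prec)
    mean = cov @ (W.T @ y) / sig_e
    theta_new = rng.multivariate_normal(mean, cov)
    return theta_new, sig_e, sig_u
```

Iterating `gibbs_step` simulates the chain $\{(\theta_n, \sigma_n^2)\}_{n=0}^\infty$; the precision form is convenient in code because it avoids assembling $Q$, $R$ and the partitioned blocks explicitly.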
whenever $\theta \in \mathcal{N} := \{ \theta \in \mathbb{R}^{p+q} : \prod_{i \in A} \|u_i\| = 0 \}$, i.e., whenever $u_i = 0$ for at least one $i \in A$. The fact that $\pi(\sigma^2 \mid \theta, y)$ is not defined when $\theta \in \mathcal{N}$ is irrelevant from a simulation standpoint because the probability of observing a $\theta$ in $\mathcal{N}$ is zero. However, in order to perform a theoretical analysis, the Markov transition density (Mtd) of the Gibbs Markov chain must be defined for every $\theta \in \mathbb{R}^{p+q}$. Obviously, the Mtd can be defined arbitrarily on a set of measure zero. Thus, for $\theta \notin \mathcal{N}$, we define $\pi(\sigma^2 \mid \theta, y)$ as in the case where $A$ is empty, while if $\theta \in \mathcal{N}$, we define it to be $f_{IG}(\sigma_e^2; 1, 1) \prod_{i=1}^r f_{IG}(\sigma_{u_i}^2; 1, 1)$. Note that this definition can also be used when $A$ is empty if we simply define $\mathcal{N}$ to be $\emptyset$ in that case. The Mtd of the Gibbs Markov chain, $\{(\theta_n, \sigma_n^2)\}_{n=0}^\infty$, is defined as

$k\big( \tilde\theta, \tilde\sigma^2 \mid \theta, \sigma^2 \big) = \pi(\tilde\sigma^2 \mid \tilde\theta, y)\, \pi(\tilde\theta \mid \sigma^2, y).$

It is easy to see that the chain is $\psi$-irreducible, and that $\pi^*(\theta, \sigma^2 \mid y)$ is an invariant density. It follows that the chain is positive recurrent if and only if the posterior is proper (Meyn and Tweedie, 1993, Chapter 10). Since a geometrically ergodic chain is necessarily positive recurrent, the Gibbs Markov chain cannot be geometrically ergodic when the posterior is improper. The point here is that conditions implying geometric ergodicity also imply posterior propriety. The marginal sequences, $\{\theta_n\}_{n=0}^\infty$ and $\{\sigma_n^2\}_{n=0}^\infty$, are themselves Markov chains (see, e.g., Liu, Wong and Kong, 1994). The $\sigma^2$-chain lives on $\mathbb{R}_+^{r+1}$ and has Mtd given by

$k_1(\tilde\sigma^2 \mid \sigma^2) = \int_{\mathbb{R}^{p+q}} \pi(\tilde\sigma^2 \mid \theta, y)\, \pi(\theta \mid \sigma^2, y)\, d\theta,$

and invariant density $\int_{\mathbb{R}^{p+q}} \pi^*(\theta, \sigma^2 \mid y)\, d\theta$. Similarly, the $\theta$-chain lives on $\mathbb{R}^{p+q}$ and has Mtd

$k_2(\tilde\theta \mid \theta) = \int_{\mathbb{R}_+^{r+1}} \pi(\tilde\theta \mid \sigma^2, y)\, \pi(\sigma^2 \mid \theta, y)\, d\sigma^2,$

and invariant density $\int_{\mathbb{R}_+^{r+1}} \pi^*(\theta, \sigma^2 \mid y)\, d\sigma^2$. Since the two marginal chains are also $\psi$-irreducible, they are positive recurrent if and only if the posterior is proper. Moreover, when the posterior is proper, routine calculations show that all three chains are Harris ergodic, i.e., $\psi$-irreducible, aperiodic and positive Harris recurrent; see Román (2012) for details.
An important fact that we will exploit is that geometric ergodicity is a solidarity property for the three chains $\{(\theta_n, \sigma_n^2)\}_{n=0}^\infty$, $\{\theta_n\}_{n=0}^\infty$ and $\{\sigma_n^2\}_{n=0}^\infty$; that is, either all three are geometric or none of them is (Liu, Wong and Kong, 1994; Roberts and Rosenthal, 2001; Diaconis, Khare and Saloff-Coste, 2008). In the next section, we prove that the Gibbs Markov chain converges at a geometric rate by proving that one of the marginal chains does.
3. The Main Result. In order to state the main result, we need a bit more notation. For $i \in \{1, \ldots, r\}$, define $R_i$ to be the $q_i \times q$ matrix of 0s and 1s such that $R_i u = u_i$. In other words, $R_i$ is the matrix that extracts $u_i$ from $u$. Here is our main result.

Theorem 2. Assume that S1–S4 hold so that the Gibbs sampler is well defined. If the following two conditions hold, then the Gibbs Markov chain is geometrically ergodic.

1. For each $i \in \{1, 2, \ldots, r\}$, one of the following holds: (i) $a_i < 0$, $b_i = 0$; (ii) $b_i > 0$.
2. There exists an $s \in (0, 1] \cap (0, \bar{s}/2)$ such that

(8) $2^{-s} (p + t)^s\, \dfrac{\Gamma\big( \frac{N}{2} + a_e - s \big)}{\Gamma\big( \frac{N}{2} + a_e \big)} < 1$

and

(9) $2^{-s} \displaystyle\sum_{i=1}^r \dfrac{\Gamma\big( \frac{q_i}{2} + a_i - s \big)}{\Gamma\big( \frac{q_i}{2} + a_i \big)} \Big\{ \mathrm{tr}\Big[ R_i \big( I - P_{Z^T(I-P_X)Z} \big) R_i^T \Big] \Big\}^s < 1,$

where $t = \mathrm{rank}\big( Z^T(I - P_X)Z \big)$ and $P_{Z^T(I-P_X)Z}$ is the projection onto the column space of $Z^T(I - P_X)Z$.

Remark 1. It is important to reiterate that, by themselves, S1–S4 do not imply that the posterior density is proper. Of course, if conditions 1 and 2 in Theorem 2 hold as well, then the chain is geometric, so the posterior is necessarily proper.

Remark 2. A numerical search could be employed to check the second condition of Theorem 2. Indeed, one could evaluate the left-hand sides of (8) and (9) at all values of $s$ on a fine grid in the interval $(0, 1] \cap (0, \bar{s}/2)$. The goal, of course, would be to find a single value of $s$ at which both (8) and (9) are satisfied. It can be shown that, if there does exist an $s \in (0, 1] \cap (0, \bar{s}/2)$ such that (8) and (9) hold, then $N + 2a_e > p + t$ and, for each $i = 1, 2, \ldots, r$, $q_i + 2a_i > \mathrm{tr}\big[ R_i (I - P_{Z^T(I-P_X)Z}) R_i^T \big]$. Thus, it would behoove the user to verify these simple conditions before engaging in any numerical work.

Remark 3. When evaluating (9), it may be helpful to write $P_{Z^T(I-P_X)Z}$ as $U^T P_\Lambda U$, where $U$ and $\Lambda$ are the orthogonal and diagonal matrices, respectively, in the spectral decomposition of $Z^T(I - P_X)Z$. That is, $U$ is a
$q$-dimensional orthogonal matrix and $\Lambda$ is a diagonal matrix containing the eigenvalues of $Z^T(I - P_X)Z$. Of course, the projection $P_\Lambda$ is a $q \times q$ binary diagonal matrix whose $i$th diagonal element is 1 if and only if the $i$th diagonal element of $\Lambda$ is positive.

Remark 4. Note that

$\sum_{i=1}^r \mathrm{tr}\Big[ R_i \big( I - P_{Z^T(I-P_X)Z} \big) R_i^T \Big] = \mathrm{tr}\Big[ \big( I - P_{Z^T(I-P_X)Z} \big) \sum_{i=1}^r R_i^T R_i \Big] = \mathrm{tr}\big( I - P_{Z^T(I-P_X)Z} \big) = \mathrm{rank}\big( I - P_{Z^T(I-P_X)Z} \big) = q - t.$

Moreover, when $r > 1$, the matrix $I - P_{Z^T(I-P_X)Z}$ has $q = q_1 + q_2 + \cdots + q_r$ diagonal elements, and the non-negative term $\mathrm{tr}\big[ R_i (I - P_{Z^T(I-P_X)Z}) R_i^T \big]$ is simply the sum of the $q_i$ diagonal elements that correspond to the $i$th random factor.

Remark 5. Recall from the Introduction that Corollary 1 provides an alternative set of sufficient conditions for geometric ergodicity that are harder to satisfy, but easier to check. A proof of Corollary 1 is given at the end of this section.

We will prove Theorem 2 indirectly by proving that the $\sigma^2$-chain is geometrically ergodic when the conditions of Theorem 2 hold. This is accomplished by establishing a geometric drift condition for the $\sigma^2$-chain.

Proposition 2. Assume that S1–S4 hold so that the Gibbs sampler is well defined. Under the two conditions of Theorem 2, there exist a $\rho \in [0, 1)$ and a finite constant $L$ such that, for every $\sigma^2 \in \mathbb{R}_+^{r+1}$,

(10) $E\big[ v(\tilde\sigma^2) \mid \sigma^2 \big] \le \rho\, v(\sigma^2) + L,$

where the drift function is defined as

$v(\sigma^2) = \alpha (\sigma_e^2)^s + \sum_{i=1}^r (\sigma_{u_i}^2)^s + \alpha (\sigma_e^2)^{-c} + \sum_{i=1}^r (\sigma_{u_i}^2)^{-c},$

and $\alpha$ and $c$ are positive constants. Hence, under the two conditions of Theorem 2, the $\sigma^2$-chain is geometrically ergodic.
Remark 6. The formulas for $\rho = \rho(\alpha, s, c)$ and $L = L(\alpha, s, c)$ are provided in the proof, as is a set of acceptable values for the pair $(\alpha, c)$. Recall that the value of $s$ is given to us in the hypothesis of Theorem 2.

Proof. The proof has two parts. In Part I, we establish the validity of the geometric drift condition (10). In Part II, we use results from Meyn and Tweedie (1993) to show that (10) implies geometric ergodicity of the $\sigma^2$-chain.

Part I. By conditioning on $\theta$ and iterating, we can express $E[v(\tilde\sigma^2) \mid \sigma^2]$ as

$E\Big[ \alpha E\big\{ (\sigma_e^2)^s \mid \theta \big\} + \sum_{i=1}^r E\big\{ (\sigma_{u_i}^2)^s \mid \theta \big\} + \alpha E\big\{ (\sigma_e^2)^{-c} \mid \theta \big\} + \sum_{i=1}^r E\big\{ (\sigma_{u_i}^2)^{-c} \mid \theta \big\} \,\Big|\, \sigma^2 \Big].$

We now develop upper bounds for each of the four terms inside the square brackets. Fix $s \in S := (0, 1] \cap (0, \bar{s}/2)$, and define

$G_0(s) = \frac{2^{-s}\, \Gamma\big( \frac{N}{2} + a_e - s \big)}{\Gamma\big( \frac{N}{2} + a_e \big)},$

and, for each $i \in \{1, 2, \ldots, r\}$, define

$G_i(s) = \frac{2^{-s}\, \Gamma\big( \frac{q_i}{2} + a_i - s \big)}{\Gamma\big( \frac{q_i}{2} + a_i \big)}.$

Note that, since $s \in (0, 1]$, $(x_1 + x_2)^s \le x_1^s + x_2^s$ whenever $x_1, x_2 \ge 0$. Thus,

$E\big[ (\sigma_e^2)^s \mid \theta \big] = G_0(s) \big( 2b_e + \|y - W\theta\|^2 \big)^s \le G_0(s)\, \|y - W\theta\|^{2s} + G_0(s) (2b_e)^s.$

Similarly,

$E\big[ (\sigma_{u_i}^2)^s \mid \theta \big] = G_i(s) \big( 2b_i + \|u_i\|^2 \big)^s \le G_i(s)\, \|u_i\|^{2s} + G_i(s) (2b_i)^s.$

Now, for any $c > 0$, we have

$E\big[ (\sigma_e^2)^{-c} \mid \theta \big] = G_0(-c) \big( 2b_e + \|y - W\theta\|^2 \big)^{-c} \le G_0(-c) \big( b_e + \mathrm{SSE} \big)^{-c},$
and, for each $i \in \{1, 2, \ldots, r\}$,

$E\big[ (\sigma_{u_i}^2)^{-c} \mid \theta \big] = G_i(-c) \big( 2b_i + \|u_i\|^2 \big)^{-c} \le G_i(-c) \Big[ \|u_i\|^{-2c}\, I_{\{0\}}(b_i) + (2b_i)^{-c}\, I_{(0,\infty)}(b_i) \Big].$

Recall that $A = \{ i : b_i = 0 \}$, and note that $\sum_{i=1}^r E\big[ (\sigma_{u_i}^2)^{-c} \mid \theta \big]$ can be bounded above by a constant if $A$ is empty. Thus, we consider the case in which $A$ is empty separately from the case where $A \ne \emptyset$. We begin with the latter, which is the more difficult case.

Case I: $A$ is non-empty. Combining the four bounds above and applying Jensen's inequality twice, we have

(11) $E\big[ v(\tilde\sigma^2) \mid \sigma^2 \big] \le \alpha G_0(s) \Big[ E\big( \|y - W\theta\|^2 \mid \sigma^2 \big) \Big]^s + \sum_{i=1}^r G_i(s) \Big[ E\big( \|u_i\|^2 \mid \sigma^2 \big) \Big]^s + \sum_{i \in A} G_i(-c)\, E\big( \|u_i\|^{-2c} \mid \sigma^2 \big) + \kappa(\alpha, s, c),$

where

$\kappa(\alpha, s, c) = \alpha (2b_e)^s G_0(s) + \sum_{i=1}^r (2b_i)^s G_i(s) + \alpha G_0(-c) \big( b_e + \mathrm{SSE} \big)^{-c} + \sum_{i : b_i > 0} G_i(-c) (2b_i)^{-c}.$

Appendix A.2 contains a proof of the following inequality:

(12) $E\big( \|y - W\theta\|^2 \mid \sigma^2 \big) \le (p + t)\, \sigma_e^2 + \|(I - P_X)y\|^2 + \|(I - P_X)Z\|^2 K,$

where $\|\cdot\|$ with a matrix argument denotes the Frobenius norm, and the constant $K = K(X, Z, y)$ is defined and shown to be finite in Appendix A.1. It follows immediately that

$\Big[ E\big( \|y - W\theta\|^2 \mid \sigma^2 \big) \Big]^s \le (p + t)^s (\sigma_e^2)^s + \Big[ \|(I - P_X)y\|^2 + \|(I - P_X)Z\|^2 K \Big]^s.$

In Appendix A.3, it is shown that, for each $i \in \{1, 2, \ldots, r\}$, we have

$E\big( \|u_i\|^2 \mid \sigma^2 \big) \le \xi_i\, \sigma_e^2 + \zeta_i \sum_{j=1}^r \sigma_{u_j}^2 + \|R_i\|^2 K,$
where $\xi_i = \mathrm{tr}\big[ R_i \big( Z^T(I - P_X)Z \big)^+ R_i^T \big]$, $\zeta_i = \mathrm{tr}\big[ R_i \big( I - P_{Z^T(I-P_X)Z} \big) R_i^T \big]$, and $A^+$ denotes the Moore–Penrose inverse of the matrix $A$. It follows that

(13) $\Big[ E\big( \|u_i\|^2 \mid \sigma^2 \big) \Big]^s \le \xi_i^s (\sigma_e^2)^s + \zeta_i^s \sum_{j=1}^r (\sigma_{u_j}^2)^s + \big( \|R_i\|^2 K \big)^s.$

In Appendix A.4, it is established that, for any $c \in (0, 1/2)$, and for each $i \in \{1, 2, \ldots, r\}$, we have

(14) $E\big( \|u_i\|^{-2c} \mid \sigma^2 \big) \le \frac{2^{-c}\, \Gamma\big( \frac{q_i}{2} - c \big)}{\Gamma\big( \frac{q_i}{2} \big)} \Big[ \lambda_{\max}^c\, (\sigma_e^2)^{-c} + (\sigma_{u_i}^2)^{-c} \Big],$

where $\lambda_{\max}$ denotes the largest eigenvalue of $Z^T(I - P_X)Z$. Using (12)–(14) in (11), we have

(15) $E\big[ v(\tilde\sigma^2) \mid \sigma^2 \big] \le \alpha \Big( \delta_1(s) + \frac{\delta_2(s)}{\alpha} \Big) (\sigma_e^2)^s + \delta_3(s) \sum_{j=1}^r (\sigma_{u_j}^2)^s + \alpha\, \frac{\delta_4(c)}{\alpha}\, (\sigma_e^2)^{-c} + \delta_5(c) \sum_{j \in A} (\sigma_{u_j}^2)^{-c} + L(\alpha, s, c),$

where

$\delta_1(s) := G_0(s) (p + t)^s, \qquad \delta_2(s) := \sum_{i=1}^r \xi_i^s\, G_i(s), \qquad \delta_3(s) := \sum_{i=1}^r \zeta_i^s\, G_i(s),$

$\delta_4(c) := 2^{-c} \lambda_{\max}^c \sum_{i \in A} G_i(-c)\, \frac{\Gamma\big( \frac{q_i}{2} - c \big)}{\Gamma\big( \frac{q_i}{2} \big)}, \qquad \delta_5(c) := 2^{-c} \max_{i \in A} \Big[ G_i(-c)\, \frac{\Gamma\big( \frac{q_i}{2} - c \big)}{\Gamma\big( \frac{q_i}{2} \big)} \Big],$

and

$L(\alpha, s, c) = \kappa(\alpha, s, c) + \alpha G_0(s) \Big[ \|(I - P_X)y\|^2 + \|(I - P_X)Z\|^2 K \Big]^s + \sum_{i=1}^r G_i(s) \big( \|R_i\|^2 K \big)^s.$

Hence,

$E\big[ v(\tilde\sigma^2) \mid \sigma^2 \big] \le \rho(\alpha, s, c)\, v(\sigma^2) + L(\alpha, s, c),$
where

$\rho(\alpha, s, c) = \max\Big\{ \delta_1(s) + \frac{\delta_2(s)}{\alpha},\ \delta_3(s),\ \frac{\delta_4(c)}{\alpha},\ \delta_5(c) \Big\}.$

We must now show that there exists a triple $(\alpha, s, c) \in \mathbb{R}_+ \times S \times (0, 1/2)$ such that $\rho(\alpha, s, c) < 1$. We begin by demonstrating that, if $c$ is small enough, then $\delta_5(c) < 1$. Define $\tilde{a} = \max_{i \in A} a_i$. Also, set $C = (0, 1/2) \cap (0, -\tilde{a})$. Fix $c \in C$ and note that

$\delta_5(c) = \max_{i \in A} \Big[ \frac{\Gamma\big( \frac{q_i}{2} + a_i + c \big)}{\Gamma\big( \frac{q_i}{2} + a_i \big)}\, \frac{\Gamma\big( \frac{q_i}{2} - c \big)}{\Gamma\big( \frac{q_i}{2} \big)} \Big].$

For any $i \in A$, $c + a_i < 0$, and since $\bar{s} > 0$, it follows that $0 < \frac{q_i}{2} + a_i < \frac{q_i}{2} + a_i + c < \frac{q_i}{2}$. But $\Gamma(x - z)/\Gamma(x)$ is decreasing in $x$ for $x > z > 0$, so we have

$\frac{\Gamma\big( \frac{q_i}{2} + a_i + c \big)}{\Gamma\big( \frac{q_i}{2} + a_i \big)} = \bigg[ \frac{\Gamma\big( \frac{q_i}{2} + a_i + c - c \big)}{\Gamma\big( \frac{q_i}{2} + a_i + c \big)} \bigg]^{-1} > \bigg[ \frac{\Gamma\big( \frac{q_i}{2} - c \big)}{\Gamma\big( \frac{q_i}{2} \big)} \bigg]^{-1},$

and it follows immediately that $\delta_5(c) < 1$ whenever $c \in C$. The two conditions of Theorem 2 imply that there exists an $s \in S$ such that $\delta_1(s) < 1$ and $\delta_3(s) < 1$. Let $c$ be any point in $C$, and choose $\alpha$ to be any number larger than

$\max\Big\{ \frac{\delta_2(s)}{1 - \delta_1(s)},\ \delta_4(c) \Big\}.$

A simple calculation shows that $\rho(\alpha, s, c) < 1$, and this completes the argument for Case I.

Case II: $A = \emptyset$. Since we no longer have to deal with $\sum_{i=1}^r E\big[ (\sigma_{u_i}^2)^{-c} \mid \theta \big]$, the bound (15) becomes

$E\big[ v(\tilde\sigma^2) \mid \sigma^2 \big] \le \alpha \Big( \delta_1(s) + \frac{\delta_2(s)}{\alpha} \Big) (\sigma_e^2)^s + \delta_3(s) \sum_{j=1}^r (\sigma_{u_j}^2)^s + L(\alpha, s, c),$

and there is no restriction on $c$ other than $c > 0$. Note that the constant term $L(\alpha, s, c)$ requires no alteration when we move from Case I to Case II. Hence,

$E\big[ v(\tilde\sigma^2) \mid \sigma^2 \big] \le \rho(\alpha, s)\, v(\sigma^2) + L(\alpha, s, c),$

where

$\rho(\alpha, s) = \max\Big\{ \delta_1(s) + \frac{\delta_2(s)}{\alpha},\ \delta_3(s) \Big\}.$
We must now show that there exists an $(\alpha, s) \in \mathbb{R}_+ \times S$ such that $\rho(\alpha, s) < 1$. As in Case I, the two conditions of Theorem 2 imply that there exists an $s \in S$ such that $\delta_1(s) < 1$ and $\delta_3(s) < 1$. Let $\alpha$ be any number larger than $\delta_2(s)/(1 - \delta_1(s))$. A simple calculation shows that $\rho(\alpha, s) < 1$, and this completes the argument for Case II. This completes Part I of the proof.

Part II. We begin by establishing that the $\sigma^2$-chain satisfies certain properties. Recall that its Mtd is given by

$k_1(\tilde\sigma^2 \mid \sigma^2) = \int_{\mathbb{R}^{p+q}} \pi(\tilde\sigma^2 \mid \theta, y)\, \pi(\theta \mid \sigma^2, y)\, d\theta.$

Note that $k_1$ is strictly positive on $\mathbb{R}_+^{r+1} \times \mathbb{R}_+^{r+1}$. It follows that the $\sigma^2$-chain is $\psi$-irreducible and aperiodic, and that its maximal irreducibility measure is equivalent to Lebesgue measure on $\mathbb{R}_+^{r+1}$. For definitions, see Meyn and Tweedie (1993, Chapters 4 and 5). Let $P_1$ denote the Markov transition function of the $\sigma^2$-chain; that is, for any $\sigma^2 \in \mathbb{R}_+^{r+1}$ and any Borel set $A$,

$P_1(\sigma^2, A) = \int_A k_1(\tilde\sigma^2 \mid \sigma^2)\, d\tilde\sigma^2.$

We now demonstrate that the $\sigma^2$-chain is a Feller chain; that is, for each fixed open set $O$, $P_1(\cdot, O)$ is a lower semi-continuous function on $\mathbb{R}_+^{r+1}$. Indeed, let $\{\sigma_m^2\}_{m=1}^\infty$ be a sequence in $\mathbb{R}_+^{r+1}$ that converges to $\sigma^2 \in \mathbb{R}_+^{r+1}$. Then

$\liminf_{m \to \infty} P_1(\sigma_m^2, O) = \liminf_{m \to \infty} \int_O k_1(\tilde\sigma^2 \mid \sigma_m^2)\, d\tilde\sigma^2$
$= \liminf_{m \to \infty} \int_O \Big[ \int_{\mathbb{R}^{p+q}} \pi(\tilde\sigma^2 \mid \theta, y)\, \pi(\theta \mid \sigma_m^2, y)\, d\theta \Big]\, d\tilde\sigma^2$
$\ge \int_O \Big[ \int_{\mathbb{R}^{p+q}} \pi(\tilde\sigma^2 \mid \theta, y)\, \liminf_{m \to \infty} \pi(\theta \mid \sigma_m^2, y)\, d\theta \Big]\, d\tilde\sigma^2$
$= \int_O \Big[ \int_{\mathbb{R}^{p+q}} \pi(\tilde\sigma^2 \mid \theta, y)\, \pi(\theta \mid \sigma^2, y)\, d\theta \Big]\, d\tilde\sigma^2 = P_1(\sigma^2, O),$

where the inequality follows from Fatou's Lemma, and the final equality follows from the fact that $\pi(\theta \mid \sigma^2, y)$ is continuous in $\sigma^2$. For a proof of continuity, see Román (2012). We conclude that $P_1(\cdot, O)$ is lower semi-continuous, so the $\sigma^2$-chain is Feller.
The last thing we must do before we can appeal to the results in Meyn and Tweedie (1993) is to show that the drift function, $v$, is unbounded off compact sets; that is, we must show that, for every $d \in \mathbb{R}$, the set $S_d = \{ \sigma^2 \in \mathbb{R}_+^{r+1} : v(\sigma^2) \le d \}$ is compact. Let $d$ be such that $S_d$ is non-empty (otherwise $S_d$ is trivially compact), which means that $d$ and $d/\alpha$ must be larger than 1. Since $v(\sigma^2)$ is a continuous function, $S_d$ is closed in $\mathbb{R}_+^{r+1}$. Now consider the following set:

$T_d = \Big[ \Big( \frac{d}{\alpha} \Big)^{-1/c}, \Big( \frac{d}{\alpha} \Big)^{1/s} \Big] \times \big[ d^{-1/c}, d^{1/s} \big] \times \cdots \times \big[ d^{-1/c}, d^{1/s} \big].$

The set $T_d$ is compact in $\mathbb{R}_+^{r+1}$. Since $S_d \subseteq T_d$, $S_d$ is a closed subset of a compact set in $\mathbb{R}_+^{r+1}$, so it is compact in $\mathbb{R}_+^{r+1}$. Hence, the drift function is unbounded off compact sets. Since the $\sigma^2$-chain is Feller and its maximal irreducibility measure is equivalent to Lebesgue measure on $\mathbb{R}_+^{r+1}$, Meyn and Tweedie's (1993) Theorem shows that every compact set in $\mathbb{R}_+^{r+1}$ is petite. Hence, for each $d \in \mathbb{R}$, the set $S_d$ is petite, so $v$ is unbounded off petite sets. It now follows from the drift condition (10) and an application of Meyn and Tweedie's (1993) Lemma that condition (iii) of Meyn and Tweedie's (1993) Theorem is satisfied, so the $\sigma^2$-chain is geometrically ergodic. This completes Part II of the proof.

We end this section with a proof of Corollary 1.

Proof of Corollary 1. It suffices to show that, together, conditions B′ and C′ of Corollary 1 imply the second condition of Theorem 2. Clearly, B′ and C′ imply that $\bar{s}/2 > 1$, so $(0, 1] \cap (0, \bar{s}/2) = (0, 1]$. Take $s = 1$. Condition C′ implies

$2^{-s} (p + t)^s\, \frac{\Gamma\big( \frac{N}{2} + a_e - s \big)}{\Gamma\big( \frac{N}{2} + a_e \big)} \bigg|_{s=1} = \frac{p + t}{N + 2a_e - 2} < 1.$

Now, we know from Remark 4 that

$\sum_{i=1}^r \mathrm{tr}\Big[ R_i \big( I - P_{Z^T(I-P_X)Z} \big) R_i^T \Big] = q - t.$
Hence,

$2^{-s} \sum_{i=1}^r \frac{\Gamma\big( \frac{q_i}{2} + a_i - s \big)}{\Gamma\big( \frac{q_i}{2} + a_i \big)} \Big\{ \mathrm{tr}\Big[ R_i \big( I - P_{Z^T(I-P_X)Z} \big) R_i^T \Big] \Big\}^s \bigg|_{s=1} = \sum_{i=1}^r \frac{\mathrm{tr}\big[ R_i ( I - P_{Z^T(I-P_X)Z} ) R_i^T \big]}{q_i + 2a_i - 2} \le \frac{q - t}{\min_{j \in \{1, 2, \ldots, r\}} \{ q_j + 2a_j \} - 2} < 1,$

where the last inequality follows from condition B′.

4. An Application of the Main Result. In this section, we illustrate the application of Theorem 2 using the two-way random effects model with one observation per cell. The model equation is $Y_{ij} = \beta + \alpha_i + \gamma_j + \epsilon_{ij}$, where $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$, the $\alpha_i$s are iid $\mathrm{N}(0, \sigma_\alpha^2)$, the $\gamma_j$s are iid $\mathrm{N}(0, \sigma_\gamma^2)$, and the $\epsilon_{ij}$s are iid $\mathrm{N}(0, \sigma_e^2)$. The $\alpha_i$s, $\gamma_j$s, and $\epsilon_{ij}$s are all independent. We begin by explaining how to put this model in GLMM matrix form. There are a total of $N = mn$ observations, and we arrange them using the usual lexicographical ordering:

$Y = (Y_{11}\ \cdots\ Y_{1n}\ Y_{21}\ \cdots\ Y_{2n}\ \cdots\ Y_{m1}\ \cdots\ Y_{mn})^T.$

Since $\beta$ is a univariate parameter common to all of the observations, $p = 1$ and $X$ is an $N \times 1$ column vector of ones, which we denote by $1_N$. There are $r = 2$ random factors with $q_1 = m$ and $q_2 = n$, so $q = m + n$. Letting $\otimes$ denote the Kronecker product, we can write the $Z$ matrix as $Z = (Z_1\ Z_2)$, where $Z_1 = I_m \otimes 1_n$ and $Z_2 = 1_m \otimes I_n$. We assume throughout this section that $\mathrm{SSE} > 0$. We now examine conditions (8) and (9) of Theorem 2 for this particular model. A routine calculation shows that $Z^T(I - P_X)Z$ is a block diagonal matrix given by

$Z^T(I - P_X)Z = n\Big( I_m - \frac{1}{m} J_m \Big) \oplus m\Big( I_n - \frac{1}{n} J_n \Big),$
where $J_d$ is a $d \times d$ matrix of ones and $\oplus$ is the direct sum operator. It follows immediately that

$t = \mathrm{rank}\big( Z^T(I - P_X)Z \big) = \mathrm{rank}\Big( I_m - \frac{1}{m} J_m \Big) + \mathrm{rank}\Big( I_n - \frac{1}{n} J_n \Big) = m + n - 2.$

Hence, (8) becomes

$2^{-s} (m + n - 1)^s\, \frac{\Gamma\big( \frac{mn}{2} + a_e - s \big)}{\Gamma\big( \frac{mn}{2} + a_e \big)} < 1.$

Now, it can be shown that

$I - P_{Z^T(I-P_X)Z} = \frac{1}{m} J_m \oplus \frac{1}{n} J_n.$

Hence, we have

$\mathrm{tr}\Big[ R_1 \big( I - P_{Z^T(I-P_X)Z} \big) R_1^T \Big] = \mathrm{tr}\Big( \frac{1}{m} J_m \Big) = 1 \quad \text{and} \quad \mathrm{tr}\Big[ R_2 \big( I - P_{Z^T(I-P_X)Z} \big) R_2^T \Big] = \mathrm{tr}\Big( \frac{1}{n} J_n \Big) = 1.$

Therefore, (9) reduces to

$2^{-s} \Big\{ \frac{\Gamma\big( \frac{m}{2} + a_1 - s \big)}{\Gamma\big( \frac{m}{2} + a_1 \big)} + \frac{\Gamma\big( \frac{n}{2} + a_2 - s \big)}{\Gamma\big( \frac{n}{2} + a_2 \big)} \Big\} < 1.$

Now consider a concrete example in which $m = 5$, $n = 6$, and the prior is

$\frac{I_{\mathbb{R}_+}(\sigma_e^2)\, I_{\mathbb{R}_+}(\sigma_\alpha^2)\, I_{\mathbb{R}_+}(\sigma_\gamma^2)}{\sigma_e^2\, \sigma_\alpha\, \sigma_\gamma}.$

So, we are taking $b_e = b_1 = b_2 = a_e = 0$ and $a_1 = a_2 = -1/2$. Corollary 1 implies that the Gibbs Markov chain is geometrically ergodic whenever $m, n \ge 6$, but this result is not applicable when $m = 5$ and $n = 6$. Hence, we turn to Theorem 2. In this case, $\bar{s} = 4$, so we need to find an $s \in (0, 1]$ such that

$2^{-s} \max\Big\{ 10^s\, \frac{\Gamma(15 - s)}{\Gamma(15)},\ \frac{\Gamma(2 - s)}{\Gamma(2)} + \frac{\Gamma\big( \frac{5}{2} - s \big)}{\Gamma\big( \frac{5}{2} \big)} \Big\} < 1.$

The reader can check that, when $s = 0.9$, the left-hand side is approximately 0.87. Therefore, Theorem 2 implies that the Gibbs Markov chain is geometrically ergodic in this case.
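The quantities in this example, and the grid search suggested in Remark 2, are easy to reproduce numerically. The sketch below is ours, not the paper's: the function names are illustrative, and `lhs_8`/`lhs_9` code conditions (8) and (9) in the forms displayed above.

```python
import numpy as np
from math import lgamma

def lhs_8(s, N, p, t, a_e):
    """Left-hand side of condition (8), as displayed above."""
    return 2.0**-s * (p + t)**s * np.exp(lgamma(N / 2 + a_e - s) - lgamma(N / 2 + a_e))

def lhs_9(s, q_list, a_list, zeta_list):
    """Left-hand side of condition (9); zeta_i = tr[R_i (I - P_{Z'(I-P_X)Z}) R_i']."""
    return 2.0**-s * sum(np.exp(lgamma(q / 2 + a - s) - lgamma(q / 2 + a)) * z**s
                         for q, a, z in zip(q_list, a_list, zeta_list))

# Two-way layout, one observation per cell: X = 1_N, Z = (I_m ⊗ 1_n, 1_m ⊗ I_n).
m, n = 5, 6
N, p, a_e = m * n, 1, 0.0
X = np.ones((N, 1))
Z = np.hstack([np.kron(np.eye(m), np.ones((n, 1))),
               np.kron(np.ones((m, 1)), np.eye(n))])
M = Z.T @ (np.eye(N) - X @ np.linalg.pinv(X)) @ Z      # Z'(I - P_X)Z
t = np.linalg.matrix_rank(M)                           # should equal m + n - 2 = 9
resid_proj = np.eye(m + n) - M @ np.linalg.pinv(M)     # I - P_{Z'(I-P_X)Z}
zetas = [np.trace(resid_proj[:m, :m]), np.trace(resid_proj[m:, m:])]  # each 1

# Remark 2's numerical search: scan a grid in (0, 1] (here s-bar/2 = 2 > 1).
grid = np.linspace(0.01, 1.0, 100)
good = [s for s in grid
        if lhs_8(s, N, p, t, a_e) < 1 and lhs_9(s, [m, n], [-0.5, -0.5], zetas) < 1]
```

With these inputs the search recovers values such as $s = 0.9$, at which both left-hand sides are below one, in agreement with the example above.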
5. Specializing to the One-Way Random Effects Model. The only other existing results on geometric convergence of Gibbs samplers for linear mixed models with improper priors are those of Tan and Hobert (2009) (hereafter, T&H). These authors considered the one-way random effects model, which is a simple, but important special case of the GLMM given in (1). In this section, we show that our results improve upon those of T&H. Recall that the one-way model is given by
$$ (16)\qquad Y_{ij} = \beta + \alpha_i + e_{ij}, $$
where $i = 1,\dots,c$, $j = 1,\dots,n_i$, the $\alpha_i$'s are iid $N(0,\sigma_\alpha^2)$, and the $e_{ij}$'s, which are independent of the $\alpha_i$'s, are iid $N(0,\sigma_e^2)$. It's easy to see that (16) is a special case of the GLMM. Obviously, there are a total of $N = n_1 + n_2 + \cdots + n_c$ observations, and we arrange them in a column vector with the usual ordering as follows: $Y = (Y_{11} \cdots Y_{1n_1}\ Y_{21} \cdots Y_{2n_2} \cdots Y_{c1} \cdots Y_{cn_c})^T$. As in the two-way model of Section 4, $\beta$ is a univariate parameter common to all of the observations, so $p = 1$ and $X = 1_N$. Here there is only one random factor with $c$ levels, so $r = 1$, $q = q_1 = c$ and $Z = \oplus_{i=1}^{c} 1_{n_i}$. Of course, in this case, $SSE = \sum_{i=1}^{c}\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_i)^2$, where $\bar{y}_i = n_i^{-1}\sum_{j=1}^{n_i} y_{ij}$. We assume throughout this section that $SSE > 0$. We note that T&H actually considered a slightly different parameterization of the one-way model. In their version, $\beta$ does not appear in the model equation (16), but rather as the mean of the $u_i$'s. In other words, T&H used the centered parameterization, whereas here we are using the non-centered parameterization. Román (2012) shows that, because $\beta$ and $(\alpha_1 \cdots \alpha_c)^T$ are part of a single block in the Gibbs sampler, the centered and non-centered versions of the Gibbs sampler converge at exactly the same rate. T&H considered improper priors for $(\beta, \sigma_e^2, \sigma_\alpha^2)$ that take the form $(\sigma_e^2)^{-(a_e+1)} I_{\mathbb{R}_+}(\sigma_e^2)\,(\sigma_\alpha^2)^{-(a_1+1)} I_{\mathbb{R}_+}(\sigma_\alpha^2)$, and they showed that the Gibbs sampler for the one-way model is geometrically ergodic if $a_1 < 0$ and
$$ (17)\qquad N + 2a_e \ge c + 3 \quad\text{and}\quad 2c \min\Bigl\{ \min_{i} \frac{n_i}{n_i + 2},\ \frac{n^2}{N} \Bigr\} < 2\exp\Bigl\{\Psi\Bigl(\frac{c}{2} + a_1\Bigr)\Bigr\}, $$
where $n = \max\{n_1, n_2, \dots, n_c\}$ and $\Psi(x) = \frac{d}{dx}\log\Gamma(x)$ is the digamma function. We now consider the implications of Theorem 2 in the case of the one-way model. First, $t = \operatorname{rank}\bigl(Z^T(I-P_X)Z\bigr) = c - 1$. Combining this fact with Remark 4, it follows that the two conditions of Theorem 2 will hold if $a_1 < 0$ and there exists an $s \in (0,1] \cap (0, a_1 + c/2) \cap (0, a_e + N/2)$ such that
$$ \max\Bigl\{ s\,c^{s}\,\frac{\Gamma(N/2 + a_e - s)}{\Gamma(N/2 + a_e)},\ \frac{\Gamma(c/2 + a_1 - s)}{2^{s}\,\Gamma(c/2 + a_1)} \Bigr\} < 1. $$
Román (2012) shows that such an $s$ does indeed exist (so the Gibbs chain is geometrically ergodic) when
$$ (18)\qquad N + 2a_e \ge c + 2 \quad\text{and}\quad 1 < 2\exp\Bigl\{\Psi\Bigl(\frac{c}{2} + a_1\Bigr)\Bigr\}. $$
Now, it's easy to show that
$$ 1 \le 2c \min\Bigl\{ \min_{i} \frac{n_i}{n_i + 2},\ \frac{n^2}{N} \Bigr\}. $$
Consequently, if (17) holds, then so does (18). In other words, our sufficient conditions are weaker than those of T&H, so our result improves upon theirs. Moreover, in contrast with the conditions of T&H, our conditions do not directly involve the group sample sizes, $n_1, n_2, \dots, n_c$. Of course, the best result possible would be that the Gibbs Markov chain is geometrically ergodic whenever the posterior is proper. Our result is very close to the best possible in the important case where the standard diffuse prior is used; that is, when $a_1 = -1/2$ and $a_e = 0$. The posterior is proper in this case if and only if $c \ge 3$ (Sun, Tsutakawa and He, 2001). It follows from (18) that the Gibbs Markov chain is geometrically ergodic as long as $c \ge 3$ and the total sample size, $N = n_1 + n_2 + \cdots + n_c$, is at least $c + 2$. This additional sample size condition is extremely weak. Indeed, the positivity of $SSE$ implies that $N \ge c + 1$, so, for fixed $c \ge 3$, our condition for geometric ergodicity fails only in the single case where $N = c + 1$. Interestingly, in this case, the conditions of Corollary 1 reduce to $c \ge 5$ and $N \ge c + 2$.

6. Discussion. Our decision to work with the $\sigma^2$-chain rather than the $\theta$-chain was based on an important technical difference between the two chains that stems from the fact that $\pi(\sigma^2 \mid \theta, y)$ is not continuous in $\theta$ for
each fixed $\sigma^2$ when the set $\mathcal{N}$ is non-empty. Indeed, recall that, for $\theta \notin \mathcal{N}$,
$$ \pi(\sigma^2 \mid \theta, y) = f_{IG}\Bigl(\sigma_e^2;\, \frac{N}{2} + a_e,\ b_e + \frac{\|y - W\theta\|^2}{2}\Bigr) \prod_{i=1}^{r} f_{IG}\Bigl(\sigma_{u_i}^2;\, \frac{q_i}{2} + a_i,\ b_i + \frac{\|u_i\|^2}{2}\Bigr), $$
but for $\theta \in \mathcal{N}$,
$$ \pi(\sigma^2 \mid \theta, y) = f_{IG}(\sigma_e^2;\, 1, 1) \prod_{i=1}^{r} f_{IG}(\sigma_{u_i}^2;\, 1, 1). $$
Also, recall that the Mtd of the $\sigma^2$-chain is given by
$$ k_1(\tilde{\sigma}^2 \mid \sigma^2) = \int_{\mathbb{R}^{p+q}} \pi(\tilde{\sigma}^2 \mid \theta, y)\, \pi(\theta \mid \sigma^2, y)\, d\theta. $$
Since the set $\mathcal{N}$ has measure zero, the arbitrary part of $\pi(\sigma^2 \mid \theta, y)$ washes out of $k_1$. However, the same cannot be said for the $\theta$-chain, whose Mtd is given by
$$ k_2(\tilde{\theta} \mid \theta) = \int_{\mathbb{R}_+^{r+1}} \pi(\tilde{\theta} \mid \sigma^2, y)\, \pi(\sigma^2 \mid \theta, y)\, d\sigma^2. $$
This difference between $k_1$ and $k_2$ comes into play when we attempt to apply certain topological results from Markov chain theory, such as those in Chapter 6 of Meyn and Tweedie (1993). In particular, in our proof that the $\sigma^2$-chain is a Feller chain (which was part of the proof of Proposition 2), we used the fact that $\pi(\theta \mid \sigma^2, y)$ is continuous in $\sigma^2$ for each fixed $\theta$. Since $\pi(\sigma^2 \mid \theta, y)$ is not continuous, we cannot use the same argument to prove that the $\theta$-chain is Feller. In fact, we suspect that the $\theta$-chain is not Feller, and if this is true, it means that our method of proof will not work for the $\theta$-chain. It is possible to circumvent the problem described above by removing the set $\mathcal{N}$ from the state space of the $\theta$-chain. In this case, we are no longer required to define $\pi(\sigma^2 \mid \theta, y)$ for $\theta \in \mathcal{N}$, and since $\pi(\sigma^2 \mid \theta, y)$ is continuous for fixed $\sigma^2$ on $\mathbb{R}^{p+q} \setminus \mathcal{N}$, the Feller argument for the $\theta$-chain will go through. On the other hand, the new state space has holes in it, and this could complicate the search for a drift function that is unbounded off compact sets. For example, consider a toy drift function given by $v(x) = x^2$. This function is clearly unbounded off compact sets when the state space is $\mathbb{R}$, but not when the state space is $\mathbb{R} \setminus \{0\}$. The modified drift function $v^*(x) = x^2 + 1/x^2$ is unbounded off compact sets for the holey state space. T&H overlooked a set of measure zero similar to our $\mathcal{N}$, and this oversight led to an error in the proof of their main result (Proposition 3).
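To make the two-block structure behind $k_1$ concrete, here is a minimal sketch of one version of the sampler for the one-way model (our illustration, not code from the paper; numpy is assumed, the data are simulated, and the diffuse prior $a_e = b_e = b_1 = 0$, $a_1 = -1/2$ is assumed): each sweep draws $\theta = (\beta, u)$ from the multivariate normal $\pi(\theta \mid \sigma^2, y)$ and then draws $\sigma^2$ from the product of inverse gammas $\pi(\sigma^2 \mid \theta, y)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-way model: c groups, n_i observations per group (balanced here).
c, n_i = 4, 5
N = c * n_i
X = np.ones((N, 1))
Z = np.kron(np.eye(c), np.ones((n_i, 1)))
W = np.hstack([X, Z])
y = rng.normal(size=N) + np.repeat(rng.normal(size=c), n_i)

a_e, b_e, a_1, b_1 = 0.0, 0.0, -0.5, 0.0   # diffuse prior (assumed)

def gibbs_sweep(sig2_e, sig2_u):
    # Block 1: theta | sigma^2, y ~ N(m, V), with a flat prior on beta.
    prec = W.T @ W / sig2_e
    prec[1:, 1:] += np.eye(c) / sig2_u
    V = np.linalg.inv(prec)
    m = V @ W.T @ y / sig2_e
    theta = rng.multivariate_normal(m, V)
    u = theta[1:]
    # Block 2: sigma^2 | theta, y ~ product of inverse gammas;
    # IG(alpha, beta) is simulated as beta / Gamma(alpha, 1).
    resid = y - W @ theta
    sig2_e = (b_e + resid @ resid / 2) / rng.gamma(N / 2 + a_e)
    sig2_u = (b_1 + u @ u / 2) / rng.gamma(c / 2 + a_1)
    return sig2_e, sig2_u

sig2 = (1.0, 1.0)
for _ in range(100):
    sig2 = gibbs_sweep(*sig2)
print(all(v > 0 and np.isfinite(v) for v in sig2))   # True
```

Note that the inverse-gamma shape parameters $N/2 + a_e$ and $c/2 + a_1$ are positive here, which is exactly what the power-prior conditions guarantee.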
However, Román (2012) shows that T&H's proof can be repaired and that their
result is correct as stated. The fix involves deleting the offending null set from the state space, and adding a term to the drift function.

APPENDIX A: UPPER BOUNDS

A.1. Preliminary results. Here is our first result.

Lemma 1. The following inequalities hold for all $\sigma^2 \in \mathbb{R}_+^{r+1}$ and all $i \in \{1,\dots,r\}$:
1. $Q^{-1} \preceq \bigl(Z^T(I-P_X)Z\bigr)^{+}\,\sigma_e^2 + \bigl(I - P_{Z^T(I-P_X)Z}\bigr) \sum_{j=1}^{r} \sigma_{u_j}^2$
2. $\operatorname{tr}\bigl((I-P_X)Z Q^{-1} Z^T(I-P_X)\bigr) \le \operatorname{rank}\bigl(Z^T(I-P_X)Z\bigr)\,\sigma_e^2$
3. $R_i Q^{-1} R_i^T \succeq \bigl(\sigma_e^{-2}\lambda_{\max} + \sigma_{u_i}^{-2}\bigr)^{-1} I_{q_i}$

Proof. Recall from Section 3 that $U^T \Lambda U$ is the spectral decomposition of $Z^T(I-P_X)Z$, and that $P_\Lambda$ is a binary diagonal matrix whose $i$th diagonal element is 1 if and only if the $i$th diagonal element of $\Lambda$ is positive. Let $\bar{\sigma}^2 = \sum_{j=1}^{r} \sigma_{u_j}^2$. Since $\bar{\sigma}^{-2} I_q \preceq D^{-1}$, we have
$$ \sigma_e^{-2} Z^T(I-P_X)Z + \bar{\sigma}^{-2} I_q \preceq \sigma_e^{-2} Z^T(I-P_X)Z + D^{-1}, $$
and this yields
$$ (19)\qquad Q^{-1} = \bigl(\sigma_e^{-2} Z^T(I-P_X)Z + D^{-1}\bigr)^{-1} \preceq \bigl(\sigma_e^{-2} Z^T(I-P_X)Z + \bar{\sigma}^{-2} I_q\bigr)^{-1} = U^T \bigl(\Lambda \sigma_e^{-2} + I_q \bar{\sigma}^{-2}\bigr)^{-1} U. $$
Now let $\Lambda^{+}$ be a diagonal matrix whose diagonal elements, $\{\lambda_i^{+}\}_{i=1}^{q}$, are given by
$$ \lambda_i^{+} = \begin{cases} \lambda_i^{-1} & \lambda_i \ne 0 \\ 0 & \lambda_i = 0. \end{cases} $$
Note that, for each $i \in \{1,2,\dots,q\}$, we have
$$ \bigl(\lambda_i \sigma_e^{-2} + \bar{\sigma}^{-2}\bigr)^{-1} \le \lambda_i^{+} \sigma_e^2 + I_{\{0\}}(\lambda_i)\, \bar{\sigma}^2. $$
This shows that
$$ \bigl(\Lambda \sigma_e^{-2} + I_q \bar{\sigma}^{-2}\bigr)^{-1} \preceq \Lambda^{+} \sigma_e^2 + (I - P_\Lambda)\, \bar{\sigma}^2. $$
Together with (19), this leads to
$$ Q^{-1} \preceq U^T\bigl(\Lambda\sigma_e^{-2} + I_q\bar{\sigma}^{-2}\bigr)^{-1}U \preceq U^T\bigl(\Lambda^{+}\sigma_e^2 + (I - P_\Lambda)\,\bar{\sigma}^2\bigr)U = \bigl(Z^T(I-P_X)Z\bigr)^{+}\sigma_e^2 + U^T(I - P_\Lambda)U\,\bar{\sigma}^2. $$
So to prove the first statement, it remains to show that $U^T(I - P_\Lambda)U = I - P_{Z^T(I-P_X)Z}$. But notice that, letting $A = Z^T(I-P_X)Z$ and using its spectral decomposition, we have
$$ A(A^TA)^{+}A^T = U^T\Lambda U\bigl(U^T\Lambda^T\Lambda U\bigr)^{+}U^T\Lambda U = U^T\Lambda\bigl(\Lambda^T\Lambda\bigr)^{+}\Lambda U = U^T P_\Lambda U, $$
which implies that
$$ I - P_{Z^T(I-P_X)Z} = I - A(A^TA)^{+}A^T = I - U^T P_\Lambda U = U^T(I - P_\Lambda)U. $$
The proof of the first statement is now complete. Now let $\tilde{Z} = (I-P_X)Z$. Multiplying the first statement on the left and the right by $\tilde{Z}$ and $\tilde{Z}^T$, respectively, and then taking traces yields
$$ (20)\qquad \operatorname{tr}\bigl(\tilde{Z}Q^{-1}\tilde{Z}^T\bigr) \le \operatorname{tr}\bigl(\tilde{Z}(\tilde{Z}^T\tilde{Z})^{+}\tilde{Z}^T\bigr)\,\sigma_e^2 + \operatorname{tr}\bigl(\tilde{Z}U^T(I - P_\Lambda)U\tilde{Z}^T\bigr)\,\bar{\sigma}^2. $$
Since $\tilde{Z}^T\tilde{Z}(\tilde{Z}^T\tilde{Z})^{+}$ is idempotent, we have
$$ \operatorname{tr}\bigl(\tilde{Z}(\tilde{Z}^T\tilde{Z})^{+}\tilde{Z}^T\bigr) = \operatorname{tr}\bigl(\tilde{Z}^T\tilde{Z}(\tilde{Z}^T\tilde{Z})^{+}\bigr) = \operatorname{rank}\bigl(\tilde{Z}^T\tilde{Z}(\tilde{Z}^T\tilde{Z})^{+}\bigr) = \operatorname{rank}\bigl(\tilde{Z}^T\tilde{Z}\bigr). $$
Furthermore,
$$ \operatorname{tr}\bigl(\tilde{Z}U^T(I - P_\Lambda)U\tilde{Z}^T\bigr) = \operatorname{tr}\bigl(U^T(I - P_\Lambda)U Z^T(I-P_X)Z\bigr) = \operatorname{tr}\bigl(U^T(I - P_\Lambda)U U^T\Lambda U\bigr) = \operatorname{tr}\bigl(U^T(I - P_\Lambda)\Lambda U\bigr) = 0, $$
where the last line follows from the fact that $(I - P_\Lambda)\Lambda = 0$. It follows from (20) that
$$ \operatorname{tr}\bigl((I-P_X)ZQ^{-1}Z^T(I-P_X)\bigr) \le \operatorname{rank}\bigl(Z^T(I-P_X)Z\bigr)\,\sigma_e^2, $$
and the second statement has been established. Recall from Section 3 that $\lambda_{\max}$ is the largest eigenvalue of $Z^T(I-P_X)Z$, and that $R_i$ is the $q_i \times q$ matrix of 0's and 1's such that $R_i u = u_i$. Now, fix $i \in \{1,2,\dots,r\}$ and note that
$$ Q = \sigma_e^{-2} Z^T(I-P_X)Z + D^{-1} \preceq \sigma_e^{-2}\lambda_{\max} I_q + D^{-1}. $$
It follows that $R_i\bigl(\sigma_e^{-2}\lambda_{\max}I_q + D^{-1}\bigr)^{-1}R_i^T \preceq R_i Q^{-1} R_i^T$, and since these two matrices are both positive definite, we have
$$ R_i Q^{-1} R_i^T \succeq R_i\bigl(\sigma_e^{-2}\lambda_{\max}I_q + D^{-1}\bigr)^{-1}R_i^T = \bigl(\sigma_e^{-2}\lambda_{\max} + \sigma_{u_i}^{-2}\bigr)^{-1}I_{q_i}, $$
and this proves that the third statement is true.

Let $z_i$ and $y_i$ denote the $i$th column of $\tilde{Z}^T = \bigl((I-P_X)Z\bigr)^T$ and the $i$th component of $y$, respectively. Also, define $K$ to be
$$ K := N \sum_{i=1}^{N} y_i^2 \sup_{w \in \mathbb{R}_+^{N+q}} t_i^T\Bigl(t_i t_i^T + \sum_{j \in \{1,2,\dots,N\}\setminus\{i\}} w_j t_j t_j^T + \sum_{j=N+1}^{N+q} w_j t_j t_j^T + w_i I_q\Bigr)^{-2} t_i, $$
where, for $j = 1,2,\dots,N$, $t_j = z_j$, and for $j \in \{N+1,\dots,N+q\}$, the $t_j$ are the standard orthonormal basis vectors in $\mathbb{R}^q$; that is, $t_{N+l}$ has a one in the $l$th position and zeros everywhere else.

Lemma 2. For any $\sigma^2 \in \mathbb{R}_+^{r+1}$, we have $h(\sigma^2) := \bigl\|\sigma_e^{-2} Q^{-1} Z^T(I-P_X)y\bigr\|^2 \le K < \infty$.

The following result from Khare and Hobert (2011) will be used in the proof of Lemma 2.

Lemma 3. Fix $n \in \{2,3,\dots\}$ and $m \in \mathbb{N}$, and let $x_1, \dots, x_n$ be vectors in $\mathbb{R}^m$. Then
$$ C_{m,n}(x_1; x_2, \dots, x_n) := \sup_{w \in \mathbb{R}_+^n} x_1^T\Bigl(x_1 x_1^T + \sum_{i=2}^{n} w_i x_i x_i^T + w_1 I\Bigr)^{-2} x_1 $$
is finite.

Proof of Lemma 2. Recall that we defined $z_i$ and $y_i$ to be the $i$th col-
umn of $\tilde{Z}^T = \bigl((I-P_X)Z\bigr)^T$ and the $i$th component of $y$, respectively. Now,
$$ h(\sigma^2) = \bigl\|\bigl(Z^T(I-P_X)Z + \sigma_e^2 D^{-1}\bigr)^{-1} Z^T(I-P_X)y\bigr\|^2 = \Bigl\|\sum_{i=1}^{N} \bigl(\tilde{Z}^T\tilde{Z} + \sigma_e^2 D^{-1}\bigr)^{-1} z_i y_i\Bigr\|^2 \le N \sum_{i=1}^{N} \bigl\|\bigl(\tilde{Z}^T\tilde{Z} + \sigma_e^2 D^{-1}\bigr)^{-1} z_i y_i\bigr\|^2 = N \sum_{i=1}^{N} y_i^2\, K_i(\sigma^2), $$
where
$$ K_i(\sigma^2) := \Bigl\|\Bigl(z_i z_i^T + \sum_{j \in \{1,2,\dots,N\}\setminus\{i\}} z_j z_j^T + \sigma_e^2 D^{-1}\Bigr)^{-1} z_i\Bigr\|^2. $$
For each $i \in \{1,2,\dots,N\}$, define
$$ \hat{K}_i = \sup_{w \in \mathbb{R}_+^{N+q}} t_i^T\Bigl(t_i t_i^T + \sum_{j \in \{1,2,\dots,N\}\setminus\{i\}} w_j t_j t_j^T + \sum_{j=N+1}^{N+q} w_j t_j t_j^T + w_i I_q\Bigr)^{-2} t_i, $$
and notice that $K$ can be written as $K = N \sum_{i=1}^{N} y_i^2\, \hat{K}_i$. Therefore, it is enough to show that, for each $i \in \{1,2,\dots,N\}$, $K_i(\sigma^2) \le \hat{K}_i < \infty$. Now, with $\bar{\sigma}^2 = \sum_{j=1}^{r} \sigma_{u_j}^2$ as before, write $\sigma_e^2 D^{-1} = \sigma_e^2\bigl(D^{-1} - \bar{\sigma}^{-2} I_q\bigr) + \sigma_e^2\bar{\sigma}^{-2} I_q$, and note that $D^{-1} - \bar{\sigma}^{-2} I_q$ is diagonal with non-negative entries. Hence,
$$ K_i(\sigma^2) = z_i^T\Bigl(z_i z_i^T + \sum_{j \in \{1,2,\dots,N\}\setminus\{i\}} z_j z_j^T + \sigma_e^2\bigl(D^{-1} - \bar{\sigma}^{-2} I_q\bigr) + \frac{\sigma_e^2}{\bar{\sigma}^2} I_q\Bigr)^{-2} z_i \le \sup_{w \in \mathbb{R}_+^{N+q}} t_i^T\Bigl(t_i t_i^T + \sum_{j \in \{1,2,\dots,N\}\setminus\{i\}} w_j t_j t_j^T + \sum_{j=N+1}^{N+q} w_j t_j t_j^T + w_i I_q\Bigr)^{-2} t_i = \hat{K}_i. $$
Finally, an application of Lemma 3 shows that $\hat{K}_i$ is finite, and the proof is complete.
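The first inequality in this proof is just the Cauchy–Schwarz bound $\|\sum_i a_i\|^2 \le N \sum_i \|a_i\|^2$ applied to $a_i = (\tilde Z^T\tilde Z + \sigma_e^2 D^{-1})^{-1} z_i y_i$, and it can be spot-checked numerically. A small sketch (ours, not from the paper; numpy and a one-way layout are assumed):

```python
import numpy as np

rng = np.random.default_rng(4)

# One-way layout with c = 3 groups of 3, so N = 9 and q = 3.
c, k = 3, 3
N, q = c * k, c
X = np.ones((N, 1))
Z = np.kron(np.eye(c), np.ones((k, 1)))
P_X = X @ np.linalg.pinv(X)
Zt = (np.eye(N) - P_X) @ Z               # Z-tilde = (I - P_X) Z; z_i = columns of Zt^T
y = rng.normal(size=N)

ok = True
for _ in range(50):
    sig2_e, sig2_u = rng.gamma(2.0), rng.gamma(2.0)      # random variance components
    M = Zt.T @ Zt + (sig2_e / sig2_u) * np.eye(q)        # Z^T(I-P_X)Z + sig_e^2 D^{-1}
    hv = np.linalg.solve(M, Zt.T @ y)
    h = hv @ hv                                          # h(sigma^2)
    # K_i(sigma^2) = || M^{-1} z_i ||^2, since sum_j z_j z_j^T = Zt^T Zt.
    bound = N * sum(
        y[i] ** 2 * float(np.linalg.solve(M, Zt.T[:, i]) @ np.linalg.solve(M, Zt.T[:, i]))
        for i in range(N)
    )
    ok = ok and (h <= bound + 1e-12)
print(ok)   # True
```

The inequality holds for every draw of $(\sigma_e^2, \sigma_u^2)$, as it must, since it does not depend on the supremum step at all.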
Let $\chi_k^2(\mu)$ denote the non-central chi-square distribution with $k$ degrees of freedom and non-centrality parameter $\mu$.

Lemma 4. If $J \sim \chi_k^2(\mu)$ and $\gamma \in (0, k/2)$, then
$$ E\bigl[J^{-\gamma}\bigr] \le 2^{-\gamma}\, \frac{\Gamma\bigl(\frac{k}{2} - \gamma\bigr)}{\Gamma\bigl(\frac{k}{2}\bigr)}. $$

Proof. Since $\Gamma(x - \gamma)/\Gamma(x)$ is decreasing for $x > \gamma > 0$, we have
$$ E\bigl[J^{-\gamma}\bigr] = \sum_{i=0}^{\infty} \frac{(\mu/2)^i e^{-\mu/2}}{i!} \int_{\mathbb{R}_+} x^{-\gamma}\, \frac{x^{\frac{k}{2}+i-1} e^{-x/2}}{\Gamma\bigl(\frac{k}{2}+i\bigr)\, 2^{\frac{k}{2}+i}}\, dx = 2^{-\gamma} \sum_{i=0}^{\infty} \frac{(\mu/2)^i e^{-\mu/2}}{i!}\, \frac{\Gamma\bigl(\frac{k}{2}+i-\gamma\bigr)}{\Gamma\bigl(\frac{k}{2}+i\bigr)} \le 2^{-\gamma}\, \frac{\Gamma\bigl(\frac{k}{2}-\gamma\bigr)}{\Gamma\bigl(\frac{k}{2}\bigr)}. $$

A.2. An upper bound on $E\bigl[\|y - W\theta\|^2 \mid \sigma^2\bigr]$. We remind the reader that $\theta = (\beta^T\ u^T)^T$, $W = (X\ Z)$, and that $\pi(\theta \mid \sigma^2, y)$ is a multivariate normal density with mean $m$ and covariance matrix $V$. Thus,
$$ (21)\qquad E\bigl[\|y - W\theta\|^2 \mid \sigma^2\bigr] = \operatorname{tr}\bigl(W V W^T\bigr) + \|y - Wm\|^2, $$
and we have
$$ (22)\qquad \begin{aligned} \operatorname{tr}\bigl(W V W^T\bigr) &= \sigma_e^2 \operatorname{tr}(P_X) + \operatorname{tr}\bigl(P_X Z Q^{-1} Z^T P_X\bigr) - 2\operatorname{tr}\bigl(Z Q^{-1} Z^T P_X\bigr) + \operatorname{tr}\bigl(Z Q^{-1} Z^T\bigr) \\ &= p\,\sigma_e^2 - \operatorname{tr}\bigl(Z Q^{-1} Z^T P_X\bigr) + \operatorname{tr}\bigl(Z Q^{-1} Z^T\bigr) \\ &= p\,\sigma_e^2 + \operatorname{tr}\bigl(Z Q^{-1} Z^T (I - P_X)\bigr) \\ &= p\,\sigma_e^2 + \operatorname{tr}\bigl((I - P_X) Z Q^{-1} Z^T (I - P_X)\bigr) \\ &\le p\,\sigma_e^2 + \operatorname{rank}\bigl(Z^T(I-P_X)Z\bigr)\,\sigma_e^2 = (p + t)\,\sigma_e^2, \end{aligned} $$
where the inequality is an application of Lemma 1. Finally, a simple calculation shows that
$$ y - Wm = (I - P_X)\bigl[I - \sigma_e^{-2} Z Q^{-1} Z^T (I - P_X)\bigr] y. $$
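Both of the facts just derived — the trace bound in (22) and the closed form of $y - Wm$ — can be spot-checked numerically before continuing. A small sketch (ours, not from the paper; numpy, a one-way layout, and arbitrary variance-component values are assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

# One-way layout: c = 3 groups of 4 observations; W = (X  Z), p = 1, t = c - 1.
c, k = 3, 4
N = c * k
X = np.ones((N, 1))
Z = np.kron(np.eye(c), np.ones((k, 1)))
W = np.hstack([X, Z])
y = rng.normal(size=N)
sig2_e, sig2_u = 1.3, 0.7

P_X = X @ np.linalg.pinv(X)
M = np.eye(N) - P_X
Q = Z.T @ M @ Z / sig2_e + np.eye(c) / sig2_u     # Q as defined in the paper

# V and m for pi(theta | sigma^2, y), with a flat prior on beta.
prec = W.T @ W / sig2_e
prec[1:, 1:] += np.eye(c) / sig2_u
V = np.linalg.inv(prec)
m = V @ W.T @ y / sig2_e

# (22): tr(W V W^T) <= (p + t) sigma_e^2 with p + t = c here.
print(np.trace(W @ V @ W.T) <= c * sig2_e + 1e-9)          # True

# y - W m = (I - P_X)[I - sigma_e^{-2} Z Q^{-1} Z^T (I - P_X)] y.
rhs = M @ (y - Z @ np.linalg.solve(Q, Z.T @ M @ y) / sig2_e)
print(np.allclose(y - W @ m, rhs))                          # True
```

The identity for $y - Wm$ holds to machine precision for any positive variance components, which is what the subsequent bound (23) relies on.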
Hence,
$$ (23)\qquad \begin{aligned} \|y - Wm\|^2 &= \bigl\|(I - P_X)y - \sigma_e^{-2}(I - P_X) Z Q^{-1} Z^T (I - P_X) y\bigr\|^2 \\ &\le 2\|(I - P_X)y\|^2 + 2\bigl\|\sigma_e^{-2}(I - P_X) Z Q^{-1} Z^T (I - P_X) y\bigr\|^2 \\ &\le 2\|(I - P_X)y\|^2 + 2\|(I - P_X)Z\|^2\, \bigl\|\sigma_e^{-2} Q^{-1} Z^T (I - P_X) y\bigr\|^2 \\ &\le 2\|(I - P_X)y\|^2 + 2\|(I - P_X)Z\|^2\, K, \end{aligned} $$
where $\|\cdot\|$ denotes the Frobenius norm and the last inequality uses Lemma 2. Finally, combining (21), (22) and (23) yields
$$ E\bigl[\|y - W\theta\|^2 \mid \sigma^2\bigr] \le (p + t)\,\sigma_e^2 + 2\|(I - P_X)y\|^2 + 2\|(I - P_X)Z\|^2\, K. $$

A.3. An upper bound on $E\bigl[\|u_i\|^2 \mid \sigma^2\bigr]$. Note that
$$ (24)\qquad E\bigl[\|u_i\|^2 \mid \sigma^2\bigr] = E\bigl[\|R_i u\|^2 \mid \sigma^2\bigr] = \operatorname{tr}\bigl(R_i Q^{-1} R_i^T\bigr) + \bigl\|E[R_i u \mid \sigma^2]\bigr\|^2. $$
By Lemma 1, we have
$$ (25)\qquad \operatorname{tr}\bigl(R_i Q^{-1} R_i^T\bigr) \le \operatorname{tr}\bigl(R_i \bigl(Z^T(I-P_X)Z\bigr)^{+} R_i^T\bigr)\,\sigma_e^2 + \operatorname{tr}\bigl(R_i \bigl(I - P_{Z^T(I-P_X)Z}\bigr) R_i^T\bigr) \sum_{j=1}^{r} \sigma_{u_j}^2 = \xi_i\,\sigma_e^2 + \zeta_i \sum_{j=1}^{r} \sigma_{u_j}^2. $$
Now, by Lemma 2,
$$ (26)\qquad \bigl\|E[R_i u \mid \sigma^2]\bigr\|^2 \le \bigl\|E[u \mid \sigma^2]\bigr\|^2 = h(\sigma^2) \le K. $$
Combining (24), (25) and (26) yields
$$ E\bigl[\|u_i\|^2 \mid \sigma^2\bigr] \le \xi_i\,\sigma_e^2 + \zeta_i \sum_{j=1}^{r} \sigma_{u_j}^2 + K. $$

A.4. An upper bound on $E\bigl[\|u_i\|^{-2c} \mid \sigma^2\bigr]$. Fix $i \in \{1,2,\dots,r\}$. Given $\sigma^2$, $\bigl(R_i Q^{-1} R_i^T\bigr)^{-1/2} u_i$ has a multivariate normal distribution with identity covariance matrix. It follows that, conditional on $\sigma^2$, the distribution of $u_i^T \bigl(R_i Q^{-1} R_i^T\bigr)^{-1} u_i$ is $\chi_{q_i}^2(w)$ for some non-centrality parameter $w$. It then follows from Lemma 4 that, as long as $c \in (0, 1/2)$, we have
$$ E\Bigl[\bigl[u_i^T \bigl(R_i Q^{-1} R_i^T\bigr)^{-1} u_i\bigr]^{-c} \,\Big|\, \sigma^2\Bigr] \le 2^{-c}\, \frac{\Gamma\bigl(\frac{q_i}{2} - c\bigr)}{\Gamma\bigl(\frac{q_i}{2}\bigr)}. $$
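The Lemma 4 bound just invoked is easy to sanity-check by simulation, since the right-hand side does not involve the non-centrality parameter. A short sketch (ours, not from the paper; numpy assumed, with illustrative values of $k$, $\mu$ and $\gamma$):

```python
import numpy as np
from math import exp, lgamma

rng = np.random.default_rng(3)

# Lemma 4:  J ~ chi^2_k(mu), gamma in (0, k/2)  implies
#   E[J^{-gamma}] <= 2^{-gamma} Gamma(k/2 - gamma) / Gamma(k/2).
k, mu, g = 6, 2.5, 1.0
bound = exp(-g * np.log(2.0) + lgamma(k / 2 - g) - lgamma(k / 2))   # = 0.25 here
draws = rng.noncentral_chisquare(df=k, nonc=mu, size=400_000)
mc = float(np.mean(draws ** (-g)))                                   # Monte Carlo E[J^{-1}]
print(mc < bound)   # True; the bound is tight only as mu -> 0
```

For the central case ($\mu = 0$) the bound holds with equality, so the Monte Carlo average should approach it from below as $\mu$ shrinks.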
Now, by Lemma 1,
$$ \begin{aligned} E\bigl[\|u_i\|^{-2c} \mid \sigma^2\bigr] &= \bigl(\sigma_e^{-2}\lambda_{\max} + \sigma_{u_i}^{-2}\bigr)^{c}\, E\Bigl[\bigl[\bigl(\sigma_e^{-2}\lambda_{\max} + \sigma_{u_i}^{-2}\bigr)\|u_i\|^2\bigr]^{-c} \,\Big|\, \sigma^2\Bigr] \\ &\le \bigl(\sigma_e^{-2}\lambda_{\max} + \sigma_{u_i}^{-2}\bigr)^{c}\, E\Bigl[\bigl[u_i^T \bigl(R_i Q^{-1} R_i^T\bigr)^{-1} u_i\bigr]^{-c} \,\Big|\, \sigma^2\Bigr] \\ &\le \bigl(\sigma_e^{-2}\lambda_{\max} + \sigma_{u_i}^{-2}\bigr)^{c}\, 2^{-c}\, \frac{\Gamma\bigl(\frac{q_i}{2} - c\bigr)}{\Gamma\bigl(\frac{q_i}{2}\bigr)} \\ &\le 2^{-c}\, \frac{\Gamma\bigl(\frac{q_i}{2} - c\bigr)}{\Gamma\bigl(\frac{q_i}{2}\bigr)} \bigl[\lambda_{\max}^{c}\,\sigma_e^{-2c} + \sigma_{u_i}^{-2c}\bigr]. \end{aligned} $$

REFERENCES

Bednorz, W. and Łatuszyński, K. (2007). A few remarks on "Fixed-width output analysis for Markov chain Monte Carlo" by Jones et al. Journal of the American Statistical Association.
Daniels, M. J. (1999). A prior for the variance in hierarchical models. Canadian Journal of Statistics.
Diaconis, P., Khare, K. and Saloff-Coste, L. (2008). Gibbs sampling, exponential families and orthogonal polynomials (with discussion). Statistical Science.
Flegal, J. M., Haran, M. and Jones, G. L. (2008). Markov chain Monte Carlo: Can we trust the third significant figure? Statistical Science.
Flegal, J. M. and Jones, G. L. (2010). Batch means and spectral variance estimators in Markov chain Monte Carlo. Annals of Statistics.
Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models. Bayesian Analysis.
Johnson, A. A. and Jones, G. L. (2010). Gibbs sampling for a Bayesian hierarchical general linear model. Electronic Journal of Statistics.
Jones, G. L., Haran, M., Caffo, B. S. and Neath, R. (2006). Fixed-width output analysis for Markov chain Monte Carlo. Journal of the American Statistical Association.
Khare, K. and Hobert, J. P. (2011). A spectral analytic comparison of trace-class data augmentation algorithms and their sandwich variants. Annals of Statistics.
Liu, J. S., Wong, W. H. and Kong, A. (1994). Covariance structure of the Gibbs sampler with applications to comparisons of estimators and augmentation schemes. Biometrika.
Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer-Verlag, London.
Papaspiliopoulos, O. and Roberts, G. O. (2008). Stability of the Gibbs sampler for Bayesian hierarchical models. Annals of Statistics.
Roberts, G. O. and Rosenthal, J.
S. (1998). Markov chain Monte Carlo: Some practical implications of theoretical results (with discussion). Canadian Journal of Statistics.
Roberts, G. O. and Rosenthal, J. S. (2001). Markov chains and de-initializing processes. Scandinavian Journal of Statistics.
Roberts, G. O. and Rosenthal, J. S. (2004). General state space Markov chains and MCMC algorithms. Probability Surveys.
Román, J. C. (2012). Convergence Analysis of Block Gibbs Samplers for Bayesian General Linear Mixed Models. PhD thesis, Department of Statistics, University of Florida.
Searle, S. R., Casella, G. and McCulloch, C. E. (1992). Variance Components. John Wiley & Sons, New York.
Sun, D., Tsutakawa, R. K. and He, Z. (2001). Propriety of posteriors with improper priors in hierarchical linear mixed models. Statistica Sinica.
Tan, A. and Hobert, J. P. (2009). Block Gibbs sampling for Bayesian random effects models with improper priors: Convergence and regeneration. Journal of Computational and Graphical Statistics.

Department of Mathematics, Vanderbilt University, Nashville, Tennessee 37240, USA. jc.roman@vanderbilt.edu
Department of Statistics, University of Florida, Gainesville, Florida 32611, USA. jhobert@stat.ufl.edu
More informationCONVERGENCE RATES AND REGENERATION OF THE BLOCK GIBBS SAMPLER FOR BAYESIAN RANDOM EFFECTS MODELS
CONVERGENCE RATES AND REGENERATION OF THE BLOCK GIBBS SAMPLER FOR BAYESIAN RANDOM EFFECTS MODELS By AIXIN TAN A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
More informationSlice Sampling with Adaptive Multivariate Steps: The Shrinking-Rank Method
Slice Sampling with Adaptive Multivariate Steps: The Shrinking-Rank Method Madeleine B. Thompson Radford M. Neal Abstract The shrinking rank method is a variation of slice sampling that is efficient at
More informationStat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2
Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, 2010 Jeffreys priors Lecturer: Michael I. Jordan Scribe: Timothy Hunter 1 Priors for the multivariate Gaussian Consider a multivariate
More informationThe linear model is the most fundamental of all serious statistical models encompassing:
Linear Regression Models: A Bayesian perspective Ingredients of a linear model include an n 1 response vector y = (y 1,..., y n ) T and an n p design matrix (e.g. including regressors) X = [x 1,..., x
More informationApril 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning
for for Advanced Topics in California Institute of Technology April 20th, 2017 1 / 50 Table of Contents for 1 2 3 4 2 / 50 History of methods for Enrico Fermi used to calculate incredibly accurate predictions
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationAn introduction to adaptive MCMC
An introduction to adaptive MCMC Gareth Roberts MIRAW Day on Monte Carlo methods March 2011 Mainly joint work with Jeff Rosenthal. http://www2.warwick.ac.uk/fac/sci/statistics/crism/ Conferences and workshops
More informationInvariant HPD credible sets and MAP estimators
Bayesian Analysis (007), Number 4, pp. 681 69 Invariant HPD credible sets and MAP estimators Pierre Druilhet and Jean-Michel Marin Abstract. MAP estimators and HPD credible sets are often criticized in
More informationOn the Optimal Scaling of the Modified Metropolis-Hastings algorithm
On the Optimal Scaling of the Modified Metropolis-Hastings algorithm K. M. Zuev & J. L. Beck Division of Engineering and Applied Science California Institute of Technology, MC 4-44, Pasadena, CA 925, USA
More informationMore Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction
Sankhyā : The Indian Journal of Statistics 2007, Volume 69, Part 4, pp. 700-716 c 2007, Indian Statistical Institute More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order
More informationIntroduction to Group Theory
Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)
More information1 Stochastic Dynamic Programming
1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future
More informationWeighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing
Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 2, pp. 401 412 (2013) http://campus.mst.edu/adsa Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes
More informationBayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference
1 The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Board of Governors or the Federal Reserve System. Bayesian Estimation of DSGE
More informationPart III. A Decision-Theoretic Approach and Bayesian testing
Part III A Decision-Theoretic Approach and Bayesian testing 1 Chapter 10 Bayesian Inference as a Decision Problem The decision-theoretic framework starts with the following situation. We would like to
More informationDAVID DAMANIK AND DANIEL LENZ. Dedicated to Barry Simon on the occasion of his 60th birthday.
UNIFORM SZEGŐ COCYCLES OVER STRICTLY ERGODIC SUBSHIFTS arxiv:math/052033v [math.sp] Dec 2005 DAVID DAMANIK AND DANIEL LENZ Dedicated to Barry Simon on the occasion of his 60th birthday. Abstract. We consider
More informationBayesian Statistical Methods. Jeff Gill. Department of Political Science, University of Florida
Bayesian Statistical Methods Jeff Gill Department of Political Science, University of Florida 234 Anderson Hall, PO Box 117325, Gainesville, FL 32611-7325 Voice: 352-392-0262x272, Fax: 352-392-8127, Email:
More informationLECTURE 15 Markov chain Monte Carlo
LECTURE 15 Markov chain Monte Carlo There are many settings when posterior computation is a challenge in that one does not have a closed form expression for the posterior distribution. Markov chain Monte
More informationComputational statistics
Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated
More informationPartially Collapsed Gibbs Samplers: Theory and Methods
David A. VAN DYK and Taeyoung PARK Partially Collapsed Gibbs Samplers: Theory and Methods Ever-increasing computational power, along with ever more sophisticated statistical computing techniques, is making
More informationarxiv: v5 [math.na] 16 Nov 2017
RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More information7. Estimation and hypothesis testing. Objective. Recommended reading
7. Estimation and hypothesis testing Objective In this chapter, we show how the election of estimators can be represented as a decision problem. Secondly, we consider the problem of hypothesis testing
More informationGlobal Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations
Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations C. David Levermore Department of Mathematics and Institute for Physical Science and Technology
More informationSTA414/2104 Statistical Methods for Machine Learning II
STA414/2104 Statistical Methods for Machine Learning II Murat A. Erdogdu & David Duvenaud Department of Computer Science Department of Statistical Sciences Lecture 3 Slide credits: Russ Salakhutdinov Announcements
More informationENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM
c 2007-2016 by Armand M. Makowski 1 ENEE 621 SPRING 2016 DETECTION AND ESTIMATION THEORY THE PARAMETER ESTIMATION PROBLEM 1 The basic setting Throughout, p, q and k are positive integers. The setup With
More informationHIERARCHICAL BAYESIAN ANALYSIS OF BINARY MATCHED PAIRS DATA
Statistica Sinica 1(2), 647-657 HIERARCHICAL BAYESIAN ANALYSIS OF BINARY MATCHED PAIRS DATA Malay Ghosh, Ming-Hui Chen, Atalanta Ghosh and Alan Agresti University of Florida, Worcester Polytechnic Institute
More informationA regeneration proof of the central limit theorem for uniformly ergodic Markov chains
A regeneration proof of the central limit theorem for uniformly ergodic Markov chains By AJAY JASRA Department of Mathematics, Imperial College London, SW7 2AZ, London, UK and CHAO YANG Department of Mathematics,
More information2. The Concept of Convergence: Ultrafilters and Nets
2. The Concept of Convergence: Ultrafilters and Nets NOTE: AS OF 2008, SOME OF THIS STUFF IS A BIT OUT- DATED AND HAS A FEW TYPOS. I WILL REVISE THIS MATE- RIAL SOMETIME. In this lecture we discuss two
More informationON THE EQUIVALENCE OF CONGLOMERABILITY AND DISINTEGRABILITY FOR UNBOUNDED RANDOM VARIABLES
Submitted to the Annals of Probability ON THE EQUIVALENCE OF CONGLOMERABILITY AND DISINTEGRABILITY FOR UNBOUNDED RANDOM VARIABLES By Mark J. Schervish, Teddy Seidenfeld, and Joseph B. Kadane, Carnegie
More informationApproximating a Diffusion by a Finite-State Hidden Markov Model
Approximating a Diffusion by a Finite-State Hidden Markov Model I. Kontoyiannis S.P. Meyn November 28, 216 Abstract For a wide class of continuous-time Markov processes evolving on an open, connected subset
More informationON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS
Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)
More informationAnalysis of the Gibbs sampler for a model. related to James-Stein estimators. Jeffrey S. Rosenthal*
Analysis of the Gibbs sampler for a model related to James-Stein estimators by Jeffrey S. Rosenthal* Department of Statistics University of Toronto Toronto, Ontario Canada M5S 1A1 Phone: 416 978-4594.
More informationTechnical Vignette 5: Understanding intrinsic Gaussian Markov random field spatial models, including intrinsic conditional autoregressive models
Technical Vignette 5: Understanding intrinsic Gaussian Markov random field spatial models, including intrinsic conditional autoregressive models Christopher Paciorek, Department of Statistics, University
More informationIsomorphisms between pattern classes
Journal of Combinatorics olume 0, Number 0, 1 8, 0000 Isomorphisms between pattern classes M. H. Albert, M. D. Atkinson and Anders Claesson Isomorphisms φ : A B between pattern classes are considered.
More information