Latent variable models: a review of estimation methods


1 Latent variable models: a review of estimation methods Irini Moustaki London School of Economics Conference to honor the scientific contributions of Professor Michael Browne

2 Outline. Modeling approaches for ordinal variables; review of estimation methods for latent variable models; a special case: longitudinal data; goodness-of-fit tests and model selection criteria; applications.

3 Notation and general aim. Manifest variables are denoted by $x_1, x_2, \ldots, x_p$. Latent variables are denoted by $z_1, z_2, \ldots, z_q$. One wants to find a set of latent variables $z_1, \ldots, z_q$, fewer in number than the observed variables ($q < p$), that contain essentially the same information.

4 Estimation Methods. Full maximum likelihood estimation (Bock and Aitkin, 1981; Bartholomew and Knott, 1997): E-M algorithm, Newton-Raphson maximization. Limited-information methods or composite likelihood: lower-order margins. Markov chain Monte Carlo methods (Albert, 1992; Albert and Chib, 1993; Baker, 1998; Patz and Junker, 1999a,b; Dunson, 2000, 2003; Fox and Glas, 2001; Shi and Lee, 1998; Lee and Song, 2003; Lee, 2007).

5 Full Maximum Likelihood. Since all variables are random and only $x$ is observed:
$$f(x) = \int g(x \mid z)\,h(z)\,dz, \qquad g(x \mid z) = \prod_{i=1}^{p} g(x_i \mid z), \qquad (z_1, \ldots, z_q) \sim N(0, I).$$
For a random sample of size $n$ the log-likelihood is written as:
$$L = \sum_{h=1}^{n} \log f(x_h).$$
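The marginal likelihood above can be sketched in code. The following is an illustrative example of my own, assuming a hypothetical one-factor model with binary items and a logistic response function (the slides cover the general ordinal case); the one-dimensional integral over $z$ is approximated by Gauss-Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite rule

# Hypothetical one-factor model for p binary items:
#   P(x_i = 1 | z) = 1 / (1 + exp(-(a0_i + a1_i * z))),  z ~ N(0, 1).
def marginal_loglik(X, a0, a1, n_points=21):
    """Approximate sum_h log f(x_h) = sum_h log ∫ g(x_h|z) φ(z) dz by quadrature."""
    nodes, weights = hermegauss(n_points)
    weights = weights / np.sqrt(2 * np.pi)   # normalize: weights now sum to 1
    # g(x | z) evaluated at every quadrature node
    eta = a0[None, None, :] + a1[None, None, :] * nodes[None, :, None]  # (1, Q, p)
    pi = 1.0 / (1.0 + np.exp(-eta))
    X3 = X[:, None, :]                                                  # (n, 1, p)
    g = np.prod(pi ** X3 * (1 - pi) ** (1 - X3), axis=2)                # (n, Q)
    f = g @ weights                     # marginal probability f(x_h) per pattern
    return float(np.log(f).sum())
```

With a handful of items one quadrature grid per factor suffices; with $q$ factors the grid grows as $Q^q$, which is exactly the computational burden the later slides address.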

6 Notes. $q$-dimensional integration (Laplace approximation, quadrature points, adaptive quadrature, Monte Carlo). Maximization (E-M, Newton-Raphson).

7 Composite likelihoods: a categorization based on Varin and Vidoni (2005). Omission methods: remove terms that make the evaluation of the full likelihood complicated (Besag, 1974; Azzalini, 1983). Subsetting methods: likelihoods composed of univariate, bivariate, trivariate, ... margins (Cox and Reid, 2004). They all fall within the context of pseudo-likelihood methods (Godambe, 1960) or misspecified models (White, 1982).

8 Composite likelihoods based on marginal densities. Key idea: the model still holds marginally for a specific set of variables. Use only information from the univariate and/or bivariate margins, e.g. maximize the sum of all the log bivariate likelihoods. The estimator obtained is, for large $n$, consistent and asymptotically normal. Aim: decrease the number of integrals required without losing too much precision.
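The "sum of all log bivariate likelihoods" idea can be written generically. In this sketch of my own, `pair_logpmf` is a hypothetical callable returning the log bivariate marginal probability $\log \pi_{ij}(x_i, x_j; \theta)$ for one item pair; any model supplying such margins plugs in:

```python
import numpy as np
from itertools import combinations

# Generic pairwise (bivariate composite) log-likelihood: sum the log bivariate
# marginal probabilities over all item pairs and all observations.
# `pair_logpmf` is an assumed, model-specific callable, not part of the slides.
def pairwise_loglik(X, theta, pair_logpmf):
    """X: (n, p) data matrix; pair_logpmf(xi, xj, i, j, theta) -> per-row log π_ij."""
    total = 0.0
    for i, j in combinations(range(X.shape[1]), 2):
        total += float(np.sum(pair_logpmf(X[:, i], X[:, j], i, j, theta)))
    return total
```

Maximizing this objective needs only two-dimensional (rather than $q$-dimensional) integrals inside each `pair_logpmf` call, which is the promised computational saving.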

9 Bayesian framework estimation. Let us denote by $v = (z, \alpha)$ the vector with all the unknown parameters. The joint posterior distribution of the parameter vector $v$ is:
$$h(v \mid x) = \frac{g(x \mid v)\,\psi(v)}{\int \cdots \int g(x \mid v)\,\psi(v)\,dv} \propto g(x \mid v)\,\psi(v), \quad (1)$$
which is the likelihood of the data multiplied by the prior distribution $\psi(v)$ of all model parameters, including the latent variables, and divided by a normalizing constant. Calculating the normalizing constant requires multi-dimensional integration that can be computationally very heavy, if not infeasible.

10 The main steps of the Bayesian approach are: 1. Inference is based on $h(v \mid x)$. 2. The mean, mode or any other percentile vector and the standard deviation of $h(v \mid x)$ can be used as an estimator of $v$ and its corresponding standard error. 3. Analytic evaluation of the required expectations is impossible; alternatives include numerical evaluation, analytic approximations and Monte Carlo integration. 4. MCMC methods are used for sampling from the posterior distribution (Metropolis-Hastings algorithm, Gibbs sampling).

11 General framework for ordinal variables. Let $x_1, x_2, \ldots, x_p$ denote the observed ordinal variables and let $m_i$ denote the number of response categories of variable $i$. We write $x_i = s$ to mean that $x_i$ belongs to the ordered category $s$, $s = 1, 2, \ldots, m_i$. There are $\prod_{i=1}^{p} m_i$ possible response patterns. Let $x_r = (x_1 = s_1, x_2 = s_2, \ldots, x_p = s_p)$ represent any one of these. The model specifies the probability $\pi_r = \pi_r(\theta) > 0$ of $x_r$, with $\sum_r \pi_r = 1$ for all $\theta$, where $\theta$ is the vector of all model parameters. The different approaches differ in the way $\pi_r(\theta)$ is specified and in the way the model is estimated.

12 Modelling approaches. Generalized latent variable framework (Samejima, 1969; Moustaki, 2003). To take into account the ordinality of the items we model the cumulative probabilities:
$$\gamma_{i,s}(z) = P(x_i \le s \mid z) = \pi_{i1}(z) + \pi_{i2}(z) + \cdots + \pi_{is}(z).$$
The response category probabilities are then:
$$\pi_{i,s}(z) = \gamma_{i,s}(z) - \gamma_{i,s-1}(z), \quad s = 1, \ldots, m_i.$$
Proportional odds model:
$$\ln\left[\frac{\gamma_{i,s}(z)}{1 - \gamma_{i,s}(z)}\right] = \alpha_{is} - \sum_{j=1}^{q} \alpha_{ij} z_j, \quad i = 1, \ldots, p.$$
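For one item, the mapping from cumulative logits to category probabilities can be sketched directly (my own illustration of the proportional-odds formulas above; the argument names are hypothetical):

```python
import numpy as np

# Response-category probabilities for one ordinal item under the
# proportional-odds model: logit γ_{i,s}(z) = α_{is} - Σ_j α_{ij} z_j,
# with γ_{i,0} = 0 and γ_{i,m_i} = 1 by construction.
def category_probs(alpha_cut, alpha_load, z):
    """alpha_cut: increasing cutpoints (m_i - 1,); alpha_load: loadings (q,); z: (q,)."""
    logit = alpha_cut - alpha_load @ z             # one logit per cumulative probability
    gamma = 1.0 / (1.0 + np.exp(-logit))           # γ_{i,1}, ..., γ_{i,m_i-1}
    gamma = np.concatenate(([0.0], gamma, [1.0]))  # pad γ_{i,0} = 0, γ_{i,m_i} = 1
    return np.diff(gamma)                          # π_{i,s} = γ_{i,s} - γ_{i,s-1}
```

Because the cutpoints are increasing, the cumulative probabilities are increasing and every $\pi_{i,s}$ is positive, which is exactly what the ordering constraint on the $\alpha_{is}$ guarantees.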

13 The thresholds satisfy $\alpha_{i1} < \alpha_{i2} < \cdots < \alpha_{i,m_i-1} < \alpha_{i,m_i} = \infty$. Let $x_r = (x_1 = s_1, x_2 = s_2, \ldots, x_p = s_p)$ represent a full response pattern. Under the assumption of conditional independence, the conditional probability, for given $z$, of the response pattern $x_r$ is
$$g(x_r \mid z) = \pi_r(z) = \prod_{i=1}^{p} \pi_{i,s}(z) = \prod_{i=1}^{p} \left[\gamma_{i,s}(z) - \gamma_{i,s-1}(z)\right]. \quad (2)$$
The unconditional probability $\pi_r$ of the response pattern $x_r$ is obtained by integrating $\pi_r(z)$ over the $q$-dimensional factor space:
$$f(x_r) = \pi_r = \int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty} g(x_r \mid z)\,h(z)\,dz, \quad (3)$$
where $h(z)$ is the density function of $z$.

14 Full Information ML (FIML).
$$\ln L = \sum_r n_r \ln \pi_r = N \sum_r p_r \ln \pi_r, \quad (4)$$
where $n_r$ is the frequency of response pattern $r$, $N = \sum_r n_r$ is the sample size and $p_r = n_r/N$ is the sample proportion of response pattern $r$. The maximization can be done with E-M or Newton-Raphson. Full ML is computationally intensive for a large number of factors; adaptive quadrature methods should be used. Pairwise likelihood does not have an advantage here.

15 MCMC estimation. Gibbs sampling (Geman and Geman, 1984; Gelfand and Smith, 1990) is a way of sampling from complex joint posterior distributions. Partition the vector of unknown parameters $v$ into two components, $v_1 = z$ and $v_2 = \alpha$. Simulate from $h(z^{(1)} \mid \alpha^{(0)}, x)$ and $h(\alpha^{(1)} \mid z^{(1)}, x)$. The Gibbs sampler eventually produces a sequence of iterations $v^{(0)}, v^{(1)}, \ldots$ that form a Markov chain converging to the desired posterior distribution; it can be summarized in the following two steps:

16 1. Start with initial guesses $z^{(0)}, \alpha^{(0)}$. 2. Then simulate in the following order: draw $z^{(1)}$ from $h(z \mid \alpha^{(0)}, x)$; draw $\alpha^{(1)}$ from $h(\alpha \mid z^{(1)}, x)$. The conditional distributions are:
$$h(z \mid \alpha, x) = \frac{g(x \mid z, \alpha)\,h(z, \alpha)}{\int g(x \mid z, \alpha)\,h(z, \alpha)\,dz} \quad (5)$$
$$h(\alpha \mid z, x) = \frac{g(x \mid z, \alpha)\,h(z, \alpha)}{\int g(x \mid z, \alpha)\,h(z, \alpha)\,d\alpha} \quad (6)$$
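The two-block alternation can be demonstrated end to end on a toy model. The sketch below is my own illustrative stand-in, not the slides' ordinal model: $x_i = \alpha + z_i + e_i$ with $z_i \sim N(0,1)$, $e_i \sim N(0,1)$ and a flat prior on $\alpha$, so both full conditionals are normal and plain Gibbs applies (a non-conjugate model would need a Metropolis-Hastings step within Gibbs):

```python
import numpy as np

# Two-block Gibbs sampler on a toy conjugate latent-variable model.
rng = np.random.default_rng(1)
n = 200
x = 2.0 + rng.normal(size=n) + rng.normal(size=n)  # data generated with true α = 2

alpha, draws = 0.0, []
for it in range(1500):
    # Draw z | α, x : elementwise posterior N((x - α)/2, 1/2)
    z = (x - alpha) / 2.0 + rng.normal(size=n) * np.sqrt(0.5)
    # Draw α | z, x : posterior N(mean(x - z), 1/n) under the flat prior
    alpha = np.mean(x - z) + rng.normal() / np.sqrt(n)
    if it >= 500:                                  # discard burn-in
        draws.append(alpha)
alpha_hat = float(np.mean(draws))                  # posterior mean, used as estimator
```

The retained draws approximate $h(\alpha \mid x)$; their mean and standard deviation play the roles of point estimate and standard error, as in step 2 of the Bayesian approach.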

17 When the normalizing constant is not in closed form, a Metropolis-Hastings step within Gibbs is needed. Computationally heavy. Convergence criteria should be used. Model selection criteria such as the Bayes factor require the calculation of the normalizing constant. MCMC methods give similar results to FIML (Kim, 2001; Wollack et al., 2002; Moustaki and Knott, 2005).

18 Example: compare E-M and MCMC. Social Life Feelings scale (Bartholomew and Knott, 2007). The MCMC solution is obtained with BUGS; a two-parameter model is fitted.

Item   E-M $\hat\alpha_{i0}$   E-M $\hat\alpha_{i1}$   MCMC $\hat\alpha_{i0}$   MCMC $\hat\alpha_{i1}$
1      (0.13)                  1.20 (0.14)             -2.37 (0.14)             1.21 (0.15)
2      (0.06)                  0.71 (0.09)              0.79 (0.06)             0.70 (0.09)
3      (0.09)                  1.53 (0.17)              0.99 (0.09)             1.52 (0.17)
4      (0.13)                  2.55 (0.41)             -0.74 (0.20)             2.86 (0.81)
5      (0.07)                  0.92 (0.10)             -1.10 (0.07)             0.92 (0.11)

19 Underlying variable approach: assumes that the observed categorical variables are generated by a set of underlying unobserved continuous variables (Muthén, 1984; Jöreskog, 1994; Lee, Poon, and Bentler, 1990, 1992). This approach employs the classical factor analysis model:
$$x_i^* = \lambda_{i1} z_1 + \lambda_{i2} z_2 + \cdots + \lambda_{iq} z_q + u_i, \quad i = 1, 2, \ldots, p, \quad (7)$$
where $x_i$ is related to $x_i^*$ by
$$x_i = a \iff \tau_{a-1}^{(i)} < x_i^* \le \tau_a^{(i)}, \quad a = 1, 2, \ldots, m_i, \quad (8)$$
with $\tau_0^{(i)} = -\infty$, $\tau_1^{(i)} < \tau_2^{(i)} < \cdots < \tau_{m_i-1}^{(i)}$, $\tau_{m_i}^{(i)} = +\infty$. The variables $z_1, \ldots, z_q, u_1, \ldots, u_p$ are independent and normally distributed with $z_j \sim N(0, 1)$ and $u_i \sim N(0, \psi_i^2)$.

20 $\psi_i^2 = 1 - \sum_{j=1}^{q} \lambda_{ij}^2$, so that $x_1^*, \ldots, x_p^*$ follow a standardized multivariate normal distribution $MVN(0, P)$ with correlation matrix $P = (\rho_{ij})$, where $\rho_{ij} = \sum_{l=1}^{q} \lambda_{il}\lambda_{jl}$. It follows that the probability $\pi_r(\theta)$ of a general $p$-dimensional response pattern is
$$\pi_r(\theta) = \Pr(x_1 = s_1, x_2 = s_2, \ldots, x_p = s_p) = \int_{\tau_{s_1-1}^{(1)}}^{\tau_{s_1}^{(1)}} \int_{\tau_{s_2-1}^{(2)}}^{\tau_{s_2}^{(2)}} \cdots \int_{\tau_{s_p-1}^{(p)}}^{\tau_{s_p}^{(p)}} \phi_p(u_1, u_2, \ldots, u_p \mid P)\,du_1\,du_2 \cdots du_p, \quad (9)$$
where the integral in (9) is over the $p$-dimensional normal density function.

21 Estimation methods within the UVA. 1. Full ML:
$$\ln L = \sum_r n_r \ln \pi_r = N \sum_r p_r \ln \pi_r. \quad (10)$$
If there is no model, so that the $\pi_r$ are unconstrained, the maximum of $\ln L$ is
$$\ln L_1 = \sum_r n_r \ln p_r = N \sum_r p_r \ln p_r.$$
Instead of maximizing (10) it is convenient to minimize the fit function
$$F(\theta) = \sum_r p_r \left[\ln p_r - \ln \pi_r(\theta)\right] = \sum_r p_r \ln\left[p_r / \pi_r(\theta)\right]. \quad (11)$$
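The fit function (11) is the Kullback-Leibler discrepancy between the sample proportions $p_r$ and the model-implied probabilities $\pi_r(\theta)$; it is zero exactly when the model reproduces the observed pattern proportions. A minimal sketch (my own):

```python
import numpy as np

# Fit function (11): F(θ) = Σ_r p_r ln[p_r / π_r(θ)].
# Response patterns with zero observed frequency contribute nothing to the sum.
def fit_function(p, pi):
    p, pi = np.asarray(p, float), np.asarray(pi, float)
    pos = p > 0
    return float(np.sum(p[pos] * np.log(p[pos] / pi[pos])))
```

Minimizing `fit_function` over $\theta$ is equivalent to maximizing the multinomial log-likelihood (10), since the two differ by the constant $\sum_r p_r \ln p_r$.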

22 2. Three-stage procedures leading to limited-information methods. 3. Composite likelihood methods: use information from the univariate and bivariate margins to estimate the model (Jöreskog and Moustaki, 2001). From the multivariate normality of $x_1^*, \ldots, x_p^*$, it follows that
$$\pi_a^{(g)}(\theta) = \int_{\tau_{a-1}^{(g)}}^{\tau_a^{(g)}} \phi(u)\,du, \quad (12)$$
$$\pi_{ab}^{(gh)}(\theta) = \int_{\tau_{a-1}^{(g)}}^{\tau_a^{(g)}} \int_{\tau_{b-1}^{(h)}}^{\tau_b^{(h)}} \phi_2(u, v \mid \rho_{gh})\,du\,dv, \quad (13)$$
where $\phi_2(u, v \mid \rho)$ is the density function of the standardized bivariate normal distribution with correlation $\rho$.
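The margins (12)-(13) can be evaluated numerically as rectangle probabilities of the (bivariate) standard normal. A sketch of my own, where the infinite end thresholds $\tau_0 = -\infty$, $\tau_{m} = +\infty$ are approximated by $\mp 8$ for the bivariate CDF calls:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def uni_prob(tau, a):
    """π_a^(g) = Φ(τ_a) - Φ(τ_{a-1}); `tau` includes the padded end thresholds."""
    return norm.cdf(tau[a]) - norm.cdf(tau[a - 1])

def biv_prob(tau_g, tau_h, a, b, rho):
    """π_ab^(gh) as a rectangle probability of the standardized bivariate normal."""
    F = lambda u, v: multivariate_normal.cdf(
        [u, v], mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return (F(tau_g[a], tau_h[b]) - F(tau_g[a - 1], tau_h[b])
            - F(tau_g[a], tau_h[b - 1]) + F(tau_g[a - 1], tau_h[b - 1]))
```

Only one- and two-dimensional normal integrals are needed, regardless of the number of factors $q$, which is the main computational appeal of the approach.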

23 Minimize the sum of all univariate and bivariate fit functions:
$$F(\theta) = \sum_{g=1}^{p} \sum_{a=1}^{m_g} p_a^{(g)} \ln\left[p_a^{(g)} / \pi_a^{(g)}(\theta)\right] + \sum_{g=2}^{p} \sum_{h=1}^{g-1} \sum_{a=1}^{m_g} \sum_{b=1}^{m_h} p_{ab}^{(gh)} \ln\left[p_{ab}^{(gh)} / \pi_{ab}^{(gh)}(\theta)\right].$$
Only data in the univariate and bivariate margins are used. This approach is quite feasible in that it can handle a large number of variables as well as a large number of factors. The asymptotic covariance matrix is often unstable in small samples, particularly if there are zero or small frequencies in the bivariate margins. The bivariate approach estimates the thresholds and the factor loadings in a single step from the univariate and bivariate margins, without the use of a weight matrix.

24 Simulation study: small population. $p = 4$ variables satisfying a one-factor model; $m_i = 5$, $i = 1, \ldots, p$. There are $Q = 625$ possible response patterns. The model has 20 parameters. The sample size is $N = 800$. In the probit-generated data there are 163 distinct response patterns in the sample (NOR). In the logit-generated data there are 305 distinct response patterns in the sample (POM).

25 The logit link spreads probability over a wider set of response patterns, whereas the probit link concentrates probability on a smaller set of response patterns.

Table 1: Small population: estimated factor loadings. Columns: True; UBN, NOR, POM (NOR-generated data); UBN, NOR, POM (POM-generated data).

26 Multivariate longitudinal ordinal data. Study the evolution over time of traits, attitudes, ability, etc. Characteristics of longitudinal data: extra dependencies to account for (within time / between time). Factor changes over time are modelled through a multivariate normal distribution; the covariance matrix imposed on the time-related latent variables accounts for the dependence of the item responses across time. A GLLVM accounts for dependencies within and across time using time-specific factors and a heteroscedastic item-dependent random effect term (Dunson, 2003, JASA; Cagnone, Moustaki and Vasdekis, 2009, BJMSP). An autoregressive model is used for the structural part of the model.

27 Within the Structural Equation Modelling (SEM) framework, longitudinal data are treated in different ways: individual response curves over time by means of latent growth models, whose main feature is that the parameters of the curve, random intercept and random slope, can be viewed as latent variables; or allowing for correlated errors.

28 Notation for longitudinal data. Let $x_{1t}, x_{2t}, \ldots, x_{pt}$ be the ordinal observed variables measured at time $t$ ($t = 1, \ldots, T$). The $m_i$ ordered categories of the $i$th item measured at time $t$ have response category probabilities $\pi_{it(1)}(z_t, u_i), \pi_{it(2)}(z_t, u_i), \ldots, \pi_{it(m_i)}(z_t, u_i)$. The time-dependent latent variable $z_t$ accounts for the dependencies among the $p$ items measured at time $t$. The random effect $u_i$ accounts for the dependencies among the responses to item $i$ measured at different time points.

29 Model specification and assumptions. The systematic component:
$$\eta_{it(s)} = \tau_{it(s)} - \alpha_{it} z_t + u_i, \quad i = 1, \ldots, p; \; s = 1, \ldots, m_i; \; t = 1, \ldots, T. \quad (14)$$
The link between the systematic component and the conditional means of the random component distributions is $\eta_{it(s)} = v_{it(s)}(\gamma_{it(s)})$, where $\gamma_{it(s)} = P(x_{it} \le s \mid z_t, u_i)$ and $v_{it(s)}(\cdot)$ is the link function.

30 The associations among the items measured at time $t$ are explained by the latent variable $z_t$:
$$\mathrm{Cov}(\eta_{it}, \eta_{i't}) = \alpha_{it}\,\alpha_{i't}\,\mathrm{Var}(z_t), \quad i \ne i', \quad (15)$$
with $\mathrm{Var}(z_t) = \phi^{2(t-1)} + \sigma^{2 I(t \ge 2)} \sum_{k=1}^{t-1} \phi^{2(k-1)}$. The associations among the same item measured across time $(x_{i1}, \ldots, x_{iT})$ are explained by the item-specific random effect $u_i$:
$$\mathrm{Cov}(\eta_{it}, \eta_{it'}) = \alpha_{it}\,\alpha_{it'}\,\mathrm{Cov}(z_t, z_{t'}) + \sigma_{ui}^2, \quad t < t', \quad (16)$$
where $\mathrm{Cov}(z_t, z_{t'}) = \phi^{t+t'-2} + \sigma^{2 I(t \ge 2)} \sum_{k=0}^{t-2} \phi^{t'-t+2k}$ and $I(\cdot)$ is the indicator function.

31 The within-individual correlations are accounted for by modelling the covariance between the latent variables $z_t$ and $z_{t'}$:
$$z_t = \phi z_{t-1} + \delta_t, \quad (17)$$
with $\delta_t \sim N(0, \sigma^2)$, $z_1 \sim N(0, 1)$ and $u_i \sim N(0, \sigma_u^2)$.
$$\mathrm{Cov}(\eta_{it}, \eta_{i't'}) = \alpha_{it}\,\alpha_{i't'}\,\mathrm{Cov}(z_t, z_{t'}), \quad i \ne i', \; t < t'. \quad (18)$$

32 Full ML estimation with the EM algorithm. For a random sample of size $n$ the complete-data log-likelihood is written as:
$$L = \sum_{m=1}^{n} \log f(x_m, z_m, u_m) = \sum_{m=1}^{n} \left[\log g(x_m \mid z_m, u_m) + \log h(z_m, u_m)\right], \quad (19)$$
$$g(x_m \mid z_m, u_m) = \prod_{t=1}^{T} \prod_{i=1}^{p} g(x_{mit} \mid z_{mt}, u_{mi}). \quad (20)$$

33 Let us define $\zeta_m = (z_m, u_m)$ and write $h(z_m, u_m) = h(\zeta_m; \phi, \sigma_u^2)$. Assuming that their joint distribution is multivariate normal, the log-density is, apart from a constant,
$$\log h(\zeta_m; \phi, \sigma_u^2) = -\tfrac{1}{2} \ln|\Phi| - \tfrac{1}{2}\,\zeta_m' \Phi^{-1} \zeta_m, \quad (21)$$
$$\Phi = \begin{bmatrix} \Gamma & 0 \\ 0 & \Omega \end{bmatrix}, \quad (22)$$
where $\Omega = \mathrm{diag}_{i=1,\ldots,p}\{\sigma_{ui}^2\}$ and the elements of $\Gamma$ are such that its inverse has

34 a well-known special tridiagonal pattern:
$$\Gamma^{-1} = \frac{1}{\sigma^2} \begin{bmatrix} \sigma^2 + \phi^2 & -\phi & & & \\ -\phi & 1 + \phi^2 & -\phi & & \\ & -\phi & 1 + \phi^2 & \ddots & \\ & & \ddots & \ddots & -\phi \\ & & & -\phi & 1 \end{bmatrix}$$
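This band structure can be checked numerically. The sketch below is my own, under the assumed AR(1) setup $z_1 \sim N(0,1)$, $\delta_t \sim N(0, \sigma^2)$: it builds $\Gamma$ from the variance recursion and verifies that its inverse is exactly the tridiagonal matrix displayed above:

```python
import numpy as np

def latent_cov(phi, sigma2, T):
    """Γ via Var(z_t) = φ² Var(z_{t-1}) + σ² and Cov(z_t, z_s) = φ^{s-t} Var(z_t), t < s."""
    G = np.zeros((T, T))
    G[0, 0] = 1.0                                  # Var(z_1) = 1
    for t in range(1, T):
        G[t, t] = phi**2 * G[t - 1, t - 1] + sigma2
    for t in range(T):
        for s in range(t + 1, T):
            G[t, s] = G[s, t] = phi**(s - t) * G[t, t]
    return G

def gamma_inverse(phi, sigma2, T):
    """The claimed tridiagonal Γ^{-1}, scaled by 1/σ²."""
    Q = np.zeros((T, T))
    Q[0, 0] = sigma2 + phi**2                      # corner entry from z_1 ~ N(0,1)
    for t in range(1, T - 1):
        Q[t, t] = 1.0 + phi**2                     # interior diagonal
    Q[T - 1, T - 1] = 1.0                          # last diagonal entry
    for t in range(T - 1):
        Q[t, t + 1] = Q[t + 1, t] = -phi           # off-diagonal band
    return Q / sigma2
```

The sparsity of $\Gamma^{-1}$ is what makes the quadratic form in (21) cheap to evaluate even for long panels.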

35 EM-algorithm. E-step: the expected score function of the model parameters is computed; the expectation is with respect to the posterior distribution of $(z_m, u_m)$ given the observations, $h(z_m, u_m \mid x_m)$, for each individual. M-step: updated parameter estimates are obtained. Full ML becomes very computationally intensive as the number of items and factors increases.

36 Composite likelihood estimation (Vasdekis, Cagnone and Moustaki, working paper). The model specification remains the same. Define the index sets
$$S_1 = \{(i, j, t) : 1 \le i < j \le p; \; t = 1, \ldots, T\},$$
$$S_2 = \{(i, i', t, t') : 1 \le i \le p; \; 1 \le i' \le p; \; 1 \le t < t' \le T\}.$$

37 Let $\pi_{it,i't',z_t,z_{t'},u_i,u_{i'}}^{(a_{it}, a_{i't'})}(\theta)$ denote the joint density of $(x_{it}, x_{i't'}, z_t, z_{t'}, u_i, u_{i'})$, and let $\pi_{it,i't'}^{(a_{it}, a_{i't'})}(\theta) = P(x_{it} = a_{it}, x_{i't'} = a_{i't'}; \theta)$ denote the marginal probability of any pair of observations. The bivariate probability for a pair of responses at the same time point, $(x_{it}, x_{i't})$, is
$$\pi_{it,i't}^{(a_{it}, a_{i't})}(\theta) = \int \pi_{it,i't,z_t,u_i,u_{i'}}^{(a_{it}, a_{i't})}(\theta)\,dz_t\,du_i\,du_{i'},$$
and for a pair of responses at different time points, $(x_{it}, x_{i't'})$,
$$\pi_{it,i't'}^{(a_{it}, a_{i't'})}(\theta) = \int \pi_{it}^{(a_{it})}(z_t, u_i)\,\pi_{i't'}^{(a_{i't'})}(z_{t'}, u_{i'})\,h(z_t, z_{t'}, u_i, u_{i'})\,dz_t\,dz_{t'}\,du_i\,du_{i'}.$$

38 The latter integral can be three-dimensional if $i = i'$; in any case, the maximum number of integrations we have to deal with is four. The contribution of any given individual to the log pairwise likelihood is therefore:
$$pl(\theta; y) = \sum_{S_1} \log \pi_{it,i't}^{(a_{it}, a_{i't})}(\theta) + \sum_{S_2} \log \pi_{it,i't'}^{(a_{it}, a_{i't'})}(\theta). \quad (23)$$
Since each component of equation (23) is a likelihood object, the estimating equation $\nabla pl(\theta) = 0$ is unbiased under the usual regularity conditions. The maximum pairwise likelihood estimator $\hat\theta_{PL}$ is consistent and asymptotically normally distributed (Arnold and Strauss, 1991) under regularity conditions found in Lehmann (1983), Crowder (1986) or Geys et al. (1997). The maximization of the likelihood in (23) is done with the E-M algorithm.
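The index sets $S_1$ and $S_2$ over which (23) sums can be enumerated explicitly; a small sketch of my own:

```python
from itertools import combinations

# S1: within-time item pairs; S2: between-time item pairs (i = i' allowed),
# matching the set definitions on slide 36.
def pair_sets(p, T):
    S1 = [(i, j, t) for t in range(1, T + 1)
                    for i, j in combinations(range(1, p + 1), 2)]
    S2 = [(i, ip, t, tp) for t, tp in combinations(range(1, T + 1), 2)
                         for i in range(1, p + 1) for ip in range(1, p + 1)]
    return S1, S2
```

For $p$ items and $T$ waves this gives $T\binom{p}{2}$ within-time terms and $p^2\binom{T}{2}$ between-time terms, each requiring at most a four-dimensional integral.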

39 Standard errors. For the pairwise likelihood defined in (23), the maximum pairwise likelihood estimator $\hat\theta_{PL} = (\hat\tau_{PL}, \hat\psi_{PL})$ converges in probability to $\theta_0$ and
$$\hat\theta_{PL} \sim N_q\!\left(\theta_0, \; J^{-1}(\theta_0)\,H(\theta_0)\,J^{-1}(\theta_0)\right),$$
where $J(\theta) = E_y\{-\nabla^2 pl(\theta; y)\}$ and $H(\theta) = \mathrm{var}(\nabla pl(\theta; y))$. Heagerty and Lele (1998) and Varin and Vidoni (2005) give expressions for the estimation of $J(\theta)$ and $H(\theta)$.
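Given per-individual score vectors and an estimate of $J$, the sandwich covariance is a few lines of linear algebra. A sketch of my own (scaling conventions vary across references; this version divides by $n$ at the end, treating $J$ and $H$ as per-observation quantities):

```python
import numpy as np

# Sandwich ("Godambe") covariance for the pairwise estimator:
#   S: (n, q) matrix whose rows are the per-individual scores ∇pl(θ̂; y_m),
#   J_hat: (q, q) estimate of the sensitivity matrix J(θ).
def sandwich_cov(S, J_hat):
    H_hat = np.cov(S, rowvar=False, bias=True)   # variability matrix H(θ)
    J_inv = np.linalg.inv(J_hat)
    return J_inv @ H_hat @ J_inv / S.shape[0]    # J⁻¹ H J⁻¹ / n
```

For an ordinary, correctly specified likelihood $J = H$ and the sandwich collapses to the usual inverse-information covariance $J^{-1}/n$; for composite likelihoods the two matrices differ, which is why both are needed.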

40 Model selection criteria and goodness-of-fit tests. Varin and Vidoni (2005) also propose a composite likelihood information criterion based on the expected Kullback-Leibler information between the true density of the data and the density estimated via pairwise likelihood:
$$pl(\hat\theta_{PL}; y) - \mathrm{tr}\{\hat{J}\hat{H}^{-1}\}. \quad (24)$$
Limited-information goodness-of-fit tests have been developed by Reiser (1996) and Maydeu-Olivares and Joe (2005, 2006):
$$X_e^2 = e'\,\hat\Sigma_e^{+}\,e, \quad (25)$$
where $\hat\Sigma_e^{+}$ is the Moore-Penrose inverse of $\hat\Sigma_e$. $X_e^2$ has an asymptotic $\chi^2$ distribution with degrees of freedom given by the rank of $\hat\Sigma_e$.
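Because $\hat\Sigma_e$ is typically rank-deficient, (25) needs a Moore-Penrose rather than an ordinary inverse, with the degrees of freedom read off from the rank. A minimal sketch of my own:

```python
import numpy as np

# Residual-based limited-information statistic (25): X²_e = e' Σ̂_e⁺ e,
# with degrees of freedom equal to rank(Σ̂_e).
def x2_e(e, Sigma_e):
    stat = float(e @ np.linalg.pinv(Sigma_e) @ e)  # Moore-Penrose inverse
    df = int(np.linalg.matrix_rank(Sigma_e))
    return stat, df
```

When $\hat\Sigma_e$ happens to be full rank, `pinv` coincides with the ordinary inverse and the degrees of freedom equal the residual dimension.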

41 Application: British Household Panel Survey. The BHPS studies social and economic change at the individual and household level in Britain. The original data set consists of 6 ordinal items on social and political attitudes measured at 5 different waves (1992, 1994, 1996, 1998, 2000). A random sample of 500 individuals is considered. The following three items for three waves (1994 = 1, 1996 = 2, 1998 = 3) are used: Private enterprise solves economic problems [Enterp]; Govt. has obligation to provide jobs [Gov]; Strong trade unions protect employees [Trunion].

42 The response alternatives are: Strongly agree, Agree, Not agree/disagree, Disagree, Strongly disagree. We collapsed the first two categories and the last two categories. Full ML and the pairwise method are used to estimate the model. Five quadrature points are used.

43 Results. Model 1: no equality constraints. Model 2: thresholds equality constraints. Model 3: loadings equality constraints. Model 4: thresholds and loadings equality constraints.

Model selection criteria for Models 1-4: AIC for FIML, CLIC for pairwise.

44 Results. Parameter estimates with standard errors in brackets, BHPS:

Items     FIML $\hat\alpha_i$   FIML $\hat\sigma^2_{ui}$   Pairwise $\hat\alpha_i$   Pairwise $\hat\sigma^2_{ui}$
Enterp    (0.73)                                           (0.52)
Govern    0.86 (0.12)           5.09 (3.41)                1.05 (0.09)               5.54 (1.98)
TrUnion   1.01 (0.18)           5.45 (3.91)                1.28 (0.15)               5.63 (1.43)

Estimated covariance matrices $\hat\Gamma_{FIML}$ and $\hat\Gamma_{Pair}$ of the time-dependent attitudinal latent variables, with $\hat\phi_{FIML} = 0.95\,(0.01)$ and $\hat\phi_{Pair} = 0.86\,(0.04)$.

45 Simulation: full vs. pairwise. Comparison between the two estimation methods under the following study conditions: setting as in the BHPS application; 3 ordinal variables, 3 waves (with 4 items FIML was not able to converge); different sample sizes (200; 1000); 5 quadrature points; 200 replications.

46 Results for loadings (variances in parentheses):

              Full ML               Pairwise ML
              Mean       MSE        Mean       MSE
n = 200
α_1 = 1.00
α_2           (0.05)                (0.05)     0.05
α_3           (0.10)                (0.07)     0.07
n = 1000
α_1 = 1.00
α_2           (0.01)                (0.01)     0.01
α_3           (0.02)                (0.02)     0.02

47 Results for variances of item random effects:

              Full ML               Pairwise ML
              Mean       MSE        Mean       MSE
n = 200
φ             (0.01)                (0.01)     0.01
σ²_u1         (0.69)                (0.54)     0.54
σ²_u2         (2.56)                (1.73)     1.76
σ²_u3         (2.51)                (2.06)     2.08
σ²_1          (2.96)                (1.43)     1.43
n = 1000
φ             (0.00)                (0.00)     0.00
σ²_u1         (0.11)                (0.12)     0.12
σ²_u2         (0.38)                (0.31)     0.33
σ²_u3         (0.40)                (0.40)     0.41
σ²_1          (0.35)                (0.34)     0.34

48 Conclusions. Full ML vs composite likelihood methods: full ML is in some cases not feasible even for moderate-size problems; composite likelihoods seem promising. The efficiency of the estimators remains to be studied (computationally demanding). Weights on separate marginal components, or the inclusion of a weighted version of the univariate densities, remain to be checked. A limited-information goodness-of-fit test is under investigation. MCMC methods work for all models, but caution is needed regarding convergence, parameterization and goodness-of-fit.


More information

Index. Pagenumbersfollowedbyf indicate figures; pagenumbersfollowedbyt indicate tables.

Index. Pagenumbersfollowedbyf indicate figures; pagenumbersfollowedbyt indicate tables. Index Pagenumbersfollowedbyf indicate figures; pagenumbersfollowedbyt indicate tables. Adaptive rejection metropolis sampling (ARMS), 98 Adaptive shrinkage, 132 Advanced Photo System (APS), 255 Aggregation

More information

Alternative implementations of Monte Carlo EM algorithms for likelihood inferences

Alternative implementations of Monte Carlo EM algorithms for likelihood inferences Genet. Sel. Evol. 33 001) 443 45 443 INRA, EDP Sciences, 001 Alternative implementations of Monte Carlo EM algorithms for likelihood inferences Louis Alberto GARCÍA-CORTÉS a, Daniel SORENSEN b, Note a

More information

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach

Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Statistical Methods for Handling Incomplete Data Chapter 2: Likelihood-based approach Jae-Kwang Kim Department of Statistics, Iowa State University Outline 1 Introduction 2 Observed likelihood 3 Mean Score

More information

2. Inference method for margins and jackknife.

2. Inference method for margins and jackknife. The Estimation Method of Inference Functions for Margins for Multivariate Models Harry Joe and James J. Xu Department of Statistics, University of British Columbia ABSTRACT An estimation approach is proposed

More information

Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling

Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling Fractional Hot Deck Imputation for Robust Inference Under Item Nonresponse in Survey Sampling Jae-Kwang Kim 1 Iowa State University June 26, 2013 1 Joint work with Shu Yang Introduction 1 Introduction

More information

Cross-sectional space-time modeling using ARNN(p, n) processes

Cross-sectional space-time modeling using ARNN(p, n) processes Cross-sectional space-time modeling using ARNN(p, n) processes W. Polasek K. Kakamu September, 006 Abstract We suggest a new class of cross-sectional space-time models based on local AR models and nearest

More information

Lecture 7 and 8: Markov Chain Monte Carlo

Lecture 7 and 8: Markov Chain Monte Carlo Lecture 7 and 8: Markov Chain Monte Carlo 4F13: Machine Learning Zoubin Ghahramani and Carl Edward Rasmussen Department of Engineering University of Cambridge http://mlg.eng.cam.ac.uk/teaching/4f13/ Ghahramani

More information

Introduction to Bayesian Statistics and Markov Chain Monte Carlo Estimation. EPSY 905: Multivariate Analysis Spring 2016 Lecture #10: April 6, 2016

Introduction to Bayesian Statistics and Markov Chain Monte Carlo Estimation. EPSY 905: Multivariate Analysis Spring 2016 Lecture #10: April 6, 2016 Introduction to Bayesian Statistics and Markov Chain Monte Carlo Estimation EPSY 905: Multivariate Analysis Spring 2016 Lecture #10: April 6, 2016 EPSY 905: Intro to Bayesian and MCMC Today s Class An

More information

MCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17

MCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17 MCMC for big data Geir Storvik BigInsight lunch - May 2 2018 Geir Storvik MCMC for big data BigInsight lunch - May 2 2018 1 / 17 Outline Why ordinary MCMC is not scalable Different approaches for making

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

Density Estimation. Seungjin Choi

Density Estimation. Seungjin Choi Density Estimation Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr http://mlg.postech.ac.kr/

More information

Basics of Modern Missing Data Analysis

Basics of Modern Missing Data Analysis Basics of Modern Missing Data Analysis Kyle M. Lang Center for Research Methods and Data Analysis University of Kansas March 8, 2013 Topics to be Covered An introduction to the missing data problem Missing

More information

Online Appendix to: Marijuana on Main Street? Estimating Demand in Markets with Limited Access

Online Appendix to: Marijuana on Main Street? Estimating Demand in Markets with Limited Access Online Appendix to: Marijuana on Main Street? Estating Demand in Markets with Lited Access By Liana Jacobi and Michelle Sovinsky This appendix provides details on the estation methodology for various speci

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Brown University CSCI 1950-F, Spring 2012 Prof. Erik Sudderth Lecture 25: Markov Chain Monte Carlo (MCMC) Course Review and Advanced Topics Many figures courtesy Kevin

More information

A Fully Nonparametric Modeling Approach to. BNP Binary Regression

A Fully Nonparametric Modeling Approach to. BNP Binary Regression A Fully Nonparametric Modeling Approach to Binary Regression Maria Department of Applied Mathematics and Statistics University of California, Santa Cruz SBIES, April 27-28, 2012 Outline 1 2 3 Simulation

More information

Generalized Linear Models for Non-Normal Data

Generalized Linear Models for Non-Normal Data Generalized Linear Models for Non-Normal Data Today s Class: 3 parts of a generalized model Models for binary outcomes Complications for generalized multivariate or multilevel models SPLH 861: Lecture

More information

Statistical Inference and Methods

Statistical Inference and Methods Department of Mathematics Imperial College London d.stephens@imperial.ac.uk http://stats.ma.ic.ac.uk/ das01/ 31st January 2006 Part VI Session 6: Filtering and Time to Event Data Session 6: Filtering and

More information

Weighted pairwise likelihood estimation for a general class of random effects models

Weighted pairwise likelihood estimation for a general class of random effects models Biostatistics (2014), 15,4,pp. 677 689 doi:10.1093/biostatistics/kxu018 Advance Access publication on May 8, 2014 Weighted pairwise likelihood estimation for a general class of random effects models VASSILIS

More information

Principles of Bayesian Inference

Principles of Bayesian Inference Principles of Bayesian Inference Sudipto Banerjee University of Minnesota July 20th, 2008 1 Bayesian Principles Classical statistics: model parameters are fixed and unknown. A Bayesian thinks of parameters

More information

Likelihood and p-value functions in the composite likelihood context

Likelihood and p-value functions in the composite likelihood context Likelihood and p-value functions in the composite likelihood context D.A.S. Fraser and N. Reid Department of Statistical Sciences University of Toronto November 19, 2016 Abstract The need for combining

More information

Bayes: All uncertainty is described using probability.

Bayes: All uncertainty is described using probability. Bayes: All uncertainty is described using probability. Let w be the data and θ be any unknown quantities. Likelihood. The probability model π(w θ) has θ fixed and w varying. The likelihood L(θ; w) is π(w

More information

Bayesian Linear Regression

Bayesian Linear Regression Bayesian Linear Regression Sudipto Banerjee 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. September 15, 2010 1 Linear regression models: a Bayesian perspective

More information

Recursive Deviance Information Criterion for the Hidden Markov Model

Recursive Deviance Information Criterion for the Hidden Markov Model International Journal of Statistics and Probability; Vol. 5, No. 1; 2016 ISSN 1927-7032 E-ISSN 1927-7040 Published by Canadian Center of Science and Education Recursive Deviance Information Criterion for

More information

an introduction to bayesian inference

an introduction to bayesian inference with an application to network analysis http://jakehofman.com january 13, 2010 motivation would like models that: provide predictive and explanatory power are complex enough to describe observed phenomena

More information

F & B Approaches to a simple model

F & B Approaches to a simple model A6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 215 http://www.astro.cornell.edu/~cordes/a6523 Lecture 11 Applications: Model comparison Challenges in large-scale surveys

More information

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis

(5) Multi-parameter models - Gibbs sampling. ST440/540: Applied Bayesian Analysis Summarizing a posterior Given the data and prior the posterior is determined Summarizing the posterior gives parameter estimates, intervals, and hypothesis tests Most of these computations are integrals

More information

Comparison between conditional and marginal maximum likelihood for a class of item response models

Comparison between conditional and marginal maximum likelihood for a class of item response models (1/24) Comparison between conditional and marginal maximum likelihood for a class of item response models Francesco Bartolucci, University of Perugia (IT) Silvia Bacci, University of Perugia (IT) Claudia

More information

POSTERIOR ANALYSIS OF THE MULTIPLICATIVE HETEROSCEDASTICITY MODEL

POSTERIOR ANALYSIS OF THE MULTIPLICATIVE HETEROSCEDASTICITY MODEL COMMUN. STATIST. THEORY METH., 30(5), 855 874 (2001) POSTERIOR ANALYSIS OF THE MULTIPLICATIVE HETEROSCEDASTICITY MODEL Hisashi Tanizaki and Xingyuan Zhang Faculty of Economics, Kobe University, Kobe 657-8501,

More information

Gibbs Sampling in Latent Variable Models #1

Gibbs Sampling in Latent Variable Models #1 Gibbs Sampling in Latent Variable Models #1 Econ 690 Purdue University Outline 1 Data augmentation 2 Probit Model Probit Application A Panel Probit Panel Probit 3 The Tobit Model Example: Female Labor

More information

Markov Chain Monte Carlo methods

Markov Chain Monte Carlo methods Markov Chain Monte Carlo methods By Oleg Makhnin 1 Introduction a b c M = d e f g h i 0 f(x)dx 1.1 Motivation 1.1.1 Just here Supresses numbering 1.1.2 After this 1.2 Literature 2 Method 2.1 New math As

More information

July 31, 2009 / Ben Kedem Symposium

July 31, 2009 / Ben Kedem Symposium ing The s ing The Department of Statistics North Carolina State University July 31, 2009 / Ben Kedem Symposium Outline ing The s 1 2 s 3 4 5 Ben Kedem ing The s Ben has made many contributions to time

More information

13 Notes on Markov Chain Monte Carlo

13 Notes on Markov Chain Monte Carlo 13 Notes on Markov Chain Monte Carlo Markov Chain Monte Carlo is a big, and currently very rapidly developing, subject in statistical computation. Many complex and multivariate types of random data, useful

More information

Nonparametric Bayesian modeling for dynamic ordinal regression relationships

Nonparametric Bayesian modeling for dynamic ordinal regression relationships Nonparametric Bayesian modeling for dynamic ordinal regression relationships Athanasios Kottas Department of Applied Mathematics and Statistics, University of California, Santa Cruz Joint work with Maria

More information

Some Issues In Markov Chain Monte Carlo Estimation For Item Response Theory

Some Issues In Markov Chain Monte Carlo Estimation For Item Response Theory University of South Carolina Scholar Commons Theses and Dissertations 2016 Some Issues In Markov Chain Monte Carlo Estimation For Item Response Theory Han Kil Lee University of South Carolina Follow this

More information

CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling

CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling CS242: Probabilistic Graphical Models Lecture 7B: Markov Chain Monte Carlo & Gibbs Sampling Professor Erik Sudderth Brown University Computer Science October 27, 2016 Some figures and materials courtesy

More information

A new family of asymmetric models for item response theory: A Skew-Normal IRT family

A new family of asymmetric models for item response theory: A Skew-Normal IRT family A new family of asymmetric models for item response theory: A Skew-Normal IRT family Jorge Luis Bazán, Heleno Bolfarine, Marcia D Elia Branco Department of Statistics University of São Paulo October 04,

More information

Doing Bayesian Integrals

Doing Bayesian Integrals ASTR509-13 Doing Bayesian Integrals The Reverend Thomas Bayes (c.1702 1761) Philosopher, theologian, mathematician Presbyterian (non-conformist) minister Tunbridge Wells, UK Elected FRS, perhaps due to

More information

Bayesian Statistical Methods. Jeff Gill. Department of Political Science, University of Florida

Bayesian Statistical Methods. Jeff Gill. Department of Political Science, University of Florida Bayesian Statistical Methods Jeff Gill Department of Political Science, University of Florida 234 Anderson Hall, PO Box 117325, Gainesville, FL 32611-7325 Voice: 352-392-0262x272, Fax: 352-392-8127, Email:

More information

Parametric fractional imputation for missing data analysis

Parametric fractional imputation for missing data analysis 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 Biometrika (????),??,?, pp. 1 15 C???? Biometrika Trust Printed in

More information

Multilevel Statistical Models: 3 rd edition, 2003 Contents

Multilevel Statistical Models: 3 rd edition, 2003 Contents Multilevel Statistical Models: 3 rd edition, 2003 Contents Preface Acknowledgements Notation Two and three level models. A general classification notation and diagram Glossary Chapter 1 An introduction

More information

Recap. Vector observation: Y f (y; θ), Y Y R m, θ R d. sample of independent vectors y 1,..., y n. pairwise log-likelihood n m. weights are often 1

Recap. Vector observation: Y f (y; θ), Y Y R m, θ R d. sample of independent vectors y 1,..., y n. pairwise log-likelihood n m. weights are often 1 Recap Vector observation: Y f (y; θ), Y Y R m, θ R d sample of independent vectors y 1,..., y n pairwise log-likelihood n m i=1 r=1 s>r w rs log f 2 (y ir, y is ; θ) weights are often 1 more generally,

More information

Switching Regime Estimation

Switching Regime Estimation Switching Regime Estimation Series de Tiempo BIrkbeck March 2013 Martin Sola (FE) Markov Switching models 01/13 1 / 52 The economy (the time series) often behaves very different in periods such as booms

More information

6. Fractional Imputation in Survey Sampling

6. Fractional Imputation in Survey Sampling 6. Fractional Imputation in Survey Sampling 1 Introduction Consider a finite population of N units identified by a set of indices U = {1, 2,, N} with N known. Associated with each unit i in the population

More information

A general class of latent variable models for ordinal manifest variables with covariate effects on the manifest and latent variables

A general class of latent variable models for ordinal manifest variables with covariate effects on the manifest and latent variables 337 British Journal of Mathematical and Statistical Psychology (2003), 56, 337 357 2003 The British Psychological Society www.bps.org.uk A general class of latent variable models for ordinal manifest variables

More information

Generalized common spatial factor model

Generalized common spatial factor model Biostatistics (2003), 4, 4,pp. 569 582 Printed in Great Britain Generalized common spatial factor model FUJUN WANG Eli Lilly and Company, Indianapolis, IN 46285, USA MELANIE M. WALL Division of Biostatistics,

More information

Likelihood Inference for Lattice Spatial Processes

Likelihood Inference for Lattice Spatial Processes Likelihood Inference for Lattice Spatial Processes Donghoh Kim November 30, 2004 Donghoh Kim 1/24 Go to 1234567891011121314151617 FULL Lattice Processes Model : The Ising Model (1925), The Potts Model

More information

Variational Approximations for Generalized Linear. Latent Variable Models

Variational Approximations for Generalized Linear. Latent Variable Models 1 2 Variational Approximations for Generalized Linear Latent Variable Models 3 4 Francis K.C. Hui 1, David I. Warton 2,3, John T. Ormerod 4,5, Viivi Haapaniemi 6, and Sara Taskinen 6 5 6 7 8 9 10 11 12

More information

A Model for Correlated Paired Comparison Data

A Model for Correlated Paired Comparison Data Working Paper Series, N. 15, December 2010 A Model for Correlated Paired Comparison Data Manuela Cattelan Department of Statistical Sciences University of Padua Italy Cristiano Varin Department of Statistics

More information

Plausible Values for Latent Variables Using Mplus

Plausible Values for Latent Variables Using Mplus Plausible Values for Latent Variables Using Mplus Tihomir Asparouhov and Bengt Muthén August 21, 2010 1 1 Introduction Plausible values are imputed values for latent variables. All latent variables can

More information

Threshold models with fixed and random effects for ordered categorical data

Threshold models with fixed and random effects for ordered categorical data Threshold models with fixed and random effects for ordered categorical data Hans-Peter Piepho Universität Hohenheim, Germany Edith Kalka Universität Kassel, Germany Contents 1. Introduction. Case studies

More information

Modeling the scale parameter ϕ A note on modeling correlation of binary responses Using marginal odds ratios to model association for binary responses

Modeling the scale parameter ϕ A note on modeling correlation of binary responses Using marginal odds ratios to model association for binary responses Outline Marginal model Examples of marginal model GEE1 Augmented GEE GEE1.5 GEE2 Modeling the scale parameter ϕ A note on modeling correlation of binary responses Using marginal odds ratios to model association

More information

A BAYESIAN APPROACH TO SPATIAL CORRELATIONS IN THE MULTIVARIATE PROBIT MODEL

A BAYESIAN APPROACH TO SPATIAL CORRELATIONS IN THE MULTIVARIATE PROBIT MODEL A BAYESIAN APPROACH TO SPATIAL CORRELATIONS IN THE MULTIVARIATE PROBIT MODEL by Jervyn Ang B.Sc, Simon Fraser University, 2008 a Project submitted in partial fulfillment of the requirements for the degree

More information

Machine Learning Techniques for Computer Vision

Machine Learning Techniques for Computer Vision Machine Learning Techniques for Computer Vision Part 2: Unsupervised Learning Microsoft Research Cambridge x 3 1 0.5 0.2 0 0.5 0.3 0 0.5 1 ECCV 2004, Prague x 2 x 1 Overview of Part 2 Mixture models EM

More information

Bayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference

Bayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference 1 The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Board of Governors or the Federal Reserve System. Bayesian Estimation of DSGE

More information

Markov Chain Monte Carlo methods

Markov Chain Monte Carlo methods Markov Chain Monte Carlo methods Tomas McKelvey and Lennart Svensson Signal Processing Group Department of Signals and Systems Chalmers University of Technology, Sweden November 26, 2012 Today s learning

More information

Vector Autoregressive Model. Vector Autoregressions II. Estimation of Vector Autoregressions II. Estimation of Vector Autoregressions I.

Vector Autoregressive Model. Vector Autoregressions II. Estimation of Vector Autoregressions II. Estimation of Vector Autoregressions I. Vector Autoregressive Model Vector Autoregressions II Empirical Macroeconomics - Lect 2 Dr. Ana Beatriz Galvao Queen Mary University of London January 2012 A VAR(p) model of the m 1 vector of time series

More information

Kneib, Fahrmeir: Supplement to "Structured additive regression for categorical space-time data: A mixed model approach"

Kneib, Fahrmeir: Supplement to Structured additive regression for categorical space-time data: A mixed model approach Kneib, Fahrmeir: Supplement to "Structured additive regression for categorical space-time data: A mixed model approach" Sonderforschungsbereich 386, Paper 43 (25) Online unter: http://epub.ub.uni-muenchen.de/

More information