Introduction to latent class model


Slide 1 (Epi 950, Fall): Introduction to the latent class model

Latent variable models can be classified by the type of latent and manifest variables:

                          Latent variable
  Manifest variables   Categorical               Continuous
  Categorical          Latent class analysis     Latent trait analysis
  Continuous           Latent profile analysis   Factor analysis

The latent class model (LCM) is a statistical method for classifying individuals into population subgroups using their responses to the manifest items. The LCM explains the relationships among categorical manifest items by positing the existence of a latent classifier.


Slide 3: Example

Consider the following two items from a questionnaire:
1. Is the federal government too big?
2. Should Congress pass a capital gains tax cut this year?

Cross-tabulating Item 1 (Yes/No) against Item 2 (Yes/No) gives χ² = 36.74, df = 1, p < 0.001; odds ratio = 4.7. These statistics were used to claim that the two items are related.
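The χ² statistic and odds ratio quoted above come from a 2×2 table. As a minimal sketch (in Python for illustration; the course's own code uses R, and the counts below are hypothetical, not the survey's):

```python
# Pearson chi-square and odds ratio for a 2x2 table [[a, b], [c, d]]
# of Item 1 (rows) by Item 2 (columns). Counts are made up for illustration.

def odds_ratio(a, b, c, d):
    """OR = (a * d) / (b * c)."""
    return (a * d) / (b * c)

def pearson_chi_square(a, b, c, d):
    """Sum over cells of (observed - expected)^2 / expected,
    with expected counts from the row and column margins."""
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    obs = [a, b, c, d]
    exp = [row[0] * col[0] / n, row[0] * col[1] / n,
           row[1] * col[0] / n, row[1] * col[1] / n]
    return sum((o - e) ** 2 / e for o, e in zip(obs, exp))

print(odds_ratio(40, 10, 20, 30))                    # 6.0
print(round(pearson_chi_square(40, 10, 20, 30), 2))  # 16.67
```

With 1 degree of freedom, a χ² this large would likewise suggest the two items are associated.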

Slide 4:

But if you stratify by another classification, such as fiscal conservatism, and cross-tabulate Item 1 by Item 2 within each stratum:

  Conservative:      χ² = 0.01, p = 0.93; odds ratio = 0.95
  Not conservative:  χ² = 0.00, p = 1.00; odds ratio = 1.00

There is no relationship between Item 1 and Item 2 within either stratum. The item association is explained by conservatism: Items 1 and 2 are conditionally independent given conservatism.

Slide 5:

What if we don't, or can't, observe the fiscal conservatism that should be conditioned on? We could fit an LCM in order to allocate each subject to one of these latent classes on the basis of the responses to the manifest items.

Questions to answer:
- How many underlying classes are there?
- What is the prevalence of each latent class?
- What is the probability that a particular individual belongs to a particular class?

Slide 6: Application

Many medical conditions cannot be diagnosed directly because the root of the condition is deep-seated. For example, a definitive reference test for AIDS is not observable. However, several diagnostic tests for HIV are available, and it is assumed that they are imperfect indicators of the true AIDS status. It would be useful if we could use observations of a patient's test results to estimate the sensitivity and specificity of each test. An LCM may help us do this.

Slide 7:

The LCM is one of the most widely used latent variable models. Major uses include:
- reducing the dimension of the data by explaining the associations between the observed variables in terms of membership of a small number of latent classes
- understanding the inter-relationships between observed variables
- estimating the probabilities of belonging to each class on the basis of an individual's response pattern
- allocating an individual to one of the latent classes

Slide 8: Review of Bernoulli trials

The Bernoulli trial process is the mathematical abstraction of coin tossing that satisfies the following assumptions:
- Each trial has two possible outcomes: Y = 1 (success) or Y = 0 (failure)
- The trials are independent
- The probability of success is P(Y = 1) = p and the probability of failure is P(Y = 0) = 1 − p

It has probability function

  P(Y = y) = p^y (1 − p)^{1−y},   y = 0, 1
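The Bernoulli probability function can be written out directly; a minimal illustrative sketch in Python (the course's own code uses R):

```python
def bernoulli_pmf(y, p):
    """P(Y = y) = p^y * (1 - p)^(1 - y) for y in {0, 1}."""
    assert y in (0, 1)
    return p ** y * (1 - p) ** (1 - y)

print(bernoulli_pmf(1, 0.3))   # 0.3, the success probability
print(bernoulli_pmf(0, 0.3))   # 0.7, the failure probability
```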

Slide 9:

When each trial has K (> 2) possible outcomes (e.g., rolling a die):
- The random variable Y can take any value from 1 to K
- The probability of each event is P(Y = k) = p_k for k = 1, 2, ..., K, with the constraint Σ_{k=1}^{K} p_k = 1
- It has probability function

  P(Y = k) = p_1^{I(Y=1)} p_2^{I(Y=2)} ⋯ p_K^{I(Y=K)} = ∏_{k=1}^{K} p_k^{I(Y=k)} = p_k,   k = 1, 2, ..., K,

  where I(Y = k) = 1 if Y = k and 0 if Y ≠ k.
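The indicator form of the categorical probability function above can be sketched literally; an illustrative Python version (the course's own code uses R):

```python
def categorical_pmf(k, probs):
    """P(Y = k) = prod over j of probs[j]^{I(Y = j)}, which collapses to
    probs[k]. Outcomes are labeled 1..K; probs must sum to 1."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    result = 1.0
    for j, p in enumerate(probs, start=1):
        result *= p ** (1 if k == j else 0)   # indicator I(Y = j)
    return result

fair_die = [1/6] * 6
print(categorical_pmf(3, fair_die))   # = 1/6
```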

Slide 10:

When we have M (> 2) independent trials, where the m-th trial has r_m possible outcomes (e.g., rolling a die M times):
- The random variable Y_m can take any value from 1 to r_m, for m = 1, 2, ..., M
- The probability of each event is P(Y_m = k) = p_{mk}, with constraint Σ_{k=1}^{r_m} p_{mk} = 1 for m = 1, 2, ..., M
- The joint probability function is

  P(Y_1 = k_1, Y_2 = k_2, ..., Y_M = k_M)
    = ∏_{m=1}^{M} p_{m1}^{I(Y_m=1)} p_{m2}^{I(Y_m=2)} ⋯ p_{m r_m}^{I(Y_m=r_m)}
    = ∏_{m=1}^{M} ∏_{k=1}^{r_m} p_{mk}^{I(Y_m=k)}
    = ∏_{m=1}^{M} p_{m k_m},   k_m = 1, 2, ..., r_m;  m = 1, 2, ..., M
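The joint probability of M independent multinomial trials is just the product of the per-trial probabilities; an illustrative Python sketch with hypothetical trials (the course's own code uses R):

```python
def joint_pmf(ks, prob_tables):
    """P(Y_1 = k_1, ..., Y_M = k_M) = prod over m of p_{m, k_m}.
    ks[m] is the 1-based outcome of trial m; prob_tables[m] lists the
    r_m outcome probabilities of trial m."""
    result = 1.0
    for k_m, p_m in zip(ks, prob_tables):
        result *= p_m[k_m - 1]
    return result

# hypothetical: two fair coin flips (heads=1, tails=2), then one fair die roll
print(joint_pmf([1, 2, 6], [[0.5, 0.5], [0.5, 0.5], [1/6] * 6]))   # = 1/24
```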

Slide 11: LCM: the mathematical model

  Y = (Y_1, ..., Y_M): M discrete items measuring the latent classes, with Y_m = 1, 2, ..., r_m
  L: the variable of latent class membership, with L = 1, 2, ..., C

(The slide's path diagram shows the latent variable L with arrows pointing to each manifest item Y_1, Y_2, ..., Y_M.)

Slide 12:

If the class membership were observed, the joint probability that an individual belongs to class l and provides responses y = (y_1, ..., y_M) would be

  P(Y = y, L = l) = P(L = l) P(Y = y | L = l)
                  = P(L = l) ∏_{m=1}^{M} P(Y_m = y_m | L = l)
                  = γ_l ∏_{m=1}^{M} ∏_{k=1}^{r_m} ρ_{mk|l}^{I(y_m = k)},

where
  γ_l = P(L = l): the probability of belonging to latent class l
  ρ_{mk|l} = P(Y_m = k | L = l): the probability of response k to the m-th item given class membership l

Slide 13:

- Within a latent class l, each manifest item Y_m has a multinomial distribution, where the probability of response k to the m-th item is ρ_{mk|l} for k = 1, ..., r_m
- Y_1, ..., Y_M are independent within a class (conditional independence)
- The marginal probability of a particular response pattern y = (y_1, ..., y_M), without regard to the unseen class membership, is

  P(Y = y) = Σ_{l=1}^{C} P(Y = y, L = l) = Σ_{l=1}^{C} γ_l ∏_{m=1}^{M} ∏_{k=1}^{r_m} ρ_{mk|l}^{I(y_m = k)}.
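The joint and marginal LCM probabilities can be sketched in a few lines. This illustrative Python (the course's own code uses R) uses made-up parameter values for two classes and two binary items:

```python
def lcm_joint(y, l, gamma, rho):
    """P(Y = y, L = l) = gamma_l * prod over m of rho_{m, y_m | l}.
    Responses y_m are 1-based; rho[l][m][k-1] = P(Y_m = k | L = l)."""
    p = gamma[l]
    for m, ym in enumerate(y):
        p *= rho[l][m][ym - 1]
    return p

def lcm_marginal(y, gamma, rho):
    """P(Y = y): sum the joint probability over the C latent classes."""
    return sum(lcm_joint(y, l, gamma, rho) for l in range(len(gamma)))

gamma = [0.6, 0.4]                        # class prevalences (made up)
rho = [[[0.9, 0.1], [0.8, 0.2]],          # item-response probs, class 1
       [[0.2, 0.8], [0.3, 0.7]]]          # item-response probs, class 2

print(lcm_joint([1, 1], 0, gamma, rho))   # 0.6 * 0.9 * 0.8 = 0.432
print(lcm_marginal([1, 1], gamma, rho))   # 0.432 + 0.4*0.2*0.3 = 0.456
```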

Slide 14:

Information about the underlying class structure is conveyed through:
- the smallest number of classes (C) that adequately describes the associations among the manifest items
- the response probabilities (ρ-parameters) of the manifest items within each class
- the latent class prevalences (γ-parameters)

Slide 15: An example

Attitudes toward abortion are measured by the following six questions:
1. If the woman's own health is seriously endangered by the pregnancy? (woman's health)
2. If there is a strong chance of serious defect in the baby? (birth defect)
3. If she became pregnant as the result of rape? (rape)
4. If the family has a very low income and cannot afford any more children? (too poor)
5. If she is married and does not want any more children? (no more children)
6. If she is not married and does not want to marry the man? (single mom)

Slide 16:

(Table: estimated item-response probabilities for the six items — woman's health, birth defect, rape, too poor, no more children, single mom — across latent classes I-III, with the class prevalences in the last row; the numeric entries did not survive transcription.)

The structure of the attitudes toward abortion is best understood in terms of three categories:
1. those who approve of abortion for both social and ethical/medical reasons
2. those who approve of abortion for ethical/medical reasons but oppose legal abortion for social reasons
3. those who oppose abortion for both social and ethical/medical reasons

Slide 17: Fitting a latent class model

Maximum likelihood method:
- The most common approach to estimating the parameters of an LCM is the ML method using the EM algorithm
- Upon convergence, standard errors for the estimated parameters are obtained by inverting the Hessian matrix (i.e., the negative second-derivative matrix of the loglikelihood function)

Bayesian method:
- As an alternative to ML, one can apply the Bayesian method via MCMC to estimate the parameters of an LCM
- MCMC offers greater flexibility for model-fit assessment and various hypothesis tests without appealing to large-sample approximations

Slide 18: Estimation: ML method

EM algorithm:
- Direct maximization of the loglikelihood is complicated
- ML estimates could be easily calculated if class membership were known
- Iterating two steps (the E-step and the M-step) produces a sequence of parameter estimates that converges reliably to a local or global maximum of the loglikelihood

Slide 19:

Expectation step (E-step): compute the posterior probability of class membership for each individual y_i = (y_{i1}, ..., y_{iM}), i = 1, ..., n:

  δ_{l|y_i} = P(L = l | Y = y_i) = P(Y = y_i, L = l) / P(Y = y_i)
            = [ γ_l ∏_{m=1}^{M} ∏_{k=1}^{r_m} ρ_{mk|l}^{I(y_{im}=k)} ] / [ Σ_{j=1}^{C} γ_j ∏_{m=1}^{M} ∏_{k=1}^{r_m} ρ_{mk|j}^{I(y_{im}=k)} ]

Maximization step (M-step): update the parameter estimates by

  γ_l = Σ_{i=1}^{n} δ_{l|y_i} / n,
  ρ_{mk|l} = Σ_{i=1}^{n} δ_{l|y_i} I(y_{im} = k) / Σ_{i=1}^{n} δ_{l|y_i}
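One full EM iteration with these update formulas fits in a short function. This is an illustrative Python sketch (the course's own code uses R/poLCA; nothing here reflects poLCA's internals), and it assumes binary items coded 1/2:

```python
import numpy as np

def em_step(Y, gamma, rho):
    """One EM iteration for an LCM with binary manifest items.
    Y: (n, M) responses coded 1/2; gamma: (C,) class prevalences;
    rho: (C, M, 2) with rho[l, m, k-1] = P(Y_m = k | L = l)."""
    n, M = Y.shape
    # E-step: delta[i, l] = P(L = l | Y = y_i)
    like = np.ones((n, len(gamma)))
    for m in range(M):
        like *= rho[:, m, Y[:, m] - 1].T        # P(Y_m = y_im | L = l)
    joint = like * gamma                        # gamma_l * prod of rho terms
    delta = joint / joint.sum(axis=1, keepdims=True)
    # M-step: update gamma and rho from the posterior memberships
    gamma_new = delta.mean(axis=0)              # sum_i delta / n
    rho_new = np.empty_like(rho)
    for m in range(M):
        for k in (1, 2):
            num = (delta * (Y[:, m] == k)[:, None]).sum(axis=0)
            rho_new[:, m, k - 1] = num / delta.sum(axis=0)
    return gamma_new, rho_new, delta

# tiny made-up data set: 4 subjects, 2 items, 2 classes
Y = np.array([[1, 1], [1, 2], [2, 2], [1, 1]])
gamma0 = np.array([0.5, 0.5])
rho0 = np.array([[[0.8, 0.2], [0.7, 0.3]],
                 [[0.3, 0.7], [0.4, 0.6]]])
gamma1, rho1, delta = em_step(Y, gamma0, rho0)
```

In practice these two steps are iterated until the loglikelihood stops changing; each update keeps γ summing to 1 and each item's ρ-row summing to 1.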

Slide 20: Estimation: Bayesian method

MCMC algorithm:
- We are interested in describing the observed-data posterior: posterior ∝ likelihood × prior
- We can simulate (correlated) draws of the parameters from the posterior distribution via MCMC
- The observed-data posterior is difficult to portray directly, but if the latent class membership for each response pattern were known, the augmented-data posterior would be easy to simulate
- Iterating two steps (the I-step and the P-step) produces a stream of parameter values that can be summarized as Bayesian estimates

Slide 21:

Imputation step (I-step): compute the posterior probability of class membership and draw the class to which each individual belongs.

Posterior step (P-step):
- New random values of γ = (γ_1, ..., γ_C) are drawn from

    γ ~ Dirichlet(n_1 + 0.5, ..., n_C + 0.5),

  where n_l is the number of subjects classified into class l.
- New random values of ρ_{m|l} = (ρ_{m1|l}, ..., ρ_{m r_m|l}) are drawn from

    ρ_{m|l} ~ Dirichlet(n_{m1|l} + 0.5, ..., n_{m r_m|l} + 0.5),

  where n_{mk|l} is the number of subjects in class l whose response to item m is k.
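The P-step draws above are single Dirichlet draws with the I-step counts plus 0.5 as the parameters. An illustrative Python sketch with made-up counts (the course's own code uses R):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical I-step result: 40 subjects imputed into class 1, 60 into class 2
n_class = np.array([40, 60])
gamma_draw = rng.dirichlet(n_class + 0.5)   # new (gamma_1, gamma_2)

# hypothetical counts for one item within one class:
# 25 subjects gave response 1, 15 gave response 2
n_mk = np.array([25, 15])
rho_draw = rng.dirichlet(n_mk + 0.5)        # new rho_{m|l}

print(gamma_draw, rho_draw)                 # each vector sums to 1
```

Iterating the I-step and P-step yields the stream of γ and ρ values summarized as Bayesian estimates.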

Slide 22: Computational issues

Problems with ML. There are well-known computational problems associated with ML:
- non-identified solutions and maxima on the boundary
- multiple modes
- standard errors that poorly summarize uncertainty
- the LR test cannot be used to determine the number of latent classes

Problems with the Bayesian approach. Bayesian inference by MCMC overcomes many of these problems, but introduces a few more:
- it requires a prior (subjectivity)
- the label-switching problem

Slide 23: Model selection

The choice of the number of latent classes should be driven, in as parsimonious a fashion as possible, by a balanced judgment that takes into account both substantive knowledge and the objective measures available for assessing model fit.

The loglikelihood-ratio statistic (LRT) is the standard statistic for assessing the fit of an LCM:

  G² = 2 Σ_{r=1}^{npatt} O_r log(O_r / E_r) ~ χ²_{df},

where
  npatt = number of possible response patterns
  df = npatt − 1 − (number of free parameters)
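The G² statistic above is straightforward to compute from observed and expected cell counts; a minimal Python sketch with hypothetical counts (not the lecture's data; the course's own code uses R):

```python
import math

def g_squared(observed, expected):
    """Loglikelihood-ratio statistic G^2 = 2 * sum_r O_r * log(O_r / E_r).
    Cells with O_r = 0 contribute 0 to the sum."""
    return 2.0 * sum(o * math.log(o / e)
                     for o, e in zip(observed, expected) if o > 0)

# hypothetical observed and model-expected counts over 4 response patterns
obs = [30, 10, 20, 40]
exp = [25, 15, 25, 35]
print(round(g_squared(obs, exp), 3))   # 4.587

# degrees of freedom: npatt - 1 - (number of free parameters);
# e.g., for M binary items and C classes, free parameters = (C - 1) + C * M
```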

Slide 24:

The difference in LRT statistics for testing the relative fit of a C-class model against a (C + 1)-class alternative does not have a limiting chi-square distribution.

The following measures can be used instead to assess the relative fit of two competing models with different numbers of classes (available in Mplus or Latent GOLD):
- information criteria (e.g., AIC, BIC, CAIC)
- adjusted LRT
- posterior probability check distribution (PPCD)

Slide 25: Allocation to classes

- We wish to allocate individuals to the identified classes using their responses.
- The LCM provides a set of allocation probabilities, called posterior probabilities, P(L = l | y_1, y_2, ..., y_M), measuring the probability that an individual with a specific response profile belongs to a specific class.
- If most of these probabilities are close to zero or one, there is little doubt as to the class to which an individual should be allocated.

Slide 26: Posterior probability

By Bayes' theorem, the posterior probability is

  δ_{l|y} = P(L = l | Y = y) = P(Y = y, L = l) / P(Y = y)
          = [ γ_l ∏_{m=1}^{M} ∏_{k=1}^{r_m} ρ_{mk|l}^{I(y_m=k)} ] / [ Σ_{j=1}^{C} γ_j ∏_{m=1}^{M} ∏_{k=1}^{r_m} ρ_{mk|j}^{I(y_m=k)} ]

- The posterior probabilities at the final solution provide an objective basis for assigning individuals to classes.
- In contrast, γ_l = P(L = l) can be thought of as the probability that a randomly chosen individual belongs to class l.
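Bayes' theorem as applied here can be sketched directly; illustrative Python with made-up parameters (two classes, two binary items; the course's own code uses R):

```python
def posterior(y, gamma, rho):
    """delta_{l|y} = P(L = l | Y = y) via Bayes' theorem.
    Responses y_m are 1-based; rho[l][m][k-1] = P(Y_m = k | L = l)."""
    joint = []
    for l, g in enumerate(gamma):
        p = g                                  # gamma_l
        for m, ym in enumerate(y):
            p *= rho[l][m][ym - 1]             # times rho_{m, y_m | l}
        joint.append(p)                        # P(Y = y, L = l)
    total = sum(joint)                         # P(Y = y)
    return [p / total for p in joint]

gamma = [0.6, 0.4]
rho = [[[0.9, 0.1], [0.8, 0.2]],
       [[0.2, 0.8], [0.3, 0.7]]]
print(posterior([1, 1], gamma, rho))   # class 1 is by far the most likely
```

For this response pattern the posterior puts about 95% of the probability on class 1, so allocation is clear-cut.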

Slide 27: Example — Abortion data

(Table: response patterns y with their posterior probabilities P(L = 1 | y), P(L = 2 | y), P(L = 3 | y) and the allocated class; the numeric entries did not survive transcription.)

Slide 28: Example: Monitoring the Future

How have attitudes toward and use of marijuana among high-school seniors changed from 1977 to 2001?

Items on marijuana use:
1-3. On how many occasions (if any) have you used marijuana (grass, pot) or hashish (hash, hash oil)
  - in your lifetime?
  - during the last 12 months?
  - during the last 30 days?
4. How likely is it that you will use marijuana in the next 12 months?

Items on attitudes toward marijuana use:
5-7. Do YOU disapprove of people (who are 18 or older) doing each of the following?
  - trying marijuana (pot, grass) once or twice
  - smoking marijuana occasionally
  - smoking marijuana regularly

Slide 29: Model selection

(Table: number of classes, −2 loglikelihood, G², df, and relative decrement; the numeric entries did not survive transcription.)

- The number of possible response patterns (cells) is 3200.
- The chi-square approximation is not appropriate because the estimated expected counts for many cells are close to 0.
- Differences in G² should not be compared with chi-square, because the LRT for testing the fit of a C-class model against a (C + 1)-class model does not have a limiting chi-square distribution.

Slide 30:

Figure 1: Estimated item-response probabilities for Lifetime use (1 = any use) and TryMJ (1 = approve), plotted by year, under (a) the four-class model (Classes 1-4) and (b) the five-class model (Classes 1-5). (Plot contents not reproducible in transcription.)

Slide 31:

(Table: estimated item-response probabilities across latent classes I-IV for the seven items — use of marijuana: Lifetime, 12 months, 30 days, Next 12 months; attitudes toward marijuana: TryMJ, OccUse, RegUse. The numeric entries did not survive transcription.)

The relationships among the 7 items are well explained by the four-class model:
1. Non-users who strongly disapprove of marijuana use
2. Non-users who approve of others doing so on an experimental basis
3. Experimental users who disapprove of occasional use
4. Regular users who generally approve of use

Slide 32:

Figure 2: Historical trends in class prevalence. The upper panel plots prevalence by year for Class 1 (disapproving non-users), Class 2 (approving non-users), Class 3 (disapproving experimenters), and Class 4 (approving regular users). The lower panel plots Users (Class 3 + Class 4) and Approvers (Class 2 + Class 4) by year. (Plot contents not reproducible in transcription.)

Slide 33: Using R

Install and load the package poLCA (first released 2006-06-01).

Slide 34: Example: sexual attitudes

The data set is extracted from the 1990 British Social Attitudes Survey. It concerns contemporary sexual attitudes. The questions, addressed to 1077 individuals, were as follows:
1. Should divorce be easier?
2. Do you support the law against sexual discrimination?
3. View on pre-marital sex (wrong / not wrong)
4. View on extra-marital sex (wrong / not wrong)
5. View on sexual relationships between individuals of the same sex (wrong / not wrong)
6. Should gays teach in school?
7. Should gays teach in higher education?
8. Should gays hold public positions?
9. Should a female homosexual couple be allowed to adopt children?
10. Should a male homosexual couple be allowed to adopt children?

Slide 35:

> descript(attitude, n.print=45)

Descriptive statistics for the attitude data set: 10 items and 1077 sample units, showing the proportions for each level of response, the frequencies of total scores, and the pairwise associations (item i, item j, p-value). (The numeric output did not survive transcription.)

Slide 36:

> ## Data should be coded as positive integers for poLCA
> w <- attitude == 0
> attitude[w] <- 2
>
> ## Formula for LCA
> f <- cbind(V1, V2, V3, V4, V5, V6, V7, V8, V9, V10) ~ 1
>
> ## two-class model
> lca2 <- poLCA(f, attitude, nclass=2)
> two.cls <- c(lca2$llik, lca2$Gsq, lca2$aic, lca2$bic)
>
> ## three-class model
> lca3 <- poLCA(f, attitude, nclass=3)
> three.cls <- c(lca3$llik, lca3$Gsq, lca3$aic, lca3$bic)
>
> ## four-class model
> lca4 <- poLCA(f, attitude, nclass=4)
> four.cls <- c(lca4$llik, lca4$Gsq, lca4$aic, lca4$bic)

Slide 37:

> ## five-class model
> lca5 <- poLCA(f, attitude, nclass=5)
> five.cls <- c(lca5$llik, lca5$Gsq, lca5$aic, lca5$bic)
>
> ## six-class model
> lca6 <- poLCA(f, attitude, nclass=6)
> six.cls <- c(lca6$llik, lca6$Gsq, lca6$aic, lca6$bic)
>
> ## seven-class model
> lca7 <- poLCA(f, attitude, nclass=7)
> seven.cls <- c(lca7$llik, lca7$Gsq, lca7$aic, lca7$bic)

Slide 38:

> ## Combine model selection criteria
> model.sel <- rbind(two.cls, three.cls, four.cls, five.cls, six.cls, seven.cls)
> dimnames(model.sel) <- list(c("two", "three", "four", "five", "six", "seven"),
+                             c("llik", "G2", "AIC", "BIC"))
> model.sel
(6 × 4 matrix of llik, G2, AIC and BIC by number of classes; values did not survive transcription)
>
> ## Class prevalences
> round(lca4$P, 3)
> round(lca5$P, 3)
(values did not survive transcription)

Slide 39:

> ## Item response probabilities
> lca4$probs
> lca5$probs

(Figure: probabilities of a positive response to each of the 10 items, plotted by class, under the four-class model (Classes 1-4) and the five-class model (Classes 1-5). Plot contents not reproducible in transcription.)

Slide 40:

> ## Posterior probabilities and allocated class for the first 15 individuals
> cbind( round(lca4$posterior[1:15,], 3), lca4$predclass[1:15] )
(15 × 5 output: four posterior probabilities plus the allocated class per individual; values did not survive transcription)

Slide 41:

> ## Number of individuals assigned to Class 1
> w <- lca5$predclass == 1
> sum(w)
[1] 136
> ## Posterior probabilities for the first 15 individuals in Class 1
> round(lca4$posterior[w,], 3)[1:15,]
(15 × 4 output; values did not survive transcription)
> ## Mean posterior probabilities over the Class-1 members
> round(apply(lca4$posterior[w,], 2, mean), 3)
(values did not survive transcription)

Slide 42:

> ## Doing the same for Classes 2, 3, and 4 gives
> round(m.post.4, 3)
(4 × 4 matrix of mean posterior probabilities, rows cls.1-cls.4 by columns cls.1-cls.4; values did not survive transcription)
> ## With a 5-class model,
> round(m.post.5, 3)
(5 × 5 matrix, rows cls.1-cls.5 by columns cls.1-cls.5; values did not survive transcription)

Slide 43:

> ## Observed frequencies
> Obs <- lca4$predcell[,"observed"]
> ## Expected frequencies
> Exp.4 <- lca4$predcell[,"expected"]
> Exp.5 <- lca5$predcell[,"expected"]
> ## Standardized residuals
> resid.4 <- (Obs - Exp.4) / sqrt(Exp.4)
> resid.5 <- (Obs - Exp.5) / sqrt(Exp.5)
> ## Combine results, sorted by observed frequency
> fit <- cbind(Obs, Exp.4, resid.4, Exp.5, resid.5)
> tmp <- sort(Obs, index.return=TRUE, decreasing=TRUE)
> fit <- fit[tmp$ix,]

Slide 44:

> fit[1:20,]
(Observed counts, expected counts and standardized residuals under the four- and five-class models for the 20 most frequent response patterns; values did not survive transcription)

Slide 45:

> ## Corresponding response patterns
> lca4$predcell[tmp$ix[1:20], 1:10]
(The response patterns on items V1-V10 for those 20 cells; values did not survive transcription)


More information

One-Way Tables and Goodness of Fit

One-Way Tables and Goodness of Fit Stat 504, Lecture 5 1 One-Way Tables and Goodness of Fit Key concepts: One-way Frequency Table Pearson goodness-of-fit statistic Deviance statistic Pearson residuals Objectives: Learn how to compute the

More information

4. Conditional Probability

4. Conditional Probability 1 of 13 7/15/2009 9:25 PM Virtual Laboratories > 2. Probability Spaces > 1 2 3 4 5 6 7 4. Conditional Probability Definitions and Interpretations The Basic Definition As usual, we start with a random experiment

More information

Conditional Probability

Conditional Probability Conditional Probability Idea have performed a chance experiment but don t know the outcome (ω), but have some partial information (event A) about ω. Question: given this partial information what s the

More information

Chapter 2: Describing Contingency Tables - I

Chapter 2: Describing Contingency Tables - I : Describing Contingency Tables - I Dipankar Bandyopadhyay Department of Biostatistics, Virginia Commonwealth University BIOS 625: Categorical Data & GLM [Acknowledgements to Tim Hanson and Haitao Chu]

More information

Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Note 14

Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Note 14 CS 70 Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Note 14 Introduction One of the key properties of coin flips is independence: if you flip a fair coin ten times and get ten

More information

Machine Learning. Gaussian Mixture Models. Zhiyao Duan & Bryan Pardo, Machine Learning: EECS 349 Fall

Machine Learning. Gaussian Mixture Models. Zhiyao Duan & Bryan Pardo, Machine Learning: EECS 349 Fall Machine Learning Gaussian Mixture Models Zhiyao Duan & Bryan Pardo, Machine Learning: EECS 349 Fall 2012 1 The Generative Model POV We think of the data as being generated from some process. We assume

More information

Model Estimation Example

Model Estimation Example Ronald H. Heck 1 EDEP 606: Multivariate Methods (S2013) April 7, 2013 Model Estimation Example As we have moved through the course this semester, we have encountered the concept of model estimation. Discussions

More information

(3) Review of Probability. ST440/540: Applied Bayesian Statistics

(3) Review of Probability. ST440/540: Applied Bayesian Statistics Review of probability The crux of Bayesian statistics is to compute the posterior distribution, i.e., the uncertainty distribution of the parameters (θ) after observing the data (Y) This is the conditional

More information

Vehicle Freq Rel. Freq Frequency distribution. Statistics

Vehicle Freq Rel. Freq Frequency distribution. Statistics 1.1 STATISTICS Statistics is the science of data. This involves collecting, summarizing, organizing, and analyzing data in order to draw meaningful conclusions about the universe from which the data is

More information

UNIVERSITY OF TORONTO. Faculty of Arts and Science APRIL 2010 EXAMINATIONS STA 303 H1S / STA 1002 HS. Duration - 3 hours. Aids Allowed: Calculator

UNIVERSITY OF TORONTO. Faculty of Arts and Science APRIL 2010 EXAMINATIONS STA 303 H1S / STA 1002 HS. Duration - 3 hours. Aids Allowed: Calculator UNIVERSITY OF TORONTO Faculty of Arts and Science APRIL 2010 EXAMINATIONS STA 303 H1S / STA 1002 HS Duration - 3 hours Aids Allowed: Calculator LAST NAME: FIRST NAME: STUDENT NUMBER: There are 27 pages

More information

Investigation into the use of confidence indicators with calibration

Investigation into the use of confidence indicators with calibration WORKSHOP ON FRONTIERS IN BENCHMARKING TECHNIQUES AND THEIR APPLICATION TO OFFICIAL STATISTICS 7 8 APRIL 2005 Investigation into the use of confidence indicators with calibration Gerard Keogh and Dave Jennings

More information

STA 303 H1S / 1002 HS Winter 2011 Test March 7, ab 1cde 2abcde 2fghij 3

STA 303 H1S / 1002 HS Winter 2011 Test March 7, ab 1cde 2abcde 2fghij 3 STA 303 H1S / 1002 HS Winter 2011 Test March 7, 2011 LAST NAME: FIRST NAME: STUDENT NUMBER: ENROLLED IN: (circle one) STA 303 STA 1002 INSTRUCTIONS: Time: 90 minutes Aids allowed: calculator. Some formulae

More information

Outline. Binomial, Multinomial, Normal, Beta, Dirichlet. Posterior mean, MAP, credible interval, posterior distribution

Outline. Binomial, Multinomial, Normal, Beta, Dirichlet. Posterior mean, MAP, credible interval, posterior distribution Outline A short review on Bayesian analysis. Binomial, Multinomial, Normal, Beta, Dirichlet Posterior mean, MAP, credible interval, posterior distribution Gibbs sampling Revisit the Gaussian mixture model

More information

Maximum Likelihood Estimation; Robust Maximum Likelihood; Missing Data with Maximum Likelihood

Maximum Likelihood Estimation; Robust Maximum Likelihood; Missing Data with Maximum Likelihood Maximum Likelihood Estimation; Robust Maximum Likelihood; Missing Data with Maximum Likelihood PRE 906: Structural Equation Modeling Lecture #3 February 4, 2015 PRE 906, SEM: Estimation Today s Class An

More information

ECLT 5810 Linear Regression and Logistic Regression for Classification. Prof. Wai Lam

ECLT 5810 Linear Regression and Logistic Regression for Classification. Prof. Wai Lam ECLT 5810 Linear Regression and Logistic Regression for Classification Prof. Wai Lam Linear Regression Models Least Squares Input vectors is an attribute / feature / predictor (independent variable) The

More information

Study Notes on the Latent Dirichlet Allocation

Study Notes on the Latent Dirichlet Allocation Study Notes on the Latent Dirichlet Allocation Xugang Ye 1. Model Framework A word is an element of dictionary {1,,}. A document is represented by a sequence of words: =(,, ), {1,,}. A corpus is a collection

More information

COPYRIGHTED MATERIAL. Introduction CHAPTER 1

COPYRIGHTED MATERIAL. Introduction CHAPTER 1 CHAPTER 1 Introduction From helping to assess the value of new medical treatments to evaluating the factors that affect our opinions on various controversial issues, scientists today are finding myriad

More information

lcda: Local Classification of Discrete Data by Latent Class Models

lcda: Local Classification of Discrete Data by Latent Class Models lcda: Local Classification of Discrete Data by Latent Class Models Michael Bücker buecker@statistik.tu-dortmund.de July 9, 2009 Introduction common global classification methods may be inefficient when

More information

Investigating Models with Two or Three Categories

Investigating Models with Two or Three Categories Ronald H. Heck and Lynn N. Tabata 1 Investigating Models with Two or Three Categories For the past few weeks we have been working with discriminant analysis. Let s now see what the same sort of model might

More information

Determining the number of components in mixture models for hierarchical data

Determining the number of components in mixture models for hierarchical data Determining the number of components in mixture models for hierarchical data Olga Lukočienė 1 and Jeroen K. Vermunt 2 1 Department of Methodology and Statistics, Tilburg University, P.O. Box 90153, 5000

More information

Introduction to Probability and Statistics (Continued)

Introduction to Probability and Statistics (Continued) Introduction to Probability and Statistics (Continued) Prof. icholas Zabaras Center for Informatics and Computational Science https://cics.nd.edu/ University of otre Dame otre Dame, Indiana, USA Email:

More information

Formalizing Probability. Choosing the Sample Space. Probability Measures

Formalizing Probability. Choosing the Sample Space. Probability Measures Formalizing Probability Choosing the Sample Space What do we assign probability to? Intuitively, we assign them to possible events (things that might happen, outcomes of an experiment) Formally, we take

More information

Introduction to Machine Learning CMU-10701

Introduction to Machine Learning CMU-10701 Introduction to Machine Learning CMU-10701 23. Decision Trees Barnabás Póczos Contents Decision Trees: Definition + Motivation Algorithm for Learning Decision Trees Entropy, Mutual Information, Information

More information

Normal distribution We have a random sample from N(m, υ). The sample mean is Ȳ and the corrected sum of squares is S yy. After some simplification,

Normal distribution We have a random sample from N(m, υ). The sample mean is Ȳ and the corrected sum of squares is S yy. After some simplification, Likelihood Let P (D H) be the probability an experiment produces data D, given hypothesis H. Usually H is regarded as fixed and D variable. Before the experiment, the data D are unknown, and the probability

More information

Hierarchical Generalized Linear Models. ERSH 8990 REMS Seminar on HLM Last Lecture!

Hierarchical Generalized Linear Models. ERSH 8990 REMS Seminar on HLM Last Lecture! Hierarchical Generalized Linear Models ERSH 8990 REMS Seminar on HLM Last Lecture! Hierarchical Generalized Linear Models Introduction to generalized models Models for binary outcomes Interpreting parameter

More information

Lecture 8: Classification

Lecture 8: Classification 1/26 Lecture 8: Classification Måns Eriksson Department of Mathematics, Uppsala University eriksson@math.uu.se Multivariate Methods 19/5 2010 Classification: introductory examples Goal: Classify an observation

More information

STAT 705: Analysis of Contingency Tables

STAT 705: Analysis of Contingency Tables STAT 705: Analysis of Contingency Tables Timothy Hanson Department of Statistics, University of South Carolina Stat 705: Analysis of Contingency Tables 1 / 45 Outline of Part I: models and parameters Basic

More information

Inferences on missing information under multiple imputation and two-stage multiple imputation

Inferences on missing information under multiple imputation and two-stage multiple imputation p. 1/4 Inferences on missing information under multiple imputation and two-stage multiple imputation Ofer Harel Department of Statistics University of Connecticut Prepared for the Missing Data Approaches

More information

An Introduction to Mplus and Path Analysis

An Introduction to Mplus and Path Analysis An Introduction to Mplus and Path Analysis PSYC 943: Fundamentals of Multivariate Modeling Lecture 10: October 30, 2013 PSYC 943: Lecture 10 Today s Lecture Path analysis starting with multivariate regression

More information

Probability Rules. MATH 130, Elements of Statistics I. J. Robert Buchanan. Fall Department of Mathematics

Probability Rules. MATH 130, Elements of Statistics I. J. Robert Buchanan. Fall Department of Mathematics Probability Rules MATH 130, Elements of Statistics I J. Robert Buchanan Department of Mathematics Fall 2018 Introduction Probability is a measure of the likelihood of the occurrence of a certain behavior

More information

Probability and Information Theory. Sargur N. Srihari

Probability and Information Theory. Sargur N. Srihari Probability and Information Theory Sargur N. srihari@cedar.buffalo.edu 1 Topics in Probability and Information Theory Overview 1. Why Probability? 2. Random Variables 3. Probability Distributions 4. Marginal

More information

Statistics for Managers Using Microsoft Excel

Statistics for Managers Using Microsoft Excel Statistics for Managers Using Microsoft Excel 7 th Edition Chapter 1 Chi-Square Tests and Nonparametric Tests Statistics for Managers Using Microsoft Excel 7e Copyright 014 Pearson Education, Inc. Chap

More information

A Bayesian Nonparametric Model for Predicting Disease Status Using Longitudinal Profiles

A Bayesian Nonparametric Model for Predicting Disease Status Using Longitudinal Profiles A Bayesian Nonparametric Model for Predicting Disease Status Using Longitudinal Profiles Jeremy Gaskins Department of Bioinformatics & Biostatistics University of Louisville Joint work with Claudio Fuentes

More information

Time-Invariant Predictors in Longitudinal Models

Time-Invariant Predictors in Longitudinal Models Time-Invariant Predictors in Longitudinal Models Today s Class (or 3): Summary of steps in building unconditional models for time What happens to missing predictors Effects of time-invariant predictors

More information

Latent classes for preference data

Latent classes for preference data Latent classes for preference data Brian Francis Lancaster University, UK Regina Dittrich, Reinhold Hatzinger, Patrick Mair Vienna University of Economics 1 Introduction Social surveys often contain questions

More information

6.3 Bernoulli Trials Example Consider the following random experiments

6.3 Bernoulli Trials Example Consider the following random experiments 6.3 Bernoulli Trials Example 6.48. Consider the following random experiments (a) Flip a coin times. We are interested in the number of heads obtained. (b) Of all bits transmitted through a digital transmission

More information

Bayesian Inference. p(y)

Bayesian Inference. p(y) Bayesian Inference There are different ways to interpret a probability statement in a real world setting. Frequentist interpretations of probability apply to situations that can be repeated many times,

More information

NELS 88. Latent Response Variable Formulation Versus Probability Curve Formulation

NELS 88. Latent Response Variable Formulation Versus Probability Curve Formulation NELS 88 Table 2.3 Adjusted odds ratios of eighth-grade students in 988 performing below basic levels of reading and mathematics in 988 and dropping out of school, 988 to 990, by basic demographics Variable

More information

Probability & statistics for linguists Class 2: more probability. D. Lassiter (h/t: R. Levy)

Probability & statistics for linguists Class 2: more probability. D. Lassiter (h/t: R. Levy) Probability & statistics for linguists Class 2: more probability D. Lassiter (h/t: R. Levy) conditional probability P (A B) = when in doubt about meaning: draw pictures. P (A \ B) P (B) keep B- consistent

More information

Multiple Sample Categorical Data

Multiple Sample Categorical Data Multiple Sample Categorical Data paired and unpaired data, goodness-of-fit testing, testing for independence University of California, San Diego Instructor: Ery Arias-Castro http://math.ucsd.edu/~eariasca/teaching.html

More information

Lab 3: Two levels Poisson models (taken from Multilevel and Longitudinal Modeling Using Stata, p )

Lab 3: Two levels Poisson models (taken from Multilevel and Longitudinal Modeling Using Stata, p ) Lab 3: Two levels Poisson models (taken from Multilevel and Longitudinal Modeling Using Stata, p. 376-390) BIO656 2009 Goal: To see if a major health-care reform which took place in 1997 in Germany was

More information

Plausible Values for Latent Variables Using Mplus

Plausible Values for Latent Variables Using Mplus Plausible Values for Latent Variables Using Mplus Tihomir Asparouhov and Bengt Muthén August 21, 2010 1 1 Introduction Plausible values are imputed values for latent variables. All latent variables can

More information

STAT:5100 (22S:193) Statistical Inference I

STAT:5100 (22S:193) Statistical Inference I STAT:5100 (22S:193) Statistical Inference I Week 3 Luke Tierney University of Iowa Fall 2015 Luke Tierney (U Iowa) STAT:5100 (22S:193) Statistical Inference I Fall 2015 1 Recap Matching problem Generalized

More information

Exam 2 Practice Questions, 18.05, Spring 2014

Exam 2 Practice Questions, 18.05, Spring 2014 Exam 2 Practice Questions, 18.05, Spring 2014 Note: This is a set of practice problems for exam 2. The actual exam will be much shorter. Within each section we ve arranged the problems roughly in order

More information

With Question/Answer Animations. Chapter 7

With Question/Answer Animations. Chapter 7 With Question/Answer Animations Chapter 7 Chapter Summary Introduction to Discrete Probability Probability Theory Bayes Theorem Section 7.1 Section Summary Finite Probability Probabilities of Complements

More information

MLE/MAP + Naïve Bayes

MLE/MAP + Naïve Bayes 10-601 Introduction to Machine Learning Machine Learning Department School of Computer Science Carnegie Mellon University MLE/MAP + Naïve Bayes Matt Gormley Lecture 19 March 20, 2018 1 Midterm Exam Reminders

More information

Default Priors and Effcient Posterior Computation in Bayesian

Default Priors and Effcient Posterior Computation in Bayesian Default Priors and Effcient Posterior Computation in Bayesian Factor Analysis January 16, 2010 Presented by Eric Wang, Duke University Background and Motivation A Brief Review of Parameter Expansion Literature

More information

Binary Logistic Regression

Binary Logistic Regression The coefficients of the multiple regression model are estimated using sample data with k independent variables Estimated (or predicted) value of Y Estimated intercept Estimated slope coefficients Ŷ = b

More information

Lecture 2: Simple Classifiers

Lecture 2: Simple Classifiers CSC 412/2506 Winter 2018 Probabilistic Learning and Reasoning Lecture 2: Simple Classifiers Slides based on Rich Zemel s All lecture slides will be available on the course website: www.cs.toronto.edu/~jessebett/csc412

More information

Multiple Group CFA Invariance Example (data from Brown Chapter 7) using MLR Mplus 7.4: Major Depression Criteria across Men and Women (n = 345 each)

Multiple Group CFA Invariance Example (data from Brown Chapter 7) using MLR Mplus 7.4: Major Depression Criteria across Men and Women (n = 345 each) Multiple Group CFA Invariance Example (data from Brown Chapter 7) using MLR Mplus 7.4: Major Depression Criteria across Men and Women (n = 345 each) 9 items rated by clinicians on a scale of 0 to 8 (0

More information

Be able to define the following terms and answer basic questions about them:

Be able to define the following terms and answer basic questions about them: CS440/ECE448 Fall 2016 Final Review Be able to define the following terms and answer basic questions about them: Probability o Random variables o Axioms of probability o Joint, marginal, conditional probability

More information

Practice problems from chapters 2 and 3

Practice problems from chapters 2 and 3 Practice problems from chapters and 3 Question-1. For each of the following variables, indicate whether it is quantitative or qualitative and specify which of the four levels of measurement (nominal, ordinal,

More information

Inference for Binomial Parameters

Inference for Binomial Parameters Inference for Binomial Parameters Dipankar Bandyopadhyay, Ph.D. Department of Biostatistics, Virginia Commonwealth University D. Bandyopadhyay (VCU) BIOS 625: Categorical Data & GLM 1 / 58 Inference for

More information