NELS 88. Latent Response Variable Formulation Versus Probability Curve Formulation

NELS 88

Table 2.3. Adjusted odds ratios of eighth-grade students in 1988 performing below basic levels of reading and mathematics in 1988 and dropping out of school, 1988 to 1990, by basic demographics

  Variable                        Below basic    Below basic    Dropped out
                                  mathematics    reading
  Sex
    Female vs. male                   0.77**         0.70**         0.86
  Race/ethnicity
    Asian vs. white                   0.84           1.46**         0.60
    Hispanic vs. white                1.60**         1.74**         1.2
    Black vs. white                   1.77**         2.09**         1.45
    Native American vs. white         2.02**         2.87**         1.64
  Socioeconomic status
    Low vs. middle                    1.68**         1.66**         3.74**
    High vs. middle                   0.49**         0.44**         0.4*

Latent Response Variable Formulation Versus Probability Curve Formulation

Probability curve formulation in the binary u case:

  P(u = 1 | x) = F(β0 + β1 x),    (67)

where F is the standard normal or logistic distribution function.

The latent response variable formulation defines a threshold τ on a continuous u* variable so that u = 1 is observed when u* exceeds τ, while otherwise u = 0 is observed:

  u* = γ x + δ,    (68)

where δ ~ N(0, V(δ)).

[Figure: the distribution of u* split at the threshold τ, with u = 0 observed below τ and u = 1 observed above it.]
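As a quick numerical illustration (not part of the original slides), the following Python sketch checks that the probability curve formulation in (67) and the latent response variable formulation in (68) give the same P(u = 1 | x) for a probit link; the slope and threshold values are hypothetical.

# Sketch: probability curve versus latent response variable formulation.
# The parameter values below are made up for illustration only.
import numpy as np
from scipy.stats import norm

gamma, tau = 0.8, 0.5          # hypothetical slope and threshold
x = 1.2                        # a single covariate value

# Probability curve formulation: probit with beta0 = -tau, beta1 = gamma
p_formula = norm.cdf(-tau + gamma * x)

# Latent response variable formulation: u* = gamma*x + delta, delta ~ N(0, 1),
# with u = 1 observed when u* exceeds tau
rng = np.random.default_rng(0)
u_star = gamma * x + rng.standard_normal(1_000_000)
p_simulated = np.mean(u_star > tau)

print(p_formula, p_simulated)  # the two agree up to Monte Carlo error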

Latent Response Variable Formulation Versus Probability Curve Formulation (Continued)

  P(u = 1 | x) = P(u* > τ | x) = 1 - P(u* <= τ | x)    (69)
              = 1 - Φ[(τ - γ x) V(δ)^(-1/2)] = Φ[(-τ + γ x) V(δ)^(-1/2)].    (70)

Standardizing to V(δ) = 1, this defines a probit model with intercept (β0) = -τ and slope (β1) = γ.

Alternatively, a logistic density may be assumed for δ,

  f[δ; 0, π^2/3] = dF/dδ = F (1 - F),    (71)

where in this case F is the logistic distribution function 1 / (1 + e^(-δ)).

Latent Response Variable Formulation: R-Square, Standardization, And Effects On Probabilities

  u* = γ x + δ

  R-square(u*) = γ^2 V(x) / (γ^2 V(x) + c),

where c = 1 for probit and π^2/3 for logit (McKelvey & Zavoina, 1975).

Standardized γ refers to the effect of x on u*,

  γ_s = γ SD(x) / SD(u*),  with SD(u*) = sqrt(γ^2 V(x) + c).

The effect of x on P(u = 1) depends on the x value.

[Figure: P(u = 1 | x) plotted against x for a weak effect and a strong effect.]
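A small Python sketch of the R-square and standardization formulas above; the values chosen for γ and V(x) are hypothetical.

# Sketch: McKelvey-Zavoina R-square for u* = gamma*x + delta and the
# standardized slope gamma_s. All input values are made up.
import numpy as np

gamma = 0.8          # hypothetical probit slope
var_x = 2.0          # hypothetical variance of x
c = 1.0              # residual variance: 1 for probit, pi**2 / 3 for logit

var_ustar = gamma**2 * var_x + c
r2 = gamma**2 * var_x / var_ustar                       # R-square for u*
gamma_s = gamma * np.sqrt(var_x) / np.sqrt(var_ustar)   # standardized slope

print(r2, gamma_s)
# For a logit model, set c = np.pi**2 / 3 instead.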

Modeling With An Ordered Polytomous u Outcome

u polytomous with 3 categories.

[Figure, three panels: (1) P(u | x) plotted against x for u = 0, u = 1, u = 2; (2) the cumulative probabilities P(u = 1 or 2 | x) and P(u = 2 | x); (3) F^(-1)[P(u | x)] plotted against x for u = 1 or 2 and for u = 2, giving parallel lines.]

Proportional odds model (Agresti, 2002).

Ordered Polytomous Outcome Using A Latent Response Variable Formulation

[Figure: the density of the latent response variable u* divided by the thresholds τ1 and τ2 into the regions u = 0, u = 1, and u = 2.]

Latent response variable regression:

  u*_i = γ x_i + δ_i

[Figure: u* plotted against x, with the u* distributions at x = a and x = b cut at τ1 and τ2 to give the observed u categories.]

Ordered Polytomous Outcome Using A Latent Response Variable Formulation (Continued)

A categorical variable u with C ordered categories,

  u = c, if τ_c < u* <= τ_(c+1)    (72)

for categories c = 0, 1, 2, ..., C - 1, where τ_0 = -∞ and τ_C = ∞.

Example: a single x variable and a u variable with three categories. Two threshold parameters, τ1 and τ2.

Probit:

  u* = γ x + δ, with δ normal,    (73)

  P(u = 0 | x) = Φ(τ1 - γ x),    (74)
  P(u = 1 | x) = Φ(τ2 - γ x) - Φ(τ1 - γ x),    (75)
  P(u = 2 | x) = 1 - Φ(τ2 - γ x) = Φ(-τ2 + γ x).    (76)

Ordered Polytomous Outcome Using A Latent Response Variable Formulation (Continued)

  P(u = 1 or 2 | x) = P(u = 1 | x) + P(u = 2 | x)    (77)
                    = 1 - Φ(τ1 - γ x)    (78)
                    = Φ(-τ1 + γ x)    (79)
                    = 1 - P(u = 0 | x),    (80)

that is, a linear probit for each cumulative probability,

  P(u = 2 | x) = Φ(-τ2 + γ x),    (81)
  P(u = 1 or 2 | x) = Φ(-τ1 + γ x).    (82)

Note: same slope γ, so parallel probability curves.
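The category probabilities in (74)-(76) can be computed directly. This Python sketch uses hypothetical values for the thresholds and slope.

# Sketch: three-category ordered probit probabilities, equations (74)-(76).
# Threshold, slope, and covariate values are hypothetical.
from scipy.stats import norm

tau1, tau2, gamma = -0.5, 1.0, 0.7
x = 0.4

p0 = norm.cdf(tau1 - gamma * x)
p1 = norm.cdf(tau2 - gamma * x) - norm.cdf(tau1 - gamma * x)
p2 = 1 - norm.cdf(tau2 - gamma * x)

print(p0, p1, p2, p0 + p1 + p2)   # the three probabilities sum to 1
# P(u = 1 or 2 | x) = 1 - p0 = Phi(-tau1 + gamma*x): a probit linear in x.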

Logit For Ordered Categorical Outcome

  P(u = 2 | x) = 1 / (1 + e^-(β2 + β x)),    (83)
  P(u = 1 or 2 | x) = 1 / (1 + e^-(β1 + β x)).    (84)

The log odds for each of these two events is a linear expression,

  logit[P(u = 2 | x)]    (85)
    = log[P(u = 2 | x) / (1 - P(u = 2 | x))] = β2 + β x,    (86)

  logit[P(u = 1 or 2 | x)]    (87)
    = log[P(u = 1 or 2 | x) / (1 - P(u = 1 or 2 | x))] = β1 + β x.    (88)

Note: same slope β, so parallel probability curves.

Logit For Ordered Categorical Outcome (Continued)

When x is a 0/1 variable,

  logit[P(u = 2 | x = 1)] - logit[P(u = 2 | x = 0)] = β,    (89)
  logit[P(u = 1 or 2 | x = 1)] - logit[P(u = 1 or 2 | x = 0)] = β,    (90)

showing that the ordered polytomous logistic regression model has constant odds ratios for these different outcomes.
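A brief Python check, with hypothetical values for the category-specific intercepts β1 and β2 and the common slope β of (83)-(84), that both cumulative log odds change by the same amount β when x moves from 0 to 1, as in (89)-(90).

# Sketch: constant odds ratio in the proportional-odds (cumulative) logit model.
# Intercepts and slope below are hypothetical.
import numpy as np

beta1, beta2, beta = -0.3, -1.2, 0.9

def cum_prob(intercept, x):
    """Cumulative probability 1 / (1 + exp(-(intercept + beta*x)))."""
    return 1 / (1 + np.exp(-(intercept + beta * x)))

for intercept in (beta2, beta1):      # P(u = 2 | x), then P(u = 1 or 2 | x)
    log_odds = lambda x: np.log(cum_prob(intercept, x) / (1 - cum_prob(intercept, x)))
    print(log_odds(1) - log_odds(0))  # equals beta both times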

Alcohol Consumption: Ordered Polytomous Regression

u: On the days that you drink, how many drinks do you have per day, on the average?

Ordinal u ("Alameda scoring"):
  0  non-drinker
  1  1-2 drinks per day
  2  3-4 drinks per day
  3  5 or more drinks per day

x's:
  Age: whole years, 20-64
  Income:
    1  $4,999 or less
    2  $5,000 - $9,999
    3  $10,000 - $14,999
    4  $15,000 - $24,999
    5  $25,000 or more

N = 73 males with regular physical activity levels

Source: Golden (1982), Muthén (1993)

Alcohol Consumption: Ordered Polytomous Regression (Continued)

  P(u = 0 | x) = Φ(τ1 - γ x),
  P(u = 1 | x) = Φ(τ2 - γ x) - Φ(τ1 - γ x),
  P(u = 2 | x) = Φ(τ3 - γ x) - Φ(τ2 - γ x),
  P(u = 3 | x) = Φ(-τ3 + γ x).

Ordered u gives a single slope γ.

Alcohol Consumption: Ordered Polytomous Regression (Continued)

[Figure, two panels: sample probits plotted against age category and against income category, with separate curves for u = 1, 2, or 3 (1 or more drinks), u = 2 or 3 (3 or more drinks), and u = 3 (5 or more drinks).]

Polytomous Outcome: Unordered Case

Multinomial logistic regression:

  P(u_i = c | x_i) = e^(β0c + β1c x_i) / Σ(k = 1 to K) e^(β0k + β1k x_i),    (91)

for c = 1, 2, ..., K, where we standardize to

  β0K = 0,    (92)
  β1K = 0,    (93)

which gives the log odds

  log[P(u_i = c | x_i) / P(u_i = K | x_i)] = β0c + β1c x_i,    (94)

for c = 1, 2, ..., K.

Multinomial Logistic Regression Special Case Of K = 2

  P(u_i = 1 | x_i) = e^(β01 + β11 x_i) / (e^(β01 + β11 x_i) + 1)
                   = 1 / (1 + e^-(β01 + β11 x_i)),

which is the standard logistic regression for a binary outcome.

Input For Multinomial Logistic Regression

TITLE:     multinomial logistic regression
DATA:      FILE = nlsy.dat;
VARIABLE:  NAMES = u x1-x3;
           NOMINAL = u;
MODEL:     u ON x1-x3;
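As a side check (not in the slides), a few lines of Python confirm numerically that the K = 2 multinomial expression above coincides with the binary logistic formula; the coefficient values are made up.

# Sketch: multinomial logit with K = 2 reduces to binary logistic regression.
# Coefficients are hypothetical.
import numpy as np

b01, b11 = 0.4, -0.7            # intercept and slope for category 1
x = 1.5                         # category 2 is the reference (coefficients 0)

num = np.exp(b01 + b11 * x)
p_multinomial = num / (num + np.exp(0))          # softmax with two categories
p_binary = 1 / (1 + np.exp(-(b01 + b11 * x)))    # standard binary logistic

print(p_multinomial, p_binary)  # identical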

Output Excerpts Multinomial Logistic Regression: 4 Categories Of ASB In The NLSY

                   Estimates      S.E.    Est./S.E.
 U#1      ON
    AGE94             -0.285     0.028      -10.045
    MALE               2.578     0.151       17.086
    BLACK              0.158     0.139        1.140
 U#2      ON
    AGE94              0.069     0.022        3.182
    MALE               0.187     0.110        1.702
    BLACK             -0.606     0.139       -4.357
 U#3      ON
    AGE94             -0.317     0.028      -11.311
    MALE               1.459     0.101       14.431
    BLACK              0.999     0.117        8.531
 Intercepts
    U#1               -1.822     0.174      -10.485
    U#2               -0.748     0.103       -7.258
    U#3               -0.324     0.125       -2.600

Estimated Probabilities For Multinomial Logistic Regression: 4 Categories Of ASB In The NLSY

Example 1: x's = 0

                        log odds      exp     probability = exp/sum
  log odds (u = 1) =      -1.822    0.162            0.069
  log odds (u = 2) =      -0.748    0.473            0.201
  log odds (u = 3) =      -0.324    0.723            0.307
  log odds (u = 4) =       0        1.0              0.424
                    sum             2.358            1.00

Estimated Probabilities For Multinomial Logistic Regression: 4 Categories Of ASB In The NLSY (Continued)

Example 2: x's = 1, 1, 1

  log odds (u = 1) = -1.822 + (-0.285*1) + (2.578*1) + (0.158*1) =  0.629
  log odds (u = 2) = -0.748 + (0.069*1) + (0.187*1) + (-0.606*1) = -1.098
  log odds (u = 3) = -0.324 + (-0.317*1) + (1.459*1) + (0.999*1) =  1.817

                        log odds      exp     probability = exp/sum
  log odds (u = 1) =       0.629    1.876            0.200
  log odds (u = 2) =      -1.098    0.334            0.036
  log odds (u = 3) =       1.817    6.153            0.657
  log odds (u = 4) =       0        1.0              0.107
                    sum             9.363            1.000

Estimated Probabilities For Multinomial Logistic Regression: 4 Categories Of ASB In The NLSY (Continued)

[Figure: estimated probability of each class plotted against age94, with class sizes Class 1, 12.7%; Class 2, 20.5%; Class 3, 30.7%; Class 4, 36.2%.]
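The hand computations in Examples 1 and 2 can be reproduced with a few lines of Python; the intercepts and slopes below are the reconstructed estimates from the output excerpt above, with category 4 as the reference.

# Sketch: predicted class probabilities from the multinomial logit estimates.
import numpy as np

intercepts = np.array([-1.822, -0.748, -0.324, 0.0])
slopes = np.array([[-0.285, 2.578,  0.158],    # u = 1: AGE94, MALE, BLACK
                   [ 0.069, 0.187, -0.606],    # u = 2
                   [-0.317, 1.459,  0.999],    # u = 3
                   [ 0.0,   0.0,    0.0  ]])   # u = 4 (reference category)

def class_probs(x):
    log_odds = intercepts + slopes @ np.asarray(x, dtype=float)
    e = np.exp(log_odds)
    return e / e.sum()                         # probability = exp / sum

print(class_probs([0, 0, 0]))   # Example 1: about .069, .201, .307, .424
print(class_probs([1, 1, 1]))   # Example 2: about .200, .036, .657, .107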

Censored-Normal (Tobit) Regression

  y* = π0 + π1 x + δ

Continuous unlimited:  y = y*, V(δ) identifiable

Continuous-censored:
  y = c_L, if y* <= c_L
  y = y*,  if c_L < y* < c_U
  y = c_U, if y* >= c_U

Censoring from below, c_L = 0, c_U = ∞:

  P(y > 0 | x) = F[(π0 + π1 x) / sqrt(V(δ))]    (probit regression)

  E(y | y > 0, x) = π0 + π1 x + (f/F) sqrt(V(δ))    (classical Tobit)

where f and F are the standard normal density and distribution function.

OLS Versus Tobit Regression For Censored y But Normal y*

[Figure, two panels: y (censored from below at c_L) and y* plotted against x, comparing the Tobit regression line for y* with the OLS line fitted to the censored y.]
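A Python sketch of the two classical Tobit quantities above, using hypothetical values for π0, π1, and V(δ).

# Sketch: probability of a positive outcome and its conditional mean under
# the classical Tobit model with censoring from below at 0. Values are made up.
import numpy as np
from scipy.stats import norm

pi0, pi1 = 0.5, 1.2        # hypothetical intercept and slope
sigma = np.sqrt(2.0)       # sqrt of V(delta)
x = 0.8

z = (pi0 + pi1 * x) / sigma
p_positive = norm.cdf(z)                                   # P(y > 0 | x): probit
e_y_given_pos = pi0 + pi1 * x + norm.pdf(z) / norm.cdf(z) * sigma  # inverse Mills term

print(p_positive, e_y_given_pos)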

Regression With A Count Dependent Variable

Poisson Regression

A Poisson distribution for a count variable u_i has

  P(u_i = r) = (λ_i^r e^(-λ_i)) / r!,  where u_i = 0, 1, 2, ...

λ is the rate at which a rare event occurs.

[Figure: the Poisson probabilities P(u_i = r) for r = 0, 1, 2, 3, 4 with λ = 0.5.]

Regression equation for the log rate:

  log_e λ_i = ln λ_i = β0 + β1 x_i
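A short Python sketch, with hypothetical coefficients, of the Poisson probabilities and the log-rate regression.

# Sketch: Poisson pmf and log-rate regression log(lambda_i) = beta0 + beta1*x_i.
# Coefficients and covariate value are hypothetical.
import numpy as np
from scipy.stats import poisson

beta0, beta1 = -1.0, 0.5
x = 2.0

lam = np.exp(beta0 + beta1 * x)                    # rate implied by the regression
print(lam)
print([poisson.pmf(r, lam) for r in range(5)])     # P(u = 0), ..., P(u = 4)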

Zero-Inflated Poisson (ZIP) Regression

A Poisson variable has mean = variance. Data often have variance > mean due to a preponderance of zeros.

  π = P(being in the zero class, where only u = 0 is seen)
  1 - π = P(not being in the zero class, with u following a Poisson distribution)

A mixture at zero:

  P(u = 0) = π + (1 - π) e^(-λ),

where e^(-λ) is the Poisson part. The ZIP model implies two regressions:

  logit(π_i) = γ0 + γ1 x_i,
  ln λ_i = β0 + β1 x_i

Negative Binomial Regression

Unobserved heterogeneity ε_i is added to the Poisson model,

  ln λ_i = β0 + β1 x_i + ε_i,  where exp(ε) ~ Γ.

Poisson assumes

  E(u_i | x_i) = λ_i = V(u_i | x_i).

Negative binomial assumes

  E(u_i | x_i) = λ_i,
  V(u_i | x_i) = λ_i (1 + λ_i α).

NB with α = 0 gives Poisson. When the dispersion parameter α > 0, the NB model gives substantially higher probability for low counts and somewhat higher probability for high counts than Poisson.

Further variations are zero-inflated NB and zero-truncated NB (hurdle model or two-part model).
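The ZIP mixture at zero and the negative binomial variance formula can be illustrated as follows; all parameter values are hypothetical.

# Sketch: ZIP probability of a zero and NB variance inflation. Values are made up.
import numpy as np

# ZIP: logit regression for the zero class and log-rate regression for the Poisson part
gamma0, gamma1 = -0.5, 0.3
beta0, beta1 = 0.2, 0.4
x = 1.0

pi_zero = 1 / (1 + np.exp(-(gamma0 + gamma1 * x)))   # P(zero class | x)
lam = np.exp(beta0 + beta1 * x)
p_u0 = pi_zero + (1 - pi_zero) * np.exp(-lam)        # mixture at zero
print(p_u0)

# Negative binomial: V(u | x) = lambda * (1 + lambda * alpha); alpha = 0 gives Poisson
alpha = 0.5
print(lam, lam * (1 + lam * alpha))                  # mean and NB variance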

Mplus Specifications

  Variable command       Type of dependent variable              Variance/residual variance
  CATEGORICAL = u;       Binary, ordered polytomous              No
  NOMINAL = u;           Unordered polytomous (nominal)          No
  CENSORED = y (b);      Censored normal (Tobit),                Yes
             = y (a);      censored from below or above
  COUNT = u;             Poisson                                 No
          = u (i);       Zero-inflated Poisson                   No

Further Readings On Censored And Count Regressions

Hilbe, J.M. (2007). Negative binomial regression. Cambridge, UK: Cambridge University Press.

Lambert, D. (1992). Zero-inflated Poisson regression, with an application to defects in manufacturing. Technometrics, 34, 1-13.

Long, S. (1997). Regression models for categorical and limited dependent variables. Thousand Oaks: Sage.

Maddala, G.S. (1983). Limited-dependent and qualitative variables in econometrics. Cambridge: Cambridge University Press.

Tobin, J. (1958). Estimation of relationships for limited dependent variables. Econometrica, 26, 24-36.

Path Analysis With Categorical Outcomes

Path Analysis With A Binary Outcome And A Continuous Mediator With Missing Data

Logistic Regression Path Model

[Figure: path diagram in which the covariates female, mothed, homeres, expect, lunch, expel, arrest, droptht7, hisp, black, and math7 predict the continuous mediator math10, and the covariates together with math10 predict the binary outcome hsdrop.]

Input For A Path Analysis With A Binary Outcome And A Continuous Mediator With Missing Data Using Monte Carlo Integration

TITLE:     Path analysis with a binary outcome and a continuous
           mediator with missing data using Monte Carlo integration
DATA:      FILE = lsaydropout.dat;
VARIABLE:  NAMES ARE female mothed homeres math7 math10 expel
           arrest hisp black hsdrop expect lunch droptht7;
           MISSING = ALL(9999);
           CATEGORICAL = hsdrop;
ANALYSIS:  ESTIMATOR = ML;
           INTEGRATION = MONTECARLO(500);
MODEL:     hsdrop ON female mothed homeres expect math7 math10
           lunch expel arrest droptht7 hisp black;
           math10 ON female mothed homeres expect math7
           lunch expel arrest droptht7 hisp black;
OUTPUT:    PATTERNS STANDARDIZED TECH1 TECH8;

Output Excerpts Path Analysis With A Binary Outcome And A Continuous Mediator With Missing Data Using Monte Carlo Integration

MISSING DATA PATTERNS FOR Y

                 1    2
    MATH10       x

  Covariates: FEMALE MOTHED HOMERES MATH7 EXPEL ARREST HISP BLACK EXPECT LUNCH DROPTHT7

MISSING DATA PATTERN FREQUENCIES FOR Y

    Pattern    Frequency
          1         1639
          2          574

Output Excerpts Path Analysis With A Binary Outcome And A Continuous Mediator With Missing Data Using Monte Carlo Integration (Continued)

Tests Of Model Fit

  Loglikelihood
    H0 Value                                    -6323.175

  Information Criteria
    Number of Free Parameters                          26
    Akaike (AIC)                                 12698.350
    Bayesian (BIC)                               12846.604
    Sample-Size Adjusted BIC                     12763.999
      (n* = (n + 2) / 24)

Model Results

                   Estimates      S.E.   Est./S.E.       Std     StdYX
 HSDROP   ON
    FEMALE             0.336     0.167      2.012       0.336     0.080
    MOTHED            -0.244     0.101     -2.412      -0.244    -0.117
    HOMERES           -0.091     0.054     -1.699      -0.091    -0.072
    EXPECT            -0.225     0.063     -3.593      -0.225    -0.147
    MATH7             -0.012     0.015     -0.813      -0.012    -0.058
    MATH10            -0.031     0.011     -2.816      -0.031    -0.200
    LUNCH              0.005     0.004      1.456       0.005     0.053
    EXPEL              1.010     0.216      4.669       1.010     0.129
    ARREST             0.033     0.314      0.105       0.033     0.003
    DROPTHT7           0.679     0.272      2.499       0.679     0.067
    HISP              -0.145     0.265     -0.548      -0.145    -0.019
    BLACK              0.038     0.234      0.163       0.038     0.006
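As a check (not part of the original output), the information criteria can be recovered from the loglikelihood and the number of free parameters; the total sample size n = 2213 used here is an assumption implied by the missing data pattern frequencies above.

# Sketch: recomputing AIC, BIC, and sample-size adjusted BIC from the output.
# n = 2213 is assumed, not printed in the excerpt.
import numpy as np

loglik, p, n = -6323.175, 26, 2213

aic = -2 * loglik + 2 * p
bic = -2 * loglik + np.log(n) * p
adj_bic = -2 * loglik + np.log((n + 2) / 24) * p   # sample-size adjusted BIC

print(aic, bic, adj_bic)   # about 12698.35, 12846.60, 12764.00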

Output Excerpts Path Analysis With A Binary Outcome And A Continuous Mediator With Missing Data Using Monte Carlo Integration (Continued)

                   Estimates      S.E.   Est./S.E.       Std     StdYX
 MATH10   ON
    FEMALE            -0.973     0.410     -2.372      -0.973    -0.036
    MOTHED             0.343     0.219      1.570       0.343     0.026
    HOMERES            0.486     0.140      3.485       0.486     0.059
    EXPECT             1.014     0.166      6.111       1.014     0.103
    MATH7              0.928     0.023     39.509       0.928     0.687
    LUNCH             -0.039     0.011     -3.450      -0.039    -0.059
    EXPEL             -1.404     0.851     -1.650      -1.404    -0.028
    ARREST            -3.337     1.093     -3.052      -3.337    -0.052
    DROPTHT7          -1.077     1.070     -1.007      -1.077    -0.016
    HISP              -0.644     0.744     -0.866      -0.644    -0.013
    BLACK             -0.809     0.694     -1.165      -0.809    -0.019

 Intercepts
    MATH10            10.940     1.269      8.621      10.940     0.809

 Thresholds
    HSDROP$1          -1.207     0.520     -2.319

 Residual Variances
    MATH10            65.128     2.280     28.571      65.128     0.356

 R-Square
    Observed Variable    R-Square
    HSDROP                  0.255
    MATH10                  0.644

Path Analysis Of Occupational Destination

[Figure 3: Structural modeling of the occupational destination of scientist or engineer, Model 1, relating V and X to the latent response variables E*, S*, and F* underlying the observed E, S, and F.]

Reference: Xie (1989)
Data source: 1962 OCG Survey. The sample size is 14,401.

V: Father's Education
X: Father's Occupation (SEI)

Path Analysis Of Occupational Destination (Continued)

Table 2. Descriptive Statistics Of Discrete Dependent Variables

  Variable                  Code   Meaning                        Percent
  S: Current Occupation      0     Non-scientific/engineering       96.4
                             1     Scientific/engineering            3.6
  F: First Job               0     Non-scientific/engineering       98.3
                             1     Scientific/engineering            1.7
  E: Education               0     0-7 years                        13.4
                             1     8-11 years                       32.6
                             2     12 years                         29.0
                             3     13 and more years                25.0

Differences Between Weighted Least Squares And Maximum Likelihood Model Estimation For Categorical Outcomes In Mplus

Probit versus logistic regression
  Weighted least squares estimates probit regressions
  Maximum likelihood estimates logistic or probit regressions

Modeling with underlying continuous variables versus observed categorical variables for categorical outcomes that are mediating variables
  Weighted least squares uses underlying continuous variables
  Maximum likelihood uses observed categorical outcomes

Differences Between Weighted Least Squares And Maximum Likelihood Model Estimation For Categorical Outcomes In Mplus (Continued)

Delta versus Theta parameterization for weighted least squares
  Equivalent in most cases
  Theta parameterization needed for models where categorical outcomes are predicted by categorical dependent variables while also predicting other dependent variables

Missing data
  Weighted least squares allows missingness predicted by covariates
  Maximum likelihood allows MAR

Testing of nested models
  WLSMV uses DIFFTEST
  Maximum likelihood (ML, MLR) uses regular or special approaches

Further Readings On Path Analysis With Categorical Outcomes

MacKinnon, D.P., Lockwood, C.M., Brown, C.H., Wang, W., & Hoffman, J.M. (2007). The intermediate endpoint effect in logistic and probit regression. Clinical Trials, 4, 499-513.

Xie, Y. (1989). Structural equation models for ordinal variables. Sociological Methods & Research, 17, 325-352.

Categorical Observed And Continuous Latent Variables