Random Effects. Edps/Psych/Stat 587. Carolyn J. Anderson. Fall 2012. Department of Educational Psychology, University of Illinois at Urbana-Champaign.
1 Random Effects Edps/Psych/Stat 587 Carolyn J. Anderson Department of Educational Psychology, University of Illinois at Urbana-Champaign Fall 2012
2 Outline Introduction. Empirical Bayes inference. Henderson's mixed-model equations. BLUP: Best Linear Unbiased Prediction. Shrinkage. The normality assumption for random effects. SAS/MIXED. Reading: Snijders & Bosker. Further reference: Verbeke & Molenberghs (Chapter 7) and references therein. C.J. Anderson (Illinois) Random Effects Fall 2012 / 68
3 Introduction Why get estimates of the random effects, Û_j? To see how much groups (macro units) deviate from the average regression. To detect outlying groups. To predict group-specific outcomes. In education one might be tempted to use these for: selecting a specific school for your child to attend (see Snijders & Bosker); holding schools/teachers/students accountable.
4 Empirical Bayes Inference Bayes Theorem. Empirical Bayes Estimates. Example.
5 Bayes Theorem Based on probability theory, for two discrete events A and B: joint probability = (conditional prob)(marginal prob), i.e.,
P(A and B) = P(A|B)P(B) = P(B|A)P(A).
Bayes' Theorem:
P(A|B) = P(B|A)P(A) / P(B), or P(B|A) = P(A|B)P(B) / P(A).
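The identities above can be checked numerically. The probabilities in this sketch are made-up illustration values, not anything from the course data:

```python
# Numerical check of Bayes' theorem for two discrete events.
# All probabilities below are made-up illustration values.
p_b = 0.30           # P(B)
p_a_given_b = 0.80   # P(A | B)
p_a_given_notb = 0.10

# Joint probability and marginal via the law of total probability
p_ab = p_a_given_b * p_b                  # P(A and B) = P(A|B)P(B)
p_a = p_ab + p_a_given_notb * (1 - p_b)   # P(A)

# Bayes' theorem: P(B|A) = P(A|B)P(B) / P(A)
p_b_given_a = p_ab / p_a
print(round(p_b_given_a, 4))
```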
6 Bayes Theorem: Continuous Variables Replace probabilities (i.e., P(.)) by probability density functions, f(.). For two continuous variables x and y,
f(x, y) = f(x|y)f(y) = f(y|x)f(x),
and Bayes' Theorem becomes
f(x|y) = f(y|x)f(x) / f(y), or f(y|x) = f(x|y)f(y) / f(x).
7 Bayes Theorem for Vectors We have vectors of continuous random variables: y_j, the observations/measures on the response variable from individuals in group j, and U_j, our random effects for group j. The relationships that hold for two single variables also hold for sets of random variables:
f(y_j, U_j) = f(y_j|U_j)f(U_j) = f(U_j|y_j)f(y_j)
and
f(U_j|y_j) = f(y_j|U_j)f(U_j) / f(y_j).
8 Empirical Bayes Estimates In
f(U_j|y_j) = f(y_j|U_j)f(U_j) / f(y_j),
f(U_j) is the prior distribution of the random effects; this distribution is N(0, T). f(y_j) is the marginal distribution of the response variable,
Y_j ~ N(X_j Γ, Z_j T Z_j' + σ² I).
9 Empirical Bayes Estimates (continued) f(y_j|U_j) is the conditional distribution of the response variable given the random effects:
f(y_j|U_j) is N(X_j Γ + Z_j U_j, σ² I).
f(U_j|y_j) is the posterior distribution of the random effects; it is the distribution of the random effects conditional on the data (our observations on the response variable and our estimated parameters, Γ̂ and T̂).
10 Empirical Bayes Estimates (continued)
f(U_j|y_j) = f(y_j|U_j)f(U_j) / f(y_j).
Replacing everything on the right-hand side of this equation gives the distribution f(U_j|y_j), which is multivariate normal. This is what we need to get estimates/predictions of U_j.
11 Estimating U_j One way: estimate the mean of the posterior distribution f(U_j|y_j). By the definition of the mean (with some algebra and calculus thrown in),
Û_j(Γ,T) = E(U_j | Y_j = y_j) = ∫ U_j f(U_j|y_j) dU_j = T Z_j' V_j⁻¹ (y_j − X_j Γ).
12 Estimating U_j (continued)
Û_j(Γ,T) = ∫ U_j f(U_j|y_j) dU_j = T Z_j' V_j⁻¹ (y_j − X_j Γ).
The integration is over all possible values of U_j (like a sum). V_j = Z_j T Z_j' + σ² I. (y_j − X_j Γ) = (y_j − ŷ_j). The estimate of U_j depends on Γ and T, as well as on y_j.
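The formula above can be sketched numerically. For a random-intercept model, Û_j = T Z_j' V_j⁻¹ (y_j − X_j Γ) collapses to a shrinkage factor times the group's mean residual. This is a minimal Python illustration with made-up values of τ0², σ², and γ00 (it is not the course's SAS workflow):

```python
import numpy as np

# Sketch of U_hat_j = T Z_j' V_j^{-1} (y_j - X_j Gamma) for a
# random-intercept model; all numeric values are made-up assumptions.
rng = np.random.default_rng(587)
n_j, tau2, sigma2, gamma0 = 8, 4.0, 9.0, 12.0

u_j = rng.normal(0.0, np.sqrt(tau2))                    # true random intercept
y_j = gamma0 + u_j + rng.normal(0.0, np.sqrt(sigma2), n_j)

Z = np.ones((n_j, 1))                                   # random-intercept design
V = tau2 * (Z @ Z.T) + sigma2 * np.eye(n_j)             # V_j = Z T Z' + sigma^2 I
resid = y_j - gamma0                                    # y_j - X_j Gamma

u_hat = (tau2 * Z.T @ np.linalg.solve(V, resid)).item() # matrix formula

# For a random intercept this collapses to a shrunken group-mean residual:
lam = tau2 / (tau2 + sigma2 / n_j)
u_hat_scalar = lam * resid.mean()
print(abs(u_hat - u_hat_scalar) < 1e-10)   # the two routes agree
```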
13 Estimating the Covariance Matrix for Û_j The covariance matrix for Û_j(Γ,T) is
var(Û_j) = T Z_j' [ V_j⁻¹ − V_j⁻¹ X_j (Σ_j X_j' V_j⁻¹ X_j)⁻¹ X_j' V_j⁻¹ ] Z_j T.
This covariance matrix isn't good for statistical inference about (Û_j(Γ,T) − U_j), because it underestimates the variability of (Û_j(Γ,T) − U_j): it ignores the variability of U_j. For statistical inference, the variance of (Û_j(Γ,T) − U_j) is used,
var(Û_j − U_j) = T − var(Û_j).
When Γ and T are unknown and we use estimates of them, the estimate of U_j is known as the Empirical Bayes (EB) estimate, Û_j.
14 Example: HSB A simple model:
(math)_ij = γ00 + γ10(cses)_ij + U_0j + R_ij.
SAS/MIXED input:
PROC MIXED data=hsbcent noclprint covtest method=ml ic;
  CLASS id;
  MODEL mathach = cses / solution;
  RANDOM intercept / subject=id type=un solution cl alpha=.05;
  ODS output SolutionR=RanUs;
RUN;
15 SAS/MIXED Notes The solution option in the RANDOM statement tells SAS to compute the Û_j's. The ODS command ("Output Delivery System") replaces the MAKE command in earlier versions of SAS (MAKE can still be used). SolutionR is the ODS table name that contains the random-effect solution vectors. RanUs is the name of the SAS data set that receives the contents of that table. For other ODS table names, see the SAS 9.1 PROC MIXED online documentation.
16 SAS/MIXED: Output Convergence criteria met.
Covariance Parameter Estimates: for UN(1,1) (subject id) and the Residual, the table gives the Estimate, Standard Error, Z Value, and Pr Z; both are significant at p < .0001.
Solution for Fixed Effects: for the Intercept and cses, the table gives the Estimate, Standard Error, DF, t Value, and Pr > |t|; both are significant at p < .0001.
17 Solution for Random Effects For each school's intercept, the table gives the Estimate, Std Err Pred, DF, t Value, and Pr > |t|, along with Alpha and the Lower and Upper confidence limits.
18 Solution for Random Effects The Estimates are the empirical Bayes estimates. The Std Err Pred are the standard errors of (Û_j(Γ,T) − U_j), which are useful for statistical inference. The t Value is for testing H0: U_j = 0 versus Ha: U_j ≠ 0. To understand what's in the SAS/MIXED manual, you need to know about Henderson's mixed model and BLUP... But first let's look at the Û_0j's graphically.
19 Empirical Distribution of the Û_0j's
20 Û_0j's versus Schools, with 95% CL. Schools ordered according to Û_0j.
21 Effect of Micro Sample Size on Û_j: N = 160, n_j = 2, 5, 10, 100
22 Effect of Micro Sample Size on Û_j: N = 160, n_j = 2, 5, 10, 100
23 Effect of Macro Sample Size on Û_j: n_j = 10, N = 20, 50, 100, 500
24 Effect of Macro Sample Size on Û_j: n_j = 10, N = 20, 50, 100, 500
25 Effect of Macro Sample Size on γ̂: n_j = 10, N = 20, 50, 100, 500. For each macro sample size (N = 20, 50, 100, 500), the table gives the Estimate and Standard Error for the Intercept and x.
26 Effect of Micro Sample Size on γ̂: N = 160, n_j = 2, 5, 10, 100. For each micro sample size (n_j = 2, 5, 10, 100), the table gives the Estimate and Standard Error for the Intercept and x.
27 Summary: Effect of N and n_j on Û_j & γ̂
Effect on the shape of the distribution of Û_j: increasing n_j (micro) doesn't have much of an effect; increasing N (macro) leads to a more normal-looking distribution.
Effect on confidence limits for Û_j (i.e., standard errors): increasing n_j (micro) leads to smaller standard errors of Û_j; increasing N (macro) doesn't change the standard errors of Û_j.
Effect on the parameter estimates (i.e., the γ's): none.
Effect on the standard errors of the parameter estimates: increase N or n_j and the standard errors get smaller. Why? Is there a differential effect?
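The dependence of the standard errors of Û_j on n_j (and not on N) can be sketched with the random-intercept prediction variance. Ignoring the uncertainty in Γ, var(Û_0j − U_0j) = (1 − λ_j) τ0² with λ_j = τ0² / (τ0² + σ²/n_j). The variance-component values below are made-up assumptions:

```python
# Prediction variance of U_hat_0j - U_0j in a random-intercept model,
# ignoring uncertainty in Gamma: (1 - lambda_j) * tau0^2, where
# lambda_j = tau0^2 / (tau0^2 + sigma^2 / n_j). Values are illustrative.
tau0_2, sigma2 = 4.0, 9.0

def pred_var(n_j, tau2=tau0_2, s2=sigma2):
    lam = tau2 / (tau2 + s2 / n_j)
    return (1 - lam) * tau2

for n_j in (2, 5, 10, 100):
    print(n_j, round(pred_var(n_j), 3))
# The prediction variance (hence the CI width for U_0j) shrinks as the
# within-group sample size n_j grows; the number of groups N never enters.
```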
28 Effect of N and n_j on the s.e.'s of the γ's For each combination of N (macro) and n_j (micro), the table lists the total sample size n_+ and the standard errors of the Intercept and x estimates.
29 Henderson's Mixed-Model Equations The equation
Û_j(Γ,T) = T Z_j' V_j⁻¹ (y_j − X_j Γ)
was also derived by Henderson, who used a system of linear equations rather than Bayes Theorem. He basically stacks the problem.
30 Henderson's Mixed-Model Equations Starting with the linear mixed model,
y = XΓ + ZU + R,
where y stacks y_1, ..., y_N, X stacks X_1, ..., X_N, Z = diag(Z_1, ..., Z_N) is block-diagonal, U stacks U_1, ..., U_N, and R stacks R_1, ..., R_N. Given estimates of the variance components, estimates of Γ and U can be obtained by solving the mixed-model equations for Γ and U:
[ X'Σ⁻¹X    X'Σ⁻¹Z      ] [ Γ ]   [ X'Σ⁻¹y ]
[ Z'Σ⁻¹X    Z'Σ⁻¹Z + T⁻¹ ] [ U ] = [ Z'Σ⁻¹y ]
31 Henderson's Mixed-Model Equations Note that the within and between covariance matrices are block-diagonal:
Σ = diag(σ²I, σ²I, ..., σ²I) (N blocks) and T = diag(T, T, ..., T) (N blocks).
32 Solution to Henderson's Equations
Γ̂ = (X'V̂⁻¹X)⁻¹ X'V̂⁻¹y,
which we've seen before, and
Û = T Z'V̂⁻¹ (y − XΓ̂).
Using Henderson's approach is fine so long as you don't have large data sets (otherwise it's computationally difficult). The SAS/MIXED documentation reports that Henderson's estimates are used; these are the same as the EB ones. In practice, the more direct approach is more efficient (i.e., the equation for Û_j given at the beginning of this section under EB, and the equations previously given for Γ̂ and T̂ in the notes on estimation).
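The equivalence claimed above can be checked numerically: solving Henderson's stacked system reproduces the GLS Γ̂ and the EB/BLUP Û. This is a small Python sketch with made-up dimensions and variance components (treated as known):

```python
import numpy as np

# Henderson's mixed-model equations vs. the direct GLS / EB formulas,
# for a random-intercept model. All numeric settings are illustrative.
rng = np.random.default_rng(42)
N, n = 3, 4                       # groups, observations per group
sigma2, tau2 = 2.0, 1.5           # variance components, assumed known

X = np.column_stack([np.ones(N * n), rng.normal(size=N * n)])
Z = np.kron(np.eye(N), np.ones((n, 1)))       # block-diagonal 1-vectors
U_true = rng.normal(0, np.sqrt(tau2), N)
y = X @ np.array([10.0, 2.0]) + Z @ U_true + rng.normal(0, np.sqrt(sigma2), N * n)

Sigma_inv = np.eye(N * n) / sigma2
T_inv = np.eye(N) / tau2

# Henderson's stacked system of equations
A = np.block([[X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ Z],
              [Z.T @ Sigma_inv @ X, Z.T @ Sigma_inv @ Z + T_inv]])
b = np.concatenate([X.T @ Sigma_inv @ y, Z.T @ Sigma_inv @ y])
sol = np.linalg.solve(A, b)
gamma_h, u_h = sol[:2], sol[2:]

# Direct route: GLS for Gamma, then U_hat = T Z' V^{-1} (y - X Gamma_hat)
V = tau2 * Z @ Z.T + sigma2 * np.eye(N * n)
Vinv = np.linalg.inv(V)
gamma_d = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
u_d = tau2 * Z.T @ Vinv @ (y - X @ gamma_d)

print(np.allclose(gamma_h, gamma_d), np.allclose(u_h, u_d))
```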
33 BLUP: Best Linear Unbiased Prediction "Best" means a predictor or estimator that is unbiased and has the smallest variance among all possible unbiased estimators (of a particular form). If the variance components are known, then the Bayes predictions of U_j are the Best Linear Unbiased Predictors, or BLUP. But the variance components are not known...
34 BLUP: Best Linear Unbiased Prediction Since estimates of the variance components (T̂) are used to estimate or predict Y_j, i.e.,
Ŷ_j = XΓ̂ + ZÛ_j,
the EB estimates of U_j are Empirical Best Linear Unbiased Predictors, or EBLUP. This is related to what follows and to our interpretation of the EB estimates of U_j.
35 Shrinkage 1. Simple model: null/empty. 2. Example: HSB random intercept model with (cses)_ij. 3. Complex/general model. 4. Example 2: a more complex model.
36 Shrinkage: Null/Empty Model
Y_ij = β_0j + R_ij = γ00 + U_0j + R_ij.
Using information from group j only, the OLS estimate of β_0j is the group mean:
β̂_0j = (1/n_j) Σ_{i=1}^{n_j} Y_ij = Ȳ.j.
37 Shrinkage: Null Model (continued) If we used information from all the groups, we could estimate β_0j as the mean over populations (i.e., γ00); that is,
γ̂00 = (1/n_+) Σ_{j=1}^{M} Σ_{i=1}^{n_j} Y_ij = Σ_{j=1}^{M} (n_j/n_+) Ȳ.j = Ȳ..
So we can estimate β_0j using: group information; information from all groups (the population); or a combination of the two.
38 Optimal Estimator of β_0j The optimal (linear) combination (BLUP) is the empirical Bayes estimator of β_0j. It is a weighted average:
β̂_0j^EB = λ_j β̂_0j + (1 − λ_j) γ̂00
        = [τ0² / (τ0² + σ²/n_j)] Ȳ.j + [σ²/n_j / (τ0² + σ²/n_j)] Ȳ..
        = [1 − σ²/n_j / (τ0² + σ²/n_j)] Ȳ.j + [σ²/n_j / (τ0² + σ²/n_j)] Ȳ..
39 Optimal Estimator of β_0j (continued)
β̂_0j^EB = λ_j β̂_0j + (1 − λ_j) γ̂00 = [1 − σ²/n_j / (τ0² + σ²/n_j)] Ȳ.j + [σ²/n_j / (τ0² + σ²/n_j)] Ȳ..
The weights are both less than 1, so the EB estimate will be closer to the overall mean than the OLS estimator of β_0j. Consider the extreme cases: τ0² = 0 and σ² = 0. This phenomenon is known as shrinkage. The observed data are shrunken toward the prior average (i.e., X_j Γ), since the prior mean of the random effects is 0.
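The weighted average above can be sketched with a few lines of arithmetic. The τ0² value echoes the HSB slide later in this section; σ², n_j, the group mean, and the grand mean are made-up assumptions for illustration:

```python
# Empirical Bayes shrinkage of a group mean toward the grand mean in the
# null/empty model. sigma2, n_j, ybar_j, and ybar are made-up values.
tau0_2, sigma2 = 8.61, 36.0
n_j = 14
ybar_j, ybar = 9.0, 12.75     # group mean and grand mean (illustrative)

lam = tau0_2 / (tau0_2 + sigma2 / n_j)      # weight on the group's own data
beta_eb = lam * ybar_j + (1 - lam) * ybar   # shrunken estimate of beta_0j
print(round(lam, 2), round(beta_eb, 2))
# beta_eb lands between the group mean and the grand mean: shrinkage.
```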
40 HSB Example of Shrinkage Random intercept model with (cses)_ij. Consider school 8367, where n_j = 14. The weight for the group's own data is
λ_j = τ̂0² / (τ̂0² + σ̂²/n_j) = .76.
The weight for the overall average regression is 1 − λ_j = .23.
41 HSB Example of Shrinkage (continued)
42 HSB Example of Shrinkage (continued) The observed data are shrunken toward the prior average (i.e., X_j Γ), where the prior mean of the random effects is 0. The variance of the EB estimates is less than (or equal to) that of the data. See the SAS/INSIGHT output on the next slide.
43 HSB: Estimate = Û_0j, where τ̂0² = 8.61
44 Shrinkage for More General/Complex Models Shrinkage also occurs in more complex models. Instead of developing this in terms of β̂_0j and β̂_0j^EB, we can do it in terms of predicted values of Y_ij, i.e., Ŷ_ij. Because I had to show it to myself...
45 Shrinkage for More General/Complex Models
Ŷ_j = X_j Γ̂ + Z_j Û_j
    = X_j Γ̂ + Z_j (T Z_j' V_j⁻¹)(y_j − X_j Γ̂)
    = X_j Γ̂ − Z_j T Z_j' V_j⁻¹ X_j Γ̂ + Z_j T Z_j' V_j⁻¹ y_j
    = (I_nj − Z_j T Z_j' V_j⁻¹) X_j Γ̂ + Z_j T Z_j' V_j⁻¹ y_j
    = (I_nj − (V_j − σ² I_nj) V_j⁻¹) X_j Γ̂ + (V_j − σ² I_nj) V_j⁻¹ y_j
    = (I_nj − I_nj + σ² V_j⁻¹) X_j Γ̂ + (I_nj − σ² V_j⁻¹) y_j
    = (σ² V_j⁻¹) X_j Γ̂ + (I_nj − σ² V_j⁻¹) y_j
46 English Translation
Ŷ_j = (σ² V_j⁻¹) X_j Γ̂ + (I_nj − σ² V_j⁻¹) y_j.
Predictions of Y_ij are weighted combinations of: the overall/average population regression (i.e., X_j Γ̂), and the data from group j (i.e., y_j). Recall that the covariance matrix for Y_j is
V_j = Z_j T Z_j' + σ² I_nj.
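The identity above can be verified numerically for a single group: the direct prediction X_j Γ̂ + Z_j Û_j equals the weighted combination (σ² V_j⁻¹) X_j Γ̂ + (I − σ² V_j⁻¹) y_j. A Python sketch with made-up dimensions, T, and σ²:

```python
import numpy as np

# Check: Y_hat_j = X_j Gamma_hat + Z_j U_hat_j
#                = (sigma^2 V_j^{-1}) X_j Gamma_hat + (I - sigma^2 V_j^{-1}) y_j
# One group, random intercept + random slope; all values are illustrative.
rng = np.random.default_rng(7)
n_j, sigma2 = 6, 2.0
T = np.array([[1.5, 0.3],
              [0.3, 0.8]])                      # 2x2 random-effects covariance
X = np.column_stack([np.ones(n_j), rng.normal(size=n_j)])
Z = X.copy()                                    # random intercept and slope
gamma = np.array([10.0, 2.0])
y = X @ gamma + rng.normal(size=n_j)

V = Z @ T @ Z.T + sigma2 * np.eye(n_j)          # V_j = Z T Z' + sigma^2 I
Vinv = np.linalg.inv(V)

u_hat = T @ Z.T @ Vinv @ (y - X @ gamma)        # EB / BLUP for this group
yhat_direct = X @ gamma + Z @ u_hat
yhat_weighted = sigma2 * Vinv @ (X @ gamma) + (np.eye(n_j) - sigma2 * Vinv) @ y
print(np.allclose(yhat_direct, yhat_weighted))
```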
47 Weights for the Extreme Case T = 0 Then V_j = σ² I_nj and V_j⁻¹ = (1/σ²) I_nj. The weight for the overall average regression is
(σ² V_j⁻¹) = (σ² (1/σ²) I_nj) = I_nj,
and the weight for the group j data is
(I_nj − σ² V_j⁻¹) = (I_nj − I_nj) = 0.
The predicted value of the response variable is
Ŷ_j = (σ² V_j⁻¹) X_j Γ̂ + (I_nj − σ² V_j⁻¹) y_j = X_j Γ̂.
48 Weights for the Extreme Case σ² = 0 Then V_j = Z_j T Z_j' + σ² I_nj = Z_j T Z_j'. The weight for the overall average regression is
(σ² V_j⁻¹) = 0,
and the weight for the group j data is
(I_nj − σ² V_j⁻¹) = I_nj.
The predicted value of the response variable is
Ŷ_j = (σ² V_j⁻¹) X_j Γ̂ + (I_nj − σ² V_j⁻¹) y_j = y_j.
49 Comments on the Complex Case Generally, the covariance matrix for the data is in between the two extremes, and the predictions are shrunken toward the prior average regression (X_j Γ). This also implies that for any linear combination (say, a vector L),
var(L'Û_j) ≤ var(L'U_j).
50 Shrinkage Example with a Complex Model HSB: the linear mixed model is
(math)_ij = γ00 + γ10(cses)_ij + γ20(female)_ij + γ30(minority)_ij + γ01(sector)_j + γ02(size)_j + γ03(SES)_j + γ11(sector)_j(cses)_ij + γ22(size)_j(female)_ij + γ23(SES)_j(female)_ij + γ31(sector)_j(minority)_ij + U_0j + U_1j(female)_ij + U_2j(minority)_ij + R_ij.
51 Shrinkage Example with a Complex Model The Mixed Procedure, Model Information: Data Set WORK.HSBCENT; Dependent Variable mathach; Covariance Structure Unstructured; Subject Effect id; Estimation Method ML; Residual Variance Method Profile; Fixed Effects SE Method Model-Based; Degrees of Freedom Method Containment. Convergence criteria met.
52 Covariance Parameter Estimates For each covariance parameter (UN(1,1), UN(2,1), UN(2,2), UN(3,1), UN(3,2), UN(3,3), and the Residual; subject id), the table gives the Estimate, Standard Error, Z Value, and Pr Z; UN(1,1) and the Residual are significant at p < .0001.
53 Covariance Parameter Estimates (continued) Estimates from MIXED and the computed variances and covariances (PROC CORR) of the EB Û_j: for each parameter (intercept,intercept; female,intercept; female,female; minority,intercept; minority,female; minority,minority; Residual σ²), the table compares the PROC MIXED estimate with the PROC CORR value. The covariances from PROC CORR are smaller, which indicates shrinkage.
54 Weight Matrices: School 8367 The lower triangle shows σ̂² V̂_j⁻¹ and the upper triangle shows I − σ̂² V̂_j⁻¹ (diagonal entries marked with *). Is the overall average regression being given more weight?
55 Normality Assumption: Random Effects One use of the Û_j is to examine the assumption of normality for the random effects. Problem: even when the linear mixed model is correctly specified, the distributions of the Û_j are all different unless all groups have the same X_j and Z_j. Solution: standardize the Û_j's,
Û_j* = Û_j / s.e._j,
and then examine them for normality.
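The standardization step is just an element-wise division of each EB estimate by its standard error. A tiny sketch with made-up estimates and standard errors for a handful of groups:

```python
# Standardizing EB estimates before checking normality: U*_j = U_hat_j / se_j.
# The u_hat and se values below are made-up illustration numbers.
u_hat = [2.1, -0.7, 3.9, -2.4, 0.3]
se    = [1.1,  0.9, 1.3,  1.0, 0.8]

u_std = [u / s for u, s in zip(u_hat, se)]
print([round(v, 2) for v in u_std])
# The standardized values are on a common scale, so a single normal
# quantile plot or histogram of them is meaningful.
```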
56 HSB: stdu = Û 0j /Ŝ.E. j w/ (cses ij ) C.J. Anderson (Illinois) Random Effects Fall / 68
57 Normality: the Problem of Shrinkage Problem: due to shrinkage, the Û_j's show less variability than is actually present in the population of random effects. Plots of Û_j and Û_j* do not necessarily reflect the actual distribution of the random effects. The EB estimates of U_j are very dependent on their assumed prior distribution.
58 Impact of Non-normality How sensitive are the Û_j to the normality assumption? A study by Verbeke & Molenberghs, which I ran (out of curiosity and to show you too): 1. Simulate samples from a population with 1000 macro units, each with 5 observations (i.e., n_j = 5), where the random effects follow a mixture of two normal distributions,
U_0j ~ (1/2) N(−2, 1) + (1/2) N(2, 1).
59 Simulated Distribution for U_0j
60 Impact of Non-normality (continued) 2. Using the simulated U_0j's, simulate y_ij's according to y_ij = U_0j + R_ij with R_ij ~ N(0, σ²), where I used σ² = 1, 9, 25. 3. Fit a random intercept model to the simulated data. 4. Examine the resulting distribution of the Û_0j's.
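The steps above can be sketched in a few lines. This is not the original study's code; it is an illustration that draws the random intercepts from the stated mixture, forms group means, and applies EB shrinkage with the variance components treated as known (the mixture has mean 0 and variance τ² = 5):

```python
import numpy as np

# Sketch of the Verbeke & Molenberghs exercise: random intercepts from a
# two-component normal mixture, EB estimates computed under the (wrong)
# normality assumption. Variance components are taken as known here.
rng = np.random.default_rng(1)
N, n_j = 1000, 5
tau2 = 1.0 + 4.0   # var of 0.5*N(-2,1) + 0.5*N(2,1): within 1 + between 2^2

comp = rng.random(N) < 0.5
U = np.where(comp, rng.normal(-2, 1, N), rng.normal(2, 1, N))

for sigma2 in (1.0, 9.0, 25.0):
    ybar = U + rng.normal(0, np.sqrt(sigma2 / n_j), N)   # group means of y
    lam = tau2 / (tau2 + sigma2 / n_j)                   # shrinkage weight
    U_hat = lam * ybar                                   # EB, prior mean 0
    print(sigma2, round(lam, 2), round(U_hat.var(), 2))
# As sigma^2 grows, lambda falls, the U_hat are shrunken harder toward 0,
# and the bimodality of the true mixture becomes harder to see.
```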
61 Distributions for Û_0j (Note: τ̂0² ≈ 5)
62 Impact of Non-Normality Both σ² and τ0² influence the shape of the distribution of Û_0j. If σ² is large relative to τ0², then
ρ_I = τ0² / (σ² + τ0²)
is small, and it's difficult to detect sub-groups in the random effects; that is, sub-groups in the population are difficult to recognize. If a model includes random slopes, it will be easier to detect sub-groups in the random effects.
63 Non-Normality & the Marginal Model Based on a number of simulation studies (by others), the wrong distributional assumption for U_j: has little effect on Γ̂ and T̂ (yeah); affects the estimated standard errors of Γ̂ and T̂ (boo); the estimated standard errors for Γ̂ are generally pretty close to the robust ones (yeah); the estimated standard errors for T̂ can be really bad: the uncorrected s.e.'s could be 5 times too large or 3 times too small. But we generally don't use s.e.'s to test whether τ_k² = 0, so this result is not too critical for valid tests of such hypotheses. How would/could you test this hypothesis?
64 Checking the Normality Assumption The EB estimates of U_j depend heavily on the distributional assumption made for them. Best way to check for non-normality? Compare the Û_j obtained assuming normality with those obtained from a model that relaxes the normality assumption. An alternative distributional assumption for the U_j's is a mixture of a number of (multivariate) normals,
U_j ~ Σ_{r=1}^{g} p_r N(µ_r, T), where Σ_r p_r = 1.
Maybe a Gamma distribution (e.g., for skewed distributions such as reaction times)... but this would be for Y_ij.
65 Alternative Distribution for the U_j's A mixture of (multivariate) normals would be found if: there is unobserved heterogeneity in the population (each p_r represents a cluster of the total population); or you have not included a categorical variable that's important. Such a mixture of normals implies a range of possible non-normal distributions for the U_j's. For examples, see Verbeke & Molenberghs (page 90).
66 SAS/MIXED Options Options to produce predictions and estimates of the random effects, as well as a little data manipulation:
PROC MIXED data=hsbcent noclprint covtest method=ml ic;
  CLASS id;
  MODEL mathach = cses / solution outpred=hsbpred outpredm=hsbpm;
  RANDOM intercept / subject=id type=un solution cl;
  ODS output SolutionR=RanUs;
RUN;
67 Data Steps
DATA tmp99;
  SET hsbpred;
  predij = pred;
  residij = residual;
  lowij = lower;
  upij = upper;
  stdij = stderrpred;
  KEEP predij residij lowij upij stdij id;
RUN;
PROC SORT data=tmp99; BY id; RUN;
PROC SORT data=hsbpm; BY id; RUN;
68 More SAS Data Steps & Tricks
DATA predict;
  MERGE tmp99 hsbpm;
  BY id;
RUN;
SAS/INSIGHT (Word file & demonstration). SAS/ASSIST (Word file & demonstration).
Introductory Applied Econometrics EEP/IAS 118 Spring 2015 Steven Buck Lecture #13 Statistical Inference with Regression Analysis Next we turn to calculating confidence intervals and hypothesis testing
More informationApplied Regression Analysis
Applied Regression Analysis Chapter 3 Multiple Linear Regression Hongcheng Li April, 6, 2013 Recall simple linear regression 1 Recall simple linear regression 2 Parameter Estimation 3 Interpretations of
More informationFor more information about how to cite these materials visit
Author(s): Kerby Shedden, Ph.D., 2010 License: Unless otherwise noted, this material is made available under the terms of the Creative Commons Attribution Share Alike 3.0 License: http://creativecommons.org/licenses/by-sa/3.0/
More informationSAS Code for Data Manipulation: SPSS Code for Data Manipulation: STATA Code for Data Manipulation: Psyc 945 Example 1 page 1
Psyc 945 Example page Example : Unconditional Models for Change in Number Match 3 Response Time (complete data, syntax, and output available for SAS, SPSS, and STATA electronically) These data come from
More informationMultiple Linear Regression
Multiple Linear Regression Simple linear regression tries to fit a simple line between two variables Y and X. If X is linearly related to Y this explains some of the variability in Y. In most cases, there
More informationSTAT420 Midterm Exam. University of Illinois Urbana-Champaign October 19 (Friday), :00 4:15p. SOLUTIONS (Yellow)
STAT40 Midterm Exam University of Illinois Urbana-Champaign October 19 (Friday), 018 3:00 4:15p SOLUTIONS (Yellow) Question 1 (15 points) (10 points) 3 (50 points) extra ( points) Total (77 points) Points
More informationLongitudinal Data Analysis Using SAS Paul D. Allison, Ph.D. Upcoming Seminar: October 13-14, 2017, Boston, Massachusetts
Longitudinal Data Analysis Using SAS Paul D. Allison, Ph.D. Upcoming Seminar: October 13-14, 217, Boston, Massachusetts Outline 1. Opportunities and challenges of panel data. a. Data requirements b. Control
More informationSubject-specific observed profiles of log(fev1) vs age First 50 subjects in Six Cities Study
Subject-specific observed profiles of log(fev1) vs age First 50 subjects in Six Cities Study 1.4 0.0-6 7 8 9 10 11 12 13 14 15 16 17 18 19 age Model 1: A simple broken stick model with knot at 14 fit with
More informationy ˆ i = ˆ " T u i ( i th fitted value or i th fit)
1 2 INFERENCE FOR MULTIPLE LINEAR REGRESSION Recall Terminology: p predictors x 1, x 2,, x p Some might be indicator variables for categorical variables) k-1 non-constant terms u 1, u 2,, u k-1 Each u
More informationReview of Statistics 101
Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods
More informationResearch Design: Topic 18 Hierarchical Linear Modeling (Measures within Persons) 2010 R.C. Gardner, Ph.d.
Research Design: Topic 8 Hierarchical Linear Modeling (Measures within Persons) R.C. Gardner, Ph.d. General Rationale, Purpose, and Applications Linear Growth Models HLM can also be used with repeated
More informationHomoskedasticity. Var (u X) = σ 2. (23)
Homoskedasticity How big is the difference between the OLS estimator and the true parameter? To answer this question, we make an additional assumption called homoskedasticity: Var (u X) = σ 2. (23) This
More informationAn Introduction to Multilevel Models. PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 25: December 7, 2012
An Introduction to Multilevel Models PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 25: December 7, 2012 Today s Class Concepts in Longitudinal Modeling Between-Person vs. +Within-Person
More informationThe scatterplot is the basic tool for graphically displaying bivariate quantitative data.
Bivariate Data: Graphical Display The scatterplot is the basic tool for graphically displaying bivariate quantitative data. Example: Some investors think that the performance of the stock market in January
More informationSimple linear regression
Simple linear regression Biometry 755 Spring 2008 Simple linear regression p. 1/40 Overview of regression analysis Evaluate relationship between one or more independent variables (X 1,...,X k ) and a single
More informationStat 579: Generalized Linear Models and Extensions
Stat 579: Generalized Linear Models and Extensions Linear Mixed Models for Longitudinal Data Yan Lu April, 2018, week 15 1 / 38 Data structure t1 t2 tn i 1st subject y 11 y 12 y 1n1 Experimental 2nd subject
More informationMixed Models II - Behind the Scenes Report Revised June 11, 2002 by G. Monette
Mixed Models II - Behind the Scenes Report Revised June 11, 2002 by G. Monette What is a mixed model "really" estimating? Paradox lost - paradox regained - paradox lost again. "Simple example": 4 patients
More informationScatter plot of data from the study. Linear Regression
1 2 Linear Regression Scatter plot of data from the study. Consider a study to relate birthweight to the estriol level of pregnant women. The data is below. i Weight (g / 100) i Weight (g / 100) 1 7 25
More informationNewsom Psy 510/610 Multilevel Regression, Spring
Psy 510/610 Multilevel Regression, Spring 2017 1 Diagnostics Chapter 10 of Snijders and Bosker (2012) provide a nice overview of assumption tests that are available. I will illustrate only one statistical
More informationStatistical Modelling in Stata 5: Linear Models
Statistical Modelling in Stata 5: Linear Models Mark Lunt Arthritis Research UK Epidemiology Unit University of Manchester 07/11/2017 Structure This Week What is a linear model? How good is my model? Does
More informationOutline. Linear OLS Models vs: Linear Marginal Models Linear Conditional Models. Random Intercepts Random Intercepts & Slopes
Lecture 2.1 Basic Linear LDA 1 Outline Linear OLS Models vs: Linear Marginal Models Linear Conditional Models Random Intercepts Random Intercepts & Slopes Cond l & Marginal Connections Empirical Bayes
More informationReview of Statistics
Review of Statistics Topics Descriptive Statistics Mean, Variance Probability Union event, joint event Random Variables Discrete and Continuous Distributions, Moments Two Random Variables Covariance and
More informationLecture 6 Multiple Linear Regression, cont.
Lecture 6 Multiple Linear Regression, cont. BIOST 515 January 22, 2004 BIOST 515, Lecture 6 Testing general linear hypotheses Suppose we are interested in testing linear combinations of the regression
More informationLecture 15 Multiple regression I Chapter 6 Set 2 Least Square Estimation The quadratic form to be minimized is
Lecture 15 Multiple regression I Chapter 6 Set 2 Least Square Estimation The quadratic form to be minimized is Q = (Y i β 0 β 1 X i1 β 2 X i2 β p 1 X i.p 1 ) 2, which in matrix notation is Q = (Y Xβ) (Y
More informationThe Simple Linear Regression Model
The Simple Linear Regression Model Lesson 3 Ryan Safner 1 1 Department of Economics Hood College ECON 480 - Econometrics Fall 2017 Ryan Safner (Hood College) ECON 480 - Lesson 3 Fall 2017 1 / 77 Bivariate
More informationECON The Simple Regression Model
ECON 351 - The Simple Regression Model Maggie Jones 1 / 41 The Simple Regression Model Our starting point will be the simple regression model where we look at the relationship between two variables In
More informationStat 5101 Lecture Notes
Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random
More informationScatter plot of data from the study. Linear Regression
1 2 Linear Regression Scatter plot of data from the study. Consider a study to relate birthweight to the estriol level of pregnant women. The data is below. i Weight (g / 100) i Weight (g / 100) 1 7 25
More informationLecture 4 Multiple linear regression
Lecture 4 Multiple linear regression BIOST 515 January 15, 2004 Outline 1 Motivation for the multiple regression model Multiple regression in matrix notation Least squares estimation of model parameters
More informationTwo-Variable Regression Model: The Problem of Estimation
Two-Variable Regression Model: The Problem of Estimation Introducing the Ordinary Least Squares Estimator Jamie Monogan University of Georgia Intermediate Political Methodology Jamie Monogan (UGA) Two-Variable
More informationmultilevel modeling: concepts, applications and interpretations
multilevel modeling: concepts, applications and interpretations lynne c. messer 27 october 2010 warning social and reproductive / perinatal epidemiologist concepts why context matters multilevel models
More informationSimple Linear Regression
Simple Linear Regression EdPsych 580 C.J. Anderson Fall 2005 Simple Linear Regression p. 1/80 Outline 1. What it is and why it s useful 2. How 3. Statistical Inference 4. Examining assumptions (diagnostics)
More informationBivariate Data: Graphical Display The scatterplot is the basic tool for graphically displaying bivariate quantitative data.
Bivariate Data: Graphical Display The scatterplot is the basic tool for graphically displaying bivariate quantitative data. Example: Some investors think that the performance of the stock market in January
More informationAnswers to Problem Set #4
Answers to Problem Set #4 Problems. Suppose that, from a sample of 63 observations, the least squares estimates and the corresponding estimated variance covariance matrix are given by: bβ bβ 2 bβ 3 = 2
More informationStatistics for exp. medical researchers Regression and Correlation
Faculty of Health Sciences Regression analysis Statistics for exp. medical researchers Regression and Correlation Lene Theil Skovgaard Sept. 28, 2015 Linear regression, Estimation and Testing Confidence
More informationChapter 2 Inferences in Simple Linear Regression
STAT 525 SPRING 2018 Chapter 2 Inferences in Simple Linear Regression Professor Min Zhang Testing for Linear Relationship Term β 1 X i defines linear relationship Will then test H 0 : β 1 = 0 Test requires
More informationLecture 14 Simple Linear Regression
Lecture 4 Simple Linear Regression Ordinary Least Squares (OLS) Consider the following simple linear regression model where, for each unit i, Y i is the dependent variable (response). X i is the independent
More informationProblems. Suppose both models are fitted to the same data. Show that SS Res, A SS Res, B
Simple Linear Regression 35 Problems 1 Consider a set of data (x i, y i ), i =1, 2,,n, and the following two regression models: y i = β 0 + β 1 x i + ε, (i =1, 2,,n), Model A y i = γ 0 + γ 1 x i + γ 2
More informationMultiple Regression Analysis
Multiple Regression Analysis y = β 0 + β 1 x 1 + β 2 x 2 +... β k x k + u 2. Inference 0 Assumptions of the Classical Linear Model (CLM)! So far, we know: 1. The mean and variance of the OLS estimators
More informationBias Variance Trade-off
Bias Variance Trade-off The mean squared error of an estimator MSE(ˆθ) = E([ˆθ θ] 2 ) Can be re-expressed MSE(ˆθ) = Var(ˆθ) + (B(ˆθ) 2 ) MSE = VAR + BIAS 2 Proof MSE(ˆθ) = E((ˆθ θ) 2 ) = E(([ˆθ E(ˆθ)]
More informationBayesian linear regression
Bayesian linear regression Linear regression is the basis of most statistical modeling. The model is Y i = X T i β + ε i, where Y i is the continuous response X i = (X i1,..., X ip ) T is the corresponding
More informationThe general linear regression with k explanatory variables is just an extension of the simple regression as follows
3. Multiple Regression Analysis The general linear regression with k explanatory variables is just an extension of the simple regression as follows (1) y i = β 0 + β 1 x i1 + + β k x ik + u i. Because
More informationSTAT 5200 Handout #23. Repeated Measures Example (Ch. 16)
Motivating Example: Glucose STAT 500 Handout #3 Repeated Measures Example (Ch. 16) An experiment is conducted to evaluate the effects of three diets on the serum glucose levels of human subjects. Twelve
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationCh 2: Simple Linear Regression
Ch 2: Simple Linear Regression 1. Simple Linear Regression Model A simple regression model with a single regressor x is y = β 0 + β 1 x + ɛ, where we assume that the error ɛ is independent random component
More informationy response variable x 1, x 2,, x k -- a set of explanatory variables
11. Multiple Regression and Correlation y response variable x 1, x 2,, x k -- a set of explanatory variables In this chapter, all variables are assumed to be quantitative. Chapters 12-14 show how to incorporate
More informationWeighted Least Squares
Weighted Least Squares The standard linear model assumes that Var(ε i ) = σ 2 for i = 1,..., n. As we have seen, however, there are instances where Var(Y X = x i ) = Var(ε i ) = σ2 w i. Here w 1,..., w
More informationApplied Econometrics (QEM)
Applied Econometrics (QEM) based on Prinicples of Econometrics Jakub Mućk Department of Quantitative Economics Jakub Mućk Applied Econometrics (QEM) Meeting #3 1 / 42 Outline 1 2 3 t-test P-value Linear
More informationDescribing Within-Person Change over Time
Describing Within-Person Change over Time Topics: Multilevel modeling notation and terminology Fixed and random effects of linear time Predicted variances and covariances from random slopes Dependency
More informationRegression: Main Ideas Setting: Quantitative outcome with a quantitative explanatory variable. Example, cont.
TCELL 9/4/205 36-309/749 Experimental Design for Behavioral and Social Sciences Simple Regression Example Male black wheatear birds carry stones to the nest as a form of sexual display. Soler et al. wanted
More informationRegression Models - Introduction
Regression Models - Introduction In regression models there are two types of variables that are studied: A dependent variable, Y, also called response variable. It is modeled as random. An independent
More informationLecture 2. The Simple Linear Regression Model: Matrix Approach
Lecture 2 The Simple Linear Regression Model: Matrix Approach Matrix algebra Matrix representation of simple linear regression model 1 Vectors and Matrices Where it is necessary to consider a distribution
More informationOrdinary Least Squares Regression
Ordinary Least Squares Regression Goals for this unit More on notation and terminology OLS scalar versus matrix derivation Some Preliminaries In this class we will be learning to analyze Cross Section
More informationIntroduction to Linear Regression Rebecca C. Steorts September 15, 2015
Introduction to Linear Regression Rebecca C. Steorts September 15, 2015 Today (Re-)Introduction to linear models and the model space What is linear regression Basic properties of linear regression Using
More informationA Re-Introduction to General Linear Models (GLM)
A Re-Introduction to General Linear Models (GLM) Today s Class: You do know the GLM Estimation (where the numbers in the output come from): From least squares to restricted maximum likelihood (REML) Reviewing
More informationMixed Models for Longitudinal Binary Outcomes. Don Hedeker Department of Public Health Sciences University of Chicago.
Mixed Models for Longitudinal Binary Outcomes Don Hedeker Department of Public Health Sciences University of Chicago hedeker@uchicago.edu https://hedeker-sites.uchicago.edu/ Hedeker, D. (2005). Generalized
More informationApplied Statistics and Econometrics
Applied Statistics and Econometrics Lecture 5 Saul Lach September 2017 Saul Lach () Applied Statistics and Econometrics September 2017 1 / 44 Outline of Lecture 5 Now that we know the sampling distribution
More information