Factorial Within-Subject Design: Full Model and F Tests (R Example)

Y_ijk = μ + α_j + β_k + π_i + (αβ)_jk + (απ)_ji + (βπ)_ki + (αβπ)_jki + ε_ijk

PSYCH 710
Factorial Within-Subject Design — PSYCH 710
Higher-Order Within-Subjects ANOVA, Week 13
Prof. Patrick Bennett

2x3 within-subjects factorial design
- A & B are crossed, fixed factors
- subjects is a random factor
- typically, 1 observation per cell
- not possible to measure within-cell error
- cannot distinguish the contributions of error and the subject x treatment interaction to within-cell variation
- data from each subject are correlated

            A1            A2
            B1  B2  B3    B1  B2  B3
subject 1    n   n   n     n   n   n
subject 2    n   n   n     n   n   n
subject 3    n   n   n     n   n   n
...

Full Model and F Tests

Y_ijk = μ + α_j + β_k + π_i + (αβ)_jk + (απ)_ji + (βπ)_ki + (αβπ)_jki + ε_ijk

The statistical significance of all parameters is evaluated by comparing the SS_residuals obtained with nested models. The error terms for the F tests are slightly more complicated:
- the effect of A is evaluated with the A x Subjects term
- the effect of B is evaluated with the B x Subjects term
- the A x B interaction is evaluated with the A x B x Subjects term

Example: effects of noise (distractors) and stimulus orientation on letter discrimination, a 2 (noise) x 3 (orientation) within-subjects factorial design. Note the order of the columns in the wide data matrix:

> rtdata
   subj absent.a0 absent.a4 absent.a8 present.a0 present.a4 present.a8
   (one row of RTs per subject, s1 through s10)
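The error-term structure described above can also be sketched with base R's aov() and an Error() term (a sketch with simulated data, not the rtdata used in these notes; the factor names A, B, subj and n = 10 subjects are assumptions for illustration):

```r
# Sketch: a 2x3 within-subjects design fit with aov() and an Error() term.
# Data are simulated; the names A, B, and subj are illustrative assumptions.
set.seed(1)
n <- 10
d <- expand.grid(subj = factor(1:n), A = factor(1:2), B = factor(1:3))
d$y <- round(rnorm(nrow(d), mean = 500, sd = 50))

# Error(subj/(A*B)) creates separate error strata, so A is tested against
# subj:A, B against subj:B, and A:B against subj:A:B, as described above.
fit <- aov(y ~ A * B + Error(subj/(A * B)), data = d)
summary(fit)
```

The printed summary lists one ANOVA table per error stratum, which makes the denominator for each F test explicit.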
Extract the dependent variables and store them in the matrix rt.mat; add Gaussian noise to make the ANOVA realistic:

> rt.mat <- as.matrix(rtdata[1:10, 2:7])
> set.seed(509)
> rt.nz <- matrix(data=round(rnorm(n=10*6, sd=100)), nrow=10, ncol=6)
> rt.mat <- rt.mat + rt.nz

Create a multivariate linear model object with no between-subjects variables:

> rt.mlm <- lm(rt.mat ~ 1)

N.B. There are no between-subjects variables, so we have only the intercept in the formula. Note how the order of the factor levels corresponds to the order of the columns in the matrix of dependent variables:

> rt.mat
  absent.a0 absent.a4 absent.a8 present.a0 present.a4 present.a8

Use Anova in the car library. The idata argument takes a data frame that contains the within-subjects factors, and idesign is a one-sided formula that tells R that the factors were crossed (~noise*angle is the same as ~noise+angle+noise:angle):

> library(car)
> rt.aov <- Anova(rt.mlm, idata=rt.idata, idesign=~noise*angle, type="III")
> summary(rt.aov, multivariate=F)

Summary assuming sphericity:

Univariate Type III Repeated-Measures ANOVA Assuming Sphericity

             num Df  den Df  Pr(>F)
(Intercept)    1       9     e-09 ***
noise          1       9          **
angle          2      18          **
noise:angle    2      18          **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
In the Anova command, notice how I specify the within-subjects design with a one-sided formula:

> library(car)
> rt.aov <- Anova(rt.mlm, idata=rt.idata, idesign=~noise*angle, type="III")
> summary(rt.aov, multivariate=F)

The final part of the output lists the Greenhouse-Geisser (G-G) and Huynh-Feldt (H-F) estimates of epsilon for the angle and noise:angle terms (G-G estimates, ε̂, of 0.73 and 0.85; H-F estimates, ε̃, of 0.76 and 0.89), together with the corrected p values. Occasionally ε̃ > 1; in such situations it is standard practice to set ε̃ = 1. Either the G-G or the H-F adjusted p values are acceptable, but I prefer to use the H-F adjustment because it is slightly less conservative. With the H-F adjustment, the noise:angle interaction is significant, F(2, 18) = 7.5, p = 0.007, as are the main effects of angle, F(2, 18) = 7.79, p = 0.005, and noise, F(1, 9) = 5.69.
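The epsilon adjustment itself is simple to reproduce by hand: multiply both degrees of freedom by the epsilon estimate before looking up the F distribution. A sketch (the F and epsilon values below are illustrative, not the exact output above):

```r
# Sketch: computing an epsilon-adjusted p value directly. F.obs and eps are
# illustrative values; df2 = (3 - 1) * (10 - 1) for n = 10 subjects.
F.obs <- 7.5
df1 <- 2      # numerator df for a 3-level within-subjects effect
df2 <- 18     # denominator df
eps <- 0.89   # an epsilon estimate (G-G or H-F style)

p.unadj <- pf(F.obs, df1, df2, lower.tail = FALSE)
p.adj   <- pf(F.obs, eps * df1, eps * df2, lower.tail = FALSE)
c(p.unadj = p.unadj, p.adj = p.adj)  # the adjusted p is the larger of the two
```

Shrinking the degrees of freedom by epsilon always moves the p value upward for F > 1, which is why the correction is conservative.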
Sphericity tests and p adjustments

The Mauchly test of sphericity applies to all within-subjects effects that have more than one degree of freedom; for these data, tests are reported for the angle and noise:angle terms. When the test suggests a departure from sphericity, the Greenhouse-Geisser and Huynh-Feldt corrections adjust the p values.

Strength of association and effect size

Your textbook gives an equation for an R²-like measure of the relative association between the dependent variable and the levels of a within-subject factor. The key difference between that measure and the partial ω̂² defined here is how variation due to subjects is handled: the two indices differ in whether the sum of squares due to subjects is included in the denominator (Keppel & Wickens, 2004; Kirk, 1995). Consequently, the value computed from your textbook's Equation 6 generally will differ from partial ω̂².

Partial ω̂² for each effect can be calculated from an ANOVA table with the formulae

ω̂²⟨A⟩ = (a−1)(F_A − 1) / [ (a−1)(F_A − 1) + nab ]   (3)

ω̂²⟨B⟩ = (b−1)(F_B − 1) / [ (b−1)(F_B − 1) + nab ]   (4)

ω̂²⟨A×B⟩ = (a−1)(b−1)(F_{A×B} − 1) / [ (a−1)(b−1)(F_{A×B} − 1) + nab ]   (5)

where a and b are the numbers of levels of A and B and n is the number of subjects. Cohen's f can then be calculated from ω̂²:

f = sqrt( ω̂² / (1 − ω̂²) )
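The partial ω̂² formulae all share the form df_effect × (F − 1) / (df_effect × (F − 1) + nab), so they can be written as one small helper (a sketch; the function names are my own, not from these notes):

```r
# Sketch: partial omega-squared for any effect in an a x b within-subjects
# design, plus the conversion to Cohen's f. Function names are illustrative.
omega2.partial <- function(F.val, df.effect, n, a, b) {
  num <- df.effect * (F.val - 1)
  num / (num + n * a * b)
}
cohens.f <- function(omega2) sqrt(omega2 / (1 - omega2))

# e.g., an effect with F = 7.79 and 2 df in a 2x3 design with n = 10 subjects:
w <- omega2.partial(F.val = 7.79, df.effect = 2, n = 10, a = 2, b = 3)
cohens.f(w)
```

Note that the formula returns 0 when F = 1, i.e., when the effect mean square equals its error mean square.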
Strength of Association & Effect Size

Table 2: Strength-of-association effect sizes (partial ω̂² and Cohen's f) for the noise, angle, and noise:angle effects in the rt.mat data.

Figure 1: Interaction plot of the data in rt.mat (mean RT as a function of angle for the noise-absent and noise-present conditions).

12.2.3 Simple effects of angle

The noise:angle interaction was significant, so we should examine simple effects. First we plot the data to get an idea of what the interaction might mean; the commands listed below create Figure 1. When analyzing the simple effects of a within-subject factor, we do not recalculate F with the MS_error and df_error from the overall analysis: each simple effect is evaluated with its own error term, from a separate one-way within-subjects ANOVA.

The first part of the summary lists the ANOVA table. The noise:angle interaction is significant, F(2, 18) = 7.5, p = 0.003, as are the main effects of angle, F(2, 18) = 7.79, and noise, F(1, 9) = 5.69. Note that the denominator degrees of freedom differ for the noise and angle tests because different error terms are used to evaluate their significance. The sphericity assumption was introduced in Chapter 11; it applies here, too.
The results in the ANOVA table assume that sphericity is valid, but of course we need to evaluate that assumption before accepting the p values listed in the table. The second part of the Anova output shows the results of the Mauchly test of sphericity. Notice that sphericity tests are done only for the angle and noise:angle terms, not for noise. The reason deviations from sphericity are not examined for noise is that that factor has only two levels and therefore one degree of freedom: sphericity is necessarily valid for F tests that have one degree of freedom in the numerator, and therefore sphericity need not be evaluated for that effect. The Mauchly tests of sphericity are not significant (p > 0.1 in both cases), so we could use the p values listed in the ANOVA table. However, I still prefer to use the corrected p values listed in the final part of the output. Occasionally the Huynh-Feldt estimate of epsilon is greater than 1; in such situations it is standard practice to set ε̃ = 1. Either G-G or H-F adjusted p values are acceptable, but I prefer to use the H-F adjustment because it is slightly less conservative. So, the noise:angle interaction is significant, F(2, 18) = 7.5, p = 0.007, as are the effects of angle, F(2, 18) = 7.79, ε̃ = 0.89, p = 0.005, and noise, F(1, 9) = 5.69.

We use simple effects to decompose an interaction. Unlike between-subjects designs, it is better to use a separate error term for each simple effect. It should be obvious how to change the equations given above to calculate effect sizes for simple effects; the values of ω̂² and f for the data analyzed in this section are listed in Table 2.

> absent.means <- colMeans(rt.mat[1:10, 1:3])
> present.means <- colMeans(rt.mat[1:10, 4:6])
> x.angle <- c(0, 4, 8)
> plot(x=c(0,4,8), absent.means, "b", ylim=c(450,800), ylab="RT", xlab="angle")
> points(x=c(0,4,8), present.means, "b", pch=19)
> legend(x=0, y=750, legend=c("absent","present"), pch=c(1,19))

To analyze the simple effect of angle at each level of noise, first separate the data from the noise-present and noise-absent conditions. The dependent variables are stored in the matrix rt.mat:

> rt.absent <- rt.mat[, 1:3]
> rt.present <- rt.mat[, 4:6]
The syntax rt.mat[, 1:3] is a way of specifying all rows in columns 1 through 3, and is equivalent to rt.mat[1:10, 1:3]. If we wanted to access the data from the data frame rtdata, we could use the following commands:

> rt.absent <- as.matrix(rtdata[, 2:4])
> rt.present <- as.matrix(rtdata[, 5:7])

It appears that the difference between the noise-absent and noise-present conditions increased with stimulus angle. Alternatively, we can say that the effect of angle appears to be larger in the noise-present condition than in the noise-absent condition. However, I will evaluate this idea with formal tests. We added noise to the numbers in rtdata, so I will use the numbers in rt.mat. Next, we conduct a one-way ANOVA to evaluate the effect of angle when noise is present. First, we use lm to create a multivariate lm object:

> ang.present.mlm <- lm(rt.present ~ 1)

Next, we create a data frame that contains one three-level factor named angle:

> angle <- factor(x=c(1,2,3), labels=c("a0","a4","a8"))

In a two-way between-subjects factorial design, simple effects were evaluated by doing one-way ANOVAs but using MS_residuals from the overall analysis as the error term. However, for within-subjects designs it is better to use separate error terms for each analysis. Therefore, evaluating simple effects is essentially identical to conducting a set of one-way within-subjects ANOVAs.
Simple effect of angle (noise present):

> ang.present.mlm <- lm(rt.present ~ 1)
> angle <- factor(x=c(1,2,3), labels=c("a0","a4","a8"))
> angle.idata <- data.frame(angle)   # describes the 3-level within-subject factor

Compute the ANOVA on the multivariate linear model and print the summary:

> ang.present.aov <- Anova(ang.present.mlm, idata=angle.idata, idesign=~angle, type="III")
> summary(ang.present.aov, multivariate=F)

Univariate Type III Repeated-Measures ANOVA Assuming Sphericity

             num Df  den Df  Pr(>F)
(Intercept)    1       9     e-08 ***
angle          2      18          ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The summary also prints the Mauchly test of sphericity and the Greenhouse-Geisser correction. N.B. We do not recalculate F using the MS_error and df_error from the original ANOVA.

Simple effect of angle (noise absent):

> ang.absent.mlm <- lm(rt.absent ~ 1)
> ang.absent.aov <- Anova(ang.absent.mlm, idata=angle.idata, idesign=~angle, type="III")
> summary(ang.absent.aov, multivariate=F)

Linear contrasts use procedures similar to those for a one-way within-subjects ANOVA: use contrast weights to create composite scores, which converts the multivariate analysis to a univariate analysis, then use a t test to evaluate the null hypothesis.
Evaluate the linear trend of RT across angle separately at each level of noise:

> lin.c <- c(-1, 0, 1)                  # contrast weights
> rt.pres.lin <- rt.present %*% lin.c   # composite scores (noise present)
> t.test(rt.pres.lin)

The two-tailed one-sample t test (df = 9) shows that the linear trend is significant when noise is present (mean composite score = 86.8).

> rt.absent.lin <- rt.absent %*% lin.c  # composite scores (noise absent)
> t.test(rt.absent.lin)

The linear trend is not significant when noise is absent (t = 0.37, df = 9).

Evaluate the linear trend of RT across angle on the entire data set, ignoring noise:

> rt.mat
  absent.a0 absent.a4 absent.a8 present.a0 present.a4 present.a8
> lin.c <- c(-1, 0, 1, -1, 0, 1)   # contrast weights (linear trend, ignoring noise)
> rt.lin <- rt.mat %*% lin.c       # composite scores
> t.test(rt.lin)

The linear trend ignoring noise is significant (t = 3.8, df = 9, mean of x = 99.5).
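The weights c(-1, 0, 1) are the standard linear-trend weights for three equally spaced levels; as a side check (not part of the original analysis), they are proportional to the first column of R's orthogonal polynomial contrasts:

```r
# Sketch: contr.poly() returns normalized orthogonal polynomial contrasts.
# Rescaling the linear column by its largest entry recovers c(-1, 0, 1).
w <- contr.poly(3)[, ".L"]   # linear-trend column for a 3-level factor
w / max(w)
```

Using contr.poly() is convenient when a factor has more levels and hand-built integer weights would be error-prone.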
Does the linear trend of RT across angle differ across noise levels? The difference between the two linear-trend contrasts, (-1,0,1) - (-1,0,1), corresponds to the six-element weights (-1,0,1,1,0,-1):

> myc <- c(-1, 0, 1, 1, 0, -1)       # contrast weights (linear trend x noise interaction)
> rt.lin.x.noise <- rt.mat %*% myc   # composite scores
> t.test(rt.lin.x.noise)

The noise x linear trend interaction is significant (t = -5.05, df = 9).

Using ANOVA to evaluate the linear trend x noise interaction: convert the 6-column data matrix into a 2-column matrix of composite scores, then evaluate the effect of noise with a one-way within-subjects ANOVA:

> lin.c <- c(-1, 0, 1)
> rt.pres.lin <- rt.present %*% lin.c
> rt.absent.lin <- rt.absent %*% lin.c
> lin.scores <- cbind(rt.pres.lin, rt.absent.lin)
> dimnames(lin.scores)[[2]] <- c("nz.pres", "nz.absnt")
> lin.scores.mlm <- lm(lin.scores ~ 1)
> nz <- as.factor(c("present", "absent"))
> lin.scores.aov <- Anova(lin.scores.mlm, idata=data.frame(nz), idesign=~nz, type="III")
> summary(lin.scores.aov, multivariate=F)

Univariate Type III Repeated-Measures ANOVA Assuming Sphericity

             num Df  den Df  Pr(>F)
(Intercept)    1       9          **
nz             1       9         ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

What do these effects mean? The intercept tests whether the average linear trend differs from zero, and the nz effect tests whether the linear trend differs across noise levels (i.e., the trend x noise interaction).

Split-plot designs have both between-subject and within-subject factors. They are analyzed the same way as a within-subjects design, except that we include the between-subjects factors in the multivariate linear model.
In this case the test of the intercept is informative: it tells us that the average linear trend score differs significantly from zero. Note again that the F value for the intercept equals the square of the t value obtained in our earlier test of this same hypothesis (t = 3.8). If we get the same values as our t tests, why would we ever want to use an F test? The F test is useful in situations where the second variable has more than two levels. In the current situation, for example, it is not clear how a t test could be used to evaluate the trend x noise interaction if noise had more than two levels. In fact, it is not possible to test such an interaction with a single t test, but it is possible to perform an F test to see if a trend (or any other linear contrast) varies across the n levels of a between- or within-subject factor.

Split-plot designs

Some experiments use mixtures of between- and within-subjects factors. Such designs often are called split-plot designs. Your textbook illustrates the analysis of data from a split-plot experiment using the data presented in Tables 12.7 and 12.15: a hypothetical experiment that measured RT in young and senior subjects, where each subject participated in three conditions in which visual stimuli were presented at different angles. In this design, age (group) is a between-subject factor and stimulus angle is a within-subjects factor. I will use the same story, but I've created my own fake data:

> mydata
   group  a1  a2  a3
(12 rows: subjects 1-6 are young, subjects 7-12 are old; a1-a3 are the RTs at the three angles)

Note that the data frame contains the factor group, which is a between-subjects variable. Next, I use lm to create a multivariate-lm object; note that the between-subjects formula now includes the between-subjects variable.
> mydata.mlm <- lm(cbind(a1, a2, a3) ~ 1 + group, data=mydata)

The remaining parts of the analysis are the same as before:

> angle <- as.factor(c("a1", "a2", "a3"))
> mydata.idata <- data.frame(angle)
> mydata.aov <- Anova(mydata.mlm, idata=mydata.idata, idesign=~angle, type="III")
> summary(mydata.aov, multivariate=F)

Univariate Type III Repeated-Measures ANOVA Assuming Sphericity

             num Df  den Df  Pr(>F)
(Intercept)    1      10     ***
group          1      10
angle          2      20
group:angle    2      20       *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The summary also prints the Mauchly tests of sphericity (for angle and group:angle) and the Greenhouse-Geisser corrections.
The sphericity assumption applies to all effects that include a within-subjects factor (i.e., even between x within interactions).

Notice that the F and p values are the same (to within rounding error) as the values listed in the split-plot ANOVA table. In fact, the two analyses are equivalent: the analysis of the between-subjects factor in a split-plot design is the same as a between-subjects ANOVA performed on the average score of each subject. Examination of the table also shows that the group and angle effects are evaluated with different error terms. The Anova command assumes that the between- and within-subjects factors are fixed, and it uses the appropriate error terms to generate unbiased F tests (see Table 12.7 and page 596 in your textbook for the expected mean squares for this design). One more thing: because the evaluation of a fixed between-subjects factor is equivalent to a one-way between-subjects ANOVA on the subject averages, it should not matter if we have different n in each group, and, indeed, unequal n on the between-subjects factor does not cause significant problems for the analysis.

The second part of the output contains the results of the Mauchly tests of sphericity. Note that a test is done for the group:angle interaction as well as for the angle effect: in general, the sphericity assumption applies to within-subjects factors and to all interactions that contain within-subjects factors. The third part of the output shows the G-G and H-F adjusted p values. The Mauchly test was significant, so I will use the H-F adjusted p values: there is a significant group:angle interaction (ε̃ = 0.65), but the main effects of group and angle are not significant.
Let's think about what the different components of the ANOVA table actually mean. First consider the between-subjects factor, group. The test of group is equivalent to a one-way between-subjects ANOVA performed on each subject's average score across the within-subject conditions:

> y.mat <- mydata[, 2:4]
> y.avg <- rowMeans(y.mat)
> summary(aov(y.avg ~ group, data=mydata))

Next, consider the error term that is used to evaluate the within-subjects factor. When both variables were within-subject factors, each effect was evaluated with an error term that was an interaction with subjects: A was evaluated with A x S, B with B x S, and A x B with A x B x S. In the current case, angle is a within-subjects factor and group is a between-subjects factor, so angle is evaluated with angle x S/group, the mean square of the interaction between angle and subjects nested within groups. This error term is equivalent to a weighted average of the values of MS_{angle x S} at each level of the between-subjects factor. To illustrate what this represents, I will do a series of one-way within-subjects ANOVAs, separately for each group:

> dat.young <- subset(mydata, group=="young")
> dat.old <- subset(mydata, group=="old")
> dat.young.mlm <- lm(cbind(a1, a2, a3) ~ 1, data=dat.young)
> dat.old.mlm <- lm(cbind(a1, a2, a3) ~ 1, data=dat.old)
> dat.young.aov <- Anova(dat.young.mlm, idata=mydata.idata, idesign=~angle, type="III")
> dat.old.aov <- Anova(dat.old.mlm, idata=mydata.idata, idesign=~angle, type="III")

Notice that the sums of squares of the error terms in the two analyses add up to the sum of squares of the angle error term in the original split-plot analysis, and that the average of the two mean-square values equals the mean square of the error term in the original analysis. Hence, the error term used to evaluate a within-subjects factor in a split-plot analysis can be thought of as an average of the error terms in a series of within-subjects ANOVAs.

Simple effects

Our previous analysis found a significant group:angle interaction, so we should examine simple effects. We start by examining the simple effect of the between-subject factor, group, at each level of the within-subject factor, angle. Each analysis uses a separate error term calculated on the particular subset of data being examined, and therefore is equivalent to a one-way between-subjects ANOVA:

> names(mydata)
[1] "group" "a1" "a2" "a3"
> summary(aov(a1 ~ 1 + group, data=mydata))
> summary(aov(a2 ~ 1 + group, data=mydata))
> summary(aov(a3 ~ 1 + group, data=mydata))

These analyses indicate that the simple effect of age is significant only at the first level of angle.
We can also look at the simple effects of the within-subject factor, angle, at each level of group. Notice that each of these analyses simply is a one-way within-subjects ANOVA, and that a separate error term is used for each analysis. We did these analyses in the previous section, so I will just reprint the summaries here:

> summary(dat.young.aov, multivariate=F)
> summary(dat.old.aov, multivariate=F)

Each summary lists a Univariate Type III Repeated-Measures ANOVA Assuming Sphericity table (the intercepts are highly significant in both groups), followed by the Mauchly test of sphericity and the Greenhouse-Geisser corrections for the simple effect of angle in that group.
In other words, the simple effect of angle is evaluated separately within each group, and the simple effect of age is evaluated separately at each angle, with each test using its own error term.
Linear contrasts on the between-subject factor:
- calculate the mean score for each subject
- analyze the mean scores as a one-way between-subject design

Linear contrasts on the within-subject factor:
- use contrast weights to convert the measures to composite scores
- use a t test or ANOVA to determine if the scores differ across groups (i.e., the contrast x group interaction)

> y.mat <- as.matrix(mydata[, 2:4])
> lin.c <- c(-1, 0, 1)
> mydata$lin.scores <- y.mat %*% lin.c
> mydata
   group  a1  a2  a3  lin.scores
(one row per subject: 6 young, 6 old)

The contrast (linear trend) does not differ significantly between groups:

> t.test(lin.scores ~ group, data=mydata)

	Welch Two Sample t-test
data: lin.scores by group
t = 1.779, df = 9.99 (not significant)

Test the overall contrast ignoring group differences. The mean contrast (linear trend) does not differ significantly from zero:

> t.test(mydata$lin.scores)

data: mydata$lin.scores
t = -0.30, df = 11 (not significant)

Test the overall contrast while controlling for group differences:

> lin.scores.aov <- aov(lin.scores ~ group, data=mydata)
> summary(lin.scores.aov, intercept=T)

The intercept is the grand mean when using the sum-to-zero definition of effects, so the test of (Intercept) tests the mean contrast, and the test of group tests the group x contrast interaction.
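The t-test and aov() approaches agree exactly for a two-level between-subjects factor when the t test pools the variances: F = t². A sketch with fake composite scores (the data and variable names below are illustrative, not the mydata values):

```r
# Sketch: for a 2-level between-subjects factor, the pooled-variance two-sample
# t statistic and the one-way ANOVA F statistic satisfy F = t^2. Fake data.
set.seed(2)
grp <- factor(rep(c("young", "old"), each = 6))
lin <- rnorm(12)

tt <- t.test(lin ~ grp, var.equal = TRUE)     # pooled variances, not Welch
Fv <- anova(lm(lin ~ grp))["grp", "F value"]  # one-way ANOVA F for grp
c(t.squared = unname(tt$statistic)^2, F = Fv)
```

Note that the Welch test used above does not pool variances, so its t² will match the ANOVA F only approximately when the group variances differ.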
More informationLecture 10. Factorial experiments (2-way ANOVA etc)
Lecture 10. Factorial experiments (2-way ANOVA etc) Jesper Rydén Matematiska institutionen, Uppsala universitet jesper@math.uu.se Regression and Analysis of Variance autumn 2014 A factorial experiment
More informationMODELS WITHOUT AN INTERCEPT
Consider the balanced two factor design MODELS WITHOUT AN INTERCEPT Factor A 3 levels, indexed j 0, 1, 2; Factor B 5 levels, indexed l 0, 1, 2, 3, 4; n jl 4 replicate observations for each factor level
More informationANOVA TESTING 4STEPS. 1. State the hypothesis. : H 0 : µ 1 =
Introduction to Statistics in Psychology PSY 201 Professor Greg Francis Lecture 35 ANalysis Of VAriance Ignoring (some) variability TESTING 4STEPS 1. State the hypothesis. : H 0 : µ 1 = µ 2 =... = µ K,
More informationMultivariate Analysis of Variance
Chapter 15 Multivariate Analysis of Variance Jolicouer and Mosimann studied the relationship between the size and shape of painted turtles. The table below gives the length, width, and height (all in mm)
More informationStat 412/512 TWO WAY ANOVA. Charlotte Wickham. stat512.cwick.co.nz. Feb
Stat 42/52 TWO WAY ANOVA Feb 6 25 Charlotte Wickham stat52.cwick.co.nz Roadmap DONE: Understand what a multiple regression model is. Know how to do inference on single and multiple parameters. Some extra
More informationFactorial BG ANOVA. Psy 420 Ainsworth
Factorial BG ANOVA Psy 420 Ainsworth Topics in Factorial Designs Factorial? Crossing and Nesting Assumptions Analysis Traditional and Regression Approaches Main Effects of IVs Interactions among IVs Higher
More informationMAT3378 ANOVA Summary
MAT3378 ANOVA Summary April 18, 2016 Before you do the analysis: How many factors? (one-factor/one-way ANOVA, two-factor ANOVA etc.) Fixed or Random or Mixed effects? Crossed-factors; nested factors or
More informationANOVA Randomized Block Design
Biostatistics 301 ANOVA Randomized Block Design 1 ORIGIN 1 Data Structure: Let index i,j indicate the ith column (treatment class) and jth row (block). For each i,j combination, there are n replicates.
More informationANOVA (Analysis of Variance) output RLS 11/20/2016
ANOVA (Analysis of Variance) output RLS 11/20/2016 1. Analysis of Variance (ANOVA) The goal of ANOVA is to see if the variation in the data can explain enough to see if there are differences in the means.
More informationLast updated: Oct 18, 2012 LINEAR REGRESSION PSYC 3031 INTERMEDIATE STATISTICS LABORATORY. J. Elder
Last updated: Oct 18, 2012 LINEAR REGRESSION Acknowledgements 2 Some of these slides have been sourced or modified from slides created by A. Field for Discovering Statistics using R. Simple Linear Objectives
More informationFACTORIAL DESIGNS and NESTED DESIGNS
Experimental Design and Statistical Methods Workshop FACTORIAL DESIGNS and NESTED DESIGNS Jesús Piedrafita Arilla jesus.piedrafita@uab.cat Departament de Ciència Animal i dels Aliments Items Factorial
More informationStats fest Analysis of variance. Single factor ANOVA. Aims. Single factor ANOVA. Data
1 Stats fest 2007 Analysis of variance murray.logan@sci.monash.edu.au Single factor ANOVA 2 Aims Description Investigate differences between population means Explanation How much of the variation in response
More informationExample: Poisondata. 22s:152 Applied Linear Regression. Chapter 8: ANOVA
s:5 Applied Linear Regression Chapter 8: ANOVA Two-way ANOVA Used to compare populations means when the populations are classified by two factors (or categorical variables) For example sex and occupation
More informationIntroduction and Background to Multilevel Analysis
Introduction and Background to Multilevel Analysis Dr. J. Kyle Roberts Southern Methodist University Simmons School of Education and Human Development Department of Teaching and Learning Background and
More informationSCHOOL OF MATHEMATICS AND STATISTICS
RESTRICTED OPEN BOOK EXAMINATION (Not to be removed from the examination hall) Data provided: Statistics Tables by H.R. Neave MAS5052 SCHOOL OF MATHEMATICS AND STATISTICS Basic Statistics Spring Semester
More informationSTAT 5200 Handout #23. Repeated Measures Example (Ch. 16)
Motivating Example: Glucose STAT 500 Handout #3 Repeated Measures Example (Ch. 16) An experiment is conducted to evaluate the effects of three diets on the serum glucose levels of human subjects. Twelve
More informationRepeated-Measures ANOVA in SPSS Correct data formatting for a repeated-measures ANOVA in SPSS involves having a single line of data for each
Repeated-Measures ANOVA in SPSS Correct data formatting for a repeated-measures ANOVA in SPSS involves having a single line of data for each participant, with the repeated measures entered as separate
More informationChapter 14: Repeated-measures designs
Chapter 14: Repeated-measures designs Oliver Twisted Please, Sir, can I have some more sphericity? The following article is adapted from: Field, A. P. (1998). A bluffer s guide to sphericity. Newsletter
More informationR 2 and F -Tests and ANOVA
R 2 and F -Tests and ANOVA December 6, 2018 1 Partition of Sums of Squares The distance from any point y i in a collection of data, to the mean of the data ȳ, is the deviation, written as y i ȳ. Definition.
More information610 - R1A "Make friends" with your data Psychology 610, University of Wisconsin-Madison
610 - R1A "Make friends" with your data Psychology 610, University of Wisconsin-Madison Prof Colleen F. Moore Note: The metaphor of making friends with your data was used by Tukey in some of his writings.
More informationStat 5303 (Oehlert): Randomized Complete Blocks 1
Stat 5303 (Oehlert): Randomized Complete Blocks 1 > library(stat5303libs);library(cfcdae);library(lme4) > immer Loc Var Y1 Y2 1 UF M 81.0 80.7 2 UF S 105.4 82.3 3 UF V 119.7 80.4 4 UF T 109.7 87.2 5 UF
More informationChapter 16 One-way Analysis of Variance
Chapter 16 One-way Analysis of Variance I am assuming that most people would prefer to see the solutions to these problems as computer printout. (I will use R and SPSS for consistency.) 16.1 Analysis of
More informationVariance Decomposition in Regression James M. Murray, Ph.D. University of Wisconsin - La Crosse Updated: October 04, 2017
Variance Decomposition in Regression James M. Murray, Ph.D. University of Wisconsin - La Crosse Updated: October 04, 2017 PDF file location: http://www.murraylax.org/rtutorials/regression_anovatable.pdf
More informationAnalysis of Covariance: Comparing Regression Lines
Chapter 7 nalysis of Covariance: Comparing Regression ines Suppose that you are interested in comparing the typical lifetime (hours) of two tool types ( and ). simple analysis of the data given below would
More informationVariance Decomposition and Goodness of Fit
Variance Decomposition and Goodness of Fit 1. Example: Monthly Earnings and Years of Education In this tutorial, we will focus on an example that explores the relationship between total monthly earnings
More information10/31/2012. One-Way ANOVA F-test
PSY 511: Advanced Statistics for Psychological and Behavioral Research 1 1. Situation/hypotheses 2. Test statistic 3.Distribution 4. Assumptions One-Way ANOVA F-test One factor J>2 independent samples
More informationExtensions of One-Way ANOVA.
Extensions of One-Way ANOVA http://www.pelagicos.net/classes_biometry_fa18.htm What do I want You to Know What are two main limitations of ANOVA? What two approaches can follow a significant ANOVA? How
More informationMultivariate Tests. Mauchly's Test of Sphericity
General Model Within-Sujects Factors Dependent Variale IDLS IDLF IDHS IDHF IDHCLS IDHCLF Descriptive Statistics IDLS IDLF IDHS IDHF IDHCLS IDHCLF Mean Std. Deviation N.0.70.0.0..8..88.8...97 Multivariate
More informationStatistical Inference Part 2. t test for single mean. t tests useful for PSYCH 710. Review of Statistical Inference (Part 2) Week 2
Statistical Inference Part PSYCH 70 eview Statistical Inference (Part ) Week Pr. Patrick ennett t-tests Effect Size Equivalence Tests Consequences Low Power s P t tests useful for t test for single mean
More informationQuestions 3.83, 6.11, 6.12, 6.17, 6.25, 6.29, 6.33, 6.35, 6.50, 6.51, 6.53, 6.55, 6.59, 6.60, 6.65, 6.69, 6.70, 6.77, 6.79, 6.89, 6.
Chapter 7 Reading 7.1, 7.2 Questions 3.83, 6.11, 6.12, 6.17, 6.25, 6.29, 6.33, 6.35, 6.50, 6.51, 6.53, 6.55, 6.59, 6.60, 6.65, 6.69, 6.70, 6.77, 6.79, 6.89, 6.112 Introduction In Chapter 5 and 6, we emphasized
More informationTopic 20: Single Factor Analysis of Variance
Topic 20: Single Factor Analysis of Variance Outline Single factor Analysis of Variance One set of treatments Cell means model Factor effects model Link to linear regression using indicator explanatory
More informationGeneral Linear Model. Notes Output Created Comments Input. 19-Dec :09:44
GET ILE='G:\lare\Data\Accuracy_Mixed.sav'. DATASET NAME DataSet WINDOW=RONT. GLM Jigsaw Decision BY CMCTools /WSACTOR= Polynomial /METHOD=SSTYPE(3) /PLOT=PROILE(CMCTools*) /EMMEANS=TABLES(CMCTools) COMPARE
More informationANOVA: Analysis of Variance
ANOVA: Analysis of Variance Marc H. Mehlman marcmehlman@yahoo.com University of New Haven The analysis of variance is (not a mathematical theorem but) a simple method of arranging arithmetical facts so
More informationSimple, Marginal, and Interaction Effects in General Linear Models
Simple, Marginal, and Interaction Effects in General Linear Models PRE 905: Multivariate Analysis Lecture 3 Today s Class Centering and Coding Predictors Interpreting Parameters in the Model for the Means
More informationKeppel, G. & Wickens, T. D. Design and Analysis Chapter 4: Analytical Comparisons Among Treatment Means
Keppel, G. & Wickens, T. D. Design and Analysis Chapter 4: Analytical Comparisons Among Treatment Means 4.1 The Need for Analytical Comparisons...the between-groups sum of squares averages the differences
More informationOther hypotheses of interest (cont d)
Other hypotheses of interest (cont d) In addition to the simple null hypothesis of no treatment effects, we might wish to test other hypothesis of the general form (examples follow): H 0 : C k g β g p
More informationOne-Way ANOVA Calculations: In-Class Exercise Psychology 311 Spring, 2013
One-Way ANOVA Calculations: In-Class Exercise Psychology 311 Spring, 2013 1. You are planning an experiment that will involve 4 equally sized groups, including 3 experimental groups and a control. Each
More informationLab 3 A Quick Introduction to Multiple Linear Regression Psychology The Multiple Linear Regression Model
Lab 3 A Quick Introduction to Multiple Linear Regression Psychology 310 Instructions.Work through the lab, saving the output as you go. You will be submitting your assignment as an R Markdown document.
More informationR Output for Linear Models using functions lm(), gls() & glm()
LM 04 lm(), gls() &glm() 1 R Output for Linear Models using functions lm(), gls() & glm() Different kinds of output related to linear models can be obtained in R using function lm() {stats} in the base
More informationMultiple Regression: Example
Multiple Regression: Example Cobb-Douglas Production Function The Cobb-Douglas production function for observed economic data i = 1,..., n may be expressed as where O i is output l i is labour input c
More informationMultivariate models for pretest posttest data and a comparison to univariate models
University of Twente Bachelorthesis Multivariate models for pretest posttest data and a comparison to univariate models Sven Kleine Bardenhorst s1543377 January 30, 2017 supervised by Prof. Dr. Ir. Jean-Paul
More informationANOVA: Analysis of Variance
ANOVA: Analysis of Variance Marc H. Mehlman marcmehlman@yahoo.com University of New Haven The analysis of variance is (not a mathematical theorem but) a simple method of arranging arithmetical facts so
More informationTests of Linear Restrictions
Tests of Linear Restrictions 1. Linear Restricted in Regression Models In this tutorial, we consider tests on general linear restrictions on regression coefficients. In other tutorials, we examine some
More informationMAT3378 (Winter 2016)
MAT3378 (Winter 2016) Assignment 2 - SOLUTIONS Total number of points for Assignment 2: 12 The following questions will be marked: Q1, Q2, Q4 Q1. (4 points) Assume that Z 1,..., Z n are i.i.d. normal random
More informationOne-Way ANOVA Cohen Chapter 12 EDUC/PSY 6600
One-Way ANOVA Cohen Chapter 1 EDUC/PSY 6600 1 It is easy to lie with statistics. It is hard to tell the truth without statistics. -Andrejs Dunkels Motivating examples Dr. Vito randomly assigns 30 individuals
More information3. Design Experiments and Variance Analysis
3. Design Experiments and Variance Analysis Isabel M. Rodrigues 1 / 46 3.1. Completely randomized experiment. Experimentation allows an investigator to find out what happens to the output variables when
More informationCorrelation and Regression: Example
Correlation and Regression: Example 405: Psychometric Theory Department of Psychology Northwestern University Evanston, Illinois USA April, 2012 Outline 1 Preliminaries Getting the data and describing
More informationGLM Repeated Measures
GLM Repeated Measures Notation The GLM (general linear model) procedure provides analysis of variance when the same measurement or measurements are made several times on each subject or case (repeated
More informationMixed Model: Split plot with two whole-plot factors, one split-plot factor, and CRD at the whole-plot level (e.g. fancier split-plot p.
STAT:5201 Applied Statistic II Mixed Model: Split plot with two whole-plot factors, one split-plot factor, and CRD at the whole-plot level (e.g. fancier split-plot p.422 OLRT) Hamster example with three
More informationInference for the Regression Coefficient
Inference for the Regression Coefficient Recall, b 0 and b 1 are the estimates of the slope β 1 and intercept β 0 of population regression line. We can shows that b 0 and b 1 are the unbiased estimates
More information1 Use of indicator random variables. (Chapter 8)
1 Use of indicator random variables. (Chapter 8) let I(A) = 1 if the event A occurs, and I(A) = 0 otherwise. I(A) is referred to as the indicator of the event A. The notation I A is often used. 1 2 Fitting
More informationRegression, Part I. - In correlation, it would be irrelevant if we changed the axes on our graph.
Regression, Part I I. Difference from correlation. II. Basic idea: A) Correlation describes the relationship between two variables, where neither is independent or a predictor. - In correlation, it would
More informationPrepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti
Prepared by: Prof. Dr Bahaman Abu Samah Department of Professional Development and Continuing Education Faculty of Educational Studies Universiti Putra Malaysia Serdang Use in experiment, quasi-experiment
More information22s:152 Applied Linear Regression. Take random samples from each of m populations.
22s:152 Applied Linear Regression Chapter 8: ANOVA NOTE: We will meet in the lab on Monday October 10. One-way ANOVA Focuses on testing for differences among group means. Take random samples from each
More informationChapter 14 Repeated-Measures Designs
Chapter 14 Repeated-Measures Designs [As in previous chapters, there will be substantial rounding in these answers. I have attempted to make the answers fit with the correct values, rather than the exact
More informationDeciphering Math Notation. Billy Skorupski Associate Professor, School of Education
Deciphering Math Notation Billy Skorupski Associate Professor, School of Education Agenda General overview of data, variables Greek and Roman characters in math and statistics Parameters vs. Statistics
More information22s:152 Applied Linear Regression. There are a couple commonly used models for a one-way ANOVA with m groups. Chapter 8: ANOVA
22s:152 Applied Linear Regression Chapter 8: ANOVA NOTE: We will meet in the lab on Monday October 10. One-way ANOVA Focuses on testing for differences among group means. Take random samples from each
More informationThe t-test: A z-score for a sample mean tells us where in the distribution the particular mean lies
The t-test: So Far: Sampling distribution benefit is that even if the original population is not normal, a sampling distribution based on this population will be normal (for sample size > 30). Benefit
More informationCOMPARING SEVERAL MEANS: ANOVA
LAST UPDATED: November 15, 2012 COMPARING SEVERAL MEANS: ANOVA Objectives 2 Basic principles of ANOVA Equations underlying one-way ANOVA Doing a one-way ANOVA in R Following up an ANOVA: Planned contrasts/comparisons
More informationNo other aids are allowed. For example you are not allowed to have any other textbook or past exams.
UNIVERSITY OF TORONTO SCARBOROUGH Department of Computer and Mathematical Sciences Sample Exam Note: This is one of our past exams, In fact the only past exam with R. Before that we were using SAS. In
More informationAnalysis of variance. Gilles Guillot. September 30, Gilles Guillot September 30, / 29
Analysis of variance Gilles Guillot gigu@dtu.dk September 30, 2013 Gilles Guillot (gigu@dtu.dk) September 30, 2013 1 / 29 1 Introductory example 2 One-way ANOVA 3 Two-way ANOVA 4 Two-way ANOVA with interactions
More informationAnalysis of Variance: Part 1
Analysis of Variance: Part 1 Oneway ANOVA When there are more than two means Each time two means are compared the probability (Type I error) =α. When there are more than two means Each time two means are
More informationG562 Geometric Morphometrics. Statistical Tests. Department of Geological Sciences Indiana University. (c) 2012, P. David Polly
Statistical Tests Basic components of GMM Procrustes This aligns shapes and minimizes differences between them to ensure that only real shape differences are measured. PCA (primary use) This creates a
More informationSampling distribution of t. 2. Sampling distribution of t. 3. Example: Gas mileage investigation. II. Inferential Statistics (8) t =
2. The distribution of t values that would be obtained if a value of t were calculated for each sample mean for all possible random of a given size from a population _ t ratio: (X - µ hyp ) t s x The result
More informationANOVA continued. Chapter 11
ANOVA continued Chapter 11 Zettergren (003) School adjustment in adolescence for previously rejected, average, and popular children. Effect of peer reputation on academic performance and school adjustment
More information13 Simple Linear Regression
B.Sc./Cert./M.Sc. Qualif. - Statistics: Theory and Practice 3 Simple Linear Regression 3. An industrial example A study was undertaken to determine the effect of stirring rate on the amount of impurity
More informationPsy 420 Final Exam Fall 06 Ainsworth. Key Name
Psy 40 Final Exam Fall 06 Ainsworth Key Name Psy 40 Final A researcher is studying the effect of Yoga, Meditation, Anti-Anxiety Drugs and taking Psy 40 and the anxiety levels of the participants. Twenty
More informationResearch Design - - Topic 8 Hierarchical Designs in Analysis of Variance (Kirk, Chapter 11) 2008 R.C. Gardner, Ph.D.
Research Design - - Topic 8 Hierarchical Designs in nalysis of Variance (Kirk, Chapter 11) 008 R.C. Gardner, Ph.D. Experimental Design pproach General Rationale and pplications Rules for Determining Sources
More informationReference: Chapter 14 of Montgomery (8e)
Reference: Chapter 14 of Montgomery (8e) 99 Maghsoodloo The Stage Nested Designs So far emphasis has been placed on factorial experiments where all factors are crossed (i.e., it is possible to study the
More informationAnalyzing More Complex Experimental Designs
Analyzing More Complex Experimental Designs Experimental Constraints In the real world, you may find it impossible to obtain completely independent samples We already talked about some ways to handle simple
More informationWeek 7 Multiple factors. Ch , Some miscellaneous parts
Week 7 Multiple factors Ch. 18-19, Some miscellaneous parts Multiple Factors Most experiments will involve multiple factors, some of which will be nuisance variables Dealing with these factors requires
More informationStatistics 512: Solution to Homework#11. Problems 1-3 refer to the soybean sausage dataset of Problem 20.8 (ch21pr08.dat).
Statistics 512: Solution to Homework#11 Problems 1-3 refer to the soybean sausage dataset of Problem 20.8 (ch21pr08.dat). 1. Perform the two-way ANOVA without interaction for this model. Use the results
More informationExtensions of One-Way ANOVA.
Extensions of One-Way ANOVA http://www.pelagicos.net/classes_biometry_fa17.htm What do I want You to Know What are two main limitations of ANOVA? What two approaches can follow a significant ANOVA? How
More informationFactorial Analysis of Variance
Factorial Analysis of Variance Conceptual Example A repeated-measures t-test is more likely to lead to rejection of the null hypothesis if a) *Subjects show considerable variability in their change scores.
More informationThe Multiple Regression Model
Multiple Regression The Multiple Regression Model Idea: Examine the linear relationship between 1 dependent (Y) & or more independent variables (X i ) Multiple Regression Model with k Independent Variables:
More informationy i s 2 X 1 n i 1 1. Show that the least squares estimators can be written as n xx i x i 1 ns 2 X i 1 n ` px xqx i x i 1 pδ ij 1 n px i xq x j x
Question 1 Suppose that we have data Let x 1 n x i px 1, y 1 q,..., px n, y n q. ȳ 1 n y i s 2 X 1 n px i xq 2 Throughout this question, we assume that the simple linear model is correct. We also assume
More information