Ordinary Least Squares (OLS): Simple Linear Regression (SLR) Assessment: Goodness of Fit & Precision
- Damon Boone
- How close? Goodness of Fit and Inference
- Bring on the ANOVA Table! (SST, SSE and RSS)
- Goodness of Fit I: Mean Squared Error (MSE) and Root MSE (RMSE)
- Goodness of Fit II: R-squared
- Assessment of Goodness of Fit
- Examples in Excel and Stata
- Comparing SLR Models Using Goodness of Fit Metrics
- Inference/Precision: Standard Errors and t Stats

How close? Goodness of Fit v. Inference/Precision

1. After we have derived the OLS parameter estimates, β̂₀ and β̂₁, the question always arises: How well did we do? How close are the estimated coefficients to the true parameters, β₀ and β₁? We'll have several answers. None will be entirely satisfactory, though they will be informative nonetheless.

2. Quality of the Overall Model (Goodness of Fit): Goodness-of-fit metrics tell us something about the quality of the overall model, about how well the predicteds fit the actuals. They may not tell us as much as we'd like to know about how precisely we've estimated the true parameters. But if we have a lot of data and the goodness-of-fit metrics look good, then we should feel pretty good about our estimated coefficients, even though there is always some probability that they are way off.

   a. GOF Metric I: MSE (Mean Squared Error). MSE is almost, sort of, like an average squared residual. I say "almost, sort of, like" because instead of taking an average (and dividing by n), we divide the sum of the squared residuals by n−2. (The choice reflects an interest in unbiasedness.)

   b. GOF Metric II: R² (Coefficient of Determination). There are two equivalent ways to think about R². One interpretation is that it measures the proportion of the variation in the y's (the actuals) explained by the ŷ's (the predicteds). Alternatively, it captures the magnitude of the correlation between the y's and the ŷ's, the actuals and the predicteds. It will turn out that 0 ≤ R² ≤ 1, and so if we have R² close to 1 we say that goodness of fit is high, and if it's close to 0, goodness of fit is low. In contrast, it won't always be so obvious whether the MSE's are large or small in magnitude.
SLR Assessment: Goodness of Fit & Inference v4

3. Quality of the Individual Parameter Estimates (Precision/Inference): At the end of this section, we will briefly discuss the concepts of standard errors and t stats, and show how they can be used to say something about precision of estimation of the unknown parameters. A more formal treatment of this approach will have to wait, but there are some useful rules of thumb for assessing precision, which we can introduce at this juncture. Later we will make some distributional assumptions which will enable us to do inference: construct Confidence Intervals and run Hypothesis Tests. While those inferential tools won't with certainty answer the question "How Close?", they will give us probabilistic assessments as to how close our estimated coefficients are to the true unknown parameter values. Those probabilistic assessments will involve levels of confidence for confidence intervals and significance levels for hypothesis testing. Stay tuned!

4. But before turning to R² we first need to introduce some ANOVA (Analysis of Variance) terminology and results.

Bring on the ANOVA (SST, SSE and RSS)

5. Some definitions which will be useful in assessing the MSE/RMSE and R² goodness-of-fit metrics:

   a. SST: Total Sum of Squares = Σ(yᵢ − ȳ)². This is the sum of squared deviations of the actual values of the dependent variable from their mean. Since S_yy = Σ(yᵢ − ȳ)²/(n−1), SST = (n−1)·S_yy, (n−1) times the variance of the actuals.

   b. SSE: Explained Sum of Squares = Σ(ŷᵢ − ȳ)². This is the sum of squared deviations of the predicted values of the dependent variable from the mean of the actual values. If there is a constant term in the model, the mean of the actuals is also the mean of the predicteds. In this most common case, SSE = Σ(ŷᵢ − ȳ)² = (n−1)·S_ŷŷ, (n−1) times the variance of the predicteds.

   c. RSS: Residual Sum of Squares = Σûᵢ² = Σ(yᵢ − ŷᵢ)². This is the sum of squared residuals, the squared differences between the actual and predicted values of the dependent variable.

   Everyone doesn't always use the same terminology for these concepts. In Stata regression output, SST is SS Total, RSS is SS Residual, and SSE is SS Model. And some authors flip the definitions of SSE and RSS.
Since Σûᵢ = 0, the residuals by construction have mean 0, and so S_ûû = Σûᵢ²/(n−1), or put differently, RSS = (n−1)·S_ûû, (n−1) times the variance of the residuals.

6. To summarize:

   SST = Σ(yᵢ − ȳ)² = (n−1)·S_yy, (n−1) times the variance of the actuals
   SSE = Σ(ŷᵢ − ȳ)² = (n−1)·S_ŷŷ, (n−1) times the variance of the predicteds
   RSS = Σûᵢ² = Σ(yᵢ − ŷᵢ)² = (n−1)·S_ûû, (n−1) times the variance of the residuals

7. Result: SST = SSE + RSS, if there is a constant term in the model.

   a. Or dividing through by (n−1), we have SST/(n−1) = SSE/(n−1) + RSS/(n−1), or S_yy = S_ŷŷ + S_ûû.

   b. In words: The sample variance of the actuals is the sum of the sample variances of the predicteds and of the residuals.

   c. What drives this result? Since we have a constant term in the regression, the mean of the predicted values is the same as the mean of the actuals.

8. You shouldn't be too surprised by this result. Earlier we showed that OLS effectively decomposed the y's into two uncorrelated parts, predicteds and residuals. And since yᵢ = ŷᵢ + ûᵢ and ρ_ŷû = 0, the sample variance of the actuals will be the sum of the sample variances of the predicteds and of the residuals, which is exactly the result above, S_yy = S_ŷŷ + S_ûû. So perhaps you saw this coming.

9. This result does not necessarily hold if there is no constant (intercept) term in the model. But do not fear! There are lots of good reasons for including a constant term in your model.

   Proof: The trick is to add and subtract ŷᵢ inside the expression and to then simplify:

   Σ(yᵢ − ȳ)² = Σ(yᵢ − ŷᵢ + ŷᵢ − ȳ)² = Σ(yᵢ − ŷᵢ)² + Σ(ŷᵢ − ȳ)² + 2·Σ(yᵢ − ŷᵢ)(ŷᵢ − ȳ) = RSS + SSE + 2·Σ(yᵢ − ŷᵢ)(ŷᵢ − ȳ)

   So we just need to prove that Σ(yᵢ − ŷᵢ)(ŷᵢ − ȳ) = Σûᵢ(ŷᵢ − ȳ) = 0, i.e. that the sample covariance of the predicted values and the residuals is zero. But since ŷᵢ − ȳ = β̂₀ + β̂₁xᵢ − (β̂₀ + β̂₁x̄) = β̂₁(xᵢ − x̄), we have Σûᵢ(ŷᵢ − ȳ) = β̂₁·Σûᵢ(xᵢ − x̄) = 0, since Σûᵢ(xᵢ − x̄) = 0.
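The decomposition in Result 7 is easy to check numerically. Here is a minimal sketch in pure Python (not part of the original notes; the tiny dataset is made up for illustration) that fits an SLR with a constant by the usual OLS formulas and verifies SST = SSE + RSS:

```python
# Verify the ANOVA decomposition SST = SSE + RSS for an OLS fit
# with a constant term. The five-observation dataset is hypothetical.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n

# OLS slope beta1 = S_xy / S_xx, intercept beta0 = ybar - beta1 * xbar
beta1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
beta0 = ybar - beta1 * xbar

yhat = [beta0 + beta1 * xi for xi in x]
resid = [yi - yh for yi, yh in zip(y, yhat)]

SST = sum((yi - ybar) ** 2 for yi in y)        # total sum of squares
SSE = sum((yh - ybar) ** 2 for yh in yhat)     # explained sum of squares
RSS = sum(u ** 2 for u in resid)               # residual sum of squares

assert abs(SST - (SSE + RSS)) < 1e-9           # SST = SSE + RSS
assert abs(sum(resid)) < 1e-9                  # residuals have mean 0
```

The second assertion confirms the driver of the result: with a constant in the model, the residuals sum to zero, so the predicteds and actuals share the same mean.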
In fact, general practice is to always include a constant term in your model unless you have a specific reason not to do so.

Goodness of Fit I: Mean Squared Error (MSE/RMSE)

10. MSE provides one measure of how close your predicted values are to the actuals:

   a. MSE = RSS/(n−2), measured in squared units of the dependent variable. (As you'll see later in the semester, we divide by (n−2) rather than n to achieve unbiasedness.)

11. To put the metric in the same units as the y's, we take the square root of the MSE. This gives us Root Mean Squared Error (RMSE), which is sometimes called the standard error of the regression. This metric is sort of like an average deviation of predicteds from actuals, but not quite, given the specifics of the calculation and for reasons previously discussed.

   a. RMSE = √MSE = √(RSS/(n−2)), measured in units of the dependent variable.

12. Sometimes we also look at Mean Absolute Error (MAE), a goodness of fit metric closely related to RMSE:

   a. MAE = (1/n)·Σ|yᵢ − ŷᵢ|, where |yᵢ − ŷᵢ| is the absolute value of the i-th residual.

   b. MAE's are not typically included in standard regression package results, but they can usually be easily obtained.

13. One of the challenges in working with MSEs, RMSEs and MAEs is interpreting magnitudes. On their face, it's not obvious whether these metrics are small or large in magnitude. So you'll need to bring other information to bear in forming an opinion as to how well your model has fit the data.

14. Our alternative metric, the Coefficient of Determination (R²), provides more readily interpreted results.

Goodness of Fit II: R-squared

15. Our second goodness of fit metric, the Coefficient of Determination, is defined by: R² = 1 − RSS/SST.

   a. So long as there is a constant term in the model (so the mean predicted value is the same as the mean actual value), RSS = SST − SSE, and so R² = SSE/SST = Σ(ŷᵢ − ȳ)² / Σ(yᵢ − ȳ)².
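The three error metrics above differ only in how the residuals are aggregated. A quick sketch (hypothetical actuals and predicteds, taken as given rather than produced by a fit, purely to show the formulas):

```python
import math

# Compute MSE, RMSE and MAE from actuals y and predicteds yhat.
# In SLR we divide the sum of squared residuals by n - 2 (two
# estimated parameters) for unbiasedness; MAE divides by n.
y    = [12.0, 15.5, 11.2, 18.3, 14.1, 16.7]
yhat = [11.4, 16.0, 12.1, 17.5, 14.6, 16.2]
n = len(y)

resid = [yi - yh for yi, yh in zip(y, yhat)]

RSS  = sum(u ** 2 for u in resid)
MSE  = RSS / (n - 2)                    # squared units of y
RMSE = math.sqrt(MSE)                   # same units as y
MAE  = sum(abs(u) for u in resid) / n   # same units as y
```

With these numbers RSS = 2.56, so MSE = 0.64 and RMSE = 0.8, while MAE ≈ 0.633: as usual the RMSE weights large misses more heavily than the MAE does.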
   b. Then R² = SSE/SST = [Σ(ŷᵢ − ȳ)²/(n−1)] / [Σ(yᵢ − ȳ)²/(n−1)] = Sample Var(predicted) / Sample Var(actual) = S_ŷŷ/S_yy.

   c. By construction, 0 ≤ R² ≤ 1 (if there is a constant term in the model); higher values mean that you've done a better job explaining the variation in the actuals. Don't get too excited if R² is close to 1, or too depressed if it is close to 0. Doing good econometrics is way more than just maximizing R². If your model does not have a constant term, then this last formula need not be the case. Further: If your model does not have a constant/intercept term, then you should not pay too much, if any, attention to R².

16. Interpretation I: Ratio of Variances. Given the results above, R-squared is the ratio of the Sample Variance of the predicteds to the Sample Variance of the actuals: the percent of the variation of the actuals explained by the model.

17. Interpretation II: Correlation between predicteds and actuals. R² is also the square of the sample correlation between the independent and dependent variables, as well as the square of the sample correlation between the actuals and predicteds: R² = ρ²_xy = ρ²_yŷ.

   a. This is an important result, so here's a quick proof: We know that β̂₁ = S_xy/S_xx = ρ_xy·(S_y/S_x). Since ŷᵢ = β̂₀ + β̂₁xᵢ, the sample variance of the predicted values will be defined by:

      S_ŷŷ = Σ(β̂₀ + β̂₁xᵢ − (β̂₀ + β̂₁x̄))²/(n−1) = β̂₁²·S_xx

      But then R² = S_ŷŷ/S_yy = β̂₁²·S_xx/S_yy = ρ²_xy·(S_y²/S_x²)·(S_xx/S_yy) = ρ²_xy,

      or put differently: Since SSE = ρ²_xy·SST (see following), R² = SSE/SST = ρ²_xy.

      And so SSE = Σ(ŷᵢ − ȳ)² = Σ(β̂₀ + β̂₁xᵢ − (β̂₀ + β̂₁x̄))² = β̂₁²·Σ(xᵢ − x̄)² = ρ²_xy·(S_y²/S_x²)·Σ(xᵢ − x̄)² = ρ²_xy·SST
      And since ρ_xy = ρ_yŷ (the correlation of the x's and y's is the same as the correlation between the predicteds and the actuals), we have the desired result: R² = ρ²_xy = ρ²_yŷ.

   b. When we move to MLR models, with multiple explanatory variables, we lose the connection between R² and ρ_xy, but the connection to the correlation between predicteds and actuals will carry forward (ρ²_ŷy = R² for MLR models as well).

Assessment of Goodness of Fit

18. The two goodness of fit metrics (R-squared and MSE/RMSE) tell you something about how well your model captures/explains the variation in the dependent variable y. They alone, however, do not tell you how well you've estimated the unknown parameter values β₀ and β₁. In some cases, R-squared will be high and MSE/RMSE will be low, and your parameter estimates will be quite poor, and vice-versa.

   a. Example: Suppose you have a sample of size two. With just two data points, R² = 1 and MSE = 0, and you have in all likelihood come up with miserable estimates of the unknown parameter values.

19. Here are a couple of examples, with just five observations randomly generated using a true relationship given by the solid red line; the dashed black line shows you the OLS estimated SLR relationship for the given dataset. In both cases, the R² is above .5, and the estimated relationship is all wrong. So n matters too! [two scatterplot panels omitted, each titled "nobs Matters Too!"]

20. nobs Matters Too! We will see later that the quality of the parameter estimates depends on R-squared (or MSE/RMSE) and the number of observations in the dataset. And so if R-squared is high and MSE/RMSE is low, and you have lots of data, then you have probably done a pretty good job estimating the unknown parameter values.
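The sample-of-size-two example in 18a can be made concrete: whatever two points you hand to OLS, the fitted line passes through both of them exactly, so the fit looks perfect no matter how unrepresentative the sample is. A sketch with a hypothetical two-observation sample:

```python
# With only two data points, the OLS line interpolates both exactly:
# RSS = 0 and R^2 = 1, regardless of what the true relationship is.
x = [1.0, 4.0]
y = [7.0, 2.5]

xbar, ybar = sum(x) / 2, sum(y) / 2
beta1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
beta0 = ybar - beta1 * xbar

yhat = [beta0 + beta1 * xi for xi in x]
RSS = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
SST = sum((yi - ybar) ** 2 for yi in y)
r2 = 1.0 - RSS / SST

assert RSS < 1e-12 and abs(r2 - 1.0) < 1e-12   # perfect "fit", n = 2
```

Perfect goodness of fit, and yet with n = 2 the slope estimate tells you almost nothing about the true parameter.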
Examples in Excel and Stata

21. Continuing with the bodyfat example in Excel:

   a. Generate the predicteds, ŷᵢ = β̂₀ + β̂₁xᵢ, and residuals, ûᵢ = yᵢ − ŷᵢ = yᵢ − (β̂₀ + β̂₁xᵢ).

   b. Generate RSS by squaring the residuals and summing those (use the SUMSQ() function to save a step).

   c. Use the COUNT() function to count your observations, and generate MSE = RSS/(n−2) and RMSE = √MSE.

   d. To generate SSE, demean the predicteds and compute the sum of squares of those, again using SUMSQ(). And use SUMSQ() to compute SST using the demeaned Brozek observations.

   e. Once you have all of these, you can verify that RSS + SSE = SST. And with SSE and SST, divide by n−1 to generate the sample variances of the explaineds and the actuals.

   f. You can now compute R² four ways: R² = 1 − RSS/SST, R² = SSE/SST, R² = Sample Var(predicted)/Sample Var(actual) = S_ŷŷ/S_yy, and R² = ρ²_xy = ρ²_yŷ.

Here is what your results might look like: [spreadsheet screenshot omitted] I have posted bodyfat example.xlsx to illustrate.
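The spreadsheet steps above translate directly into a few lines of Python. This is only a sketch: the six-observation dataset below is a made-up stand-in for the wgt/Brozek columns, not the actual bodyfat data. All four R² computations must agree:

```python
import math

# Replicate the Excel workflow: fit the SLR, build RSS/SSE/SST,
# then compute R^2 four equivalent ways. Hypothetical data.
x = [150.0, 165.0, 180.0, 200.0, 215.0, 240.0]   # stand-in for wgt
y = [10.5, 14.0, 17.2, 22.5, 24.0, 30.1]         # stand-in for Brozek
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
beta1 = sxy / sxx
beta0 = ybar - beta1 * xbar

yhat = [beta0 + beta1 * xi for xi in x]           # predicteds
resid = [yi - yh for yi, yh in zip(y, yhat)]      # residuals

RSS = sum(u ** 2 for u in resid)                  # like SUMSQ(residuals)
SSE = sum((yh - ybar) ** 2 for yh in yhat)        # SUMSQ(demeaned predicteds)
SST = sum((yi - ybar) ** 2 for yi in y)           # SUMSQ(demeaned actuals)
MSE = RSS / (n - 2)
RMSE = math.sqrt(MSE)

rho_xy = sxy / math.sqrt(sxx * SST)               # sample corr(x, y)

r2_a = 1.0 - RSS / SST                            # 1 - RSS/SST
r2_b = SSE / SST                                  # SSE/SST
r2_c = (SSE / (n - 1)) / (SST / (n - 1))          # var(pred)/var(actual)
r2_d = rho_xy ** 2                                # corr(x, y) squared

assert abs(RSS + SSE - SST) < 1e-9                # the ANOVA identity
assert max(abs(r2_a - r2_b), abs(r2_a - r2_c), abs(r2_a - r2_d)) < 1e-9
```

Swapping in the real bodyfat columns for x and y would reproduce the spreadsheet numbers exactly.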
Running the Regression in Excel

When you run the regression in Excel, you'll get the following SUMMARY OUTPUT:

[Excel SUMMARY OUTPUT omitted: Regression Statistics (Multiple R, R Square, Adjusted R Square, Standard Error, Observations = 252), the ANOVA table (df, SS, MS, F and Significance F for the Regression, Residual and Total rows), and the coefficient table (Intercept and wgt, with Standard Errors, t Stats, P-values and 95% confidence bounds)]

You can find SSE, RSS, SST, MSE, RMSE and R-squared in there; you just need to know where to look. The SS's are all in the SS column, with Regression for SSE, Residual for RSS, and Total for SST. MSE can be found in the MS column, row Residual. R-squared is reported under Regression Statistics, and what Excel calls the Standard Error of the regression, we call RMSE. So the statistics are all there; you just need to know where to look.

Running the Regression in Stata

. reg brozek wgt

[Stata regression output omitted: the Source table reports Model, Residual and Total SS, df and MS, with Number of obs = 252, F(1, 250), Prob > F, R-squared, Adj R-squared and Root MSE in the upper right; the coefficient table reports Coef., Std. Err., t, P>|t| and 95% confidence intervals for wgt and _cons]

. predict bfathat
(option xb assumed; fitted values)
Again, you can find SSE, RSS, SST, MSE, RMSE and R-squared in there; you just need to know where to look. The SS's are again in column SS, but Stata now puts the SSE's in the Model row. MSE's are again in column MS and row Residual. And R-squared and Root MSE (RMSE) are in the regression stats in the upper right corner. We again find that RSS + SSE = SST, and R-squared is indeed those correlations squared:

. corr Brozek bfathat wgt
(obs=252)

[correlation matrix omitted: Brozek, bfathat and wgt; the Brozek/bfathat and Brozek/wgt correlations are equal, and squaring that correlation reproduces the reported R-squared]

Comparing SLR Models Using Goodness of Fit Metrics

For the applied econometrician, the journey is as important as the final destination. And there's plenty of science and art along the way. Each regression analysis tells you something and leads to the next analysis. Ultimately, you typically converge on your preferred model, but there was plenty of learning along the way. And that learning definitely informed your analysis.

As part of the learning process, econometricians are always comparing results across models, and making decisions about how to move forward. We'll have a lot more to say about that later, but given that we are in the midst of Goodness of Fit metrics, why not say a few words about how to use those metrics to compare models?

You can use R² and MSE/RMSE to compare the performance of different SLR models, but only to a limited extent. And you must be careful! If the different models all have the same LHS data (so the y's are the same in the different models, both in terms of number and in terms of values), then the SSTs and S_yy's will be the same across the models, and you can compare R²'s and MSE/RMSE's. Under these conditions the R²'s and the MSE/RMSE's will move in opposite directions, since:

   R²₁ > R²₂  ⟺  1 − RSS₁/SST > 1 − RSS₂/SST  ⟺  RSS₁ < RSS₂  ⟺  RSS₁/(n−2) < RSS₂/(n−2)  ⟺  MSE₁ < MSE₂
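That equivalence is easy to see in action. The sketch below (hypothetical data, not the bodyfat dataset) fits two SLR models to the same LHS variable: because the models share y, they share SST and n, so the one with the higher R² necessarily has the lower MSE:

```python
# Two SLR models with identical LHS data: higher R^2 must go with
# lower MSE, since both are monotone in RSS for fixed SST and n.
# Hypothetical data; x2 is a deliberately noisier predictor of y.
y  = [5.0, 7.1, 6.8, 9.4, 10.2, 12.1, 11.5, 14.0]
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0, 8.0, 7.0]
n = len(y)
ybar = sum(y) / n
SST = sum((yi - ybar) ** 2 for yi in y)

def fit_rss(x):
    """Residual sum of squares from an OLS SLR of y on x."""
    xbar = sum(x) / n
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
         sum((xi - xbar) ** 2 for xi in x)
    b0 = ybar - b1 * xbar
    return sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

rss1, rss2 = fit_rss(x1), fit_rss(x2)
r2_1, r2_2 = 1 - rss1 / SST, 1 - rss2 / SST
mse1, mse2 = rss1 / (n - 2), rss2 / (n - 2)

# The R^2 ranking and the MSE ranking must agree (in opposite directions)
assert (r2_1 > r2_2) == (mse1 < mse2)
```

Here x1 tracks y closely while x2 scrambles neighboring values, so Model 1 wins on both metrics simultaneously; picking by R² and picking by MSE can never disagree when the y's are the same.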
So under these conditions, models with higher R²'s (and lower MSE/RMSE's) do a better job of fitting the data, and in that sense are preferable. But: If the y's are not the same across the different models, then R²'s and MSE/RMSE's are not directly comparable and won't tell you much unless you make some adjustments.

Here are some examples using the bodyfat dataset.

Example 1: Predicting Brozek with four different SLR Models. Here are the results from four SLR models, where Brozek is the common LHS variable and hgt, wgt, abd, and BMI are the candidate RHS variables:

[esttab output omitted: columns (1)-(4) regress Brozek on hgt, wgt, abd and BMI respectively, reporting coefficients with t statistics in parentheses (* p<0.05, ** p<0.01, *** p<0.001), plus N, R-sq, rss and rmse for each model]

The syntax for the esttab output was: esttab, r2 scalar(rss rmse) compress

The options in the esttab statement: r2 displays R², rss displays the RSS's, rmse displays RMSE, and compress compresses the output so it is not as wide and fits better on the page.
Notice that R² increases as you go from hgt, to wgt, to abd, and then decreases with BMI. And as advertised, RMSE moves in exactly the opposite direction. Looking across the four models, abd (waist size) has the most explanatory power (highest R²'s and lowest MSE/RMSE's), BMI is in second place, wgt is a bit behind BMI, and hgt trails the field by a hefty margin.

Example 2: Taking ln's and mixing and matching. In this example, take ln's of Brozek and abd and run four models, mixing and matching. In Models (1) and (2) Brozek is first regressed on abd, and then on lnabd; in Models (3) and (4) this is repeated with lnBrozek now the dependent variable. Here are the results:

. esttab, r2 scalar(rss rmse) compress

[esttab output omitted: Models (1) and (2) have Brozek as the LHS variable (RHS: abd, then lnabd); Models (3) and (4) have lnBrozek as the LHS variable (RHS: abd, then lnabd); coefficients with t statistics in parentheses (* p<0.05, ** p<0.01, *** p<0.001), plus N, R-sq, rss and rmse]

(Note that the RSS's have been manually added to the table; you'll learn why below.)

It is tempting to say that one of the first two Models is the best because it has the highest R², or maybe you think that Model (4) is the best because it has the lowest MSE/RMSE. Perhaps the different recommendations should be your first clue that R²'s and MSE/RMSE's might not, under these circumstances, tell you the best model. The R²'s and MSE/RMSE's in Models (1) and (2) are comparable to one another (since they have the same LHS variable), and the R²'s and MSE/RMSE's in Models (3) and (4) are also comparable to one another (they also have the same LHS variable). But you cannot, without additional computations, compare the first two Models to the last two Models, because they have different LHS variables.
So one of Models (1) and (2) performs better than the other, and (4) does better than (3), but don't you dare try to compare across the two pairs without additional computations. And besides, if you tried to do that, you'd pick one model on the basis of higher R²'s, or maybe another on the basis of lower MSE/RMSE's. Comparability across models with differing LHS variables is clearly an issue. Now you see why the RSS's are in the results table?

Inference/Precision: Standard Errors and t Stats

As mentioned above, more formal use of the standard error to assess precision of parameter estimation awaits the discussion of inference, confidence intervals and hypothesis testing. However, there are some useful rules of thumb for assessing precision using standard errors. We now turn to those.

Standard Errors: Standard errors (se's) provide us with a measure of precision in the estimation of the unknown parameters. Knowing the se alone, however, is typically not very helpful, since it is often difficult to know whether a particular standard error is small or large. We will circumvent this shortcoming by creating a t stat, which effectively standardizes the standard error, and gives us a metric that is more readily interpretable.

Before turning to the t stat, let's first define the standard error. In OLS/SLR models, the standard error associated with the slope estimate is defined by:

   se_β̂₁ = RMSE / √(Σ(xᵢ − x̄)²) = RMSE / (S_x·√(n−1))

Perhaps not surprisingly, the standard error is:

   - increasing in RMSE (reported standard errors will be smaller with models that do a better job of fitting the data),
   - decreasing in n (more observations will lead to smaller reported standard errors), and
   - decreasing in the variance of x (this is perhaps less intuitive, but increased variance in your RHS variable is a good thing and will lead to smaller reported standard errors).

(The proof of this is very straightforward and will come later. We almost never pay attention to the standard error of the constant term.)
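The two forms of the standard-error formula are numerically identical, since Σ(xᵢ − x̄)² = (n−1)·S_x². A quick sketch with a hypothetical dataset:

```python
import math

# Compute se(beta1_hat) = RMSE / sqrt(sum((x_i - xbar)^2)) for an
# SLR fit, and check the equivalent form RMSE / (S_x * sqrt(n - 1)).
x = [3.0, 5.0, 6.0, 8.0, 10.0, 12.0, 13.0]
y = [4.2, 6.1, 6.5, 9.0, 10.8, 12.2, 13.9]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

sxx = sum((xi - xbar) ** 2 for xi in x)
beta1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
beta0 = ybar - beta1 * xbar

RSS = sum((yi - (beta0 + beta1 * xi)) ** 2 for xi, yi in zip(x, y))
RMSE = math.sqrt(RSS / (n - 2))

se_beta1 = RMSE / math.sqrt(sxx)
Sx = math.sqrt(sxx / (n - 1))            # sample s.d. of x
se_alt = RMSE / (Sx * math.sqrt(n - 1))  # equivalent form

assert abs(se_beta1 - se_alt) < 1e-12
t_stat = beta1 / se_beta1                # the t stat, defined next
```

Scaling RMSE down, adding observations, or spreading out the x's each shrinks se_beta1, exactly as the three bullets above describe.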
t-stat: Comparing the standard error to the estimated coefficient, β̂₁, often tells us something about how reliably we've estimated the unknown slope parameter, β₁. Before assessing reliability, though, we'll need to define one more term, the t stat:

   t_β̂₁ = β̂₁ / se_β̂₁

The absolute value of the t stat tells you the magnitude of the estimated slope coefficient, β̂₁, measured in units of standard errors. Once you know the t stat, you can apply some general rules of thumb to assess precision of estimation. In general, the larger the t stat, the greater the likely precision (as you'll see later, n also matters in assessing precision), so you should take comfort seeing high t stats, and fret over low ones. In terms of ranges and emotions, and assuming a sizable n:

   - if |t| > 2 or so, then you have likely done a pretty good job of estimating the unknown slope parameter, β₁,
   - if |t| < 1-ish, then you have likely done a not-so-good job of estimating β₁, and
   - for in-between magnitudes of |t|, while the results aren't as strong as you might like, there's hope and reason to believe that with further work your model will be something to brag about. So definitely no reason to lose hope!

Connection between t and R²: There's a connection between t_β̂₁, the measure of precision, and R², the measure of goodness of fit, as well as SSE and RSS:

   t²_β̂₁ = (n−2)·R²/(1 − R²) = (n−2)·SSE/RSS

Who knew that the Goodness-of-Fit and precision metrics were connected?

   Proof: t_β̂₁ = β̂₁/se_β̂₁ = β̂₁·√(Σ(xᵢ − x̄)²)/RMSE, and so t²_β̂₁ = β̂₁²·Σ(xᵢ − x̄)²/(RSS/(n−2)). We know from the proof of R² = ρ²_xy that SSE = β̂₁²·Σ(xᵢ − x̄)². And so t²_β̂₁ = (n−2)·SSE/RSS = (n−2)·(SSE/SST)/(RSS/SST) = (n−2)·R²/(1 − R²).

These equations make it clear that precision in estimation is a function of both R², how well the model fits the data, as well as the number of observations, n. It may not be so obvious, but this expression is increasing in n and R². And so ideally, both n and R² are large.
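The t/R² connection holds exactly for any SLR fit with a constant, so it can be verified on an arbitrary dataset. A sketch (hypothetical data):

```python
import math

# Verify t^2 = (n - 2) * R^2 / (1 - R^2) = (n - 2) * SSE / RSS
# for an SLR fit with a constant term.
x = [1.0, 2.0, 4.0, 5.0, 7.0, 8.0, 10.0, 11.0]
y = [2.0, 2.8, 4.1, 5.5, 6.2, 7.9, 9.1, 10.4]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

sxx = sum((xi - xbar) ** 2 for xi in x)
beta1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
beta0 = ybar - beta1 * xbar
yhat = [beta0 + beta1 * xi for xi in x]

SSE = sum((yh - ybar) ** 2 for yh in yhat)
RSS = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
SST = SSE + RSS
r2 = SSE / SST

RMSE = math.sqrt(RSS / (n - 2))
t = beta1 / (RMSE / math.sqrt(sxx))     # t stat for the slope

assert abs(t ** 2 - (n - 2) * r2 / (1 - r2)) < 1e-9
assert abs(t ** 2 - (n - 2) * SSE / RSS) < 1e-9
```

So given n, the whole coefficient table's t stat for the slope is already pinned down by the R² in the fit statistics, and vice versa.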
Note that since SSE + RSS = SST, the t stat will depend on how the SSTs are divided between SSEs and RSS's, since t²_β̂₁ will be proportional to SSE/RSS, for given n. The higher the SSE/RSS ratio, the greater the magnitude of the t stat.

Example: Model (1) from above:

. reg Brozek abd

[Stata regression output omitted: Number of obs = 252; the Source table reports the Model, Residual and Total SS, df and MS, with F(1, 250), Prob > F, R-squared, Adj R-squared and Root MSE; the coefficient table reports the abd and _cons coefficients with Std. Err., t, P>|t| and 95% confidence intervals]

Applying the formulas with n = 252: t²_β̂₁ = (n−2)·R²/(1 − R²) = 250·R²/(1 − R²), and equivalently t²_β̂₁ = 250·SSE/RSS; both reproduce the reported t stat for the abd variable.

Importance of n and R²: If you have high R² but low n, or high n (lots of observations) but poor fit (low R²), then it's likely that your slope estimate is not so precise. But a healthy R² together with lots of observations means that you have likely done a nice job estimating the unknown parameter, β₁. So:

   - low n and low R²: bad news; get back to work
   - (low n and high R²) or (high n and low R²): still not so great
   - high n and high R²: well done!
ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented
More informationHere is the rationale: If X and y have a strong positive relationship to one another, then ( x x) will tend to be positive when ( y y)
Secton 1.5 Correlaton In the prevous sectons, we looked at regresson and the value r was a measurement of how much of the varaton n y can be attrbuted to the lnear relatonshp between y and x. In ths secton,
More informationa. (All your answers should be in the letter!
Econ 301 Blkent Unversty Taskn Econometrcs Department of Economcs Md Term Exam I November 8, 015 Name For each hypothess testng n the exam complete the followng steps: Indcate the test statstc, ts crtcal
More informationChapter 14 Simple Linear Regression
Chapter 4 Smple Lnear Regresson Chapter 4 - Smple Lnear Regresson Manageral decsons often are based on the relatonshp between two or more varables. Regresson analss can be used to develop an equaton showng
More informationNegative Binomial Regression
STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...
More information28. SIMPLE LINEAR REGRESSION III
8. SIMPLE LINEAR REGRESSION III Ftted Values and Resduals US Domestc Beers: Calores vs. % Alcohol To each observed x, there corresponds a y-value on the ftted lne, y ˆ = βˆ + βˆ x. The are called ftted
More informationChapter 2 - The Simple Linear Regression Model S =0. e i is a random error. S β2 β. This is a minimization problem. Solution is a calculus exercise.
Chapter - The Smple Lnear Regresson Model The lnear regresson equaton s: where y + = β + β e for =,..., y and are observable varables e s a random error How can an estmaton rule be constructed for the
More information2016 Wiley. Study Session 2: Ethical and Professional Standards Application
6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton
More informationLecture 4 Hypothesis Testing
Lecture 4 Hypothess Testng We may wsh to test pror hypotheses about the coeffcents we estmate. We can use the estmates to test whether the data rejects our hypothess. An example mght be that we wsh to
More informationx = , so that calculated
Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to
More informationReminder: Nested models. Lecture 9: Interactions, Quadratic terms and Splines. Effect Modification. Model 1
Lecture 9: Interactons, Quadratc terms and Splnes An Manchakul amancha@jhsph.edu 3 Aprl 7 Remnder: Nested models Parent model contans one set of varables Extended model adds one or more new varables to
More informationProperties of Least Squares
Week 3 3.1 Smple Lnear Regresson Model 3. Propertes of Least Squares Estmators Y Y β 1 + β X + u weekly famly expendtures X weekly famly ncome For a gven level of x, the expected level of food expendtures
More informationStatistics MINITAB - Lab 2
Statstcs 20080 MINITAB - Lab 2 1. Smple Lnear Regresson In smple lnear regresson we attempt to model a lnear relatonshp between two varables wth a straght lne and make statstcal nferences concernng that
More informationPolynomial Regression Models
LINEAR REGRESSION ANALYSIS MODULE XII Lecture - 6 Polynomal Regresson Models Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Test of sgnfcance To test the sgnfcance
More informationStatistics II Final Exam 26/6/18
Statstcs II Fnal Exam 26/6/18 Academc Year 2017/18 Solutons Exam duraton: 2 h 30 mn 1. (3 ponts) A town hall s conductng a study to determne the amount of leftover food produced by the restaurants n the
More informationBiostatistics 360 F&t Tests and Intervals in Regression 1
Bostatstcs 360 F&t Tests and Intervals n Regresson ORIGIN Model: Y = X + Corrected Sums of Squares: X X bar where: s the y ntercept of the regresson lne (translaton) s the slope of the regresson lne (scalng
More informationBIO Lab 2: TWO-LEVEL NORMAL MODELS with school children popularity data
Lab : TWO-LEVEL NORMAL MODELS wth school chldren popularty data Purpose: Introduce basc two-level models for normally dstrbuted responses usng STATA. In partcular, we dscuss Random ntercept models wthout
More informationPubH 7405: REGRESSION ANALYSIS. SLR: INFERENCES, Part II
PubH 7405: REGRESSION ANALSIS SLR: INFERENCES, Part II We cover te topc of nference n two sessons; te frst sesson focused on nferences concernng te slope and te ntercept; ts s a contnuaton on estmatng
More informatione i is a random error
Chapter - The Smple Lnear Regresson Model The lnear regresson equaton s: where + β + β e for,..., and are observable varables e s a random error How can an estmaton rule be constructed for the unknown
More informationsince [1-( 0+ 1x1i+ 2x2 i)] [ 0+ 1x1i+ assumed to be a reasonable approximation
Econ 388 R. Butler 204 revsons Lecture 4 Dummy Dependent Varables I. Lnear Probablty Model: the Regresson model wth a dummy varables as the dependent varable assumpton, mplcaton regular multple regresson
More informationSTAT 3340 Assignment 1 solutions. 1. Find the equation of the line which passes through the points (1,1) and (4,5).
(out of 15 ponts) STAT 3340 Assgnment 1 solutons (10) (10) 1. Fnd the equaton of the lne whch passes through the ponts (1,1) and (4,5). β 1 = (5 1)/(4 1) = 4/3 equaton for the lne s y y 0 = β 1 (x x 0
More informationOrdinary Least Squares (OLS): Multiple Linear Regression (MLR) Assessment I What s New? & Goodness-of-Fit
Ordinary Least Squares (OLS): Multiple Linear egression (ML) Assessment I What s New? & Goodness-of-Fit Introduction OLS: A Quick Comparison of SL and ML Assessment Not much that's new! ML Goodness-of-Fit:
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More informationNANYANG TECHNOLOGICAL UNIVERSITY SEMESTER I EXAMINATION MTH352/MH3510 Regression Analysis
NANYANG TECHNOLOGICAL UNIVERSITY SEMESTER I EXAMINATION 014-015 MTH35/MH3510 Regresson Analyss December 014 TIME ALLOWED: HOURS INSTRUCTIONS TO CANDIDATES 1. Ths examnaton paper contans FOUR (4) questons
More informationLab 4: Two-level Random Intercept Model
BIO 656 Lab4 009 Lab 4: Two-level Random Intercept Model Data: Peak expratory flow rate (pefr) measured twce, usng two dfferent nstruments, for 17 subjects. (from Chapter 1 of Multlevel and Longtudnal
More informationThe Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction
ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also
More informationChapter 5: Hypothesis Tests, Confidence Intervals & Gauss-Markov Result
Chapter 5: Hypothess Tests, Confdence Intervals & Gauss-Markov Result 1-1 Outlne 1. The standard error of 2. Hypothess tests concernng β 1 3. Confdence ntervals for β 1 4. Regresson when X s bnary 5. Heteroskedastcty
More informationOutline. Zero Conditional mean. I. Motivation. 3. Multiple Regression Analysis: Estimation. Read Wooldridge (2013), Chapter 3.
Outlne 3. Multple Regresson Analyss: Estmaton I. Motvaton II. Mechancs and Interpretaton of OLS Read Wooldrdge (013), Chapter 3. III. Expected Values of the OLS IV. Varances of the OLS V. The Gauss Markov
More informationCorrelation and Regression. Correlation 9.1. Correlation. Chapter 9
Chapter 9 Correlaton and Regresson 9. Correlaton Correlaton A correlaton s a relatonshp between two varables. The data can be represented b the ordered pars (, ) where s the ndependent (or eplanator) varable,
More informationCorrelation and Regression
Correlaton and Regresson otes prepared by Pamela Peterson Drake Index Basc terms and concepts... Smple regresson...5 Multple Regresson...3 Regresson termnology...0 Regresson formulas... Basc terms and
More informationChapter 4: Regression With One Regressor
Chapter 4: Regresson Wth One Regressor Copyrght 2011 Pearson Addson-Wesley. All rghts reserved. 1-1 Outlne 1. Fttng a lne to data 2. The ordnary least squares (OLS) lne/regresson 3. Measures of ft 4. Populaton
More informationLaboratory 3: Method of Least Squares
Laboratory 3: Method of Least Squares Introducton Consder the graph of expermental data n Fgure 1. In ths experment x s the ndependent varable and y the dependent varable. Clearly they are correlated wth
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 31 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 6. Rdge regresson The OLSE s the best lnear unbased
More informationJanuary Examinations 2015
24/5 Canddates Only January Examnatons 25 DO NOT OPEN THE QUESTION PAPER UNTIL INSTRUCTED TO DO SO BY THE CHIEF INVIGILATOR STUDENT CANDIDATE NO.. Department Module Code Module Ttle Exam Duraton (n words)
More informationChapter 3. Two-Variable Regression Model: The Problem of Estimation
Chapter 3. Two-Varable Regresson Model: The Problem of Estmaton Ordnary Least Squares Method (OLS) Recall that, PRF: Y = β 1 + β X + u Thus, snce PRF s not drectly observable, t s estmated by SRF; that
More informationBasically, if you have a dummy dependent variable you will be estimating a probability.
ECON 497: Lecture Notes 13 Page 1 of 1 Metropoltan State Unversty ECON 497: Research and Forecastng Lecture Notes 13 Dummy Dependent Varable Technques Studenmund Chapter 13 Bascally, f you have a dummy
More informationj) = 1 (note sigma notation) ii. Continuous random variable (e.g. Normal distribution) 1. density function: f ( x) 0 and f ( x) dx = 1
Random varables Measure of central tendences and varablty (means and varances) Jont densty functons and ndependence Measures of assocaton (covarance and correlaton) Interestng result Condtonal dstrbutons
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More informationECON 351* -- Note 23: Tests for Coefficient Differences: Examples Introduction. Sample data: A random sample of 534 paid employees.
Model and Data ECON 35* -- NOTE 3 Tests for Coeffcent Dfferences: Examples. Introducton Sample data: A random sample of 534 pad employees. Varable defntons: W hourly wage rate of employee ; lnw the natural
More informationRegression Analysis. Regression Analysis
Regresson Analyss Smple Regresson Multvarate Regresson Stepwse Regresson Replcaton and Predcton Error 1 Regresson Analyss In general, we "ft" a model by mnmzng a metrc that represents the error. n mn (y
More informationEcon Statistical Properties of the OLS estimator. Sanjaya DeSilva
Econ 39 - Statstcal Propertes of the OLS estmator Sanjaya DeSlva September, 008 1 Overvew Recall that the true regresson model s Y = β 0 + β 1 X + u (1) Applyng the OLS method to a sample of data, we estmate
More informationThis column is a continuation of our previous column
Comparson of Goodness of Ft Statstcs for Lnear Regresson, Part II The authors contnue ther dscusson of the correlaton coeffcent n developng a calbraton for quanttatve analyss. Jerome Workman Jr. and Howard
More informationLecture 2: Prelude to the big shrink
Lecture 2: Prelude to the bg shrnk Last tme A slght detour wth vsualzaton tools (hey, t was the frst day... why not start out wth somethng pretty to look at?) Then, we consdered a smple 120a-style regresson
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have
More informationStatistics Chapter 4
Statstcs Chapter 4 "There are three knds of les: les, damned les, and statstcs." Benjamn Dsrael, 1895 (Brtsh statesman) Gaussan Dstrbuton, 4-1 If a measurement s repeated many tmes a statstcal treatment
More information4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA
4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected
More informationis the calculated value of the dependent variable at point i. The best parameters have values that minimize the squares of the errors
Multple Lnear and Polynomal Regresson wth Statstcal Analyss Gven a set of data of measured (or observed) values of a dependent varable: y versus n ndependent varables x 1, x, x n, multple lnear regresson
More informationDO NOT OPEN THE QUESTION PAPER UNTIL INSTRUCTED TO DO SO BY THE CHIEF INVIGILATOR. Introductory Econometrics 1 hour 30 minutes
25/6 Canddates Only January Examnatons 26 Student Number: Desk Number:...... DO NOT OPEN THE QUESTION PAPER UNTIL INSTRUCTED TO DO SO BY THE CHIEF INVIGILATOR Department Module Code Module Ttle Exam Duraton
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationTopic 23 - Randomized Complete Block Designs (RCBD)
Topc 3 ANOVA (III) 3-1 Topc 3 - Randomzed Complete Block Desgns (RCBD) Defn: A Randomzed Complete Block Desgn s a varant of the completely randomzed desgn (CRD) that we recently learned. In ths desgn,
More informationDr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur
Analyss of Varance and Desgn of Experment-I MODULE VII LECTURE - 3 ANALYSIS OF COVARIANCE Dr Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Any scentfc experment s performed
More informationChapter 14 Simple Linear Regression Page 1. Introduction to regression analysis 14-2
Chapter 4 Smple Lnear Regresson Page. Introducton to regresson analyss 4- The Regresson Equaton. Lnear Functons 4-4 3. Estmaton and nterpretaton of model parameters 4-6 4. Inference on the model parameters
More informationThe SAS program I used to obtain the analyses for my answers is given below.
Homework 1 Answer sheet Page 1 The SAS program I used to obtan the analyses for my answers s gven below. dm'log;clear;output;clear'; *************************************************************; *** EXST7034
More informationLimited Dependent Variables
Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages
More informationChapter 5 Multilevel Models
Chapter 5 Multlevel Models 5.1 Cross-sectonal multlevel models 5.1.1 Two-level models 5.1.2 Multple level models 5.1.3 Multple level modelng n other felds 5.2 Longtudnal multlevel models 5.2.1 Two-level
More informationProfessor Chris Murray. Midterm Exam
Econ 7 Econometrcs Sprng 4 Professor Chrs Murray McElhnney D cjmurray@uh.edu Mdterm Exam Wrte your answers on one sde of the blank whte paper that I have gven you.. Do not wrte your answers on ths exam.
More informationChapter 6. Supplemental Text Material
Chapter 6. Supplemental Text Materal S6-. actor Effect Estmates are Least Squares Estmates We have gven heurstc or ntutve explanatons of how the estmates of the factor effects are obtaned n the textboo.
More informationQuestion 1 carries a weight of 25%; question 2 carries 20%; question 3 carries 25%; and question 4 carries 30%.
UNIVERSITY OF EAST ANGLIA School of Economcs Man Seres PGT Examnaton 017-18 FINANCIAL ECONOMETRICS ECO-7009A Tme allowed: HOURS Answer ALL FOUR questons. Queston 1 carres a weght of 5%; queston carres
More informationJAB Chain. Long-tail claims development. ASTIN - September 2005 B.Verdier A. Klinger
JAB Chan Long-tal clams development ASTIN - September 2005 B.Verder A. Klnger Outlne Chan Ladder : comments A frst soluton: Munch Chan Ladder JAB Chan Chan Ladder: Comments Black lne: average pad to ncurred
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More information