The Classical Linear Regression Model and Least Squares
Greene Chapters: 2, 3
R scripts: mod1_a, mod1_b, mod1_c, mod1_d

Regression Analysis

Much work in applied econometrics is based on regression analysis.

Definition: A regression is a stipulated relationship between the expectation of a dependent variable and a combination of explanatory data and unknown parameters. Generically:

E(y | x, β) = f(x, β)    (1)

We can perform regression analysis in fully parametric and semi-parametric frameworks. We usually write the regression model as:

y = E(y | x, β) + [y − E(y | x, β)] = E(y | x, β) + ε = f(x, β) + ε    (2)

where ε is an "error term" that includes everything we can't explain about y even knowing x and (hypothetically) the population parameter vector β.

What's in the error term?

The error term is also called the "disturbance term". It can include some or all of the following components:

1. Specification error: There are factors other than x that drive the observed variability in y, and/or the functional relationship between y and x is incorrect. The former is essentially unavoidable in most applications, and does not necessarily jeopardize our ability to estimate the effect of x on y. The latter is a bigger problem. It can produce inefficient and, in some cases, misleading estimates.

2. Measurement error (also known as "errors in variables"): Usually a minor problem if we mismeasure y, but potentially a serious issue if we mismeasure elements of x.

3. Human erratic behavior: The stipulated relationship between y and x is generally correct, but units of observation (usually people) slightly change their behavior from case to case, introducing deviations from the stipulated relationship. This is a truly random part of the error term, and in most cases does not jeopardize estimation results.

In a parametric framework, we would assign a density to ε (such as ε ~ n(0, σ²), where n stands for "normally distributed"). In that case, σ² becomes an additional parameter that has to be estimated. In a semi-parametric framework we would simply state that E(ε) = 0 to preserve the relationship in (1).

A special case of a regression model is a model that is linear in β, i.e. one that can be written as

y = x'β + ε    (3)
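The structure in (2)-(3) can be simulated in a few lines. This is a minimal sketch in Python rather than one of the course's R scripts; the coefficient values and seed are invented for illustration. It shows that y differs from the conditional mean x'β exactly by the error term:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical population parameters (illustrative values only)
beta = np.array([2.0, 0.5])                            # intercept and one slope
x = np.column_stack([np.ones(n), rng.normal(size=n)])  # x_i = (1, x_i2)'

eps = rng.normal(0.0, 1.0, size=n)   # error term with E(eps) = 0
y = x @ beta + eps                   # y_i = x_i'beta + eps_i

# By construction, y deviates from its conditional mean x'beta exactly by eps
print(np.allclose(y - x @ beta, eps))  # True
```

Here the linear-in-β form of (3) is what matters; the distribution chosen for the error term is just one parametric possibility.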
Such models can be estimated via a technique called "Least Squares" (LS). If certain assumptions on ε hold, the model is called the "Classical Linear Regression Model" (CLRM), and estimation can proceed via "Ordinary Least Squares" (OLS), the topic of the next section.

R practice: Building a regression model for study time

R script mod1_a illustrates how to build a regression relationship with simulated data. The script also shows the gain in accuracy as the regression model chosen by the researcher approaches the true model.

Notation

Assume you have a sample of wages and explanatory variables for n individuals. For a single observation i (person, firm, household, etc.), the CLRM can be written as:

y_i = β_1 x_i1 + β_2 x_i2 + … + β_k x_ik + ε_i = x_i'β + ε_i    (4)

The β-terms are unknown population parameters. In a regression context, they are often referred to as "coefficients". In practice, x_i1 is often simply the number "1", which then makes β_1 the intercept or "constant term" of the regression model. The remaining β-terms are "slope coefficients". By convention, I will index the β-terms from 1 through k. (In some texts, the index runs from 0 to k, which implies that the total number of coefficients is k+1.)

We can stack these equations for all i = 1…n individuals:

y_1 = β_1 x_11 + β_2 x_12 + … + β_k x_1k + ε_1
y_2 = β_1 x_21 + β_2 x_22 + … + β_k x_2k + ε_2
⋮
y_n = β_1 x_n1 + β_2 x_n2 + … + β_k x_nk + ε_n    (5)

Next, we note that the left-hand side and the error terms can be compactly expressed as vectors y = [y_1 y_2 … y_n]' and ε = [ε_1 ε_2 … ε_n]'. For a given individual, the right-hand side (minus the error term) can be written as an inner product of vectors:

β_1 x_i1 + β_2 x_i2 + … + β_k x_ik = x_i'β, where x_i = [x_i1 x_i2 … x_ik]' and β = [β_1 β_2 … β_k]'    (6)

We can now write the entire system of linear equations as:
y = Xβ + ε, where X = [x_1' ; x_2' ; … ; x_n'] =

| x_11  x_12  …  x_1k |
| x_21  x_22  …  x_2k |
|  ⋮     ⋮         ⋮  |
| x_n1  x_n2  …  x_nk |    (7)

Assumptions

The CLRM rests on 4 key assumptions. An optional 5th assumption (normality of the error term) can be added in cases where a fully parametric approach is desired or needed.

Assumption 1: The stipulated relationship is linear in parameters (i.e. in β).

Note: This does not mean that the elements of the data matrix X also have to be linear. It's OK to have log-terms, squared terms, etc. in the X matrix. Certain nonlinear forms of y are also permissible, as long as they can be transformed to linearity. For example, the following common model is a permissible CLRM:

y = exp(x'β + ε)  ⇒  ln(y) = x'β + ε    (8)

The log-transformation on both sides preserves linearity. If all of the x's are logged as well, the model is called "log-log" or "log-linear". If all of the x's are linear, the model is called "semi-log". Naturally, a mix of logged and linear regressors is also possible. As we will see below, logging either side changes the interpretation of marginal effects.

Assumption 2: The data matrix X has to be "full rank". X has dimensions n × k, n > k, so "full rank" implies that X has rank k. In words: all variables in X are linearly independent. If this wasn't the case, there would be an infinite number of solutions for the estimate of β.

Example: see R script mod1_b

Assumption 3: The expectation of all error terms, conditional on X, is zero, i.e.

E(ε | X) = [E(ε_1 | X)  E(ε_2 | X)  …  E(ε_n | X)]' = 0    (9)

In words, this implies that no element of the data matrix contains any information about the expectation of any error term, so ε is a true vector of "unknown" or "unobservable" effects. If this assumption is violated, our estimated parameters will "pick up" undesired effects from "the stuff in the error term", thus producing misleading results. This is the infamous "omitted variable (OV)" problem, which comes in many sizes & shapes. We will talk about OVs at length later in this course. As discussed in Greene, ch. 2, Assumption 3 also implies the following relationships:
E[ε] = 0,  Cov[ε, X] = 0,  E[y | X] = Xβ    (10)

Assumption 3 is often misinterpreted and can be confusing. Here is a closer look. First, we need to distinguish between the distribution of the error term and the distribution of the explanatory variables. In the CLRM, we assume that each observation (say a given individual i) is assigned her own error term ε_i. In addition, error terms are independent from each other. Thus, we can focus on one error term at a time when examining the underpinnings of Assumption 3. In the CLRM, by Assumption 4, this error term follows a distribution with mean zero and variance σ², and all error terms ε_i, i = 1…n, follow the exact same distribution. Thus we can think of all error terms as realizations from the same distribution. However, we will later encounter models where each ε_i has its own distribution (i.e. a violation of Assumption 4), but Assumption 3 may still hold. Thus, for this discussion of Assumption 3, we will make no further assumption regarding the exact distribution of ε_i and how it relates to the distribution of ε_j, j ≠ i, other than invoking independence. In other words, it's OK to think of each ε_i as following its own distribution.

Now consider some explanatory variable x_k (example: "SAT score"). Generally, we consider each explanatory variable as a true random variable in the statistical sense, following some underlying distribution, potentially jointly with other regressors. Individual i's realization of this variable is x_ik. The set of realizations of x_ik for the sample at hand is the vector x_k. For a cross-sectional regression, where each observation corresponds to a different individual (or "cross-sectional unit"), it makes sense to think of each x_ik, i = 1…n, in x_k as a realization from the population distribution f(x_k). This distribution is shared by all individuals in the sample. In time series analysis this is not as clear-cut, since x_tk (example: water use in month t) could feasibly follow a different distribution than x_(t−1)k (water use in month t−1). Thus, as with the error term, we can assume without loss of generality that each x_ik, i = 1…n, k = 1…K, follows its own (unspecified) distribution within the context of this discussion.

So let's first focus on a single draw of the error term, ε_i, and a single draw of some regressor for person i, x_ik. For simplicity, let's also assume that there are no other explanatory variables in the model (so we don't have to worry about potential correlation of x_ik with x_il, l ≠ k). This means we can drop the "k" subscript for now. To talk about conditional expectations in a meaningful way, we must assume that ε_i and x_i (potentially) follow a joint distribution, say f(ε_i, x_i). This joint density can always be written as a product of a marginal and a conditional density, i.e.

f(ε_i, x_i) = f(x_i) f(ε_i | x_i) = f(ε_i) f(x_i | ε_i)    (11)
The marginal density, marginal (unconditional) expectation, and conditional expectation of ε_i are given as follows:

f(ε_i) = ∫_x f(ε_i, x_i) dx_i
E(ε_i) = ∫_ε ε_i f(ε_i) dε_i
E(ε_i | x_i) = ∫_ε ε_i f(ε_i | x_i) dε_i    (12)

We can now derive an important relationship, called the "Law of Iterated Expectations" (see Greene p. 16 and p. 17):

E(ε_i) = ∫_ε ε_i f(ε_i) dε_i = ∫_ε ε_i [ ∫_x f(ε_i | x_i) f(x_i) dx_i ] dε_i
       = ∫_x [ ∫_ε ε_i f(ε_i | x_i) dε_i ] f(x_i) dx_i = E_x[ E(ε_i | x_i) ]    (13)

This says that the unconditional expectation of ε_i can be interpreted as the expectation, over x, of the conditional expectation of ε_i given x.

Now back to Assumption 3: It implies that E(ε_i | x_i) = 0. By (13), this automatically implies that the unconditional expectation is zero as well, since E(ε_i) = E_x[ E(ε_i | x_i) ] = E_x[0] = 0.

Assumption 3 is often interpreted as "ε_i and x_i are uncorrelated", i.e. have a covariance of zero. Let's check if this is correct, i.e. if E(ε_i | x_i) = 0 implies cov(ε_i, x_i) = 0:

cov(ε_i, x_i) = E[(ε_i − E(ε_i))(x_i − E(x_i))]
             = E(ε_i x_i) − E(ε_i) E(x_i)
             = E_x[ E(ε_i x_i | x_i) ] − E_x[ E(ε_i | x_i) ] E(x_i)        (by the LIE)
             = E_x[ x_i E(ε_i | x_i) ] − E_x[ E(ε_i | x_i) ] E(x_i)    (14)
Naturally, if E(ε_i | x_i) = 0, the last line in (14) will always equal zero, so indeed we have cov(ε_i, x_i) = 0. Note that the last line also corresponds to Greene's Theorem B.2 on p. 17.

Is the reverse true, i.e. does cov(ε_i, x_i) = 0 imply E(ε_i | x_i) = 0? Not necessarily: whenever E(ε_i | x_i) = E(ε_i), i.e. the conditional expectation is some constant, we have

cov(ε_i, x_i) = E_x[ x_i E(ε_i) ] − E_x[ E(ε_i) ] E(x_i) = E(ε_i) E(x_i) − E(ε_i) E(x_i) = 0    (15)

regardless of the value of E(ε_i). Naturally, a nonzero E(ε_i) can only be absorbed by the constant term in a given regression model.

From Assumption 3 it also follows that

E(y_i | x_i) = E(x_i'β + ε_i | x_i) = x_i'β + E(ε_i | x_i) = x_i'β    (16)

i.e. we have indeed a true regression relationship. For this reason Assumption 3 is often referred to as the "regression" assumption.

Now let's go a step further and assume that there are two explanatory variables in the model, x_ik and x_il, and that these two regressors are potentially correlated with each other and with the error term. (For example, x_il could be "first-year GPA", x_ik could be "SAT score", and both could, in theory, be correlated with the unobserved variable "ability", or "spunk".) Thus, we are now considering the tri-variate density f(ε_i, x_ik, x_il). This does not pose any additional complications. Assumption 3 simply requires that

E(ε_i | x_ik, x_il) = ∫_ε ε_i f(ε_i | x_ik, x_il) dε_i = 0    (17)

As before, this implies E(ε_i) = 0, since E(ε_i) = E_{x_k, x_l}[ E(ε_i | x_ik, x_il) ] = 0. By analogy to our derivation above, the covariance between ε_i and each of the two regressors must be zero as well.

Pushing this one last step further, we now extend Assumption 3 also to observations on regressors for other individuals in the sample. Consider a second individual, j, for whom we observe realizations for the same two regressors, x_jk and x_jl. This simply extends the exposition to a five-variate joint density, i.e. f(ε_i, x_ik, x_il, x_jk, x_jl), and Assumption 3 requires that

E(ε_i | x_ik, x_il, x_jk, x_jl) = ∫_ε ε_i f(ε_i | x_ik, x_il, x_jk, x_jl) dε_i = 0

All other results follow by analogy. Thus, we can compactly express Assumption 3 for any generic CLRM as shown in (9).
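The two directions discussed around (14)-(15) can be checked by simulation. The following is a hedged Python sketch (the course scripts are in R; the distributions and sample size are invented for illustration): when E(ε | x) = 0, the sample covariance shrinks toward zero even if the variance of ε depends on x, and a nonzero but constant conditional mean also yields zero covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Draw a regressor from some arbitrary, illustrative distribution
x = rng.gamma(shape=2.0, scale=1.5, size=n)

# Case 1: E(eps | x) = 0, but V(eps | x) depends on x (Assumption 4 violated,
# Assumption 3 holds) -> sample covariance should be near zero
eps1 = rng.normal(0.0, 1.0 + 0.5 * x)
cov1 = np.cov(eps1, x)[0, 1]

# Case 2: E(eps | x) = E(eps) = 1, a nonzero constant -> covariance is still
# zero, so zero covariance does NOT imply a zero conditional mean
eps2 = 1.0 + rng.normal(0.0, 1.0, size=n)
cov2 = np.cov(eps2, x)[0, 1]

print(cov1, cov2)  # both within sampling noise of zero
```

Case 2 mirrors the point after (15): the constant E(ε) = 1 would simply be absorbed by the intercept of a regression model.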
R script mod1_c illustrates the implications of Assumption 3.

Assumption 4

Homoskedasticity (= equal variance) of error terms:

V(ε_i) = σ²,  i = 1…n    (18)

So we assume all n error terms are "drawn" from the same distribution with mean 0 and variance σ².

Error terms are uncorrelated ("non-autocorrelated"):

Cov(ε_i, ε_j) = E[(ε_i − E(ε_i))(ε_j − E(ε_j))] = E(ε_i ε_j) = 0,  i ≠ j    (19)

Violations of these assumptions are called "heteroskedasticity" and "autocorrelation", respectively. We'll address these issues later in this course. For the full model, Assumption 4 can be written as

V(ε) = E(εε') =
| σ²  0  …  0  |
| 0   σ² …  0  |
| ⋮       ⋱    |
| 0   0  …  σ² |  = σ²·I_n    (20)

Disturbances that satisfy this property are called "spherical". For the dependent variable, this implies:

V(y | X) = V(Xβ + ε) = V(Xβ) + V(ε) + 2·Cov(Xβ, ε) = V(ε) = σ²·I    (21)

Assumption 5

The error terms follow a normal distribution, i.e.

ε_i ~ n(0, σ²)  or  ε ~ n(0, σ²·I)    (22)

For the dependent variable, this implies:

y ~ n(Xβ, σ²·I)    (23)

This normality assumption is not necessary to derive parameter estimates for β, but it is needed to derive some exact statistical results for estimators, and to construct certain test statistics.
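The "spherical" covariance structure in (20) can be made concrete with a quick Monte Carlo sketch (illustrative Python, not one of the course's R scripts; n, σ, and the number of replications are arbitrary choices): averaging εε' over many independent draws recovers σ²·I_n.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, sigma = 4, 500_000, 2.0

# reps independent draws of an n-vector of spherical disturbances
eps = rng.normal(0.0, sigma, size=(reps, n))

# Sample analogue of E[eps eps'] -- approximates sigma^2 * I_n
V = (eps.T @ eps) / reps
print(np.round(V, 1))  # approximately sigma^2 on the diagonal, 0 elsewhere
```

Equal diagonal entries reflect homoskedasticity (18); zero off-diagonal entries reflect non-autocorrelation (19).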
The statistical properties of X

In some applications, the elements of the observed data X can be viewed as fixed (e.g. in experimental settings where all "data" are perfectly controlled), but in most cases X itself will follow some distribution in the underlying population. If so, we view the entire analysis as "conditional on X", which means we essentially "freeze" X at the observed values. We understand that a different X might produce different results, but we hope that these differences would vanish with the (hypothetical) collection of more and more data. What's important is Assumption 3: whichever mechanism generates X is unrelated to the mechanism that generates the error terms.

Estimation

The population model at the observation level is given by (4). We seek an estimator for β with desirable statistical properties. We will denote this estimator generically as b, and the resulting estimate for E[y_i | x_i] as ŷ_i, i.e.

Ê[y_i | x_i] = ŷ_i = x_i'b    (24)

The term ŷ_i is often called the "fitted value". The difference between the actually observed y_i and the estimated (or "predicted") ŷ_i is called the "residual" e_i, i.e.

e_i = y_i − ŷ_i = y_i − x_i'b    (25)

In passing, we note that this implies the following equalities:

y_i = x_i'β + ε_i = x_i'b + e_i  and  y = Xβ + ε = Xb + e    (26)

To find b, a natural strategy might be to get ŷ_i as close to y_i as possible for all observations. By (25) this implies "making the residuals as small as possible". A more popular criterion is to minimize the sum of squared residuals, which penalizes larger residuals relatively more. See slide 2 of the PowerPoint slides "OLSregression" on our course web page, and also Greene Fig. 3.1.

Denoting a candidate for b as b0, our optimization goal is thus given as

min_{b0} e0'e0 = (y − Xb0)'(y − Xb0) = y'y − b0'X'y − y'Xb0 + b0'X'Xb0 = y'y − 2·b0'X'y + b0'X'Xb0    (27)

We solve for b by deriving the First Order Conditions (FOC), using "matrix calculus" (see Matrix Algebra notes):

∂(e0'e0)/∂b0 = −2X'y + 2X'Xb0 = 0
X'Xb = X'y    (28)
The second line is often referred to as the Least Squares "Normal Equation". If X is full rank, X'X will be symmetric and full rank as well, thus its inverse will exist, and we can solve (28) for b to obtain

b = (X'X)⁻¹X'y    (29)

As shown in Greene p. 21, this solution is indeed a minimum and unique, since the second derivative yields a positive definite matrix. The solution is called the Ordinary Least Squares (OLS) estimator. See script mod1_b for an example of OLS using wage data.

Orthogonality and Projection

In Euclidean space, "orthogonality" between vectors a and b (some b, not our OLS estimator) arises if the two vectors form a right angle. Mathematically, this implies that their inner product is zero, i.e. a'b = 0. Orthogonality is also possible between a matrix X and a conformable vector a. In that case, it implies that every column of X is orthogonal to a, and that X'a = 0.

In contrast, population orthogonality between two variables implies that the two variables are perfectly uncorrelated for the entire population, in the sense that the expectation of their inner product is zero (essentially our Assumption 3 for the CLRM, for any regressor x and the error term). However, for any finite set of draws of these variables (i.e. for a given sample), orthogonality in the Euclidean sense is unlikely to hold (though we would expect the inner product to be relatively close to zero).

Perfect (Euclidean) orthogonality between explanatory variables is generally a very desirable property in regression analysis. In reality, it hardly ever occurs (the best we can hope for is "low correlation"), but in experimental settings researchers can "design" orthogonal explanatory variables. We will visit the topic of "orthogonality" many times in this course.

For the OLS model, we have orthogonality between the data matrix X and the residuals by construction. Starting with the normal equation, we have:

X'Xb = X'y  ⇒  X'(y − Xb) = X'e = 0    (30)

Intuitively, this makes a lot of sense - the residuals should only contain what X couldn't explain about y.

If the first column of X is a vector of "1's" (i.e. we are estimating a regression model with a constant term - the standard case), we gain a few more interesting insights from the normal equation and relationships that flow from it:

(i) The residuals sum to zero. The first row of X'e = 0 corresponds to the column of ones, c_1 = [1 1 … 1]', so

c_1'e = Σ_i e_i = 0    (31)

(ii) The regression hyperplane passes through the data means:
Dividing the first of the normal equations X'Xb = X'y (the row corresponding to the constant term) by n yields

(1/n)·c_1'Xb = (1/n)·c_1'y  ⇒  x̄'b = ȳ, where x̄ = [1  (1/n)Σ_i x_i2  …  (1/n)Σ_i x_ik]'    (32)

Next, let's derive two important matrices in regression analysis:

e = y − Xb = y − X(X'X)⁻¹X'y = [I − X(X'X)⁻¹X']·y = My
ŷ = y − e = [I − M]·y = X(X'X)⁻¹X'y = Py    (33)

In regression jargon, M is known as the "residual maker" and P is the "projection matrix". Note that both matrices are a function of X. So with each new X, we get a new M and P. Also, both matrices are symmetric and idempotent (idempotent means M·M = M). Intuitively, just remember that M turns y into "everything X couldn't explain", i.e. the residuals, and P turns y into "everything X can explain", i.e. the fitted values. Some useful relationships that follow:

MX = 0,  PX = X,  PM = MP = 0,  y = Py + My,  e'e = e'y = y'My,  y'y = ŷ'ŷ + e'e    (34)

Partitioned Regression and Partial Regression

R script mod1_d

Question: What computations are involved in obtaining, in isolation, a subset of the coefficients of a multiple regression model (while still controlling for all other effects)? In the early days of regression analysis, the resulting strategy of "partitioned regression" was an important "shortcut" to reduce computation time. Today, we show this primarily to gain further insights into the mechanics of OLS.

First, assume we have data matrix X and corresponding coefficient vector β, but we are primarily interested in the effects of a subset of variables (columns) of X. Denote this subset as X_2 and the remaining variables as X_1. Conformably, also split the coefficient vector into parts β_1 and β_2. We can then write the CLRM as:

y = Xβ + ε = X_1β_1 + X_2β_2 + ε    (35)
Our objective is to find the OLS solution for b_2. The normal equations can now be written as

| X_1'X_1  X_1'X_2 | | b_1 |   | X_1'y |
| X_2'X_1  X_2'X_2 | | b_2 | = | X_2'y |    (36)

First, use the first set of equations to solve for b_1 in terms of X_1, X_2, y, and b_2:

X_1'X_1·b_1 + X_1'X_2·b_2 = X_1'y  ⇒  b_1 = (X_1'X_1)⁻¹X_1'(y − X_2b_2)    (37)

Note that if all columns of X_1 are orthogonal to all columns of X_2 (a rare occurrence, to say the least...), X_1'X_2 = 0, we have the usual OLS normal equations, and b_1 = (X_1'X_1)⁻¹X_1'y. This implies that we could then estimate b_1 simply by regressing y on X_1. Intuitively, this further implies that there is no correlation between X_1 and X_2, so we don't have to "control" for the effects of X_2 when estimating the effects of X_1 on y. (Note: however, we would still lose efficiency and explanatory power for our model if X_2 has something to say about y.)

Combining (37) with the second set of normal equations from (36) yields the solution for b_2:

X_2'X_1·b_1 + X_2'X_2·b_2 = X_2'y
X_2'X_1(X_1'X_1)⁻¹X_1'(y − X_2b_2) + X_2'X_2·b_2 = X_2'y
b_2 = (X_2'M_1X_2)⁻¹X_2'M_1y, where M_1 = I − X_1(X_1'X_1)⁻¹X_1'    (38)

Here M_1 is the residual-maker matrix in a regression of anything on X_1. Note again that this reduces to the standard OLS formula if X_1'X_2 = 0. We can alternatively write the partitioned regression as

b_2 = (X_2*'X_2*)⁻¹X_2*'y*, where X_2* = M_1X_2 and y* = M_1y    (39)

Thus, we can think of this as a two-step process: In step 1 we regress all columns of X_2 on X_1 and collect the residuals (i.e. get M_1X_2). These residuals will then contain what's left of the information in X_2 after netting out the effect of X_1. We do the same for y by regressing it on X_1 and collecting the residuals. In step 2 we then regress the purged-of-X_1-effects version of y on the purged-of-X_1-effects version of X_2 to get the pure effect of X_2 on y, i.e. b_2. This is the famous "Frisch-Waugh Theorem" (Greene p. 28).

The process can easily be extended to derive the partitioned result for an individual coefficient. In that case, just think of X_2 as a single column (say "z"), and of b_2 as a scalar, say "c". We then get

c = (z*'z*)⁻¹z*'y*, where z* = M_1z and y* = M_1y    (40)

If you consider z to be a new variable to be added to X_1, you get the result in Corollary 3.2.1, p. 34.
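The two-step logic of (38)-(39) can be verified directly. Here is a hedged sketch in Python (rather than the course's R script mod1_d) on simulated, deliberately correlated regressors; all numerical values are invented for illustration. The partialled-out estimate reproduces the b_2 block of the full OLS solution:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# X1 = [constant, w1]; X2 = one column correlated with w1 by construction
w1 = rng.normal(size=n)
x2 = 0.6 * w1 + rng.normal(size=n)
X1 = np.column_stack([np.ones(n), w1])
X2 = x2.reshape(-1, 1)
X = np.hstack([X1, X2])

y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

# Full OLS: b = (X'X)^{-1} X'y
b_full = np.linalg.solve(X.T @ X, X.T @ y)

# Frisch-Waugh: residual maker M1 = I - X1 (X1'X1)^{-1} X1'
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
X2_star, y_star = M1 @ X2, M1 @ y          # purged-of-X1 versions
b2 = np.linalg.solve(X2_star.T @ X2_star, X2_star.T @ y_star)

print(np.allclose(b_full[2], b2[0]))  # True
```

Because x2 is correlated with w1, regressing y on X2 alone would not reproduce b_full[2]; the purging step is what makes the two answers agree.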
R script mod1_d illustrates the Partitioned Regression approach. Note that the estimated standard deviation for the error term, and thus the standard errors and t-values, for the partitioned regression results are a bit different from those obtained in the full model. This difference lies solely in the (n − k) correction in the denominator of the s² formula, since "k" differs between the full and partitioned model. Naturally, this difference diminishes with increasing sample size, as n will vastly outweigh k in either case. (You can verify this by dropping the "minus k" correction in the s² formula for both models - the s.e.'s and t-values will now be identical.)

Goodness of Fit and Analysis of Variance (ANOVA)

A special case of the residual-maker matrix M_1 is the (n by n) deviation-from-the-mean matrix M0:

M0 = I − (1/n)·ιι'    (41)

where I is the n by n identity matrix and ι is an n by 1 vector of ones. Intuitively, this matrix "regresses" any vector, say x, or matrix, say X, against a column of ones and captures the resulting residuals. These residuals will simply be the deviation of each observation in x or X from its respective column mean. Example:

M0x = [x_1 − x̄  x_2 − x̄  …  x_n − x̄]'
M0X = M0·[c_1  c_2  …  c_k] = [c_1 − c̄_1ι  c_2 − c̄_2ι  …  c_k − c̄_kι]    (42)

We can now compactly write our CLRM in "deviation from the mean" form as

M0y = M0Xb + M0e = M0Xb + e    (43)

The second equality follows from the fact that the mean of the residual vector is zero, i.e. M0e = e (if the model includes an intercept). Noting that it then follows that e'M0X = e'X = 0, we can derive the sum of squared differences of the elements of y from their mean, known as the "Total Sum of Squares" (SST), as

y'M0y = (M0Xb + e)'(M0Xb + e) = b'X'M0Xb + 2·e'M0Xb + e'e = b'X'M0Xb + e'e = ŷ'M0ŷ + e'e    (44)

If the regression includes an intercept (= constant term), we have mean(ŷ) = ȳ and ŷ'M0ŷ = Σ_i (ŷ_i − ȳ)², and we can interpret (44) as Total Sum of Squares = Regression Sum of Squares + Error Sum of Squares, or SST = SSR + SSE. Note: the last term is also known as the "sum of squared residuals", but since the resulting acronym would also be SSR, we'll stick with Greene's terminology of "Error Sum of Squares".

Intuition: Recall that the fundamental aim of regression analysis is to explain observed variation in y via observed variation in X. Equation (44) says that we can break down the variation in y into two components: a part that's explained by the variation of X (SSR), and a part that our regression cannot explain (SSE). It is thus natural to use the ratio SSR/SST as a goodness-of-fit measure for our regression model.
This is the "Coefficient of Determination", better known as "R²".
R² = SSR/SST = b'X'M0Xb / y'M0y = 1 − e'e / y'M0y    (45)

This statistic lies between 0 and 1. A value of "0" implies that X has nothing to say about y. In a 2-dimensional model, this would be a flat regression line through the mean of y. A value of 1 implies a "perfect fit" - y actually lies in the hyperplane described by the columns of X.

As discussed in more detail in Greene, Ch. 3, a problem with this fit measure is that it never declines as more explanatory variables, even meaningless ones, are added to the model. In other words, the conventional R² measure doesn't penalize for superfluous regressors, or, alternatively, doesn't reward parsimony. We therefore prefer the "Adjusted R²", given as

R̄² = 1 − [e'e/(n − k)] / [y'M0y/(n − 1)] = 1 − [(n − 1)/(n − k)]·[e'e / y'M0y]    (46)

Thus, as the number of regressors (including the intercept) k increases relative to sample size n, the last term increases, and the adjusted fit deteriorates. The ordinary and adjusted R² are related as follows:

R̄² = 1 − [(n − 1)/(n − k)]·(1 − R²)    (47)

Note that the preceding derivations break down when the regression model does not include a constant term. For models without a constant term, the interpretation of R² becomes ambiguous, and this measure should not be used. Instead, you may want to consider the following alternative goodness-of-fit measures, the Akaike Information Criterion (AIC) and the Schwartz or Bayesian Information Criterion (BIC). Both work for models without a constant term and, for that matter, also for nonlinear regression models. They also reward parsimony. They are usually given in log-form as follows:

log AIC = log(e'e/n) + 2K/n
log BIC = log(e'e/n) + K·log(n)/n    (48)

(see Greene ch. 5 for more details). Note that both statistics DECLINE as model fit improves. (So a smaller number implies a better fit.)

Some software packages produce a summary of squared deviations in a table accompanying the main regression results. This table is often referred to as the "Analysis of Variance" (ANOVA) Table. In STATA, for example, "Model" SS means SSR, "Residual" SS means SSE, and "Total" SS means SST. See script mod1_d for basic goodness-of-fit derivations.
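The decomposition (44) and the fit measures (45), (46), and (48) can be reproduced numerically. The following is an illustrative Python sketch (not one of the course's R scripts; the simulated data and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 3

# Simulated data with an intercept in the first column
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.8, -0.4]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficients
e = y - X @ b                           # residuals
yhat = X @ b                            # fitted values

SST = ((y - y.mean()) ** 2).sum()       # y'M0y
SSR = ((yhat - yhat.mean()) ** 2).sum() # yhat'M0 yhat
SSE = e @ e                             # e'e

R2 = SSR / SST
R2_adj = 1.0 - (SSE / (n - k)) / (SST / (n - 1))

# Log-form information criteria as in (48), with K = k regressors
logAIC = np.log(SSE / n) + 2 * k / n
logBIC = np.log(SSE / n) + k * np.log(n) / n

# With an intercept, the decomposition SST = SSR + SSE holds exactly
print(np.isclose(SST, SSR + SSE))  # True
```

If the intercept column were dropped, yhat.mean() would no longer equal y.mean() and the decomposition (and hence R²) would break down, as noted above.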
Finally, if you wish to use R² to choose between two models, the following must hold:

1. The vector of dependent observations must be identical for both models. (So you can't compare a model that is linear in y to one that uses, say, log y. Also, sample sizes must be identical.)
2. The models have to be linear-in-parameters. For nonlinear regression models use AIC, BIC, or related measures.

3. Both models must have a constant term, and the mean of the error term in the population model must be zero.
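To close, the core OLS mechanics of this section -- the estimator (29), the orthogonality condition (30), the zero-sum residuals (31), and the M and P matrices of (33)-(34) -- can all be verified in a few lines. This is a hedged Python sketch on simulated data (the course's own examples use R; all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
n, k = 100, 3

X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([2.0, 1.0, -0.5]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y            # b = (X'X)^{-1} X'y, eq. (29)
e = y - X @ b                    # residuals

P = X @ XtX_inv @ X.T            # projection matrix
M = np.eye(n) - P                # residual maker

checks = [
    np.allclose(X.T @ e, 0),       # X'e = 0: residuals orthogonal to X
    np.isclose(e.sum(), 0),        # residuals sum to zero (intercept included)
    np.allclose(M @ M, M),         # M is idempotent
    np.allclose(P @ X, X),         # PX = X
    np.allclose(P @ y + M @ y, y), # y = Py + My
]
print(all(checks))  # True
```

All of these hold by construction for any full-rank X, which is why they are algebraic properties of OLS rather than statistical assumptions.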
More informationChapter 4: Regression With One Regressor
Chapter 4: Regresson Wth One Regressor Copyrght 2011 Pearson Addson-Wesley. All rghts reserved. 1-1 Outlne 1. Fttng a lne to data 2. The ordnary least squares (OLS) lne/regresson 3. Measures of ft 4. Populaton
More informationx = , so that calculated
Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to
More informationβ0 + β1xi. You are interested in estimating the unknown parameters β
Ordnary Least Squares (OLS): Smple Lnear Regresson (SLR) Analytcs The SLR Setup Sample Statstcs Ordnary Least Squares (OLS): FOCs and SOCs Back to OLS and Sample Statstcs Predctons (and Resduals) wth OLS
More informationDr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur
Analyss of Varance and Desgn of Exerments-I MODULE III LECTURE - 2 EXPERIMENTAL DESIGN MODELS Dr. Shalabh Deartment of Mathematcs and Statstcs Indan Insttute of Technology Kanur 2 We consder the models
More informationStatistics for Business and Economics
Statstcs for Busness and Economcs Chapter 11 Smple Regresson Copyrght 010 Pearson Educaton, Inc. Publshng as Prentce Hall Ch. 11-1 11.1 Overvew of Lnear Models n An equaton can be ft to show the best lnear
More informationLecture 3 Specification
Lecture 3 Specfcaton 1 OLS Estmaton - Assumptons CLM Assumptons (A1) DGP: y = X + s correctly specfed. (A) E[ X] = 0 (A3) Var[ X] = σ I T (A4) X has full column rank rank(x)=k-, where T k. Q: What happens
More informationHowever, since P is a symmetric idempotent matrix, of P are either 0 or 1 [Eigen-values
Fall 007 Soluton to Mdterm Examnaton STAT 7 Dr. Goel. [0 ponts] For the general lnear model = X + ε, wth uncorrelated errors havng mean zero and varance σ, suppose that the desgn matrx X s not necessarly
More informationLecture 4 Hypothesis Testing
Lecture 4 Hypothess Testng We may wsh to test pror hypotheses about the coeffcents we estmate. We can use the estmates to test whether the data rejects our hypothess. An example mght be that we wsh to
More informationBasically, if you have a dummy dependent variable you will be estimating a probability.
ECON 497: Lecture Notes 13 Page 1 of 1 Metropoltan State Unversty ECON 497: Research and Forecastng Lecture Notes 13 Dummy Dependent Varable Technques Studenmund Chapter 13 Bascally, f you have a dummy
More informationCorrelation and Regression
Correlaton and Regresson otes prepared by Pamela Peterson Drake Index Basc terms and concepts... Smple regresson...5 Multple Regresson...3 Regresson termnology...0 Regresson formulas... Basc terms and
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More informationComparison of Regression Lines
STATGRAPHICS Rev. 9/13/2013 Comparson of Regresson Lnes Summary... 1 Data Input... 3 Analyss Summary... 4 Plot of Ftted Model... 6 Condtonal Sums of Squares... 6 Analyss Optons... 7 Forecasts... 8 Confdence
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More informationTHE ROYAL STATISTICAL SOCIETY 2006 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE
THE ROYAL STATISTICAL SOCIETY 6 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE PAPER I STATISTICAL THEORY The Socety provdes these solutons to assst canddates preparng for the eamnatons n future years and for
More informationSTAT 3008 Applied Regression Analysis
STAT 3008 Appled Regresson Analyss Tutoral : Smple Lnear Regresson LAI Chun He Department of Statstcs, The Chnese Unversty of Hong Kong 1 Model Assumpton To quantfy the relatonshp between two factors,
More information18. SIMPLE LINEAR REGRESSION III
8. SIMPLE LINEAR REGRESSION III US Domestc Beers: Calores vs. % Alcohol Ftted Values and Resduals To each observed x, there corresponds a y-value on the ftted lne, y ˆ ˆ = α + x. The are called ftted values.
More informationLecture 3: Probability Distributions
Lecture 3: Probablty Dstrbutons Random Varables Let us begn by defnng a sample space as a set of outcomes from an experment. We denote ths by S. A random varable s a functon whch maps outcomes nto the
More informationLecture 2: Prelude to the big shrink
Lecture 2: Prelude to the bg shrnk Last tme A slght detour wth vsualzaton tools (hey, t was the frst day... why not start out wth somethng pretty to look at?) Then, we consdered a smple 120a-style regresson
More informationChapter 8 Indicator Variables
Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n
More informationChapter 7 Generalized and Weighted Least Squares Estimation. In this method, the deviation between the observed and expected values of
Chapter 7 Generalzed and Weghted Least Squares Estmaton The usual lnear regresson model assumes that all the random error components are dentcally and ndependently dstrbuted wth constant varance. When
More informationChapter 14 Simple Linear Regression
Chapter 4 Smple Lnear Regresson Chapter 4 - Smple Lnear Regresson Manageral decsons often are based on the relatonshp between two or more varables. Regresson analss can be used to develop an equaton showng
More information2016 Wiley. Study Session 2: Ethical and Professional Standards Application
6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton
More informationLectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix
Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could
More information28. SIMPLE LINEAR REGRESSION III
8. SIMPLE LINEAR REGRESSION III Ftted Values and Resduals US Domestc Beers: Calores vs. % Alcohol To each observed x, there corresponds a y-value on the ftted lne, y ˆ = βˆ + βˆ x. The are called ftted
More informationwhere I = (n x n) diagonal identity matrix with diagonal elements = 1 and off-diagonal elements = 0; and σ 2 e = variance of (Y X).
11.4.1 Estmaton of Multple Regresson Coeffcents In multple lnear regresson, we essentally solve n equatons for the p unnown parameters. hus n must e equal to or greater than p and n practce n should e
More informationLecture 3 Stat102, Spring 2007
Lecture 3 Stat0, Sprng 007 Chapter 3. 3.: Introducton to regresson analyss Lnear regresson as a descrptve technque The least-squares equatons Chapter 3.3 Samplng dstrbuton of b 0, b. Contnued n net lecture
More information/ n ) are compared. The logic is: if the two
STAT C141, Sprng 2005 Lecture 13 Two sample tests One sample tests: examples of goodness of ft tests, where we are testng whether our data supports predctons. Two sample tests: called as tests of ndependence
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationLimited Dependent Variables
Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages
More informationSIMPLE LINEAR REGRESSION
Smple Lnear Regresson and Correlaton Introducton Prevousl, our attenton has been focused on one varable whch we desgnated b x. Frequentl, t s desrable to learn somethng about the relatonshp between two
More informationFirst Year Examination Department of Statistics, University of Florida
Frst Year Examnaton Department of Statstcs, Unversty of Florda May 7, 010, 8:00 am - 1:00 noon Instructons: 1. You have four hours to answer questons n ths examnaton.. You must show your work to receve
More informationA Comparative Study for Estimation Parameters in Panel Data Model
A Comparatve Study for Estmaton Parameters n Panel Data Model Ahmed H. Youssef and Mohamed R. Abonazel hs paper examnes the panel data models when the regresson coeffcents are fxed random and mxed and
More informationY = β 0 + β 1 X 1 + β 2 X β k X k + ε
Chapter 3 Secton 3.1 Model Assumptons: Multple Regresson Model Predcton Equaton Std. Devaton of Error Correlaton Matrx Smple Lnear Regresson: 1.) Lnearty.) Constant Varance 3.) Independent Errors 4.) Normalty
More informationProfessor Chris Murray. Midterm Exam
Econ 7 Econometrcs Sprng 4 Professor Chrs Murray McElhnney D cjmurray@uh.edu Mdterm Exam Wrte your answers on one sde of the blank whte paper that I have gven you.. Do not wrte your answers on ths exam.
More informationMaximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models
ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Maxmum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models
More information7.1. Single classification analysis of variance (ANOVA) Why not use multiple 2-sample 2. When to use ANOVA
Sngle classfcaton analyss of varance (ANOVA) When to use ANOVA ANOVA models and parttonng sums of squares ANOVA: hypothess testng ANOVA: assumptons A non-parametrc alternatve: Kruskal-Walls ANOVA Power
More informationDiscussion of Extensions of the Gauss-Markov Theorem to the Case of Stochastic Regression Coefficients Ed Stanek
Dscusson of Extensons of the Gauss-arkov Theorem to the Case of Stochastc Regresson Coeffcents Ed Stanek Introducton Pfeffermann (984 dscusses extensons to the Gauss-arkov Theorem n settngs where regresson
More informationIntroduction to Dummy Variable Regressors. 1. An Example of Dummy Variable Regressors
ECONOMICS 5* -- Introducton to Dummy Varable Regressors ECON 5* -- Introducton to NOTE Introducton to Dummy Varable Regressors. An Example of Dummy Varable Regressors A model of North Amercan car prces
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationChat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980
MT07: Multvarate Statstcal Methods Mke Tso: emal mke.tso@manchester.ac.uk Webpage for notes: http://www.maths.manchester.ac.uk/~mkt/new_teachng.htm. Introducton to multvarate data. Books Chat eld, C. and
More informationLecture 6: Introduction to Linear Regression
Lecture 6: Introducton to Lnear Regresson An Manchakul amancha@jhsph.edu 24 Aprl 27 Lnear regresson: man dea Lnear regresson can be used to study an outcome as a lnear functon of a predctor Example: 6
More informationStatistics for Managers Using Microsoft Excel/SPSS Chapter 14 Multiple Regression Models
Statstcs for Managers Usng Mcrosoft Excel/SPSS Chapter 14 Multple Regresson Models 1999 Prentce-Hall, Inc. Chap. 14-1 Chapter Topcs The Multple Regresson Model Contrbuton of Indvdual Independent Varables
More informationResource Allocation and Decision Analysis (ECON 8010) Spring 2014 Foundations of Regression Analysis
Resource Allocaton and Decson Analss (ECON 800) Sprng 04 Foundatons of Regresson Analss Readng: Regresson Analss (ECON 800 Coursepak, Page 3) Defntons and Concepts: Regresson Analss statstcal technques
More informationJanuary Examinations 2015
24/5 Canddates Only January Examnatons 25 DO NOT OPEN THE QUESTION PAPER UNTIL INSTRUCTED TO DO SO BY THE CHIEF INVIGILATOR STUDENT CANDIDATE NO.. Department Module Code Module Ttle Exam Duraton (n words)
More information[ ] λ λ λ. Multicollinearity. multicollinearity Ragnar Frisch (1934) perfect exact. collinearity. multicollinearity. exact
Multcollnearty multcollnearty Ragnar Frsch (934 perfect exact collnearty multcollnearty K exact λ λ λ K K x+ x+ + x 0 0.. λ, λ, λk 0 0.. x perfect ntercorrelated λ λ λ x+ x+ + KxK + v 0 0.. v 3 y β + β
More informationNow we relax this assumption and allow that the error variance depends on the independent variables, i.e., heteroskedasticity
ECON 48 / WH Hong Heteroskedastcty. Consequences of Heteroskedastcty for OLS Assumpton MLR. 5: Homoskedastcty var ( u x ) = σ Now we relax ths assumpton and allow that the error varance depends on the
More informationEstimation: Part 2. Chapter GREG estimation
Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the
More informationInterval Estimation in the Classical Normal Linear Regression Model. 1. Introduction
ECONOMICS 35* -- NOTE 7 ECON 35* -- NOTE 7 Interval Estmaton n the Classcal Normal Lnear Regresson Model Ths note outlnes the basc elements of nterval estmaton n the Classcal Normal Lnear Regresson Model
More informationPolynomial Regression Models
LINEAR REGRESSION ANALYSIS MODULE XII Lecture - 6 Polynomal Regresson Models Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Test of sgnfcance To test the sgnfcance
More informationChapter 14 Simple Linear Regression Page 1. Introduction to regression analysis 14-2
Chapter 4 Smple Lnear Regresson Page. Introducton to regresson analyss 4- The Regresson Equaton. Lnear Functons 4-4 3. Estmaton and nterpretaton of model parameters 4-6 4. Inference on the model parameters
More informationThis column is a continuation of our previous column
Comparson of Goodness of Ft Statstcs for Lnear Regresson, Part II The authors contnue ther dscusson of the correlaton coeffcent n developng a calbraton for quanttatve analyss. Jerome Workman Jr. and Howard
More information[The following data appear in Wooldridge Q2.3.] The table below contains the ACT score and college GPA for eight college students.
PPOL 59-3 Problem Set Exercses n Smple Regresson Due n class /8/7 In ths problem set, you are asked to compute varous statstcs by hand to gve you a better sense of the mechancs of the Pearson correlaton
More informationSystems of Equations (SUR, GMM, and 3SLS)
Lecture otes on Advanced Econometrcs Takash Yamano Fall Semester 4 Lecture 4: Sstems of Equatons (SUR, MM, and 3SLS) Seemngl Unrelated Regresson (SUR) Model Consder a set of lnear equatons: $ + ɛ $ + ɛ
More informationChapter 12 Analysis of Covariance
Chapter Analyss of Covarance Any scentfc experment s performed to know somethng that s unknown about a group of treatments and to test certan hypothess about the correspondng treatment effect When varablty
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More informationEmpirical Methods for Corporate Finance. Identification
mprcal Methods for Corporate Fnance Identfcaton Causalt Ultmate goal of emprcal research n fnance s to establsh a causal relatonshp between varables.g. What s the mpact of tangblt on leverage?.g. What
More informationHere is the rationale: If X and y have a strong positive relationship to one another, then ( x x) will tend to be positive when ( y y)
Secton 1.5 Correlaton In the prevous sectons, we looked at regresson and the value r was a measurement of how much of the varaton n y can be attrbuted to the lnear relatonshp between y and x. In ths secton,
More informationLECTURE 9 CANONICAL CORRELATION ANALYSIS
LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of
More informationSalmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2
Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to
More informationβ0 + β1xi. You are interested in estimating the unknown parameters β
Revsed: v3 Ordnar Least Squares (OLS): Smple Lnear Regresson (SLR) Analtcs The SLR Setup Sample Statstcs Ordnar Least Squares (OLS): FOCs and SOCs Back to OLS and Sample Statstcs Predctons (and Resduals)
More informationDepartment of Statistics University of Toronto STA305H1S / 1004 HS Design and Analysis of Experiments Term Test - Winter Solution
Department of Statstcs Unversty of Toronto STA35HS / HS Desgn and Analyss of Experments Term Test - Wnter - Soluton February, Last Name: Frst Name: Student Number: Instructons: Tme: hours. Ads: a non-programmable
More informationIf we apply least squares to the transformed data we obtain. which yields the generalized least squares estimator of β, i.e.,
Econ 388 R. Butler 04 revsons lecture 6 WLS I. The Matrx Verson of Heteroskedastcty To llustrate ths n general, consder an error term wth varance-covarance matrx a n-by-n, nxn, matrx denoted as, nstead
More informationβ0 + β1xi and want to estimate the unknown
SLR Models Estmaton Those OLS Estmates Estmators (e ante) v. estmates (e post) The Smple Lnear Regresson (SLR) Condtons -4 An Asde: The Populaton Regresson Functon B and B are Lnear Estmators (condtonal
More informationEconomics 101. Lecture 4 - Equilibrium and Efficiency
Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of
More informationStatistics Chapter 4
Statstcs Chapter 4 "There are three knds of les: les, damned les, and statstcs." Benjamn Dsrael, 1895 (Brtsh statesman) Gaussan Dstrbuton, 4-1 If a measurement s repeated many tmes a statstcal treatment
More informationMaximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models
ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Mamum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models for
More informationsince [1-( 0+ 1x1i+ 2x2 i)] [ 0+ 1x1i+ assumed to be a reasonable approximation
Econ 388 R. Butler 204 revsons Lecture 4 Dummy Dependent Varables I. Lnear Probablty Model: the Regresson model wth a dummy varables as the dependent varable assumpton, mplcaton regular multple regresson
More informationStatistics II Final Exam 26/6/18
Statstcs II Fnal Exam 26/6/18 Academc Year 2017/18 Solutons Exam duraton: 2 h 30 mn 1. (3 ponts) A town hall s conductng a study to determne the amount of leftover food produced by the restaurants n the
More information