
Section 2: Simple Regression

What regression does

- Relationship between variables
  - Often in economics we believe that there is a (perhaps causal) relationship between two variables. Usually there are more than two, but that is deferred to another day. We call this the economic model.
  - Example: grade in Econ 201 vs. number of dorm-mates taking Econ 201.
- Functional form
  - Is the relationship linear? y = β₁ + β₂x
  - x is called the regressor. The linear form is a natural first assumption, unless theory rejects it.
  - β₂ is the slope, which determines whether the relationship between x and y is positive or negative.
  - β₁ is the intercept or constant term, which determines where the linear relationship intersects the y axis.
- Is it plausible that this is an exact, deterministic relationship?
  - Data (almost) never fit exactly along a line. Why?
    - Measurement error (incorrect definition or mismeasurement)
    - Other variables that affect y
    - The relationship is not purely linear
    - The relationship may be different for different observations
- So the economic model must be modeled as determining the expected value of y: E(y|x) = β₁ + β₂x. The conditional mean of y given x is β₁ + β₂x.
  - Note that this says nothing about other aspects of the conditional distribution (other than the expected value).
    - How does a change in x affect the variance of y? (We assume that it does not.)
    - How does a change in x affect the median, the 75th percentile, or any other aspect of the distribution of y? (If y is assumed to be normal, then everything about the distribution depends only on the mean and variance.)

- Other regression techniques (in particular, quantile regression) allow us to examine the impact of x on aspects of the distribution of y other than the mean.
- Adding an error term for a stochastic relationship gives us the actual value of y: y = β₁ + β₂x + e
  - The error term e captures all of the problems above.
  - The error term is considered to be a random variable and is not observed directly.
  - The variance of e is σ², which is the conditional variance of y given x: the variance of the conditional distribution of y given x.
  - The simplest, but often not valid, assumption is that the conditional variance is the same for all observations in our sample (homoskedasticity).
  - ∂E(y|x)/∂x = β₂, which means that the expected value of y increases by β₂ units when x increases by one unit.
- Does it matter which variable is on the left-hand side?
  - At one level, no: y = β₁ + β₂x + e implies x = γ₁ + γ₂y + v, where γ₁ = −β₁/β₂, γ₂ = 1/β₂, and v = −e/β₂.
  - For purposes of most estimators, yes: we shall see that a critically important assumption is that the error term is independent of the regressors, or exogenous variables. Are the errors shocks to y for given x, or shocks to x for given y? It might not seem like there is much difference, but the assumption is crucial to valid estimation.
  - Exogeneity: x is exogenous with respect to y if shocks to y do not affect x, i.e., y does not cause x.
- Where do the data come from? Sample and population.
  - We observe a sample of N observations on y and x.
  - Depending on context, these samples may be
    - drawn from a larger population, such as census data or surveys, or
    - generated by a specific data-generating process (DGP), as in time-series observations.
  - We usually would like to assume that the observations in our sample are statistically independent, or at least uncorrelated: cov(yᵢ, yⱼ) = 0, i ≠ j.
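The stochastic model above is also a recipe for generating data. A minimal Stata sketch of such a DGP, using illustrative (made-up) parameter values β₁ = 1, β₂ = 0.5, and error standard deviation 2:

    * Hypothetical DGP sketch: draw x and e, then construct y = b1 + b2*x + e
    clear
    set obs 100
    set seed 12345
    generate x = runiform(0, 10)        // regressor values
    generate e = rnormal(0, 2)          // error term, mean 0, sd 2
    generate y = 1 + 0.5*x + e          // the "true" regression line plus noise
    scatter y x                         // points scatter around, not on, the line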

- We will assume initially (for a few weeks) that the values of x are chosen as in an experiment: they are not random.
  - We will add random regressors soon and discover that they do not change things much as long as x is independent of e.
- Goals of regression
  - True regression line: the actual relationship in the population or DGP. True β₁, β₂, and f(e).
  - A sample of observations comes from drawing random realizations of e from f(e) and plotting points appropriately above and below the true regression line.
  - We want to find an estimated regression line that comes as close to the true regression line as possible, based on the observed sample of y and x pairs:
    - Estimate the values of the parameters β₁ and β₂
    - Estimate the properties of the probability distribution of the error term e
    - Make inferences about the above estimates
    - Use the estimates to make conditional forecasts of y
    - Determine the statistical reliability of these forecasts
- Summarizing the assumptions of the simple regression model
  - Assumption #0 (implicit and unstated): the model as specified applies to all units in the population and therefore to all units in the sample.
    - All units in the population under consideration have the same form of the relationship, the same coefficients, and error terms with the same properties.
    - If the United States and Mali are in the population, do they really have the same parameters?
    - This assumption underlies everything we do in econometrics, and thus it must always be considered very carefully in choosing a specification and a sample, and in deciding for what population the results carry implications.
  - SR1: y = β₁ + β₂x + e
  - SR2: E(e) = 0, so E(y) = β₁ + β₂x
    - Note that if x is random, we make these conditional expectations: E(e|x) = 0, E(y|x) = β₁ + β₂x.
  - SR3: var(e) = σ² = var(y)
    - If x is random, this becomes var(e|x) = σ² = var(y|x).
    - We should (and will) consider the more general case in which the variance varies across observations: heteroskedasticity.

  - SR4: cov(eᵢ, eⱼ) = cov(yᵢ, yⱼ) = 0
    - This, too, can be relaxed: autocorrelation.
  - SR5: x is non-random and takes on at least two values
    - We will allow random x later and see that E(e|x) = 0 implies that e must be uncorrelated with x.
  - SR6 (optional): e ~ N(0, σ²)
    - This is convenient, but not critical, since the central limit theorem assures that for a wide variety of distributions of e our estimators converge to normal as the sample gets large.
  - Example: assess the validity of these assumptions for the Econ 201 dorm-mate model.

Strategies for obtaining regression estimators

- What is an estimator?
  - A rule (formula) for calculating an estimate of a parameter (β₁, β₂, or σ²) based on the sample values y, x.
  - Estimators are often denoted by ^ over the symbol for the parameter being estimated: an estimator of β₂ might be denoted β̂₂.
- How might we estimate the coefficients of the simple regression model? Three strategies:
  - Method of least squares
  - Method of maximum likelihood
  - Method of moments
  - All three strategies, combined with the SR assumptions, lead to the same estimator rule: the ordinary least-squares regression estimator (b₁, b₂, s²).
- Method of least squares
  - Estimation strategy: make the sum of squared y-deviations ("residuals") of the observed values from the estimated regression line as small as possible.
  - Given coefficient estimates b₁, b₂, the residuals are defined as êᵢ = yᵢ − b₁ − b₂xᵢ, or êᵢ = yᵢ − ŷᵢ, with ŷᵢ = b₁ + b₂xᵢ.
  - Why not minimize the sum of the residuals?
    - We don't want the sum of residuals to be a large negative number: we could "minimize" the sum of residuals by making all residuals infinitely negative.
    - Many alternative lines make the sum of residuals zero (which is desirable) because positives and negatives cancel out.
  - Why use squares rather than absolute values to deal with the cancellation of positives and negatives?

    - The square function is continuously differentiable; the absolute-value function is not. Least-squares estimation is much easier than least-absolute-deviation estimation.
    - The prominence of the Gaussian (normal) distribution in nature and in statistical theory focuses us on the variance, which is the expectation of a square.
    - Least-absolute-deviation estimation is occasionally done (it is a special case of quantile regression), but it is not common.
    - Least-absolute-deviation regression gives less importance to large outliers than least squares, because squaring puts heavy weight on residuals with large absolute value: least squares tends to draw the regression line toward these points to eliminate large squared residuals.
  - Least-squares criterion function: S(b₁, b₂) = Σ êᵢ² = Σ (yᵢ − b₁ − b₂xᵢ)²
  - The least-squares estimator is the solution to min S. Since S is a continuously differentiable function of the estimated parameters, we can differentiate and set the partial derivatives equal to zero to get the least-squares normal equations:
    ∂S/∂b₁ = −2 Σ (yᵢ − b₁ − b₂xᵢ) = 0  ⟹  b₁ = ȳ − b₂x̄
    ∂S/∂b₂ = −2 Σ xᵢ(yᵢ − b₁ − b₂xᵢ) = 0  ⟹  Σ xᵢyᵢ − b₁ Σ xᵢ − b₂ Σ xᵢ² = 0
  - Note that the b₁ condition assures that the regression line passes through the point (x̄, ȳ): b₁ = ȳ − b₂x̄.
  - Substituting the intercept condition into the other normal equation and dividing by N gives
    b₂ = Σ (xᵢ − x̄)(yᵢ − ȳ) / Σ (xᵢ − x̄)² = σ̂_XY / σ̂_X²
  - The b₂ estimator is the sample covariance of x and y divided by the sample variance of x.
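These two formulas can be computed directly from sample moments. A minimal Stata sketch, assuming variables y and x are already in memory (the names are placeholders, not from the class dataset):

    * Hypothetical sketch: OLS slope and intercept from sample moments
    quietly correlate y x, covariance      // stores sample variances and covariance
    scalar b2 = r(cov_12)/r(Var_2)         // cov(y,x)/var(x); order follows the varlist
    quietly summarize y
    scalar ybar = r(mean)
    quietly summarize x
    scalar b1 = ybar - b2*r(mean)          // line passes through (xbar, ybar)
    display "b1 = " b1 "    b2 = " b2
    regress y x                            // should reproduce the same coefficients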

  - What happens if x is constant across all observations in our sample? The denominator is zero and we can't calculate b₂.
    - This is our first encounter with the problem of collinearity: if x is a constant, then x is a linear combination of the other regressor, the constant "one" that is multiplied by b₁.
    - Collinearity (or multicollinearity) will be more of a problem in multiple regression. If it is extreme (or perfect), it means that we can't calculate the slope estimates.
  - The equations above are the ordinary least-squares (OLS) coefficient estimators.
- Method of maximum likelihood
  - Consider the joint probability density function of y and x, f(y, x | β₁, β₂). The function is written as conditional on the coefficients to make explicit that the joint distribution of y and x is affected by the parameters.
  - This function measures the probability density of any particular combination of y and x values, which can be loosely thought of as how probable that outcome is, given the parameter values.
  - For a given set of parameters, some observations of y and x are less likely than others. For example, if β₁ = 0 and β₂ < 0, then it is less likely that we would see observations where y > 0 when x > 0 than observations with y < 0.
  - The idea of maximum-likelihood estimation is to choose the set of parameters that makes the likelihood of observing the sample that we actually have as high as possible.
  - The likelihood function is just the joint density function turned on its head:
    L(β₁, β₂ | x, y) = f(x, y | β₁, β₂)
  - If the observations are independent random draws from identical probability distributions (they are IID), then the overall sample density (likelihood) function is the product of the density (likelihood) functions of the individual observations:
    f(x₁, y₁, …, x_N, y_N | β₁, β₂) = f(x₁, y₁ | β₁, β₂) × ⋯ × f(x_N, y_N | β₁, β₂), so
    L(β₁, β₂ | x₁, y₁, …, x_N, y_N) = Π L(β₁, β₂ | xᵢ, yᵢ)
  - If the distribution of e conditional on x is Gaussian (normal) with mean zero and variance σ², then
    f(yᵢ | xᵢ, β₁, β₂, σ²) = L(β₁, β₂, σ² | yᵢ, xᵢ) = (1/√(2πσ²)) exp[ −(yᵢ − β₁ − β₂xᵢ)² / (2σ²) ]
  - Because of the exponential function, Gaussian likelihood functions are usually manipulated in logs.

  - Note that because the log function is monotonic, maximizing the log-likelihood function is equivalent to maximizing the likelihood function itself.
  - For an individual observation:
    ln L(β₁, β₂, σ² | yᵢ, xᵢ) = −½ ln(2πσ²) − (yᵢ − β₁ − β₂xᵢ)² / (2σ²)
  - Aggregating over the sample:
    ln L(β₁, β₂, σ² | y, x) = Σ ln L(β₁, β₂, σ² | yᵢ, xᵢ) = −(N/2) ln(2πσ²) − (1/(2σ²)) Σ (yᵢ − β₁ − β₂xᵢ)²
  - The only part of this expression that depends on β or on the sample is the final summation. Because of the negative sign, maximizing the likelihood function (with respect to β) is equivalent to minimizing the summation. But this summation is just the sum of squared residuals that we minimize in OLS.
  - Thus, OLS is the MLE if the distribution of e conditional on x is Gaussian with mean zero and constant variance, and if the observations are IID.
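One way to see the equivalence numerically: concentrating the Gaussian log likelihood with respect to σ² gives ln L = −(N/2)[ln(2πσ̂²) + 1] at the OLS coefficients, with σ̂² = RSS/N, and this is the log-likelihood value Stata stores after regress. A minimal sketch (y and x are placeholder variable names):

    * Hypothetical check that OLS maximizes the Gaussian likelihood
    quietly regress y x
    scalar sig2_ml = e(rss)/e(N)                         // ML estimate of the error variance
    display "log L at the OLS estimates: " -0.5*e(N)*(ln(2*_pi*sig2_ml) + 1)
    display "log L stored by regress:    " e(ll)         // the two should agree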

- Method of moments
  - Another general strategy for obtaining estimators is to set estimates of selected population moments equal to their sample counterparts. This is called the method of moments.
  - In order to employ the method of moments, we have to make some specific assumptions about the population/DGP moments.
  - Assume E(e) = 0. This means that the population/DGP mean of the error term is zero. Corresponding to this assumption about the population mean of e is the sample mean condition (1/N) Σ êᵢ = 0. Thus we set the sample mean to the value we have assumed for the population mean.
  - Assume cov(e, x) = 0, which is equivalent to E[(x − E(x)) e] = 0. Corresponding to this assumption about the population covariance between the regressor and the error term is the sample covariance condition (1/N) Σ (xᵢ − x̄) êᵢ = 0. Again, we set the sample moment to the zero value that we have assumed for the population moment.
  - Plugging the expression for the residual into the sample moment expressions above:
    (1/N) Σ (yᵢ − b₁ − b₂xᵢ) = 0  ⟹  b₁ = ȳ − b₂x̄
    This is the same as the intercept equation for the least-squares estimator above.
    (1/N) Σ (xᵢ − x̄)(yᵢ − b₁ − b₂xᵢ) = 0  ⟹  Σ (xᵢ − x̄)(yᵢ − ȳ) − b₂ Σ (xᵢ − x̄)² = 0  ⟹  b₂ = Σ (xᵢ − x̄)(yᵢ − ȳ) / Σ (xᵢ − x̄)²
    This is exactly the same equation as for the OLS estimator.
  - Thus, if we assume E(e) = 0 and cov(e, x) = 0 in the population, then the OLS estimator can be derived by the method of moments as well. (Note that both of these moment conditions follow from the extended assumption SR2 that E(e|x) = 0.)
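Both sample moment conditions hold exactly in the residuals of any fitted OLS regression, which is a quick way to see this equivalence in practice. A minimal Stata check, with y and x as placeholder variable names:

    * Hypothetical check of the two OLS moment conditions in the residuals
    quietly regress y x
    predict double ehat, residuals
    quietly summarize ehat
    display "mean of residuals (should be ~0):       " r(mean)
    quietly correlate x ehat, covariance
    display "sample cov(x, residual) (should be ~0): " r(cov_12)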

- Evaluating alternative estimators (not important for comparing the three strategies here, since all three give the same estimator, but is that estimator any good?)
  - Desirable criteria:
    - Unbiasedness: the estimator is on average equal to the true value: E(β̂) = β.
    - Small variance: the estimator is usually close to its expected value: var(β̂) = E[(β̂ − E(β̂))²].
    - Small RMSE can balance variance against bias: RMSE = √MSE, with MSE = E[(β̂ − β)²].
  - We will talk about BLUE estimators as minimum variance within the class of linear unbiased estimators.
- Sampling distribution of the OLS estimators
  - b₁ and b₂ are random variables: they are functions of the random variables y and e. We can think of the probability distribution of b₂ as occurring over repeated random samples from the underlying population or DGP.
  - In many (most) cases, we cannot derive the distribution of an estimator theoretically, but must rely on Monte Carlo simulation to estimate it. (See below.)
  - Because the OLS estimator (under our assumptions) is linear, we can derive its distribution.
  - We can write the OLS slope estimator as
    b₂ = Σ (xᵢ − x̄)(yᵢ − ȳ) / Σ (xᵢ − x̄)² = Σ (xᵢ − x̄)yᵢ / Σ (xᵢ − x̄)² = β₂ + Σ (xᵢ − x̄)eᵢ / Σ (xᵢ − x̄)²
    The last step substitutes yᵢ = β₁ + β₂xᵢ + eᵢ; the β₁ terms drop out because the weights (xᵢ − x̄) sum to zero.
  - For now, we are assuming that x is non-random, as in a controlled experiment. If x is fixed, then the only part of the formula above that is random is e.
  - The formula shows that the slope estimate is linear in e. This means that if e is Gaussian, then the slope estimate will also be Gaussian. Even if e is not Gaussian, the slope estimate will converge to a Gaussian distribution as long as some modest assumptions about its distribution are satisfied.
  - Because all of the x values are non-random, they can come outside when we take expectations, so
    E(b₂) = β₂ + Σ (xᵢ − x̄) E(eᵢ) / Σ (xᵢ − x̄)² = β₂
    The OLS slope estimator is unbiased.
  - What about the variance of b₂?

  - We will do the details of the analytical work in matrix form below because it is easier. The result is
    var(b₂) = E[(b₂ − β₂)²] = E[(Σ (xᵢ − x̄)eᵢ / Σ (xᵢ − x̄)²)²] = σ² / Σ (xᵢ − x̄)²
  - HGL equations 2.14 and 2.16 provide formulas for the variance of b₁ and the covariance between the coefficients:
    var(b₁) = σ² Σ xᵢ² / [N Σ (xᵢ − x̄)²]
    cov(b₁, b₂) = −σ² x̄ / Σ (xᵢ − x̄)²
  - Note that the covariance between the slope and intercept estimators is negative if x̄ > 0: overestimating one will tend to cause us to underestimate the other.
  - What determines the variance of b₂?
    - Smaller variance of the error ⟹ more precise estimators.
    - Larger number of observations ⟹ more precise estimators.
    - More dispersion of the x observations around their mean ⟹ more precise estimators.
  - What do we know about the overall probability distribution of b₂?
    - If assumption SR6 is satisfied and e is normal, then b₂ is also normal, because it is a linear function of the e variables, and linear functions of normally distributed variables are also normally distributed.
    - If assumption SR6 is not satisfied, then b₂ converges to a normal distribution as N gets large, provided some weak conditions on the distribution of e are satisfied.
  - These expressions are the true variance/covariance of the estimated coefficient vector. However, because we do not know σ², they are not of practical use to us.

  - We need an estimator for σ² in order to calculate a standard error of the coefficients: an estimate of their standard deviation.
  - The required estimate in the classical case is s² = (1/(N − 2)) Σ êᵢ². We divide by N − 2 because this is the number of degrees of freedom in our regression.
    - Degrees of freedom are a very important issue in econometrics. The term refers to how many data points are available in excess of the minimum number required to estimate the model.
    - In this case, it takes minimally two points to define a line, so the smallest possible number of observations for which we can fit a bivariate regression is 2. Any observations beyond 2 make it (generally) impossible to fit a line perfectly through all observations. Thus N − 2 is the number of degrees of freedom in the sample.
    - We always divide sums of squared residuals by the number of degrees of freedom in order to get unbiased variance estimates. For example, in calculating the sample variance we use s_z² = (1/(N − 1)) Σ (zᵢ − z̄)², because there are N − 1 degrees of freedom left after using one to calculate the mean. Here, we have two coefficients to estimate, not just one, so we divide by N − 2.
  - The standard error of each coefficient is the square root of the corresponding diagonal element of the estimated covariance matrix.
  - Note that the HGL text uses an alternative formula for σ̂² with a different divisor. (That estimator is biased because there are only N − 2 degrees of freedom in the residuals: 2 are used up in estimating the parameters. In large samples the two are equivalent.)
- How good is the OLS estimator? Is OLS the best estimator? Under what conditions?
  - Under the classical regression assumptions SR1–SR5 (but not necessarily SR6), the Gauss-Markov Theorem shows that the OLS estimator is BLUE: any other estimator that is unbiased and linear in e has higher variance than b₂.
  - Note that an "estimator" that always reports (b₁, b₂) = (5, 0) regardless of the data has zero variance, but it is biased in the general case.
  - Violation of any of the SR1–SR5 assumptions usually means that there is a better estimator.
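The pieces above (s², the sum of squared deviations of x, and the standard error of the slope) can be computed by hand and compared with the regression output. A minimal Stata sketch with placeholder variable names y and x:

    * Hypothetical by-hand computation of s^2 and se(b2)
    quietly regress y x
    scalar s2 = e(rss)/(e(N) - 2)            // sum of squared residuals over N-2
    quietly summarize x
    scalar ssx = (r(N) - 1)*r(Var)           // sum of squared deviations of x
    display "s^2 = " s2
    display "se(b2) by hand:      " sqrt(s2/ssx)
    display "se(b2) from regress: " _se[x]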

Introduction to Stata

- Stata works on a dataset (.dta file)
- Stata commands:
  - Enter at the command prompt
  - Choose from the menus/windows
  - Enter into a do file for batch execution
- The Stata screen
  - Log files
  - Results window
  - Command window
  - Variables window
  - Review window
  - Properties window
  - Set one up so students can see it later
- Opening a data set
  - Show the data editor/browser
- Commands to do statistical analysis
  - summarize
  - regress
- Graphics commands
  - Use the menus to see all options without remembering how to type them
- Sample analysis: Reed Econ 201 grades
  - Dependent variable gpoints
  - Show summary statistics
    - Point out the discrete distribution: is this a problem?
  - Regression on a single variable: hsgpa
    - Interpreting coefficients (note that the intercept is automatically included: noint option)
    - Point out standard error, t statistic, p value, confidence limits
    - Note missing observations
    - Show outreg: outreg using graderegs, se
  - Alternative: regress on rr
    - Show how outreg adds columns: outreg using graderegs, se merge
  - Calculate predicted values with predict: predict gpahat
  - Graph actual and predicted values against rr

  - Display hypothetical predicted values with margins: margins, at(rr=(5 4 3 2))
  - Transformation: satc100 = satv100 + satm100
    - Regress on satc100; compare to the hsgpa regression
  - Regression on a dummy variable
    - Regress on female
    - Interpretation of coefficients
    - Category mean predictions: margins female
  - Multiple regression demonstration
    - regress gpoints rr satv100 satm100 female
    - Show outreg with multiple variables: outreg using graderegs, se merge
    - Add taking to the regression and interpret
    - Use margins to isolate predictions for hypothetical individuals, varying one variable with the others at their means:
      margins, at(rr = (5 4 3 2)) atmeans
      marginsplot
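The prediction commands in the demo fit together as follows. A hedged sketch, assuming the class data set (with gpoints and hsgpa) is in memory and using made-up hsgpa values in at():

    * Hypothetical sketch of the prediction workflow from the class demo
    quietly regress gpoints hsgpa
    predict gpahat                                  // fitted values b1 + b2*hsgpa
    twoway (scatter gpoints hsgpa) (line gpahat hsgpa, sort)
    margins, at(hsgpa = (3.0 3.5 4.0))              // predicted gpoints at chosen hsgpa values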

Monte Carlo methods (based on HGL Appendix G)

- How do we evaluate an estimator such as OLS?
  - Under simple assumptions, we can sometimes calculate the estimator's theoretical probability distribution.
  - We can often calculate the theoretical distribution to which the estimator converges in large samples, even when we cannot calculate the small-sample distribution.
  - In general (and, in particular, when we cannot calculate the true distribution), we can simulate the model over thousands of samples to estimate its distribution.
  - The estimation of the probability distribution of an estimator through simulation is called Monte Carlo simulation and is an increasingly important tool in econometrics.
- Consider a simple Monte Carlo example (MC Class Demo.dta):
  - Let's suppose that we are working with a given, fixed N = 157. We have fixed, given values of the x variable for all 157 observations (using HGL's e9-13 data file, with the advertising variable as x).
  - We assume that the true population values of β₁ and β₂ are 10 and 3, close to the estimated values for a regression of sales on advertising.
  - The true error term is IID normal with variance 0.09 (standard deviation 0.3).
- To use Monte Carlo methods to simulate the distribution of the OLS estimators, we generate M replications of the sampling experiment:
  - M sets of 157 IID N(0, 0.09) simulated observations on e, using a random number generator. (We would also generate sample values for x if it were not being taken as fixed.)
  - Calculate the M sets of 157 values of y for each observation as yᵢ = β₁ + β₂xᵢ + eᵢ, with the known values of the parameters and the simulated values of e.
  - Run M regressions for the M simulated samples, keeping the estimated values of interest (presumably β̂₁ and β̂₂, but possibly also other values).
  - Look at the distribution of the estimators over the M replications to approximate the actual distribution:
    - Mean
    - Variance/standard deviation/standard error
    - Quantiles, for use in inference
- Demonstrate using Stata
  - Setup of the data
    - Create the file so x is already in MC Class Demo.dta
  - Program:
      program lstest
          g e = rnormal(0, 0.3)
          g y = 10 + 3*x + e
          reg y x
          drop e y
      end
  - Load it into memory: run lstest
  - Run the simulation with 5000 replications:
      simulate b2=_b[x], reps(5000): lstest
  - Show summary stats, histogram, centiles (2.5, 97.5)
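Putting those pieces together, a self-contained version of the class demo might look like the sketch below. The data file is replaced by a generated stand-in for x, and the seed and the uniform range are made up; the true parameters (10, 3) and error standard deviation (0.3) are from the notes:

    * Hypothetical, self-contained Monte Carlo sketch
    clear all
    set seed 54321
    set obs 157
    generate x = runiform(0, 10)            // stand-in for the fixed x values from the data file

    capture program drop lstest
    program lstest
        capture drop e y
        generate e = rnormal(0, 0.3)        // true error: sd 0.3 (variance 0.09)
        generate y = 10 + 3*x + e           // true parameters: beta1 = 10, beta2 = 3
        regress y x
        drop e y
    end

    simulate b2=_b[x], reps(5000): lstest   // 5000 replications of the sampling experiment
    summarize b2                            // mean should be near 3 (unbiasedness)
    histogram b2                            // distribution is approximately normal
    centile b2, centile(2.5 97.5)           // quantiles for use in inference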

The least-squares regression model in matrix notation (from Griffiths, Hill, and Judge, Section 5.4)

- We can write the i-th observation of the bivariate linear regression model as yᵢ = β₁ + β₂xᵢ + eᵢ.
- Arranging the observations vertically gives us N such equations:
  y₁ = β₁ + β₂x₁ + e₁
  y₂ = β₁ + β₂x₂ + e₂
  ⋮
  y_N = β₁ + β₂x_N + e_N
- This is a system of N linear equations that can be conveniently rewritten in matrix form. There is no real need for the matrix representation with only one regressor because the equations are simple, but when we add regressors the matrix notation is more useful.
  - Let y be the N × 1 column vector y = (y₁, y₂, …, y_N)′.
  - Let X be the N × 2 matrix whose first column is a column of ones and whose second column contains the xᵢ.
  - β is the 2 × 1 column vector of coefficients, β = (β₁, β₂)′.
  - And e is the N × 1 vector of error terms, e = (e₁, e₂, …, e_N)′.
- Then y = Xβ + e expresses the system of equations very compactly. (Write out the matrices and show how the multiplication works for a single observation.)
- In matrix notation, ê = y − Xb is the vector of residuals.
- Summing the squares of the elements of a column vector in matrix notation is just the inner product: Σ êᵢ² = ê′ê, where a prime denotes the matrix transpose. Thus for least squares we want to minimize
  ê′ê = (y − Xb)′(y − Xb) = y′y − 2b′X′y + b′X′Xb
- Differentiating with respect to the coefficient vector and setting the derivative to zero yields
  −2X′y + 2X′Xb = 0, or X′Xb = X′y
- Pre-multiplying by the inverse of X′X yields the OLS coefficient formula:
  b = (X′X)⁻¹X′y
  (This is one of the few formulas that you need to memorize.)
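A quick way to see the matrix formula at work is to build X and compute b directly in Mata, then compare with regress. A minimal sketch with placeholder variable names y and x:

    * Hypothetical Mata sketch of b = (X'X)^(-1) X'y
    mata:
        y = st_data(., "y")
        X = (J(rows(y), 1, 1), st_data(., "x"))   // column of ones, then the regressor
        b = invsym(X'X) * X'y
        b                                          // first element: intercept; second: slope
    end
    regress y x                                    // same coefficients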

- Note the symmetry between the matrix formula and the scalar formula: X′y is the sum of cross products of the two variables and X′X is the sum of squares of the regressors. The former is in the "numerator" (and not inverted) and the latter is in the "denominator" (and inverted).
- In matrix notation, we can express our estimator in terms of e as
  b = (X′X)⁻¹X′y = (X′X)⁻¹X′(Xβ + e) = (X′X)⁻¹(X′X)β + (X′X)⁻¹X′e = β + (X′X)⁻¹X′e
- When x is non-stochastic, the covariance matrix of the coefficient estimator is also easy to compute under the OLS assumptions.
  - Covariance matrices: the covariance of a vector random variable is a matrix with variances on the diagonal and covariances in the off-diagonals. For an M × 1 vector random variable z, the covariance matrix is the following outer product:
    cov(z) = E[(z − E(z))(z − E(z))′]
    Its (i, j) element is E[(zᵢ − E(zᵢ))(zⱼ − E(zⱼ))]: the diagonal elements are the variances of the zᵢ and the off-diagonal elements are their covariances.
  - In our regression model, if e is IID with mean zero and variance σ², then E(e) = 0 and cov(e) = E(ee′) = σ²I_N, with I_N being the order-N identity matrix.
  - We can then compute the covariance matrix of the (unbiased) estimator as
    cov(b) = E[(b − β)(b − β)′]
           = E[(X′X)⁻¹X′e ((X′X)⁻¹X′e)′]
           = E[(X′X)⁻¹X′ee′X(X′X)⁻¹]
           = (X′X)⁻¹X′E(ee′)X(X′X)⁻¹
           = σ²(X′X)⁻¹X′X(X′X)⁻¹
           = σ²(X′X)⁻¹
- What happens to var(b) as N gets large? The summations in X′X have additional terms, so they get larger.

  - This means that the inverse matrix gets smaller and the variance decreases: more observations imply more accurate estimators.
  - Note that the variance also increases as the variance of the error term goes up: a more imprecise fit implies less precise coefficient estimates.
- Our estimated covariance matrix of the coefficients is then s²(X′X)⁻¹.
  - The (2, 2) element of this matrix is s²/Σ(xᵢ − x̄)², the estimated variance of b₂; this is the formula we calculated above for the scalar system.
- Thus, to summarize, when the classical assumptions hold and e is normally distributed,
  b ~ N(β, σ²(X′X)⁻¹)

Asymptotic properties of the OLS bivariate regression estimator (based on S&W, Chapter 17, to be covered in class Spring 2014)

- Convergence in probability (probability limits)
  - Assume that S₁, S₂, …, S_N, … is a sequence of random variables. In practice, they are going to be estimators based on 1, 2, …, N observations.
  - S_N →ᵖ μ if and only if lim Pr(|S_N − μ| > ε) = 0 as N → ∞, for any ε > 0. Thus, for any small value of ε, we can make the probability that S_N is further from μ than ε arbitrarily small by choosing N large enough.
  - If S_N →ᵖ μ, then we can write plim S_N = μ. This means that the entire probability distribution of S_N converges in on the value μ as N gets large.
  - Estimators that converge in probability to the true parameter value are called consistent estimators.
- Convergence in distribution
  - If the sequence of random variables {S_N} has cumulative probability distributions F₁, F₂, …, F_N, …, then S_N →ᵈ S if and only if lim F_N(t) = F_S(t) for all t at which F_S is continuous.
  - If a sequence of random variables converges in distribution to the normal distribution, it is called asymptotically normal.
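To illustrate consistency, one can watch the OLS slope settle down around the true value as N grows. A hypothetical Stata sketch (the true values 10 and 3 and the error standard deviation 0.3 are carried over from the Monte Carlo example above; the seed, sample sizes, and uniform x are made up):

    * Hypothetical sketch: the OLS slope concentrates near 3 as N grows
    clear all
    set seed 2468
    foreach n of numlist 20 200 2000 20000 {
        quietly {
            drop _all
            set obs `n'
            generate x = runiform(0, 10)
            generate y = 10 + 3*x + rnormal(0, 0.3)
            regress y x
        }
        display "N = " %6.0f `n' "   b2 = " %9.6f _b[x] "   se(b2) = " %9.6f _se[x]
    }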

- Properties of probability limits and convergence in distribution
  - Probability limits are very forgiving. Slutsky's Theorem states that
    plim (S_N + R_N) = plim S_N + plim R_N
    plim (S_N · R_N) = (plim S_N)(plim R_N)
    plim (S_N / R_N) = plim S_N / plim R_N
  - The continuous-mapping theorem gives us: for continuous functions g, plim g(S_N) = g(plim S_N); and if S_N →ᵈ S, then g(S_N) →ᵈ g(S).
  - Further, we can combine probability limits and convergence in distribution to get: if plim a_N = a and S_N →ᵈ S, then
    a_N + S_N →ᵈ a + S
    a_N S_N →ᵈ aS
    S_N / a_N →ᵈ S/a
  - These results are very useful, since they mean that asymptotically we can treat any consistent estimator as a constant equal to the true value.
- Central limit theorems
  - There is a variety of central limit theorems with slightly different conditions.
  - Basic result: if {S_N} is a sequence of estimators of μ, then for a wide variety of underlying distributions,
    √N (S_N − μ) →ᵈ N(0, σ²), where σ² is the variance of the underlying statistic.
- Applying asymptotic theory to the OLS model
  - Under conditions more general than the ones we have typically assumed (including, specifically, the finite-kurtosis assumption, but not the homoskedasticity assumption or the assumption of fixed regressors), the OLS estimator satisfies the conditions for consistency and asymptotic normality:
    √N (b₂ − β₂) →ᵈ N(0, var[(xᵢ − μ_x)eᵢ] / [var(xᵢ)]²)
  - This is the general case, allowing heteroskedasticity. With homoskedasticity, the variance reduces to the usual formula:
    √N (b₂ − β₂) →ᵈ N(0, σ²/var(xᵢ)), and plim σ̂² = σ², as proven in Section 17.3.
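In Stata, the two variance formulas correspond to the default and the robust VCE after regress; comparing them is a quick check on how much heteroskedasticity matters in a given application. A minimal sketch with placeholder variable names:

    * Hypothetical comparison of classical and heteroskedasticity-robust standard errors
    regress y x                    // classical (homoskedastic) standard errors
    regress y x, vce(robust)       // heteroskedasticity-robust standard errors
    * Same coefficients either way; only the standard errors (and t statistics) change.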

  - Choice for the t statistic: t = (b₂ − β₂)/se(b₂).
    - If the error term is homoskedastic and normal, then the exact distribution is t with N − 2 degrees of freedom.
    - If the error is heteroskedastic or non-normal (with finite 4th moment), then the exact distribution is unknown, but the asymptotic distribution is normal.
    - Which is more reasonable for any given application?

Linearity and nonlinearity

- The OLS estimator is a linear estimator because b₂ is linear in e (which is because y is linear in e), not because y is linear in x.
- OLS can easily handle nonlinear relationships between y and x:
  ln y = β₁ + β₂x
  y = β₁ + β₂x²
  etc.

Dummy (indicator) variables

- Dummy variables take the value zero or one.
- Example: MALE = 1 if male and 0 if female.
  y = β₁ + β₂MALE + e
  - For females, E(y|MALE = 0) = β₁.
  - For males, E(y|MALE = 1) = β₁ + β₂.
  - Thus, β₂ is the difference between the expected values for males and females.
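This interpretation is easy to verify: in a regression on a single dummy, the intercept is the mean of the zero group and the slope coefficient is the difference in group means. A minimal Stata sketch (y and male are placeholder variable names):

    * Hypothetical check: dummy-variable regression reproduces the group means
    quietly regress y male
    display "female mean (intercept):  " _b[_cons]
    display "male - female difference: " _b[male]
    tabstat y, by(male) statistics(mean)      // compare with the group means directly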
