
Section: Simple Regression

What regression does

Relationship between variables: Often in economics we believe that there is a (perhaps causal) relationship between two variables. Usually more than two, but that is deferred to another day. We call this the economic model.

Functional form: Is the relationship linear? y = β1 + β2 x. This is a natural first assumption, unless theory rejects it. β2 is the slope, which determines whether the relationship between x and y is positive or negative. β1 is the intercept or constant term, which determines where the linear relationship intersects the y axis.

Is it plausible that this is an exact, deterministic relationship? No. Data (almost) never fit exactly along a line. Why?
- Measurement error (incorrect definition or mismeasurement)
- Other variables that affect y
- The relationship is not purely linear
- The relationship may be different for different observations

So the economic model must be modeled as determining the expected value of y, E(y|x) = β1 + β2 x: the conditional mean of y given x is β1 + β2 x. Adding an error term for a stochastic relationship gives us the actual value of y: y = β1 + β2 x + e. The error term e captures all of the above problems. The error term is considered to be a random variable and is not observed directly. The variance of e is σ², which is the conditional variance of y given x: the variance of the conditional distribution of y given x. The simplest, but not usually valid, assumption is that the conditional variance σ² is the same for all observations in our sample (homoskedasticity).
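
To make the distinction between the conditional mean and the stochastic observed values concrete, here is a minimal simulation sketch of the model y = β1 + β2 x + e. The parameter values (β1 = 1, β2 = 0.5, σ = 2) are illustrative assumptions, not taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

beta1, beta2, sigma = 1.0, 0.5, 2.0   # illustrative values only
N = 100

x = rng.uniform(0, 10, N)             # regressor values, treated as given
cond_mean = beta1 + beta2 * x         # E(y|x): the deterministic part
e = rng.normal(0.0, sigma, N)         # error term, homoskedastic by construction
y = cond_mean + e                     # actual observed values

# The observed y scatter around the conditional mean with variance sigma^2
print(np.mean(y - cond_mean))         # close to 0, since E(e) = 0
print(np.var(y - cond_mean))          # close to sigma^2 = 4
```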

dE(y|x)/dx = β2, which means that the expected value of y increases by β2 units when x increases by one unit.

Does it matter which variable is on the left-hand side? At one level, no: y = β1 + β2 x + e can be rewritten as x = α1 + α2 y + v, where α1 = -β1/β2, α2 = 1/β2, and v = -e/β2. For purposes of most estimators, yes: We shall see that a critically important assumption is that the error term is independent of the regressors, or exogenous variables. Are the errors shocks to y for given x, or shocks to x for given y? It might not seem like there is much difference, but the assumption is crucial to valid estimation. Exogeneity: x is exogenous with respect to y if shocks to y do not affect x, i.e., y does not cause x.

Where do the data come from? Sample and population. We observe a sample of N observations on y and x. Depending on context these samples may be
- Drawn from a larger population, such as census data or surveys
- Generated by a specific data-generating process (DGP), as in time-series observations

We usually would like to assume that the observations in our sample are statistically independent, or at least uncorrelated: cov(y_i, y_j) = 0, i ≠ j. We will assume initially (for a few weeks) that the values of x are chosen as in an experiment: they are not random. We will add random regressors soon and discover that they do not change things much as long as x is independent of e.

Goals of regression. The true regression line is the actual relationship in the population or DGP: the true β1 and β2 and f(e|x). A sample of observations comes from drawing random realizations of e from f(e|x) and plotting points appropriately above and below the true regression line. We want to find an estimated regression line that comes as close to the true regression line as possible, based on the observed sample of y and x pairs:
- Estimate values of the parameters β1 and β2

- Estimate properties of the probability distribution of the error term e
- Make inferences about the above estimates
- Use the estimates to make conditional forecasts of y
- Determine the statistical reliability of these forecasts

Summarizing the assumptions of the simple regression model:

Assumption #0 (implicit and unstated): The model as specified applies to all units in the population and therefore to all units in the sample. All units in the population under consideration have the same form of the relationship, the same coefficients, and error terms with the same properties. If the United States and Mali are in the population, do they really have the same parameters? This assumption underlies everything we do in econometrics, and thus it must always be considered very carefully in choosing a specification and a sample, and in deciding for what population the results carry implications.

SR1: y_i = β1 + β2 x_i + e_i
SR2: E(e_i) = 0, so E(y_i) = β1 + β2 x_i. Note that if x is random, we make these conditional expectations: E(e_i|x_i) = 0 and E(y_i|x_i) = β1 + β2 x_i.
SR3: var(e_i) = σ² = var(y_i). If x is random, this becomes var(e_i|x_i) = σ² = var(y_i|x_i). We should (and will) consider the more general case in which the variance varies across observations: heteroskedasticity.
SR4: cov(e_i, e_j) = cov(y_i, y_j) = 0. This, too, can be relaxed: autocorrelation.
SR5: x is non-random and takes on at least two values. We will allow random x later and see that E(e_i|x_i) = 0 implies that e must be uncorrelated with x.
SR6: (optional) e_i ~ N(0, σ²). This is convenient, but not critical, since the central limit theorem assures that for a wide variety of distributions of e our estimators converge to normal as the sample gets large.

Strategies for obtaining regression estimators. What is an estimator?

A rule (formula) for calculating an estimate of a parameter (β1, β2, or σ²) based on the sample values y_i, x_i. Estimators are often denoted by a ^ over the variable being estimated: an estimator of β2 might be denoted β̂2.

How might we estimate the coefficients of the simple regression model? Three strategies:
- Method of least squares
- Method of moments
- Method of maximum likelihood
All three strategies, with the SR assumptions, lead to the same estimator rule: the ordinary least-squares regression estimator (b1, b2, s²).

Method of least squares. Estimation strategy: Make the sum of squared y-deviations ("residuals") of observed values from the estimated regression line as small as possible. Given coefficient estimates b1, b2, residuals are defined as ê_i = y_i - b1 - b2 x_i, or ê_i = y_i - ŷ_i, with ŷ_i = b1 + b2 x_i.

Why not minimize the sum of the residuals? We don't want the sum of residuals to be a large negative number: we could minimize the sum of residuals by making all residuals infinitely negative. And there are many alternative lines that make the sum of residuals zero (which is desirable) because positives and negatives cancel out.

Why use the square rather than the absolute value to deal with cancellation of positives and negatives?
- The square function is continuously differentiable; the absolute value function is not. Least-squares estimation is much easier than least-absolute-deviation estimation.
- The prominence of the Gaussian (normal) distribution in nature and in statistical theory focuses us on variance, which is the expectation of the square.
- Least-absolute-deviation estimation is occasionally done (a special case of quantile regression), but it is not common. Least-absolute-deviation regression gives less importance to large outliers than least squares, because squaring gives large emphasis to residuals with large absolute value: least squares tends to draw the regression line toward these points to eliminate large squared residuals.

Least-squares criterion function: S = Σ ê_i² = Σ (y_i - b1 - b2 x_i)². The least-squares estimator is the solution to min over (b1, b2) of S.
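
As a quick illustration of the criterion function, the sketch below (simulated data with arbitrary "true" parameters) evaluates S(b1, b2) at a few candidate coefficient pairs; the least-squares solution gives the smallest sum of squared residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
x = rng.uniform(0, 10, N)
y = 1.0 + 0.5 * x + rng.normal(0, 2, N)   # simulated data, arbitrary true values

def S(b1, b2):
    """Least-squares criterion: sum of squared residuals."""
    resid = y - b1 - b2 * x
    return np.sum(resid ** 2)

# OLS solution for comparison (np.polyfit returns [slope, intercept])
b2_ols, b1_ols = np.polyfit(x, y, 1)

for b1, b2 in [(0.0, 0.0), (1.0, 1.0), (b1_ols, b2_ols)]:
    print(f"S({b1:.3f}, {b2:.3f}) = {S(b1, b2):.1f}")
# The OLS pair yields the smallest criterion value.
```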

Since S is a continuously differentiable function of the estimated parameters, we can differentiate and set the partial derivatives equal to zero to get the least-squares normal equations:
∂S/∂b1 = -2 Σ (y_i - b1 - b2 x_i) = 0,
∂S/∂b2 = -2 Σ x_i (y_i - b1 - b2 x_i) = 0.
The first equation implies Σ y_i - N b1 - b2 Σ x_i = 0, so b1 = ȳ - b2 x̄. Note that the b1 condition assures that the regression line passes through the point (x̄, ȳ). Substituting this condition into the second equation and simplifying gives
b2 = Σ (x_i - x̄)(y_i - ȳ) / Σ (x_i - x̄)² = σ̂_XY / σ̂_X².
The b2 estimator is the sample covariance of x and y divided by the sample variance of x.

What happens if x is constant across all observations in our sample? The denominator is zero and we can't calculate b2. This is our first encounter with the problem of collinearity: if x is a constant, then x is a linear combination of the other regressor, the constant one that is multiplied by b1. Collinearity (or multicollinearity) will be more of a problem in multiple regression. If it is extreme (or perfect), it means that we can't calculate the slope estimates. The above equations are the ordinary least-squares (OLS) coefficient estimators.
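
The closed-form estimators above translate directly into code. A minimal sketch with simulated data; the comparison against numpy's built-in polyfit is just a sanity check.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
x = rng.uniform(0, 10, N)
y = 1.0 + 0.5 * x + rng.normal(0, 2, N)   # simulated data

xbar, ybar = x.mean(), y.mean()

# b2 = sample covariance of x and y divided by sample variance of x
b2 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
# b1 = ybar - b2*xbar: the fitted line passes through (xbar, ybar)
b1 = ybar - b2 * xbar

slope_check, intercept_check = np.polyfit(x, y, 1)
print(b1, b2)
print(intercept_check, slope_check)       # same numbers
```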

Method of moments. Another general strategy for obtaining estimators is to set estimates of selected population moments equal to their sample counterparts. This is called the method of moments. In order to employ the method of moments, we have to make some specific assumptions about the population/DGP moments.

Assume E(e_i) = 0 for all i. This means that the population/DGP mean of the error term is zero. Corresponding to this assumption about the population mean of e is the sample mean condition (1/N) Σ ê_i = 0. Thus we set the sample mean to the value we have assumed for the population mean.

Assume cov(e_i, x_i) = 0, which is equivalent to E[(x_i - E(x_i)) e_i] = 0. Corresponding to this assumption about the population covariance between the regressor and the error term is the sample covariance condition (1/N) Σ (x_i - x̄) ê_i = 0. Again, we set the sample moment to the zero value that we have assumed for the population moment.

Plugging the expression for the residual into the sample moment expressions above: the first condition, Σ (y_i - b1 - b2 x_i) = 0, gives b1 = ȳ - b2 x̄. This is the same as the intercept estimate equation for the least-squares estimator above. The second condition, Σ (x_i - x̄)(y_i - b1 - b2 x_i) = 0, becomes Σ (x_i - x̄)(y_i - ȳ) - b2 Σ (x_i - x̄)² = 0, so b2 = Σ (x_i - x̄)(y_i - ȳ) / Σ (x_i - x̄)². This is exactly the same equation as for the OLS estimator.

Thus, if we assume that E(e_i) = 0 and cov(e_i, x_i) = 0 in the population, then the OLS estimator can be derived by the method of moments as well. (Note that both of these moment conditions follow from the extended assumption SR2 that E(e_i|x_i) = 0.)
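
The two sample moment conditions are linear in (b1, b2), so they can be solved directly as a 2 × 2 system; the solution coincides with the OLS formulas. A sketch with simulated data (the second condition is written equivalently as Σ x_i ê_i = 0, which together with the first condition is the same as the covariance condition above):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500
x = rng.uniform(0, 10, N)
y = 1.0 + 0.5 * x + rng.normal(0, 2, N)   # simulated data

# Sample moment conditions:
#   sum(y - b1 - b2*x)       = 0   (sample mean of residuals is zero)
#   sum(x * (y - b1 - b2*x)) = 0   (residuals uncorrelated with x)
# Rearranged, these are two linear equations in (b1, b2):
A = np.array([[N,        x.sum()],
              [x.sum(),  np.sum(x ** 2)]])
rhs = np.array([y.sum(), np.sum(x * y)])

b1_mm, b2_mm = np.linalg.solve(A, rhs)

# Identical to the OLS formulas
b2_ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1_ols = y.mean() - b2_ols * x.mean()
print(b1_mm, b2_mm)
print(b1_ols, b2_ols)
```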

Method of maximum likelihood. Consider the joint probability density function of y and x, f(y, x | β1, β2). The function is written as conditional on the coefficients to make explicit that the joint distribution of y and x is affected by the parameters. This function measures the probability density of any particular combination of y and x values, which can be loosely thought of as how probable that outcome is, given the parameter values. For a given set of parameters, some observations of y and x are less likely than others. For example, if β1 = 0 and β2 < 0, then it is less likely that we would see observations where y > 0 when x > 0 than observations with y < 0.

The idea of maximum-likelihood estimation is to choose a set of parameters that makes the likelihood of observing the sample that we actually have as high as possible. The likelihood function is just the joint density function turned on its head: L(β1, β2 | x, y) = f(x, y | β1, β2).

If the observations are independent random draws from identical probability distributions (they are IID), then the overall sample density (likelihood) function is the product of the density (likelihood) functions of the individual observations:
f(x_1, y_1, x_2, y_2, …, x_N, y_N | β1, β2) = Π f(x_i, y_i | β1, β2),
L(β1, β2 | x_1, y_1, …, x_N, y_N) = Π L(β1, β2 | x_i, y_i).

If the probability distribution of e conditional on x is Gaussian (normal) with mean zero and variance σ², then
f(y_i | x_i, β1, β2, σ²) = L(β1, β2, σ² | y_i, x_i) = (1/√(2πσ²)) exp[-(y_i - β1 - β2 x_i)² / (2σ²)].

Because of the exponential function, Gaussian likelihood functions are usually manipulated in logs. Note that because the log function is monotonic, maximizing the log-likelihood function is equivalent to maximizing the likelihood function itself. For an individual observation:
ln L(β1, β2, σ² | y_i, x_i) = -(1/2) ln(2πσ²) - (y_i - β1 - β2 x_i)² / (2σ²).
Aggregating over the sample:
ln L(β1, β2, σ² | y, x) = -(N/2) ln(2πσ²) - (1/(2σ²)) Σ (y_i - β1 - β2 x_i)².

The only part of this expression that depends on β1 or β2 or on the sample is the final summation. Because of the negative sign, maximizing the likelihood function (with respect to β1 and β2) is equivalent to minimizing that summation. But the summation is just the sum of squared residuals that we minimize in OLS. Thus, OLS is MLE if the distribution of e conditional on x is Gaussian with mean zero and constant variance, and if the observations are IID.
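
To see the equivalence numerically, the sketch below maximizes the Gaussian log-likelihood directly with scipy's general-purpose optimizer and compares the result to the OLS closed form. The data and starting values are arbitrary; the intercept and slope agree up to optimizer tolerance, and the ML estimate of σ² is Σê²/N rather than Σê²/(N - 2).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
N = 300
x = rng.uniform(0, 10, N)
y = 1.0 + 0.5 * x + rng.normal(0, 2, N)   # simulated data

def neg_loglik(params):
    """Negative Gaussian log-likelihood of the simple regression model."""
    b1, b2, log_sigma = params            # sigma parameterized in logs to keep it positive
    sigma2 = np.exp(2 * log_sigma)
    resid = y - b1 - b2 * x
    return 0.5 * N * np.log(2 * np.pi * sigma2) + np.sum(resid ** 2) / (2 * sigma2)

res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b1_ml, b2_ml, log_sigma_ml = res.x

b2_ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1_ols = y.mean() - b2_ols * x.mean()
ssr = np.sum((y - b1_ols - b2_ols * x) ** 2)

print(b1_ml, b2_ml)                       # ML estimates
print(b1_ols, b2_ols)                     # OLS estimates: essentially the same
print(np.exp(2 * log_sigma_ml), ssr / N)  # ML variance estimate equals SSR/N
```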

Evaluating alternative estimators (not important for the comparison here, since all three are the same, but are they any good?). Desirable criteria:
- Unbiasedness: the estimator is on average equal to the true value: E(β̂) = β.
- Small variance: the estimator is usually close to its expected value: var(β̂) = E[(β̂ - E(β̂))²].
- Small RMSE can balance variance with bias: RMSE = √MSE, where MSE = E[(β̂ - β)²].
We will talk about BLUE estimators as minimum variance within the class of linear unbiased estimators.

Sampling distribution of the OLS estimators. b1 and b2 are random variables: they are functions of the random variables y and e. We can think of the probability distribution of b2 as occurring over repeated random samples from the underlying population or DGP. In many (most) cases, we cannot derive the distribution of an estimator theoretically, but must rely on Monte Carlo simulation to estimate it. (See below.) Because the OLS estimator (under our assumptions) is linear, we can derive its distribution.
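
The repeated-sampling idea can be illustrated with a small Monte Carlo experiment: hold the x values fixed, repeatedly redraw the errors, re-estimate b2, and look at the distribution of the estimates. A sketch with illustrative true parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
beta1, beta2, sigma = 1.0, 0.5, 2.0   # illustrative true values
N, reps = 50, 5000

x = rng.uniform(0, 10, N)             # fixed regressors, reused in every replication
b2_draws = np.empty(reps)

for r in range(reps):
    e = rng.normal(0, sigma, N)       # new error draw each replication
    y = beta1 + beta2 * x + e
    b2_draws[r] = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

print(b2_draws.mean())                            # close to beta2 (unbiasedness)
print(b2_draws.var())                             # Monte Carlo sampling variance
print(sigma ** 2 / np.sum((x - x.mean()) ** 2))   # theoretical var(b2), derived below
```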

We can write the OLS slope estimator as
b2 = Σ (x_i - x̄)(y_i - ȳ) / Σ (x_i - x̄)²
   = Σ (x_i - x̄)(β1 + β2 x_i + e_i - ȳ) / Σ (x_i - x̄)²
   = β2 + Σ (x_i - x̄) e_i / Σ (x_i - x̄)².
The third step uses the property ȳ = β1 + β2 x̄ + ē, together with the fact that Σ (x_i - x̄) = 0, so the ē term drops out.

For now, we are assuming that x is non-random, as in a controlled experiment. If x is fixed, then the only part of the formula above that is random is e. The formula shows that the slope estimate is linear in e. This means that if e is Gaussian, then the slope estimate will also be Gaussian. Even if e is not Gaussian, the slope estimate will converge to a Gaussian distribution as long as some modest assumptions about its distribution are satisfied.

Because all the x variables are non-random, they can come outside when we take expectations, so
E(b2) = β2 + Σ (x_i - x̄) E(e_i) / Σ (x_i - x̄)² = β2.
What about the variance of b2? We will do the details of the analytical work in matrix form because it is easier.

var(b2) = E[(b2 - E(b2))²] = E[(Σ (x_i - x̄) e_i / Σ (x_i - x̄)²)²] = σ² / Σ (x_i - x̄)².

The HGL text provides formulas for the variances of b1 and b2 and the covariance between the coefficients:
var(b1) = σ² Σ x_i² / [N Σ (x_i - x̄)²],
var(b2) = σ² / Σ (x_i - x̄)²,
cov(b1, b2) = -σ² x̄ / Σ (x_i - x̄)².
Note that the covariance between the slope and intercept estimators is negative (for x̄ > 0): overestimating one will tend to cause us to underestimate the other.

What determines the variance of b2?
- Smaller variance of the error term → more precise estimators
- Larger number of observations → more precise estimators
- More dispersion of the x observations around their mean → more precise estimators

What do we know about the overall probability distribution of b2? If assumption SR6 is satisfied and e is normal, then b2 is also normal, because it is a linear function of the e variables and linear functions of normally distributed variables are also normally distributed. If assumption SR6 is not satisfied, then b2 converges to a normal distribution as N gets large, provided some weak conditions on the distribution of e are satisfied.

These expressions are the true variance/covariance of the estimated coefficient vector. However, because we do not know σ², they are not of practical use to us. We need an estimator for σ² in order to calculate a standard error of the coefficients: an estimate of their standard deviation.
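
As a quick numerical check of the three determinants of var(b2) listed above, the sketch below evaluates the formula σ²/Σ (x_i - x̄)² for a baseline design and three perturbations (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)

def var_b2(sigma, N, x_spread):
    """Theoretical var(b2) = sigma^2 / sum((x - xbar)^2) for x uniform on [0, x_spread]."""
    x = rng.uniform(0, x_spread, N)
    return sigma ** 2 / np.sum((x - x.mean()) ** 2)

print(var_b2(sigma=2.0, N=100, x_spread=10))   # baseline
print(var_b2(sigma=1.0, N=100, x_spread=10))   # smaller error variance -> smaller var(b2)
print(var_b2(sigma=2.0, N=400, x_spread=10))   # more observations      -> smaller var(b2)
print(var_b2(sigma=2.0, N=100, x_spread=40))   # more dispersion in x   -> smaller var(b2)
```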

The required estimate in the classical case is s² = Σ ê_i² / (N - 2). We divide by N - 2 because this is the number of degrees of freedom in our regression.

Degrees of freedom are a very important issue in econometrics. The term refers to how many data points are available in excess of the minimum number required to estimate the model. In this case, it takes at minimum two points to define a line, so the smallest possible number of observations for which we can fit a bivariate regression is 2. Any observations beyond 2 make it (generally) impossible to fit a line perfectly through all observations. Thus, N - 2 is the number of degrees of freedom in the sample. We always divide sums of squared residuals by the number of degrees of freedom in order to get unbiased variance estimates. For example, in calculating the sample variance, we use s_z² = Σ (z_i - z̄)² / (N - 1) because there are N - 1 degrees of freedom left after using one to calculate the mean. Here, we have two coefficients to estimate, not just one, so we divide by N - 2.

The standard error of each coefficient is the square root of the corresponding diagonal element of the estimated covariance matrix. Note that the HGL text uses an alternative formula based on σ̂² = Σ ê_i² / N. This estimator for σ² is biased, because there are only N - 2 degrees of freedom in the residuals: 2 are used up in estimating the parameters. In large samples the two are equivalent.
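
Putting the pieces together, a sketch that computes the OLS residuals, the degrees-of-freedom-corrected variance estimate s² = Σ ê_i²/(N - 2), and the classical standard errors of both coefficients (simulated data, arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100
x = rng.uniform(0, 10, N)
y = 1.0 + 0.5 * x + rng.normal(0, 2, N)   # simulated data

b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1 = y.mean() - b2 * x.mean()

resid = y - b1 - b2 * x
s2 = np.sum(resid ** 2) / (N - 2)         # unbiased: divide by N - 2 degrees of freedom

Sxx = np.sum((x - x.mean()) ** 2)
se_b2 = np.sqrt(s2 / Sxx)                           # se(b2)
se_b1 = np.sqrt(s2 * np.sum(x ** 2) / (N * Sxx))    # se(b1)

print(s2)
print(se_b1, se_b2)
```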

How good is the OLS estimator? Is OLS the best estimator? Under what conditions? Under the classical regression assumptions SR1-SR5 (but not necessarily SR6), the Gauss-Markov Theorem shows that the OLS estimator is BLUE: any other estimator that is unbiased and linear in e has higher variance than b. Note that (5, 0) is an "estimator" with zero variance, but it is biased in the general case. Violation of any of the SR1-SR5 assumptions usually means that there is a better estimator.

Least-squares regression model in matrix notation (from Griffiths, Hill, and Judge, Section 5.4)

We can write the i-th observation of the bivariate linear regression model as y_i = β1 + β2 x_i + e_i. Arranging the N observations vertically gives us N such equations:
y_1 = β1 + β2 x_1 + e_1,
y_2 = β1 + β2 x_2 + e_2,
…,
y_N = β1 + β2 x_N + e_N.
This is a system of linear equations that can be conveniently rewritten in matrix form. There is no real need for the matrix representation with only one regressor, because the equations are simple, but when we add regressors the matrix notation is more useful.

Let y be an N × 1 column vector: y = (y_1, y_2, …, y_N)′. Let X be an N × 2 matrix whose i-th row is (1, x_i). Let β = (β1, β2)′ be the 2 × 1 column vector of coefficients, and let e = (e_1, e_2, …, e_N)′ be the N × 1 vector of the error terms. Then y = Xβ + e expresses the system of equations very compactly. (Write out the matrices and show how multiplication works for a single observation.) In matrix notation, ê = y - Xb is the vector of residuals.

Summing the squares of the elements of a column vector in matrix notation is just the inner product: Σ ê_i² = ê′ê, where the prime denotes the matrix transpose. Thus we want to minimize this expression for least squares:
ê′ê = (y - Xb)′(y - Xb) = y′y - 2b′X′y + b′X′Xb.
Differentiating with respect to the coefficient vector and setting the derivative to zero yields -2X′y + 2X′Xb = 0, or X′Xb = X′y. Pre-multiplying by the inverse of X′X yields the OLS coefficient formula:
b = (X′X)⁻¹X′y.
(This is one of the few formulas that you need to memorize.) Note the symmetry between the matrix formula and the scalar formula: X′y is the sum of the cross products of the two variables and X′X is the sum of squares of the regressors. The former is in the numerator (and not inverted) and the latter is in the denominator (and inverted).

In matrix notation, we can express our estimator in terms of e as
b = (X′X)⁻¹X′y = (X′X)⁻¹X′(Xβ + e) = (X′X)⁻¹X′Xβ + (X′X)⁻¹X′e = β + (X′X)⁻¹X′e.
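
The matrix formula is a one-liner in code. A sketch that builds the N × 2 matrix X = [1  x] and verifies that (X′X)⁻¹X′y reproduces the scalar formulas (simulated data):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200
x = rng.uniform(0, 10, N)
y = 1.0 + 0.5 * x + rng.normal(0, 2, N)   # simulated data

X = np.column_stack([np.ones(N), x])      # column of ones multiplies b1

# b = (X'X)^(-1) X'y; solving the normal equations avoids forming the inverse explicitly
b = np.linalg.solve(X.T @ X, X.T @ y)

b2_scalar = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1_scalar = y.mean() - b2_scalar * x.mean()
print(b)                        # [b1, b2]
print(b1_scalar, b2_scalar)     # same numbers
```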

When X is non-stochastic, the covariance matrix of the coefficient estimator is also easy to compute under the OLS assumptions. Covariance matrices: the covariance of a vector random variable is a matrix with the variances on the diagonal and the covariances in the off-diagonals. For an M × 1 vector random variable z, the covariance matrix is the following outer product:
cov(z) = E[(z - E(z))(z - E(z))′],
an M × M matrix whose (i, j) element is E[(z_i - E(z_i))(z_j - E(z_j))].
In our regression model, if e is IID with mean zero and variance σ², then E(e) = 0 and cov(e) = E(ee′) = σ² I_N, with I_N being the order-N identity matrix.

We can then compute the covariance matrix of the (unbiased) estimator as
cov(b) = E[(b - β)(b - β)′]
       = E[(X′X)⁻¹X′e e′X(X′X)⁻¹]
       = (X′X)⁻¹X′ E(ee′) X(X′X)⁻¹
       = σ² (X′X)⁻¹X′X(X′X)⁻¹
       = σ² (X′X)⁻¹.

What happens to var(b) as N gets large? The summations in X′X have additional terms, so they get larger. This means that the inverse matrix gets smaller and the variance decreases: more observations implies more accurate estimators. Note that the variance also increases as the variance of the error term goes up: a more imprecise fit implies less precise coefficient estimates.

Our estimated covariance matrix of the coefficients is then s²(X′X)⁻¹. The (2, 2) element of this matrix is s²/Σ (x_i - x̄)², the formula we calculated in class for the scalar system. Thus, to summarize, when the classical assumptions hold and e is normally distributed, b ~ N(β, σ²(X′X)⁻¹).
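
Continuing the matrix sketch: the estimated covariance matrix s²(X′X)⁻¹ and the standard errors as square roots of its diagonal elements (simulated data):

```python
import numpy as np

rng = np.random.default_rng(8)
N = 200
x = rng.uniform(0, 10, N)
y = 1.0 + 0.5 * x + rng.normal(0, 2, N)   # simulated data

X = np.column_stack([np.ones(N), x])
b = np.linalg.solve(X.T @ X, X.T @ y)

resid = y - X @ b
s2 = resid @ resid / (N - 2)              # s^2 with N - 2 degrees of freedom

cov_b = s2 * np.linalg.inv(X.T @ X)       # estimated covariance matrix of (b1, b2)
se = np.sqrt(np.diag(cov_b))

print(cov_b)      # off-diagonal element is negative when xbar > 0
print(se)         # standard errors of b1 and b2
```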

Asymptotic properties of the OLS bivariate regression estimator (based on S&W, Chapter 7)

Convergence in probability (probability limits). Assume that S_1, S_2, …, S_N, … is a sequence of random variables. In practice, they are going to be estimators based on 1, 2, …, N observations. S_N converges in probability to μ (written S_N →p μ) if and only if lim_{N→∞} Pr(|S_N - μ| > δ) = 0 for any δ > 0. Thus, for any small value of δ, we can make the probability that S_N is further from μ than δ arbitrarily small by choosing N large enough. If S_N →p μ, then we can write plim S_N = μ. This means that the entire probability distribution of S_N converges on the value μ as N gets large. Estimators that converge in probability to the true parameter value are called consistent estimators.

Convergence in distribution. If the sequence of random variables {S_N} has cumulative probability distributions F_1, F_2, …, F_N, …, then S_N converges in distribution to S (written S_N →d S) if and only if lim_{N→∞} F_N(t) = F(t) for all t at which F is continuous. If a sequence of random variables converges in distribution to the normal distribution, it is called asymptotically normal.

Properties of probability limits and convergence in distribution. Probability limits are very forgiving. Slutsky's Theorem states that
plim (S_N + R_N) = plim S_N + plim R_N,
plim (S_N R_N) = (plim S_N)(plim R_N),
plim (S_N / R_N) = plim S_N / plim R_N.
The continuous-mapping theorem gives us: for continuous functions g, plim g(S_N) = g(plim S_N), and if S_N →d S, then g(S_N) →d g(S). Further, we can combine probability limits and convergence in distribution: if plim a_N = a and S_N →d S, then
a_N + S_N →d a + S,
a_N S_N →d aS,
S_N / a_N →d S / a.
These are very useful, since they mean that asymptotically we can treat any consistent estimator as a constant equal to the true value.

Central limit theorems. There is a variety with slightly different conditions. Basic result: if {S_N} is a sequence of estimators of μ, then for a wide variety of underlying distributions, √N (S_N - μ) →d N(0, σ²), where σ² is the variance of the underlying statistic.

Applying asymptotic theory to the OLS model. Under conditions more general than the ones that we have typically assumed (including, specifically, the finite-kurtosis assumption, but not the homoskedasticity assumption or the assumption of fixed regressors), the OLS estimator satisfies the conditions for consistency and asymptotic normality.
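
Consistency is easy to see in a simulation: as N grows, the OLS slope settles down on the true value. A sketch with illustrative true parameters, drawing one sample at each sample size:

```python
import numpy as np

rng = np.random.default_rng(9)
beta1, beta2, sigma = 1.0, 0.5, 2.0       # illustrative true values

for N in [25, 100, 1_000, 10_000, 100_000]:
    x = rng.uniform(0, 10, N)
    y = beta1 + beta2 * x + rng.normal(0, sigma, N)
    b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    print(N, b2)      # b2 approaches beta2 = 0.5 as N grows (consistency)
```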

16 hmskeastcty assumptn r the assumptn f fe regressrs), the OLS estmatr satsfes the cntns fr cnsstency an asympttc nrmalty. b var E( ) e 0,. Ths s general case wth var heterskeastcty. Wth hmskeastcty, the varable reuces t the usual frmula: b 0,. var plm ˆ, as prven n Sectn 7.3. b b t b se.. b 0,. Chce fr t statstc: If hmskeastc, nrmal errr term, then eact strbutn s t. If heterskeastc r nn-nrmal errr (wth fnte 4 th mment), then eact strbutn s unknwn, but asympttc strbutn s nrmal Whch s mre reasnable fr any gven applcatn? Lnearty an nnlnearty The OLS estmatr s a lnear estmatr because b s lnear n e (whch s because y s lnear n ), nt because y s lnear n. OLS can easly hanle nnlnear relatnshps between y an. lny = + y = + etc. Dummy (ncatr) varables take the value zer r ne. Eample: MALE = f male an 0 f female. y MALE e Fr females, E y MALE Fr males, Ey MALE Thus, s the fference between the epecte value f males an females. ~ 3 ~
