CHAPTER 13: Linear Stochastic Models

Stationary Stochastic Processes

A temporal stochastic process is simply a sequence of random variables indexed by a time subscript. Such a process can be denoted by x(t). The element of the sequence at the point t = τ is x_τ = x(τ).

Let x(τ) = [x_{τ+1}, x_{τ+2}, ..., x_{τ+n}] denote n consecutive elements of the sequence. Then the process is said to be strictly stationary if the joint probability distribution of the elements does not depend on τ, regardless of the size of n. This means that any two segments of the sequence of equal length have identical probability density functions. In consequence, the decision on where to place the time origin is arbitrary; and the argument τ can be omitted. Some further implications of stationarity are that

(1)    E(x_t) = µ < ∞ for all t,  and  C(x_{τ+t}, x_{τ+s}) = γ_{t−s}.

The latter condition means that the covariance of any two elements depends only on their temporal separation t − s. Notice that, if the elements of the sequence are normally distributed, then the two conditions are sufficient to establish strict stationarity. On their own, they constitute the conditions of weak or 2nd-order stationarity.

The condition on the covariances implies that the dispersion matrix of the vector x = [x_1, x_2, ..., x_n] is a bisymmetric Laurent matrix of the form

(2)    D(x) = E{[x − E(x)][x − E(x)]′}

             [ γ_0      γ_1      γ_2      ...  γ_{n−1} ]
             [ γ_1      γ_0      γ_1      ...  γ_{n−2} ]
           = [ γ_2      γ_1      γ_0      ...  γ_{n−3} ]
             [ ...      ...      ...      ...  ...     ]
             [ γ_{n−1}  γ_{n−2}  γ_{n−3}  ...  γ_0     ].

Given that a sequence of observations of a time series represents only a segment of a single realisation of a stochastic process, one might imagine that

D.S.G. POLLOCK: ECONOMETRIC THEORY

there is little chance of making valid inferences about the parameters of the process. However, provided that the process x(t) is stationary, and provided that the statistical dependencies between widely separated elements of the sequence are weak, it is possible to estimate consistently those parameters of the process which express the dependence of proximate elements of the sequence. If one is prepared to make sufficiently strong assumptions about the nature of the process, then a knowledge of such parameters may be all that is needed for a complete characterisation of the process.

Moving Average Processes

The qth-order moving average process, or MA(q) process, is defined by the equation

(3)    y(t) = µ_0 ε(t) + µ_1 ε(t−1) + ... + µ_q ε(t−q),

where ε(t), which has E{ε(t)} = 0, is a white-noise process consisting of a sequence of independently and identically distributed random variables with zero expectations. The equation is normalised either by setting µ_0 = 1 or by setting V{ε(t)} = σ_ε² = 1. The equation can be written in summary notation as y(t) = µ(L)ε(t), where µ(L) = µ_0 + µ_1 L + ... + µ_q L^q is a polynomial in the lag operator.

A moving-average process is clearly stationary, since any two elements y_t and y_s represent the same function of the identically distributed vectors ε_t = [ε_t, ε_{t−1}, ..., ε_{t−q}] and ε_s = [ε_s, ε_{s−1}, ..., ε_{s−q}]. In addition to the condition of stationarity, it is usually required that a moving-average process should be invertible, such that it can be expressed in the form µ⁻¹(L)y(t) = ε(t), where the LHS embodies a convergent sum of past values of y(t). This is an infinite-order autoregressive representation of the process. The representation is available only if all the roots of the equation µ(z) = µ_0 + µ_1 z + ... + µ_q z^q = 0 lie outside the unit circle. This conclusion follows from our discussion of partial fractions.

As an example, let us consider the first-order moving-average process which is defined by

(4)    y(t) = ε(t) − θε(t−1) = (1 − θL)ε(t).

Provided that |θ| < 1, this can be written in autoregressive form as

(5)    ε(t) = (1 − θL)⁻¹ y(t)
            = {y(t) + θ y(t−1) + θ² y(t−2) + ...}.
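As a numerical check on the inversion in (5), the following Python sketch simulates an MA(1) series and recovers the disturbances through the equivalent one-step recursion ε(t) = y(t) + θε(t−1); the parameter value, seed and sample size are arbitrary choices for illustration.

```python
import random

random.seed(0)
theta = 0.5  # |theta| < 1, so the MA(1) process is invertible

# Simulate y(t) = e(t) - theta*e(t-1), taking the pre-sample value e(-1) = 0.
e = [random.gauss(0.0, 1.0) for _ in range(200)]
y = [e[0]] + [e[t] - theta * e[t - 1] for t in range(1, 200)]

# Invert: y(t) = (1 - theta*L)e(t) gives e(t) = y(t) + theta*e(t-1), which
# unfolds into the convergent sum e(t) = y(t) + theta*y(t-1) + theta^2*y(t-2) + ...
e_rec = []
prev = 0.0
for t in range(200):
    prev = y[t] + theta * prev
    e_rec.append(prev)

assert max(abs(a - b) for a, b in zip(e, e_rec)) < 1e-9
```

The recovered sequence matches the simulated disturbances, because the recursion reproduces the autoregressive representation with the same zero pre-sample value.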
Imagine that |θ| > 1 instead. Then, to obtain a convergent series, we have to write

(6)    y(t+1) = ε(t+1) − θε(t) = −θ(1 − L⁻¹/θ)ε(t),

where L⁻¹ε(t) = ε(t+1). This gives

(7)    ε(t) = −θ⁻¹(1 − L⁻¹/θ)⁻¹ y(t+1)
            = −{y(t+1)/θ + y(t+2)/θ² + y(t+3)/θ³ + ...}.

Normally, an expression such as this, which embodies future values of y(t), would have no reasonable meaning.

It is straightforward to generate the sequence of autocovariances from a knowledge of the parameters of the moving-average process and of the variance of the white-noise process. Consider

(8)    γ_τ = E(y_t y_{t−τ})
           = E{ (Σ_i µ_i ε_{t−i}) (Σ_j µ_j ε_{t−τ−j}) }
           = Σ_i Σ_j µ_i µ_j E(ε_{t−i} ε_{t−τ−j}).

Since ε(t) is a sequence of independently and identically distributed random variables with zero expectations, it follows that

(9)    E(ε_{t−i} ε_{t−τ−j}) = { 0,     if i ≠ τ + j;
                              { σ_ε²,  if i = τ + j.

Therefore

(10)   γ_τ = σ_ε² Σ_j µ_j µ_{j+τ}.

Now let τ = 0, 1, ..., q. This gives

(11)   γ_0 = σ_ε²(µ_0² + µ_1² + ... + µ_q²),
       γ_1 = σ_ε²(µ_0 µ_1 + µ_1 µ_2 + ... + µ_{q−1} µ_q),
       ...
       γ_q = σ_ε² µ_0 µ_q.

Also, γ_τ = 0 for all τ > q.

The first-order moving-average process y(t) = ε(t) − θε(t−1) has the following autocovariances:

(12)   γ_0 = σ_ε²(1 + θ²),    γ_1 = −σ_ε² θ,    γ_τ = 0 if τ > 1.
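Formula (10) is easily coded. The following sketch (the function name and parameter values are our own choices) computes the autocovariances of an MA(q) process and checks them against the MA(1) values in (12):

```python
def ma_autocovariances(mu, sigma2, max_lag):
    # Equation (10): gamma_tau = sigma2 * sum_j mu[j]*mu[j+tau],
    # with gamma_tau = 0 for tau > q.
    q = len(mu) - 1
    gammas = []
    for tau in range(max_lag + 1):
        if tau <= q:
            gammas.append(sigma2 * sum(mu[j] * mu[j + tau] for j in range(q - tau + 1)))
        else:
            gammas.append(0.0)
    return gammas

theta = 0.4
g = ma_autocovariances([1.0, -theta], sigma2=1.0, max_lag=3)
assert abs(g[0] - (1 + theta ** 2)) < 1e-12  # gamma_0 = sigma_e^2 (1 + theta^2)
assert abs(g[1] + theta) < 1e-12             # gamma_1 = -sigma_e^2 theta
assert g[2] == 0.0 and g[3] == 0.0           # gamma_tau = 0 for tau > q
```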

Thus, for a vector y = [y_1, y_2, ..., y_T] of T consecutive elements from a first-order moving-average process, the dispersion matrix is

(13)   D(y) = σ_ε² [ 1+θ²  −θ    0     ...  0    ]
                   [ −θ    1+θ²  −θ    ...  0    ]
                   [ 0     −θ    1+θ²  ...  0    ]
                   [ ...   ...   ...   ...  ...  ]
                   [ 0     0     0     ...  1+θ² ].

In general, the dispersion matrix of a qth-order moving-average process has q subdiagonal and q supradiagonal bands of nonzero elements and zero elements elsewhere.

It is also helpful to define an autocovariance generating function, which is a power series whose coefficients are the autocovariances γ_τ for successive values of τ. This is denoted by

(14)   γ(z) = Σ_τ γ_τ z^τ,    with τ = 0, ±1, ±2, ...  and  γ_τ = γ_{−τ}.

The generating function is also called the z-transform of the autocovariance function.

The autocovariance generating function of the qth-order moving-average process can be found quite readily. Consider the convolution

(15)   µ(z)µ(z⁻¹) = (Σ_i µ_i z^i)(Σ_j µ_j z^{−j})
                  = Σ_i Σ_j µ_i µ_j z^{i−j}
                  = Σ_τ (Σ_j µ_j µ_{j+τ}) z^τ,    where τ = i − j.

By referring to the expression for the autocovariance of lag τ of a moving-average process given under (10), it can be seen that the autocovariance generating function is just

(16)   γ(z) = σ_ε² µ(z)µ(z⁻¹).

Autoregressive Processes

The pth-order autoregressive process, or AR(p) process, is defined by the equation

(17)   α_0 y(t) + α_1 y(t−1) + ... + α_p y(t−p) = ε(t).
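The convolution (15) can be verified mechanically. A small Python sketch that collects the coefficients of µ(z)µ(z⁻¹) term by term (the polynomial coefficients below are arbitrary illustrative values):

```python
def acgf_coefficients(mu, sigma2):
    # Collect the coefficient on z^tau in sigma2 * mu(z) * mu(1/z); every
    # product mu[i]*mu[j] contributes to the power tau = i - j, as in (15).
    q = len(mu) - 1
    coeffs = {}
    for i in range(q + 1):
        for j in range(q + 1):
            tau = i - j
            coeffs[tau] = coeffs.get(tau, 0.0) + sigma2 * mu[i] * mu[j]
    return coeffs

g = acgf_coefficients([1.0, 0.5, -0.2], 2.0)
# gamma_0 = sigma2*(mu_0^2 + mu_1^2 + mu_2^2), the first of the equations (11)
assert abs(g[0] - 2.0 * (1.0 + 0.25 + 0.04)) < 1e-12
# gamma_tau = gamma_{-tau}: the generating function treats z and 1/z symmetrically
assert all(abs(g[tau] - g[-tau]) < 1e-12 for tau in range(3))
```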

This equation is invariably normalised by setting α_0 = 1, although it would be possible to set σ_ε² = 1 instead. The equation can be written in summary notation as α(L)y(t) = ε(t), where α(L) = α_0 + α_1 L + ... + α_p L^p. For the process to be stationary, the roots of the equation α(z) = α_0 + α_1 z + ... + α_p z^p = 0 must lie outside the unit circle. This condition enables us to write the autoregressive process as an infinite-order moving-average process in the form of y(t) = α⁻¹(L)ε(t).

As an example, let us consider the first-order autoregressive process which is defined by

(18)   ε(t) = y(t) − φ y(t−1) = (1 − φL) y(t).

Provided that the process is stationary with |φ| < 1, it can be represented in moving-average form as

(19)   y(t) = (1 − φL)⁻¹ ε(t)
            = {ε(t) + φ ε(t−1) + φ² ε(t−2) + ...}.

The autocovariances of the process can be found by using the formula of (10), which is applicable to moving-average processes of finite or infinite order. Thus

(20)   γ_τ = E(y_t y_{t−τ})
           = E{ (Σ_i φ^i ε_{t−i}) (Σ_j φ^j ε_{t−τ−j}) }
           = Σ_i Σ_j φ^i φ^j E(ε_{t−i} ε_{t−τ−j});

and the result under (9) indicates that

(21)   γ_τ = σ_ε² Σ_j φ^j φ^{j+τ}
           = σ_ε² φ^τ / (1 − φ²).

For a vector y = [y_1, y_2, ..., y_T] of T consecutive elements from a first-order autoregressive process, the dispersion matrix has the form

(22)   D(y) = σ_ε²/(1−φ²) [ 1         φ         φ²        ...  φ^{T−1} ]
                          [ φ         1         φ         ...  φ^{T−2} ]
                          [ φ²        φ         1         ...  φ^{T−3} ]
                          [ ...       ...       ...       ...  ...     ]
                          [ φ^{T−1}   φ^{T−2}   φ^{T−3}   ...  1       ].
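The closed form (21) can be checked against a truncation of the infinite moving-average representation (19); a sketch with an illustrative value of φ:

```python
phi, sigma2 = 0.6, 1.0

def ar1_gamma(tau, n_terms=400):
    # Truncate the infinite moving-average form (19) and apply formula (10):
    # gamma_tau = sigma2 * sum_j phi^j * phi^(j+tau).
    return sigma2 * sum(phi ** j * phi ** (j + tau) for j in range(n_terms))

for tau in range(5):
    closed_form = sigma2 * phi ** tau / (1 - phi ** 2)  # equation (21)
    assert abs(ar1_gamma(tau) - closed_form) < 1e-10
```

With |φ| < 1, the truncated tail is geometrically small, so 400 terms are far more than enough.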

To find the autocovariance generating function for the general pth-order autoregressive process, we may consider again the function α(z) = Σ_i α_i z^i. Since an autoregressive process may be treated as an infinite-order moving-average process, it follows that

(23)   γ(z) = σ_ε² / {α(z) α(z⁻¹)}.

For an alternative way of finding the autocovariances of the pth-order process, consider multiplying Σ_i α_i y_{t−i} = ε_t by y_{t−τ} and taking expectations to give

(24)   Σ_i α_i E(y_{t−i} y_{t−τ}) = E(ε_t y_{t−τ}).

Taking account of the normalisation α_0 = 1, we find that

(25)   E(ε_t y_{t−τ}) = { σ_ε²,  if τ = 0;
                        { 0,     if τ > 0.

Therefore, on setting E(y_{t−i} y_{t−τ}) = γ_{τ−i}, equation (24) gives

(26)   Σ_i α_i γ_{τ−i} = { σ_ε²,  if τ = 0;
                         { 0,     if τ > 0.

The second of these is a homogeneous difference equation which enables us to generate the sequence {γ_p, γ_{p+1}, ...}, once p starting values γ_0, γ_1, ..., γ_{p−1} are known. By letting τ = 0, 1, ..., p in (26), we generate a set of p+1 equations which can be arrayed in matrix form as follows:

(27)   [ γ_0  γ_1      γ_2      ...  γ_p     ] [ 1   ]   [ σ_ε² ]
       [ γ_1  γ_0      γ_1      ...  γ_{p−1} ] [ α_1 ]   [ 0    ]
       [ γ_2  γ_1      γ_0      ...  γ_{p−2} ] [ α_2 ] = [ 0    ]
       [ ...  ...      ...      ...  ...     ] [ ... ]   [ ...  ]
       [ γ_p  γ_{p−1}  γ_{p−2}  ...  γ_0     ] [ α_p ]   [ 0    ].

These are called the Yule–Walker equations, and they can be used either for generating the values γ_0, γ_1, ..., γ_p from the values α_1, ..., α_p, σ_ε², or vice versa.

Example. To illustrate the two uses of the Yule–Walker equations, let us consider the second-order autoregressive process. In that case, we have

(28)   [ σ_ε² ]   [ γ_0  γ_1  γ_2 ] [ α_0 ]   [ α_0  α_1      α_2 ] [ γ_0 ]
       [ 0    ] = [ γ_1  γ_0  γ_1 ] [ α_1 ] = [ α_1  α_0+α_2  0   ] [ γ_1 ]
       [ 0    ]   [ γ_2  γ_1  γ_0 ] [ α_2 ]   [ α_2  α_1      α_0 ] [ γ_2 ].
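Both uses of the Yule–Walker equations can be illustrated numerically. The following Python sketch (the AR(2) coefficients and the tiny 3x3 solver are our own illustrative choices) passes from the α's and σ_ε² to the first three autocovariances via the folded matrix of (28), and then recovers the parameters from those autocovariances:

```python
def solve3(A, b):
    # Gaussian elimination for a small 3x3 system (no pivoting; the
    # Yule-Walker matrices here are well conditioned).
    A = [row[:] for row in A]
    b = b[:]
    for k in range(3):
        for i in range(k + 1, 3):
            f = A[i][k] / A[k][k]
            for j in range(k, 3):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

a1, a2, sigma2 = -0.5, 0.2, 1.0   # a stationary AR(2) process

# Forward use of (28): from the alphas and sigma_e^2 to gamma_0, gamma_1,
# gamma_2, using the "folded" matrix [[1, a1, a2], [a1, 1+a2, 0], [a2, a1, 1]].
g0, g1, g2 = solve3([[1.0, a1, a2], [a1, 1.0 + a2, 0.0], [a2, a1, 1.0]],
                    [sigma2, 0.0, 0.0])

# Reverse use: recover a1, a2 from the rows tau = 1, 2 of the Yule-Walker
# equations, [g0 g1; g1 g0][a1; a2] = [-g1; -g2], and sigma_e^2 from row 0.
det = g0 * g0 - g1 * g1
a1_hat = (-g1 * g0 + g1 * g2) / det
a2_hat = (-g0 * g2 + g1 * g1) / det
sigma2_hat = g0 + a1_hat * g1 + a2_hat * g2

assert abs(a1_hat - a1) < 1e-10 and abs(a2_hat - a2) < 1e-10
assert abs(sigma2_hat - sigma2) < 1e-10
```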

Given α_0 = 1 and the values for γ_0, γ_1, γ_2, we can find σ_ε² and α_1, α_2. Conversely, given α_0, α_1, α_2 and σ_ε², we can find γ_0, γ_1, γ_2. It is worth recalling at this juncture that the normalisation σ_ε² = 1 might have been chosen instead of α_0 = 1. This would have rendered the equations more easily intelligible. Notice also how the matrix following the first equality is folded across the axis which divides it vertically to give the matrix which follows the second equality. Pleasing effects of this sort often arise in time-series analysis.

Autoregressive Moving Average Processes

The autoregressive moving-average process of orders p and q, which is referred to as the ARMA(p, q) process, is defined by the equation

(29)   α_0 y(t) + α_1 y(t−1) + ... + α_p y(t−p) = µ_0 ε(t) + µ_1 ε(t−1) + ... + µ_q ε(t−q).

The equation is normalised by setting α_0 = 1 and by setting either µ_0 = 1 or σ_ε² = 1. A more summary expression for the equation is α(L)y(t) = µ(L)ε(t).

Provided that the roots of the equation α(z) = 0 lie outside the unit circle, the process can be represented by the equation y(t) = α⁻¹(L)µ(L)ε(t), which corresponds to an infinite-order moving-average process. Conversely, provided the roots of the equation µ(z) = 0 lie outside the unit circle, the process can be represented by the equation µ⁻¹(L)α(L)y(t) = ε(t), which corresponds to an infinite-order autoregressive process.

By considering the moving-average form of the process, and by noting the form of the autocovariance generating function for such a process which is given by equation (16), it can be seen that the autocovariance generating function for the autoregressive moving-average process is

(30)   γ(z) = σ_ε² {µ(z)µ(z⁻¹)} / {α(z)α(z⁻¹)}.

This generating function, which is of some theoretical interest, does not provide a practical means of finding the autocovariances. To find these, let us consider multiplying the equation Σ_i α_i y_{t−i} = Σ_i µ_i ε_{t−i} by y_{t−τ} and taking expectations. This gives

(31)   Σ_i α_i γ_{τ−i} = Σ_i µ_i δ_{i−τ},

where γ_{τ−i} = E(y_{t−i} y_{t−τ}) and δ_{i−τ} = E(ε_{t−i} y_{t−τ}). Since ε_{t−i} is uncorrelated with y_{t−τ} whenever it is subsequent to the latter, it follows that δ_{i−τ} = 0 if τ > i.

Since the index i on the RHS of the equation (31) runs from 0 to q, it follows that

(32)   Σ_i α_i γ_{τ−i} = 0    if τ > q.

Given the q+1 nonzero values δ_0, δ_1, ..., δ_q, and p initial values γ_0, γ_1, ..., γ_{p−1} for the autocovariances, the equations can be solved recursively to obtain the subsequent values {γ_p, γ_{p+1}, ...}.

To find the requisite values δ_0, δ_1, ..., δ_q, consider multiplying the equation Σ_i α_i y_{t−i} = Σ_i µ_i ε_{t−i} by ε_{t−τ} and taking expectations. This gives

(33)   Σ_i α_i δ_{τ−i} = µ_τ σ_ε²,

where δ_{τ−i} = E(y_{t−i} ε_{t−τ}). The equation may be rewritten as

(34)   δ_τ = (1/α_0) { µ_τ σ_ε² − Σ_{i≥1} α_i δ_{τ−i} };

and, by setting τ = 0, 1, ..., q, we can generate recursively the required values δ_0, δ_1, ..., δ_q.

Example. Consider the ARMA(2, 2) model, which gives the equation

(35)   α_0 y_t + α_1 y_{t−1} + α_2 y_{t−2} = µ_0 ε_t + µ_1 ε_{t−1} + µ_2 ε_{t−2}.

Multiplying by y_t, y_{t−1} and y_{t−2} and taking expectations gives

(36)   [ γ_0  γ_1  γ_2 ] [ α_0 ]   [ δ_0  δ_1  δ_2 ] [ µ_0 ]
       [ γ_1  γ_0  γ_1 ] [ α_1 ] = [ 0    δ_0  δ_1 ] [ µ_1 ]
       [ γ_2  γ_1  γ_0 ] [ α_2 ]   [ 0    0    δ_0 ] [ µ_2 ].

Multiplying by ε_t, ε_{t−1} and ε_{t−2} and taking expectations gives

(37)   [ δ_0  0    0   ] [ α_0 ]   [ σ_ε²  0     0    ] [ µ_0 ]
       [ δ_1  δ_0  0   ] [ α_1 ] = [ 0     σ_ε²  0    ] [ µ_1 ]
       [ δ_2  δ_1  δ_0 ] [ α_2 ]   [ 0     0     σ_ε² ] [ µ_2 ].

When the latter equations are written as

(38)   [ α_0  0    0   ] [ δ_0 ]        [ µ_0 ]
       [ α_1  α_0  0   ] [ δ_1 ] = σ_ε² [ µ_1 ]
       [ α_2  α_1  α_0 ] [ δ_2 ]        [ µ_2 ],

they can be solved recursively for δ_0, δ_1 and δ_2, on the assumption that the values of α_0, α_1, α_2 and σ_ε² are known. Notice that, when we adopt the normalisation α_0 = µ_0 = 1, we get δ_0 = σ_ε². When the equations (36) are rewritten as

(39)   [ α_0  α_1      α_2 ] [ γ_0 ]   [ µ_0  µ_1  µ_2 ] [ δ_0 ]
       [ α_1  α_0+α_2  0   ] [ γ_1 ] = [ µ_1  µ_2  0   ] [ δ_1 ]
       [ α_2  α_1      α_0 ] [ γ_2 ]   [ µ_2  0    0   ] [ δ_2 ],

they can be solved for γ_0, γ_1 and γ_2. Thus the starting values are obtained which enable the equation

(40)   α_0 γ_τ + α_1 γ_{τ−1} + α_2 γ_{τ−2} = 0;    τ > 2,

to be solved recursively to generate the succeeding values {γ_3, γ_4, ...} of the autocovariances.

Minimum Mean-Square Error Prediction

Imagine that y(t) is a stationary stochastic process with E{y(t)} = 0. We may be interested in predicting values of this process several periods into the future on the basis of its observed history. This history is contained in our so-called information set. In practice, the latter is always a finite set {y_t, y_{t−1}, ..., y_{t−p}} representing the recent past. Nevertheless, in developing the theory of prediction, it is also useful to consider an infinite information set {y_t, y_{t−1}, ..., y_{t−p}, ...} representing the entire past. We shall denote the prediction of y_{t+m} which is made at time t by ŷ_{t+m|t}, or by ŷ_{t+m} when it is clear that we are predicting m steps ahead.

The criterion by which we usually judge the performance of an estimator or predictor ŷ of a random variable y is its mean-square error, defined by E{(y − ŷ)²}. If all of the available information on y is summarised in its marginal distribution, then the minimum mean-square error prediction is simply the expected value E(y). However, if y is statistically related to another random variable x whose value we can observe, and if we know the form of the joint distribution of x and y, then the minimum mean-square error prediction of y is the conditional expectation E(y|x). We may state this proposition formally:

(41)   Let ŷ = ŷ(x) be the conditional expectation of y given x, which is also expressed as ŷ = E(y|x). Then we have E{(y − ŷ)²} ≤ E{(y − π)²}, where π = π(x) is any other function of x.

Proof. Consider

(42)   E{(y − π)²} = E[{(y − ŷ) + (ŷ − π)}²]
                   = E{(y − ŷ)²} + 2E{(y − ŷ)(ŷ − π)} + E{(ŷ − π)²}.

In the second term, we have

(43)   E{(y − ŷ)(ŷ − π)} = ∫_x ∫_y (y − ŷ)(ŷ − π) f(x, y) ∂y ∂x
                         = ∫_x { ∫_y (y − ŷ) f(y|x) ∂y } (ŷ − π) f(x) ∂x
                         = 0.

Here the second equality depends upon the factorisation f(x, y) = f(y|x)f(x), which expresses the joint probability density function of x and y as the product of the conditional density function of y given x and the marginal density function of x. The final equality depends upon the fact that ∫ (y − ŷ) f(y|x) ∂y = E(y|x) − E(y|x) = 0. Therefore E{(y − π)²} = E{(y − ŷ)²} + E{(ŷ − π)²} ≥ E{(y − ŷ)²}, and our assertion is proved.

We might note that the definition of the conditional expectation implies that

(44)   E(xy) = ∫_x ∫_y x y f(x, y) ∂y ∂x
             = ∫_x x { ∫_y y f(y|x) ∂y } f(x) ∂x
             = E(xŷ).

When the equation E(xy) = E(xŷ) is rewritten as

(45)   E{x(y − ŷ)} = 0,

it may be described as an orthogonality condition. This condition indicates that the prediction error y − ŷ is uncorrelated with x. The result is intuitively appealing; for, if the error were correlated with x, we should not be using the information of x efficiently in forming ŷ.

The proposition of (41) is readily generalised to accommodate the case where, in place of the scalar x, we have a vector x = [x_1, ..., x_p]. This generalisation indicates that the minimum-mean-square-error prediction of y_{t+m} given the information in {y_t, y_{t−1}, ..., y_{t−p}} is the conditional expectation E(y_{t+m} | y_t, y_{t−1}, ..., y_{t−p}).

In order to determine the conditional expectation of y_{t+m} given {y_t, y_{t−1}, ..., y_{t−p}}, we need to know the functional form of the joint probability density function of all of these variables. In lieu of precise knowledge, we are often prepared to assume that the distribution is normal. In that case, it follows that the conditional expectation of y_{t+m} is a linear function of {y_t, y_{t−1}, ..., y_{t−p}};

and so the problem of predicting y_{t+m} becomes a matter of forming a linear regression.

Even if we are not prepared to assume that the joint distribution of the variables is normal, we may be prepared, nevertheless, to base our prediction of y upon a linear function of {y_t, y_{t−1}, ..., y_{t−p}}. In that case, we satisfy the criterion of minimum mean-square error linear prediction by forming ŷ_{t+m} = Σ_j φ_j y_{t−j+1} from the values φ_1, ..., φ_{p+1} which minimise

(46)   E{(y_{t+m} − ŷ_{t+m})²} = E{ ( y_{t+m} − Σ_{j=1}^{p+1} φ_j y_{t−j+1} )² }
                               = γ_0 − 2 Σ_j φ_j γ_{m+j−1} + Σ_i Σ_j φ_i φ_j γ_{i−j}.

This is a linear least-squares regression problem which leads to a set of p+1 orthogonality conditions described as the normal equations:

(47)   E{(y_{t+m} − ŷ_{t+m}) y_{t−j+1}} = γ_{m+j−1} − Σ_{i=1}^{p+1} φ_i γ_{i−j}
                                        = 0;    j = 1, ..., p+1.

In matrix terms, we have

(48)   [ γ_0  γ_1      ...  γ_p     ] [ φ_1     ]   [ γ_m     ]
       [ γ_1  γ_0      ...  γ_{p−1} ] [ φ_2     ] = [ γ_{m+1} ]
       [ ...  ...      ...  ...     ] [ ...     ]   [ ...     ]
       [ γ_p  γ_{p−1}  ...  γ_0     ] [ φ_{p+1} ]   [ γ_{m+p} ].

Notice that, for the one-step-ahead prediction of y_{t+1}, these are nothing but the Yule–Walker equations.

Forecasting with ARMA Models

So far, we have avoided making any specific assumptions about the nature of the process y(t), other than that it can be represented by an infinite-order moving average. We are greatly assisted in the business of developing practical forecasting procedures if we can assume that y(t) is generated by an ARMA process such that

(49)   y(t) = {µ(L)/α(L)} ε(t) = ψ(L)ε(t).

We shall continue to assume, for the sake of simplicity, that the forecasts are based on the information contained in the infinite set {y_t, y_{t−1}, ..., y_{t−p}, ...},

comprising all values that have been taken by the variable up to the present time t. Knowing the parameters in ψ(L) enables us to recover the sequence {ε_t, ε_{t−1}, ε_{t−2}, ...} from the sequence {y_t, y_{t−1}, y_{t−2}, ...}, and vice versa; so either of these can be regarded as our information set.

Let us write the realisations of equation (49) as

(50)   y_{t+m} = Σ_{j=0}^{m−1} ψ_j ε_{t+m−j} + Σ_{j=m}^{∞} ψ_j ε_{t+m−j}.

Here the first term on the RHS embodies disturbances subsequent to the time t when the forecast is made, and the second term embodies disturbances which are within the information set {ε_t, ε_{t−1}, ε_{t−2}, ...}. Let us now define a forecasting function, based on the information set, which takes the form of

(51)   ŷ_{t+m} = Σ_{j=m}^{∞} ρ_j ε_{t+m−j}.

Then, given that ε(t) is a white-noise process, it follows that the mean square of the error in the forecast m periods ahead is given by

(52)   E{(y_{t+m} − ŷ_{t+m})²} = σ_ε² Σ_{j=0}^{m−1} ψ_j² + σ_ε² Σ_{j=m}^{∞} (ψ_j − ρ_j)².

Clearly, the mean-square error is minimised by setting ρ_j = ψ_j; and so the optimal forecast is given by

(53)   ŷ_{t+m} = Σ_{j=m}^{∞} ψ_j ε_{t+m−j}.

This might have been derived from the equation y(t+m) = ψ(L)ε(t+m), which generates the true value of y_{t+m}, simply by putting zeros in place of the unobserved disturbances ε_{t+1}, ε_{t+2}, ..., ε_{t+m} which lie in the future when the forecast is made. Notice that, as the lead time m of the forecast increases, the mean-square error of the forecast tends to the value of

(54)   V{y(t)} = σ_ε² Σ_j ψ_j²,

which is nothing but the variance of the process y(t).

We can also derive the optimal forecast of (53) by specifying that the forecast error should be uncorrelated with the disturbances up to the time of making the forecast. For, if the forecast errors were correlated with some of the elements of our information set, then, as we have noted before, we would

not be using the information efficiently, and we could not be generating optimal forecasts. To demonstrate this result anew, let us consider the covariance between the forecast error and the disturbance ε_{t−i}:

(55)   E{(y_{t+m} − ŷ_{t+m}) ε_{t−i}} = Σ_{k=1}^{m} ψ_{m−k} E(ε_{t+k} ε_{t−i})
                                      + Σ_{j=0}^{∞} (ψ_{m+j} − ρ_{m+j}) E(ε_{t−j} ε_{t−i})
                                      = σ_ε² (ψ_{m+i} − ρ_{m+i}).

Here the final equality follows from the fact that

(56)   E(ε_{t−j} ε_{t−i}) = { σ_ε²,  if i = j;
                            { 0,     if i ≠ j.

If the covariance in (55) is to be equal to zero for all values of i, then we must have ρ_j = ψ_j for all j; which means that our forecasting function must be the one that we have already specified under (53).

It is helpful, sometimes, to have a functional notation for describing the process which generates the m-steps-ahead forecast. The notation provided by Whittle (1963) is widely used. To derive this, let us begin by writing

(57)   y(t+m) = {L^{−m} ψ(L)} ε(t).

On the RHS, we have not only the lagged sequences ε(t), ε(t−1), ..., but also the sequences ε(t+m) = L^{−m}ε(t), ..., ε(t+1) = L⁻¹ε(t), all of which are associated with negative powers of L. Let {L^{−m}ψ(L)}_+ be defined as the part of the operator containing only nonnegative powers of L. Then we can express the forecasting function as

(58)   ŷ(t+m | t) = {L^{−m} ψ(L)}_+ ε(t)
                  = {L^{−m} ψ(L)}_+ {1/ψ(L)} y(t).

The Forecasts as Conditional Expectations

We have already seen that we can regard the optimal (minimum mean-square error) forecast of y_{t+m} as the conditional expectation of y_{t+m} given the values of {ε_t, ε_{t−1}, ε_{t−2}, ...} or, equally, of {y_t, y_{t−1}, y_{t−2}, ...}. Let us denote the forecast by ŷ_{t+m} = E_t(y_{t+m}), where the subscript on the operator is to indicate that

the expectation is conditional upon the information available at time t. On applying the operator to the sequences y(t) and ε(t), we find that

(59)   E_t(y_{t+k}) = ŷ_{t+k}   if k > 0;      E_t(y_{t−j}) = y_{t−j}   if j ≥ 0;
       E_t(ε_{t+k}) = 0         if k > 0;      E_t(ε_{t−j}) = ε_{t−j}   if j ≥ 0.

In this notation, the forecast m periods ahead is

(60)   E_t(y_{t+m}) = Σ_{k=1}^{m} ψ_{m−k} E_t(ε_{t+k}) + Σ_{j=0}^{∞} ψ_{m+j} E_t(ε_{t−j})
                    = Σ_{j=0}^{∞} ψ_{m+j} ε_{t−j}.

In practice, we may generate the forecasts using a recursion based on the equation

(61)   y(t) = −{α_1 y(t−1) + α_2 y(t−2) + ... + α_p y(t−p)}
            + µ_0 ε(t) + µ_1 ε(t−1) + ... + µ_q ε(t−q).

By taking the conditional expectation of this function, we get

(62)   ŷ_{t+m} = −{α_1 ŷ_{t+m−1} + ... + α_p y_{t+m−p}} + µ_m ε_t + ... + µ_q ε_{t+m−q}
       when 0 < m ≤ p, q;

(63)   ŷ_{t+m} = −{α_1 ŷ_{t+m−1} + ... + α_p y_{t+m−p}}    if q < m ≤ p;

(64)   ŷ_{t+m} = −{α_1 ŷ_{t+m−1} + ... + α_p ŷ_{t+m−p}} + µ_m ε_t + ... + µ_q ε_{t+m−q}
       if p < m ≤ q;

and

(65)   ŷ_{t+m} = −{α_1 ŷ_{t+m−1} + ... + α_p ŷ_{t+m−p}}    when p, q < m.

We can see from (65) that, for m > p, q, the forecasting function becomes a pth-order homogeneous difference equation in y. The p values of y(t) from t = r = max(p, q) to t = r − p + 1 serve as the starting values for the equation.
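The recursions (62)-(65) translate directly into code. The sketch below (the helper name and the choice of test processes are our own; it assumes that the supplied histories of y and ε are long enough to cover the required lags) replaces future disturbances by zeros and future values of y by the forecasts already made:

```python
def arma_forecasts(alpha, mu, y, eps, M):
    # Forecasts y_hat(t+1), ..., y_hat(t+M) by the recursions (62)-(65):
    # future values of y are replaced by the forecasts already made, future
    # disturbances by zero, and disturbances up to time t by their values.
    p, q = len(alpha) - 1, len(mu) - 1
    yy = list(y)      # the observed values, with forecasts appended in turn
    t = len(y) - 1    # index of the present moment within the lists
    for m in range(1, M + 1):
        s = -sum(alpha[i] * yy[t + m - i] for i in range(1, p + 1))
        s += sum(mu[j] * eps[t + m - j] for j in range(m, q + 1))
        yy.append(s)
    return yy[t + 1:]

# AR(1): the forecasts should follow (69), y_hat(t+m) = phi^m * y(t).
phi = 0.7
f = arma_forecasts([1.0, -phi], [1.0], [2.0], [0.0], 5)
assert all(abs(f[m] - phi ** (m + 1) * 2.0) < 1e-12 for m in range(5))

# MA(1): y_hat(t+1) = mu_1 * eps(t), and all later forecasts are zero.
g = arma_forecasts([1.0], [1.0, -0.5], [1.0], [0.8], 3)
assert abs(g[0] + 0.5 * 0.8) < 1e-12 and g[1] == 0.0 and g[2] == 0.0
```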

The behaviour of the forecast function beyond the reach of the starting values can be characterised in terms of the roots of the autoregressive operator. We can assume that none of the roots of α(L) = 0 lies inside the unit circle. If all of the roots are greater than unity in modulus, then ŷ_{t+m} will converge to zero as m increases. If one of the roots of α(L) = 0 is unity, then we have an ARIMA(p, 1, q) model; and the general solution of the homogeneous equation of (65) will include a constant term, which represents the product of the unit root with a coefficient which is determined by the starting values. Hence the forecast will tend to a nonzero constant. If two of the roots are unity, then the general solution will embody a linear time trend, which is the asymptote to which the forecasts will tend. In general, if d of the roots are unity, then the general solution will comprise a polynomial in t of order d−1.

The forecasts can be updated easily once the coefficients in the expansion of ψ(L) = µ(L)/α(L) have been obtained. Consider

(66)   ŷ_{(t+1)+m} = {ψ_m ε_{t+1} + ψ_{m+1} ε_t + ψ_{m+2} ε_{t−1} + ...}    and
       ŷ_{t+(m+1)} = {ψ_{m+1} ε_t + ψ_{m+2} ε_{t−1} + ...}.

The first of these is the forecast for m periods ahead made at time t+1, whilst the second is the forecast for m+1 periods ahead made at time t. We can easily see that

(67)   ŷ_{(t+1)+m} = ŷ_{t+(m+1)} + ψ_m ε_{t+1},

where ε_{t+1} = y_{t+1} − ŷ_{t+1} is the current disturbance at time t+1. The latter is also the prediction error of the one-step-ahead forecast made at time t.

Example. Consider the AR(1) process. We have

(68)   y(t+m) = φ y(t+m−1) + ε(t+m).

On applying the operator E_t to this equation, we obtain the following:

(69)   ŷ(t+1) = φ y(t),
       ŷ(t+2) = φ ŷ(t+1) = φ² y(t),
       ...
       ŷ(t+m) = φ ŷ(t+m−1) = φ^m y(t).

Given that y(t) = ε(t)/(1 − φL) = {ε(t) + φ ε(t−1) + φ² ε(t−2) + ...}, it follows from (52) that

(70)   E{(y_{t+m} − ŷ_{t+m})²} = σ_ε² {1 + φ² + φ⁴ + ... + φ^{2(m−1)}}.
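For the AR(1) example, the updating formula (67) and the limiting behaviour of the mean-square error (70) can both be verified numerically; the parameter and data values below are arbitrary illustrative choices:

```python
phi = 0.8
y_t, y_t1 = 1.5, 2.1      # hypothetical observations at times t and t+1

# For an AR(1) process, psi_m = phi^m and the one-step forecast error is
# eps(t+1) = y(t+1) - phi*y(t).
eps_t1 = y_t1 - phi * y_t
for m in range(6):
    new_forecast = phi ** m * y_t1        # made at time t+1, m steps ahead
    old_forecast = phi ** (m + 1) * y_t   # made at time t, m+1 steps ahead
    # The updating formula (67): y_hat_{(t+1)+m} = y_hat_{t+(m+1)} + psi_m*eps(t+1)
    assert abs(new_forecast - (old_forecast + phi ** m * eps_t1)) < 1e-12

# The mean-square error (70) with sigma_e^2 = 1: as m grows, it approaches
# the variance 1/(1 - phi^2) of the process.
def mse(m):
    return sum(phi ** (2 * j) for j in range(m))

variance = 1.0 / (1 - phi ** 2)
assert mse(1) == 1.0
assert abs(mse(200) - variance) < 1e-12
```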

As m → ∞, this tends to σ_ε²/(1 − φ²), which is just the variance of the AR(1) process.

Example (Exponential Smoothing). A common forecasting procedure is exponential smoothing. This depends upon taking a weighted average of past values of the time series, with the weights following a geometrically declining pattern. The function generating the one-step-ahead forecasts can be written as

(71)   ŷ(t+1) = {(1−θ)/(1−θL)} y(t)
             = (1−θ){y(t) + θ y(t−1) + θ² y(t−2) + ...}.

On multiplying both sides of this equation by 1−θL and rearranging, we get

(72)   ŷ(t+1) = θ ŷ(t) + (1−θ) y(t),

which shows that the current forecast for one step ahead is a convex combination of the previous forecast and the value that actually transpired.

It is possible to show that the method of exponential smoothing corresponds to the optimal forecasting procedure for the ARIMA(0, 1, 1) model (1−L)y(t) = (1−θL)ε(t). To see this, let us consider the ARMA(1, 1) model y(t) − φy(t−1) = ε(t) − θε(t−1). This gives

(73)   ŷ(t+1) = φ y(t) − θ ε(t)
             = φ y(t) − θ {(1−φL)/(1−θL)} y(t)
             = [{(1−θL)φ − (1−φL)θ}/(1−θL)] y(t)
             = {(φ−θ)/(1−θL)} y(t).

On setting φ = 1, which converts the ARMA(1, 1) model to an ARIMA(0, 1, 1) model, we obtain precisely the forecasting function of (71).
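As a check on the equivalence of the weighted-average form (71) and the recursive form (72), the following sketch computes the one-step forecast both ways; the data and the smoothing parameter are arbitrary, and the recursion is started from a zero pre-sample forecast so that the two forms agree exactly:

```python
theta = 0.4
y = [1.0, 2.0, -1.0, 0.5, 3.0, -0.2, 1.5]

# Recursive form (72): y_hat(t+1) = theta*y_hat(t) + (1-theta)*y(t),
# started from y_hat = 0 (an empty pre-sample history).
y_hat = 0.0
for t in range(len(y)):
    y_hat = theta * y_hat + (1 - theta) * y[t]

# Weighted-average form (71): (1-theta)*{y(t) + theta*y(t-1) + theta^2*y(t-2) + ...},
# truncated at the start of the sample, which matches the zero initialisation.
direct = (1 - theta) * sum(theta ** j * y[len(y) - 1 - j] for j in range(len(y)))

assert abs(y_hat - direct) < 1e-12
```

Unfolding the recursion shows why the two agree: each pass multiplies the accumulated weights by θ, which is exactly the geometric decline in (71).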


More information

Econ Statistical Properties of the OLS estimator. Sanjaya DeSilva

Econ Statistical Properties of the OLS estimator. Sanjaya DeSilva Econ 39 - Statstcal Propertes of the OLS estmator Sanjaya DeSlva September, 008 1 Overvew Recall that the true regresson model s Y = β 0 + β 1 X + u (1) Applyng the OLS method to a sample of data, we estmate

More information

PHYS 705: Classical Mechanics. Calculus of Variations II

PHYS 705: Classical Mechanics. Calculus of Variations II 1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary

More information

Structure and Drive Paul A. Jensen Copyright July 20, 2003

Structure and Drive Paul A. Jensen Copyright July 20, 2003 Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

III. Econometric Methodology Regression Analysis

III. Econometric Methodology Regression Analysis Page Econ07 Appled Econometrcs Topc : An Overvew of Regresson Analyss (Studenmund, Chapter ) I. The Nature and Scope of Econometrcs. Lot s of defntons of econometrcs. Nobel Prze Commttee Paul Samuelson,

More information

Chapter 7 Generalized and Weighted Least Squares Estimation. In this method, the deviation between the observed and expected values of

Chapter 7 Generalized and Weighted Least Squares Estimation. In this method, the deviation between the observed and expected values of Chapter 7 Generalzed and Weghted Least Squares Estmaton The usual lnear regresson model assumes that all the random error components are dentcally and ndependently dstrbuted wth constant varance. When

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

STAT 511 FINAL EXAM NAME Spring 2001

STAT 511 FINAL EXAM NAME Spring 2001 STAT 5 FINAL EXAM NAME Sprng Instructons: Ths s a closed book exam. No notes or books are allowed. ou may use a calculator but you are not allowed to store notes or formulas n the calculator. Please wrte

More information

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0

Bezier curves. Michael S. Floater. August 25, These notes provide an introduction to Bezier curves. i=0 Bezer curves Mchael S. Floater August 25, 211 These notes provde an ntroducton to Bezer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of the

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

Limited Dependent Variables

Limited Dependent Variables Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

Lecture 4 Hypothesis Testing

Lecture 4 Hypothesis Testing Lecture 4 Hypothess Testng We may wsh to test pror hypotheses about the coeffcents we estmate. We can use the estmates to test whether the data rejects our hypothess. An example mght be that we wsh to

More information

Section 8.3 Polar Form of Complex Numbers

Section 8.3 Polar Form of Complex Numbers 80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the

More information

Chat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980

Chat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980 MT07: Multvarate Statstcal Methods Mke Tso: emal mke.tso@manchester.ac.uk Webpage for notes: http://www.maths.manchester.ac.uk/~mkt/new_teachng.htm. Introducton to multvarate data. Books Chat eld, C. and

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Department of Quantitative Methods & Information Systems. Time Series and Their Components QMIS 320. Chapter 6

Department of Quantitative Methods & Information Systems. Time Series and Their Components QMIS 320. Chapter 6 Department of Quanttatve Methods & Informaton Systems Tme Seres and Ther Components QMIS 30 Chapter 6 Fall 00 Dr. Mohammad Zanal These sldes were modfed from ther orgnal source for educatonal purpose only.

More information

See Book Chapter 11 2 nd Edition (Chapter 10 1 st Edition)

See Book Chapter 11 2 nd Edition (Chapter 10 1 st Edition) Count Data Models See Book Chapter 11 2 nd Edton (Chapter 10 1 st Edton) Count data consst of non-negatve nteger values Examples: number of drver route changes per week, the number of trp departure changes

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Differentiating Gaussian Processes

Differentiating Gaussian Processes Dfferentatng Gaussan Processes Andrew McHutchon Aprl 17, 013 1 Frst Order Dervatve of the Posteror Mean The posteror mean of a GP s gven by, f = x, X KX, X 1 y x, X α 1 Only the x, X term depends on the

More information

1 Matrix representations of canonical matrices

1 Matrix representations of canonical matrices 1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:

More information

1. Inference on Regression Parameters a. Finding Mean, s.d and covariance amongst estimates. 2. Confidence Intervals and Working Hotelling Bands

1. Inference on Regression Parameters a. Finding Mean, s.d and covariance amongst estimates. 2. Confidence Intervals and Working Hotelling Bands Content. Inference on Regresson Parameters a. Fndng Mean, s.d and covarance amongst estmates.. Confdence Intervals and Workng Hotellng Bands 3. Cochran s Theorem 4. General Lnear Testng 5. Measures of

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Affine transformations and convexity

Affine transformations and convexity Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis A Appendx for Causal Interacton n Factoral Experments: Applcaton to Conjont Analyss Mathematcal Appendx: Proofs of Theorems A. Lemmas Below, we descrbe all the lemmas, whch are used to prove the man theorems

More information

Lecture 3 Stat102, Spring 2007

Lecture 3 Stat102, Spring 2007 Lecture 3 Stat0, Sprng 007 Chapter 3. 3.: Introducton to regresson analyss Lnear regresson as a descrptve technque The least-squares equatons Chapter 3.3 Samplng dstrbuton of b 0, b. Contnued n net lecture

More information

Exercises. 18 Algorithms

Exercises. 18 Algorithms 18 Algorthms Exercses 0.1. In each of the followng stuatons, ndcate whether f = O(g), or f = Ω(g), or both (n whch case f = Θ(g)). f(n) g(n) (a) n 100 n 200 (b) n 1/2 n 2/3 (c) 100n + log n n + (log n)

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

1 Generating functions, continued

1 Generating functions, continued Generatng functons, contnued. Exponental generatng functons and set-parttons At ths pont, we ve come up wth good generatng-functon dscussons based on 3 of the 4 rows of our twelvefold way. Wll our nteger-partton

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute

More information

Quantum Mechanics for Scientists and Engineers. David Miller

Quantum Mechanics for Scientists and Engineers. David Miller Quantum Mechancs for Scentsts and Engneers Davd Mller Types of lnear operators Types of lnear operators Blnear expanson of operators Blnear expanson of lnear operators We know that we can expand functons

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0 Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of

More information

Negative Binomial Regression

Negative Binomial Regression STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

Lecture 5 Decoding Binary BCH Codes

Lecture 5 Decoding Binary BCH Codes Lecture 5 Decodng Bnary BCH Codes In ths class, we wll ntroduce dfferent methods for decodng BCH codes 51 Decodng the [15, 7, 5] 2 -BCH Code Consder the [15, 7, 5] 2 -code C we ntroduced n the last lecture

More information

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Maxmum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

1 Generating functions, continued

1 Generating functions, continued Generatng functons, contnued. Generatng functons and parttons We can make use of generatng functons to answer some questons a bt more restrctve than we ve done so far: Queston : Fnd a generatng functon

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

Equilibrium with Complete Markets. Instructor: Dmytro Hryshko

Equilibrium with Complete Markets. Instructor: Dmytro Hryshko Equlbrum wth Complete Markets Instructor: Dmytro Hryshko 1 / 33 Readngs Ljungqvst and Sargent. Recursve Macroeconomc Theory. MIT Press. Chapter 8. 2 / 33 Equlbrum n pure exchange, nfnte horzon economes,

More information

C4B Machine Learning Answers II. = σ(z) (1 σ(z)) 1 1 e z. e z = σ(1 σ) (1 + e z )

C4B Machine Learning Answers II. = σ(z) (1 σ(z)) 1 1 e z. e z = σ(1 σ) (1 + e z ) C4B Machne Learnng Answers II.(a) Show that for the logstc sgmod functon dσ(z) dz = σ(z) ( σ(z)) A. Zsserman, Hlary Term 20 Start from the defnton of σ(z) Note that Then σ(z) = σ = dσ(z) dz = + e z e z

More information

Robert Eisberg Second edition CH 09 Multielectron atoms ground states and x-ray excitations

Robert Eisberg Second edition CH 09 Multielectron atoms ground states and x-ray excitations Quantum Physcs 量 理 Robert Esberg Second edton CH 09 Multelectron atoms ground states and x-ray exctatons 9-01 By gong through the procedure ndcated n the text, develop the tme-ndependent Schroednger equaton

More information

Economics 130. Lecture 4 Simple Linear Regression Continued

Economics 130. Lecture 4 Simple Linear Regression Continued Economcs 130 Lecture 4 Contnued Readngs for Week 4 Text, Chapter and 3. We contnue wth addressng our second ssue + add n how we evaluate these relatonshps: Where do we get data to do ths analyss? How do

More information

Open Systems: Chemical Potential and Partial Molar Quantities Chemical Potential

Open Systems: Chemical Potential and Partial Molar Quantities Chemical Potential Open Systems: Chemcal Potental and Partal Molar Quanttes Chemcal Potental For closed systems, we have derved the followng relatonshps: du = TdS pdv dh = TdS + Vdp da = SdT pdv dg = VdP SdT For open systems,

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

First day August 1, Problems and Solutions

First day August 1, Problems and Solutions FOURTH INTERNATIONAL COMPETITION FOR UNIVERSITY STUDENTS IN MATHEMATICS July 30 August 4, 997, Plovdv, BULGARIA Frst day August, 997 Problems and Solutons Problem. Let {ε n } n= be a sequence of postve

More information

6. Stochastic processes (2)

6. Stochastic processes (2) 6. Stochastc processes () Lect6.ppt S-38.45 - Introducton to Teletraffc Theory Sprng 5 6. Stochastc processes () Contents Markov processes Brth-death processes 6. Stochastc processes () Markov process

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur

Dr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur Analyss of Varance and Desgn of Exerments-I MODULE III LECTURE - 2 EXPERIMENTAL DESIGN MODELS Dr. Shalabh Deartment of Mathematcs and Statstcs Indan Insttute of Technology Kanur 2 We consder the models

More information

SIO 224. m(r) =(ρ(r),k s (r),µ(r))

SIO 224. m(r) =(ρ(r),k s (r),µ(r)) SIO 224 1. A bref look at resoluton analyss Here s some background for the Masters and Gubbns resoluton paper. Global Earth models are usually found teratvely by assumng a startng model and fndng small

More information

6. Stochastic processes (2)

6. Stochastic processes (2) Contents Markov processes Brth-death processes Lect6.ppt S-38.45 - Introducton to Teletraffc Theory Sprng 5 Markov process Consder a contnuous-tme and dscrete-state stochastc process X(t) wth state space

More information

Lecture 6: Introduction to Linear Regression

Lecture 6: Introduction to Linear Regression Lecture 6: Introducton to Lnear Regresson An Manchakul amancha@jhsph.edu 24 Aprl 27 Lnear regresson: man dea Lnear regresson can be used to study an outcome as a lnear functon of a predctor Example: 6

More information

Expected Value and Variance

Expected Value and Variance MATH 38 Expected Value and Varance Dr. Neal, WKU We now shall dscuss how to fnd the average and standard devaton of a random varable X. Expected Value Defnton. The expected value (or average value, or

More information

Basic Business Statistics, 10/e

Basic Business Statistics, 10/e Chapter 13 13-1 Basc Busness Statstcs 11 th Edton Chapter 13 Smple Lnear Regresson Basc Busness Statstcs, 11e 009 Prentce-Hall, Inc. Chap 13-1 Learnng Objectves In ths chapter, you learn: How to use regresson

More information

LOGIT ANALYSIS. A.K. VASISHT Indian Agricultural Statistics Research Institute, Library Avenue, New Delhi

LOGIT ANALYSIS. A.K. VASISHT Indian Agricultural Statistics Research Institute, Library Avenue, New Delhi LOGIT ANALYSIS A.K. VASISHT Indan Agrcultural Statstcs Research Insttute, Lbrary Avenue, New Delh-0 02 amtvassht@asr.res.n. Introducton In dummy regresson varable models, t s assumed mplctly that the dependent

More information

Lossy Compression. Compromise accuracy of reconstruction for increased compression.

Lossy Compression. Compromise accuracy of reconstruction for increased compression. Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost

More information

STATS 306B: Unsupervised Learning Spring Lecture 10 April 30

STATS 306B: Unsupervised Learning Spring Lecture 10 April 30 STATS 306B: Unsupervsed Learnng Sprng 2014 Lecture 10 Aprl 30 Lecturer: Lester Mackey Scrbe: Joey Arthur, Rakesh Achanta 10.1 Factor Analyss 10.1.1 Recap Recall the factor analyss (FA) model for lnear

More information

Uncertainty as the Overlap of Alternate Conditional Distributions

Uncertainty as the Overlap of Alternate Conditional Distributions Uncertanty as the Overlap of Alternate Condtonal Dstrbutons Olena Babak and Clayton V. Deutsch Centre for Computatonal Geostatstcs Department of Cvl & Envronmental Engneerng Unversty of Alberta An mportant

More information

Physics 5153 Classical Mechanics. Principle of Virtual Work-1

Physics 5153 Classical Mechanics. Principle of Virtual Work-1 P. Guterrez 1 Introducton Physcs 5153 Classcal Mechancs Prncple of Vrtual Work The frst varatonal prncple we encounter n mechancs s the prncple of vrtual work. It establshes the equlbrum condton of a mechancal

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA

4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA 4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected

More information

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity

Week3, Chapter 4. Position and Displacement. Motion in Two Dimensions. Instantaneous Velocity. Average Velocity Week3, Chapter 4 Moton n Two Dmensons Lecture Quz A partcle confned to moton along the x axs moves wth constant acceleraton from x =.0 m to x = 8.0 m durng a 1-s tme nterval. The velocty of the partcle

More information

Learning from Data 1 Naive Bayes

Learning from Data 1 Naive Bayes Learnng from Data 1 Nave Bayes Davd Barber dbarber@anc.ed.ac.uk course page : http://anc.ed.ac.uk/ dbarber/lfd1/lfd1.html c Davd Barber 2001, 2002 1 Learnng from Data 1 : c Davd Barber 2001,2002 2 1 Why

More information

THE ROYAL STATISTICAL SOCIETY 2006 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE

THE ROYAL STATISTICAL SOCIETY 2006 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE THE ROYAL STATISTICAL SOCIETY 6 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE PAPER I STATISTICAL THEORY The Socety provdes these solutons to assst canddates preparng for the eamnatons n future years and for

More information