Should Exact Index Numbers Have Standard Errors? Theory and Application to Asian Growth

Robert C. Feenstra
Marshall B. Reinsdorf

November 2003

APPENDIX

Proof of Proposition 1

(i) First, we will derive the conventional Sato-Vartia price index. Rewrite the unit-cost function for constant values of b_it = b_i as,

  c(p_τ, b) = (Σ_{i=1}^N b_i p_iτ^γ)^{1/γ},  (A1)

where γ ≡ 1 − η. The cost shares are,

  s_iτ = ∂ln c(p_τ, b)/∂ln p_iτ = c(p_τ, b)^{−γ} b_i p_iτ^γ,  (A2)

for τ = t−1, t. Rewriting these, we obtain,

  c(p_t, b)/c(p_{t−1}, b) = (p_it/p_{it−1})(s_{it−1}/s_it)^{1/γ}, for i = 1,…,N.  (A3)

Take a geometric mean of (A3) using the weights w_i from (8) to obtain:

  c(p_t, b)/c(p_{t−1}, b) = Π_{i=1}^N (p_it/p_{it−1})^{w_i} Π_{i=1}^N (s_{it−1}/s_it)^{w_i/γ} = Π_{i=1}^N (p_it/p_{it−1})^{w_i},  (A4)

which shows that the Sato-Vartia index on the right of (A4) equals the ratio of unit-costs on the left. To show that the product of share terms in the center of (A4) equals unity, take its logarithm to obtain:
  Σ_{i=1}^N w_i (1/γ)(ln s_{it−1} − ln s_it) = −(1/γ) Σ_{i=1}^N (s_it − s_{it−1}) / Σ_{j=1}^N [(s_jt − s_{jt−1})/(ln s_jt − ln s_{jt−1})] = 0,

where the first equality follows from the definition of w_i in (8), and the second equality follows from the fact that the cost shares s_iτ sum to unity over i = 1,…,N, for τ = t−1, t.

(ii) Next, we show that we can choose the b̄_i such that:

  c(p_t, b̄)/c(p_{t−1}, b̄) = Π_{i=1}^N (p_it/p_{it−1})^{w_i},  (A5)

where the weights w_i are evaluated as in (8) using the cost shares s_iτ = ∂ln c(p_τ, b_τ)/∂ln p_iτ for τ = t−1, t. From (A4), the ratio of unit-costs on the left of (A5) equals:

  c(p_t, b̄)/c(p_{t−1}, b̄) = Π_{i=1}^N (p_it/p_{it−1})^{w̄_i},  (A6)

where the w̄_i are calculated as in (8) but using the cost shares s̄_iτ = ∂ln c(p_τ, b̄)/∂ln p_iτ, τ = t−1, t. Thus, a sufficient condition for (A5) to hold is that there exist b̄_i such that:

  w̄_i = w_i,  i = 1,…,N.  (A7)

From the definition of the weights in (8), condition (A7) will hold iff there exists k > 0 such that,

  (s_it − s_{it−1})/(ln s_it − ln s_{it−1}) = k (s̄_it − s̄_{it−1})/(ln s̄_it − ln s̄_{it−1}),  i = 1,…,N.  (A8)
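The exactness result in part (i) can be checked numerically. The sketch below is an illustration, not part of the proof: the taste parameters b_i, the two price vectors, and η = 4 are arbitrary choices. It computes the CES cost shares as in (A2), forms the Sato-Vartia weights of (8) as normalized logarithmic means of the shares, and verifies that the weighted geometric mean of price relatives in (A4) reproduces the unit-cost ratio exactly:

```python
import numpy as np

def sato_vartia_weights(s0, s1):
    """Sato-Vartia weights: normalized logarithmic means of the cost shares."""
    m = np.where(np.isclose(s0, s1), s1, (s1 - s0) / np.log(s1 / s0))
    return m / m.sum()

def ces_unit_cost(p, b, gamma):
    """CES unit cost c(p, b) = (sum_i b_i p_i^gamma)^(1/gamma), gamma = 1 - eta."""
    return (b * p**gamma).sum() ** (1.0 / gamma)

rng = np.random.default_rng(0)
b = rng.uniform(0.5, 1.5, 5)        # constant tastes (hypothetical values)
p0 = rng.uniform(0.5, 2.0, 5)       # prices in period t-1
p1 = rng.uniform(0.5, 2.0, 5)       # prices in period t
gamma = 1.0 - 4.0                   # eta = 4, so gamma = -3

s0 = b * p0**gamma / (b * p0**gamma).sum()   # cost shares, as in (A2)
s1 = b * p1**gamma / (b * p1**gamma).sum()
w = sato_vartia_weights(s0, s1)
sv_index = np.prod((p1 / p0) ** w)           # right side of (A4)
cost_ratio = ces_unit_cost(p1, b, gamma) / ces_unit_cost(p0, b, gamma)

print(sv_index, cost_ratio)   # the two agree, as (A4) asserts
```

With constant tastes the agreement holds to machine precision, which is exactly the content of part (i).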
Define π ≡ c(p_t, b̄)/c(p_{t−1}, b̄). Then, from (A2), the denominator on the right of (A8) equals γ(∆ln p_it − ln π). (If this is zero then we can replace the bracketed term on the right side of (A8) by its limiting value of s̄_it = s̄_{it−1} and adapt what follows to solve for b̄_i. So without loss of generality, suppose γ(∆ln p_it − ln π) ≠ 0.) Also using (A1) and (A2) to substitute for the numerator on the right side of (A8), we have,

  (s_it − s_{it−1})/∆ln s_it = (k/[γ(∆ln p_it − ln π)]) [b̄_i p_it^γ/Σ_{j=1}^N b̄_j p_jt^γ − b̄_i p_{it−1}^γ/Σ_{j=1}^N b̄_j p_{jt−1}^γ],  i = 1,…,N.  (A9)

Rearranging terms in (A9) and recalling that π = c(p_t, b̄)/c(p_{t−1}, b̄) with c(·) defined in (A1), we can solve for b̄_i as,

  b̄_i = γ (s_it − s_{it−1})(∆ln p_it − ln π) Σ_{j=1}^N b̄_j p_{jt−1}^γ / [k ∆ln s_it (p_it^γ π^{−γ} − p_{it−1}^γ)] > 0,  i = 1,…,N.  (A10)

Notice that (A10) determines b̄_i only up to a scalar multiple, so we are free to choose a normalization on b̄. Specifying this normalization as Σ_{j=1}^N b̄_j p_{jt−1}^γ = 1, we solve for k by multiplying the right side of (A10) by p_{it−1}^γ, summing over i = 1,…,N, and rearranging terms:

  k = γ Σ_{i=1}^N (s_it − s_{it−1})(∆ln p_it − ln π) / {∆ln s_it [(p_it/p_{it−1})^γ π^{−γ} − 1]} > 0.  (A11)
We can substitute (A11) into (A10) to obtain N equations in N unknowns, b̄_i for i = 1,…,N. These equations have the form:

  b̄_i = [(s_it − s_{it−1})(∆ln p_it − ln π)/(∆ln s_it (p_it^γ π^{−γ} − p_{it−1}^γ))] / Σ_{j=1}^N [(s_jt − s_{jt−1})(∆ln p_jt − ln π)/(∆ln s_jt [(p_jt/p_{jt−1})^γ π^{−γ} − 1])] > 0,  i = 1,…,N.  (A12)

Recalling that π = c(p_t, b̄)/c(p_{t−1}, b̄), these equations are highly nonlinear, but given any arguments b̄ > 0 within π on the right of (A12), we determine a solution b̄* > 0 on the left. In other words, (A12) provides a continuous mapping b̄* = F(b̄). Denote the set of parameters b ≥ 0 satisfying the normalization Σ_{i=1}^N b_i p_{it−1}^γ = 1 as the simplex S. Choosing b̄ ∈ S, it is readily verified that b̄* ∈ S, so F is a continuous mapping from S to S, and thus will have a fixed point (by Brouwer's theorem). Then (A7) holds by construction at this fixed point, so that (A5) follows from (A6).

(iii) Next, we must show that b̄_i evaluated as in (A10) lies between the bounds described in Proposition 1. The cost shares s_iτ appearing in (A10) are evaluated as in (A2), but using b_τ, with τ = t−1, t. Without loss of generality, we can normalize the price vectors p_τ by a scalar multiple in each period so that c(p_τ, b_τ) = 1, τ = t−1, t. We will drop the normalization on b̄ that Σ_{i=1}^N b̄_i p_{it−1}^γ = 1, and instead specify Σ_{i=1}^N b̄_i p_{it−1}^γ = k̄ > 0. Denoting B_i ≡ b_it/b_{it−1}, (A10) can then be written as,

  b̄_i = b_{it−1} (k̄/k) [B_i(p_it/p_{it−1})^γ − 1]/[ln B_i + γ∆ln p_it] × γ(∆ln p_it − ln π)/[(p_it/p_{it−1})^γ π^{−γ} − 1]  (A13a)
  = b_it (k̄/k) π^γ [1 − B_i^{−1}(p_{it−1}/p_it)^γ]/[ln B_i + γ∆ln p_it] × γ(∆ln p_it − ln π)/[1 − (p_{it−1}/p_it)^γ π^γ].  (A13b)

From concavity of the natural log function we have 1 − (1/z) ≤ ln z ≤ z − 1, for z > 0, and letting z = B_i(p_it/p_{it−1})^γ it follows that,

  1 − B_i^{−1}(p_{it−1}/p_it)^γ ≤ ln B_i + γ∆ln p_it ≤ B_i(p_it/p_{it−1})^γ − 1.  (A14)

Notice that the last bracketed terms in (A13a, b) are the reciprocals of the previous bracketed terms, but with π^{−γ} appearing instead of B_i = b_it/b_{it−1}. Suppose that B_i > π^{−γ}. Using (A14), we can show that:

  d/dB_i {[B_i(p_it/p_{it−1})^γ − 1]/[ln B_i + γ∆ln p_it]} ≥ 0  and  d/dB_i {[1 − B_i^{−1}(p_{it−1}/p_it)^γ]/[ln B_i + γ∆ln p_it]} ≤ 0.

It follows by comparing the bracketed terms in (A13) that:

  b_{it−1} (k̄/k) ≤ b̄_i ≤ π^γ b_it (k̄/k),  (A15)

while if B_i < π^{−γ} then these inequalities would be reversed. Express π from (A5) as:

  π = Π_{i=1}^N (p_it/p_{it−1})^{w_i} = Π_{i=1}^N (b_{it−1}/b_it)^{w_i/γ} Π_{i=1}^N [p_it b_it^{1/γ}/(p_{it−1} b_{it−1}^{1/γ})]^{w_i}.  (A16)
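The concavity bounds (A14) and the two monotonicity claims just stated can be verified numerically. In the sketch below, z0 stands in for (p_it/p_{it−1})^γ and the grid for B_i = b_it/b_{it−1} is an arbitrary illustrative choice, kept so that B_i·z0 > 1 and away from the removable singularity at B_i·z0 = 1:

```python
import numpy as np

z0 = 1.7                                  # stands in for (p_it/p_it-1)^gamma
B = np.linspace(0.7, 5.0, 2000)           # grid for B_i = b_it/b_it-1
u = B * z0                                # u = B_i (p_it/p_it-1)^gamma

# Concavity bounds (A14): 1 - 1/u <= ln(u) <= u - 1
assert np.all(1.0 - 1.0/u <= np.log(u)) and np.all(np.log(u) <= u - 1.0)

f = (u - 1.0) / np.log(u)                 # bracketed term in (A13a)
g = (1.0 - 1.0/u) / np.log(u)             # bracketed term in (A13b)
print(np.all(np.diff(f) >= 0), np.all(np.diff(g) <= 0))   # True True
```

The first term is nondecreasing and the second nonincreasing in B_i, which is what the comparison behind (A15) relies on.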
A straightforward extension of (A1)-(A4) allowing for b_t ≠ b_{t−1} shows that the final product in (A16) equals c(p_t, b_t)/c(p_{t−1}, b_{t−1}). But this is unity by our normalization of prices, so that π^γ in (A16) equals Π_{i=1}^N (b_{it−1}/b_it)^{w_i}. Then choose k̄ such that (k̄/k) = Π_{i=1}^N (b_it/b_{it−1})^{w_i}. Substituting this into (A15), the bounds on b̄_i in Proposition 1 are obtained.

Proof of Proposition 2

Let b_τ denote the vector of the b_iτ from the CES model, let β_iτ = ln b_iτ and let β_τ denote the vector with components ln b_iτ. Also, to model stochastic tastes, let

  β_τ = β* + e_τ,  (A17)

for τ = t−1 or t. The e_τ are assumed to be iid with mean 0 and variance σ_β². Let w and ∆ln p represent vectors of the w_i as in (8) and the log price changes, and let w* denote the value of w when β_{t−1} = β_t = β*. Then,

  var π_SV = E[(∆ln p)′(w − E(w))(w − E(w))′(∆ln p)] ≈ (∆ln p)′ E[(w − w*)(w − w*)′] (∆ln p).  (A18)

A linear approximation for w is:

  w ≈ w* + (∂w/∂β_{t−1}) e_{t−1} + (∂w/∂β_t) e_t,  (A19)

where the derivative matrices are evaluated at the point β_{t−1} = β_t = β*. In Lemma 1 below we show that these derivatives approximately equal:

  ∂w/∂β_{t−1} = ∂w/∂β_t ≈ ½ [diag(w*) − w*w*′],  (A20)
where diag(w*) denotes the matrix with the elements w_i* on its main diagonal and zeros elsewhere. Since E(e_{t−1}e_t′) = 0, it follows that,

  E[(w − w*)(w − w*)′] = ¼ [diag(w*) − w*w*′][E(e_t e_t′) + E(e_{t−1} e_{t−1}′)][diag(w*) − w*w*′]
  = ½ σ_β² [diag(w*) − w*w*′][diag(w*) − w*w*′]′.  (A21)

Then substituting from (A21) into (A18) we obtain:

  var π_SV ≈ ½ σ_β² Σ_{i=1}^N w_i*² (∆ln p_i − π_SV)².  (A22)

Lemma 1: An approximate formula for the derivatives of Sato-Vartia weights with respect to the CES model disturbances is:

  ∂w/∂β_{t−1} = ∂w/∂β_t ≈ ½ [diag(w*) − w*w*′].  (A20)

Proof: In section (a) below we show that the elements on the main diagonal of ∂w/∂β_{t−1} are of the form 0.5w_i(1 − w_i). Then in section (b) we show that ∂w/∂β_{t−1} has off-diagonal elements of the form −0.5w_iw_j. Furthermore, by symmetry ∂w/∂β_t will have the same form as ∂w/∂β_{t−1}.

(a) Solution for ∂w_i/∂β_{k,t−1} for k = i. Denoting the logarithmic mean of the shares by m_i ≡ (s_it − s_{i,t−1})/ln(s_it/s_{i,t−1}), or s_it if s_it = s_{i,t−1}, we have:

  ∂w_i/∂β_{i,t−1} = (∂w_i/∂m_i)(∂m_i/∂ln s_{i,t−1})(∂ln s_{i,t−1}/∂β_{i,t−1}) + Σ_{j≠i} (∂w_i/∂m_j)(∂m_j/∂ln s_{j,t−1})(∂ln s_{j,t−1}/∂β_{i,t−1}).  (A23)

In the first term on the right side of (A23),
  ∂w_i/∂m_i = (1 − w_i)/Σ_k m_k.  (A24)

Also, since m_i = (s_it − s_{i,t−1})/ln(s_it/s_{i,t−1}),

  ∂m_i/∂ln s_{i,t−1} = (m_i − s_{i,t−1})/ln(s_it/s_{i,t−1}).  (A25)

Finally, since s_{i,t−1} = b_{i,t−1} p_{i,t−1}^{1−η} c_{t−1}^{η−1}, where

  c_{t−1} ≡ (Σ_{j=1}^N b_{j,t−1} p_{j,t−1}^{1−η})^{1/(1−η)},  (A26)

we have

  ∂ln s_{i,t−1}/∂β_{i,t−1} = 1 − s_{i,t−1}.  (A27)

Substituting from (A24), (A25) and (A27) into the first term of (A23), we have:

  (∂w_i/∂m_i)(∂m_i/∂ln s_{i,t−1})(∂ln s_{i,t−1}/∂β_{i,t−1}) = (1 − w_i)(m_i − s_{i,t−1})(1 − s_{i,t−1})/{ln(s_it/s_{i,t−1})[Σ_k m_k]}.  (A28)

To find an expression for Σ_{j≠i}(∂w_i/∂m_j)(∂m_j/∂ln s_{j,t−1})(∂ln s_{j,t−1}/∂β_{i,t−1}), note first that,

  ∂w_i/∂m_j = −w_i/[Σ_k m_k].  (A29)

Also,

  ∂m_j/∂ln s_{j,t−1} = (m_j − s_{j,t−1})/ln(s_jt/s_{j,t−1}).  (A30)

And finally,

  ∂ln s_{j,t−1}/∂β_{i,t−1} = −s_{i,t−1}.  (A31)

Putting these three factors together gives:

  Σ_{j≠i}(∂w_i/∂m_j)(∂m_j/∂ln s_{j,t−1})(∂ln s_{j,t−1}/∂β_{i,t−1}) = [w_i s_{i,t−1}/Σ_k m_k] Σ_{j≠i} (m_j − s_{j,t−1})/ln(s_jt/s_{j,t−1}).  (A32)

The two terms in (A23) therefore have a total of,
  ∂w_i/∂β_{i,t−1} = (1 − w_i)(m_i − s_{i,t−1})(1 − s_{i,t−1})/{ln(s_it/s_{i,t−1})[Σ_k m_k]} + [w_i s_{i,t−1}/Σ_k m_k] Σ_{j≠i}(m_j − s_{j,t−1})/ln(s_jt/s_{j,t−1})

  = (1 − w_i)(m_i − s_{i,t−1})/{ln(s_it/s_{i,t−1})[Σ_k m_k]} − s_{i,t−1}(m_i − s_{i,t−1})/{ln(s_it/s_{i,t−1})[Σ_k m_k]} + [w_i s_{i,t−1}/Σ_k m_k] Σ_k (m_k − s_{k,t−1})/ln(s_kt/s_{k,t−1}).  (A33)

Since m_k approximately equals the midpoint between s_{k,t−1} and s_kt,

  (m_k − s_{k,t−1})/ln(s_kt/s_{k,t−1}) ≈ 0.5 m_k.  (A34)

The overall error of approximation in the variance estimate for π_SV from substituting from (A34) into (A33) will be inconsequential, both because the individual errors are small and because they are on average zero. Hence:

  ∂w_i/∂β_{i,t−1} ≈ 0.5(1 − w_i)m_i/Σ_k m_k − 0.5 s_{i,t−1} m_i/Σ_k m_k + 0.5 w_i s_{i,t−1} = 0.5(1 − w_i)w_i.  (A35)

(b) Solution for ∂w_i/∂β_{k,t−1} for k ≠ i. A change in β_{k,t−1} will affect w_i by changing s_{i,t−1}, by changing s_{k,t−1}, and by changing any remaining shares:

  ∂w_i/∂β_{k,t−1} = (∂w_i/∂m_i)(∂m_i/∂ln s_{i,t−1})(∂ln s_{i,t−1}/∂β_{k,t−1}) + (∂w_i/∂m_k)(∂m_k/∂ln s_{k,t−1})(∂ln s_{k,t−1}/∂β_{k,t−1}) + Σ_{j≠i,k}(∂w_i/∂m_j)(∂m_j/∂ln s_{j,t−1})(∂ln s_{j,t−1}/∂β_{k,t−1}).  (A36)

The components of the first term on the right side above are:
  ∂w_i/∂m_i = (1 − w_i)/[Σ_k m_k]  (A37)
  ∂m_i/∂ln s_{i,t−1} = [m_i − s_{i,t−1}]/ln(s_it/s_{i,t−1})  (A38)
  ∂ln s_{i,t−1}/∂β_{k,t−1} = −s_{k,t−1}.  (A39)

From (A34), [m_i − s_{i,t−1}]/[ln(s_it/s_{i,t−1}){Σ_k m_k}] ≈ 0.5 w_i. Making this substitution,

  (∂w_i/∂m_i)(∂m_i/∂ln s_{i,t−1})(∂ln s_{i,t−1}/∂β_{k,t−1}) ≈ −0.5(1 − w_i) w_i s_{k,t−1}.  (A40)

Next, decompose (∂w_i/∂m_k)(∂m_k/∂ln s_{k,t−1})(∂ln s_{k,t−1}/∂β_{k,t−1}) as:

  ∂w_i/∂m_k = −w_i/[Σ_k m_k]  (A41)
  ∂m_k/∂ln s_{k,t−1} = [m_k − s_{k,t−1}]/ln(s_kt/s_{k,t−1})  (A42)
  ∂ln s_{k,t−1}/∂β_{k,t−1} = 1 − s_{k,t−1}.  (A43)

Last, decompose Σ_{j≠i,k}(∂w_i/∂m_j)(∂m_j/∂ln s_{j,t−1})(∂ln s_{j,t−1}/∂β_{k,t−1}) as,

  ∂w_i/∂m_j = −w_i/[Σ_k m_k]  (A44)
  ∂m_j/∂ln s_{j,t−1} = [m_j − s_{j,t−1}]/ln(s_jt/s_{j,t−1})  (A45)
  ∂ln s_{j,t−1}/∂β_{k,t−1} = −s_{k,t−1}.  (A46)

Hence, substituting from (A41) to (A46) and summing the approximations for (∂w_i/∂m_k)(∂m_k/∂ln s_{k,t−1})(∂ln s_{k,t−1}/∂β_{k,t−1}) and Σ_{j≠i,k}(∂w_i/∂m_j)(∂m_j/∂ln s_{j,t−1})(∂ln s_{j,t−1}/∂β_{k,t−1}) gives:

  Σ_{j≠i}(∂w_i/∂m_j)(∂m_j/∂ln s_{j,t−1})(∂ln s_{j,t−1}/∂β_{k,t−1}) ≈ 0.5 w_i s_{k,t−1}[Σ_{j≠i} w_j] − 0.5 w_i w_k = 0.5 w_i s_{k,t−1}(1 − w_i) − 0.5 w_i w_k.  (A47)
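The diagonal form 0.5w_i(1 − w_i) from section (a) and the off-diagonal form −0.5w_iw_k from section (b) can be checked against a finite-difference Jacobian of the weights. In this sketch the taste parameters, the prices, and η = 3 are arbitrary illustrative values, the two price vectors are kept close so that the shares change little across periods, and the tolerance is deliberately loose because (A20) itself rests on the midpoint approximation (A34):

```python
import numpy as np

def sv_weights_from_beta(beta_tm1, beta_t, p0, p1, gamma):
    """Sato-Vartia weights as a function of the log taste parameters."""
    s0 = np.exp(beta_tm1) * p0**gamma; s0 = s0 / s0.sum()
    s1 = np.exp(beta_t) * p1**gamma;   s1 = s1 / s1.sum()
    m = np.where(np.isclose(s0, s1), s1, (s1 - s0) / np.log(s1 / s0))
    return m / m.sum()

rng = np.random.default_rng(3)
N = 4
beta = rng.normal(0.0, 0.3, N)              # beta* (hypothetical values)
p0 = rng.uniform(0.5, 2.0, N)
p1 = p0 * rng.uniform(0.95, 1.05, N)        # keep shares close across periods
gamma = -2.0                                # gamma = 1 - eta, with eta = 3

w = sv_weights_from_beta(beta, beta, p0, p1, gamma)
target = 0.5 * (np.diag(w) - np.outer(w, w))    # right side of (A20)

h = 1e-6
J = np.empty((N, N))
for k in range(N):
    db = np.zeros(N); db[k] = h
    J[:, k] = (sv_weights_from_beta(beta + db, beta, p0, p1, gamma)
               - sv_weights_from_beta(beta - db, beta, p0, p1, gamma)) / (2 * h)

print(np.max(np.abs(J - target)))   # small: (A20) holds up to the (A34) error
```

The maximum discrepancy is on the order of the squared share changes, consistent with the claim that the individual (A34) errors are small.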
The final step is to combine all the approximations for terms in ∂w_i/∂β_{k,t−1}. This gives:

  ∂w_i/∂β_{k,t−1} = (∂w_i/∂m_i)(∂m_i/∂ln s_{i,t−1})(∂ln s_{i,t−1}/∂β_{k,t−1}) + (∂w_i/∂m_k)(∂m_k/∂ln s_{k,t−1})(∂ln s_{k,t−1}/∂β_{k,t−1}) + Σ_{j≠i,k}(∂w_i/∂m_j)(∂m_j/∂ln s_{j,t−1})(∂ln s_{j,t−1}/∂β_{k,t−1})
  ≈ −0.5(1 − w_i)w_i s_{k,t−1} + [0.5(1 − w_i)w_i s_{k,t−1} − 0.5 w_i w_k] = −0.5 w_i w_k.  (A48)

Proof of Proposition 3

We prove a more general version of Proposition 3 than the one stated in the text. In this version, we suppose that regression (12) is run over goods i = 1,…,N and time periods t = 1,…,T. In addition, we now denote the weights in (8) by w_it, and the weighted variance of prices by s_t² ≡ Σ_i w_it(∆ln p_it − π_t)². Finally, let w̿_t ≡ Σ_i w_it²(∆ln p_it − π_t)²/s_t² denote the weighted average of the w_it that has weights proportional to w_it(∆ln p_it − π_t)². The more general version (which simplifies to equation (15) when T = 2) is:

Proposition 3′: Let w̄_t ≡ Σ_i w_it², the weighted average of the w_it that has weights w_it; let λ_t ≡ s_t/[Σ_τ s_τ²]^0.5; and let ρ_t ≡ (s_{t−1}s_t)^{−1}[w̿_{t−1}w̿_t]^{−0.5} Σ_i w_{i,t−1}(∆ln p_{i,t−1} − π_{t−1}) w_it(∆ln p_it − π_t) denote the autocorrelation of the products w_it(∆ln p_it − π_t). Finally, denote the mean squared error of the generalized version of regression (12) by s_ε² ≡ Σ_tΣ_i w_it ε̂_it², where ε̂_it = ∆ln s_it − δ̂_t + (η̂ − 1)∆ln p_it. Then an approximately unbiased estimator s_β² for σ_β² is:
  s_β² = s_ε² / {2[T − Σ_{t=1}^T w̄_t − Σ_{t=1}^T λ_t²w̿_t + Σ_{t>1} λ_{t−1}λ_tρ_t(w̿_{t−1}w̿_t)^0.5]}.  (15′)

Formula (15′) is only approximately unbiased because it treats the w_it as predetermined and therefore nonstochastic.

Proof: Replacing (1 − η) in (12) with γ, let ε̂_it be the fitted value of ε_it from:

  ∆ln s_it = δ_t + γ ∆ln p_it + ε_it,  (12)

and let s_ε² ≡ Σ_tΣ_i w_it ε̂_it², the weighted sum of squared errors of the regression equation (12). Substituting from equation (A17) into equation (14) implies that ε_it = e_it − e_{i,t−1} − Σ_j w_jt(e_jt − e_{j,t−1}), where the e_{i,t−1} and e_it have variance σ_β². Furthermore, Σ_i w_it[δ_t + Σ_j w_jt(e_jt − e_{j,t−1})](∆ln p_it − π_t) = [δ_t + Σ_j w_jt(e_jt − e_{j,t−1})][Σ_i w_it(∆ln p_it − π_t)] = 0, so it follows that Σ_i w_it(∆ln s_it)(∆ln p_it − π_t) = γ[Σ_i w_it(∆ln p_it − π_t)²] + Σ_i w_it(e_it − e_{i,t−1})(∆ln p_it − π_t). Consequently, a weighted least squares estimator of γ in equation (12) is:

  γ̂ = Σ_tΣ_i w_it(∆ln s_it)(∆ln p_it − π_t) / Σ_tΣ_i w_it(∆ln p_it − π_t)²
    = γ + Σ_tΣ_i w_it(e_it − e_{i,t−1})(∆ln p_it − π_t) / Σ_tΣ_i w_it(∆ln p_it − π_t)²
    = γ + Σ_t λ_t Σ_i w_it p̃_it(e_it − e_{i,t−1}) / [Σ_τ s_τ²]^0.5,  (A49)
where s_t² ≡ Σ_i w_it(∆ln p_it − π_t)², λ_t ≡ s_t/[Σ_τ s_τ²]^0.5, and p̃_it ≡ (∆ln p_it − π_t)/s_t. The it-th regression error is:

  ε̂_it = ∆ln s_it − δ̂_t − γ̂ ∆ln p_it
       = e_it − e_{i,t−1} − Σ_j w_jt(e_jt − e_{j,t−1}) − λ_tp̃_it{Σ_τ λ_τ[Σ_j w_jτp̃_jτ(e_jτ − e_{j,τ−1})]}
       = [1 − w_it(1 + λ_t²p̃_it²)](e_it − e_{i,t−1}) − Σ_{j≠i} w_jt(1 + λ_t²p̃_itp̃_jt)(e_jt − e_{j,t−1}) − λ_tp̃_it{Σ_{τ≠t} λ_τ[Σ_j w_jτp̃_jτ(e_jτ − e_{j,τ−1})]}.  (A50)

Since the e_{i,t−1} and the e_it have been assumed to be independent of one another, E[(e_it − e_{i,t−1})²] = 2σ_β². Also, E[(e_it − e_{i,t−1})(e_iτ − e_{i,τ−1})] = −σ_β² if τ = t+1 or t−1, but all other covariances equal zero. For example, in the case when t = 1, we have:

  E[ε̂_i1²]/σ_β² = 2[1 − w_i1(1 + λ_1²p̃_i1²)]² + 2Σ_{j≠i} w_j1²(1 + λ_1²p̃_i1p̃_j1)² + 2λ_1²p̃_i1²{Σ_{τ>1} λ_τ²[Σ_j w_jτ²p̃_jτ²]}
  + 2[1 − w_i1(1 + λ_1²p̃_i1²)]λ_1λ_2p̃_i1w_i2p̃_i2 − 2Σ_{j≠i} w_j1(1 + λ_1²p̃_i1p̃_j1)λ_1λ_2p̃_i1w_j2p̃_j2
  − 2λ_1²p̃_i1²{Σ_{τ=3,…,T} λ_{τ−1}λ_τ[Σ_j (w_{j,τ−1}p̃_{j,τ−1})(w_jτp̃_jτ)]}.  (A51)
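Before carrying the expectation algebra further, the estimator (A49) itself can be illustrated with simulated data. The sketch below uses a single period and equal weights w_it = 1/N purely for simplicity (the derivation above only requires that the weights sum to one); the sample size, γ = −3, and σ_β = 0.05 are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(7)
N, gamma, sigma_b = 200, -3.0, 0.05
dlnp = rng.normal(0.02, 0.10, N)            # log price changes
w = np.full(N, 1.0 / N)                     # equal weights (simplification)
pi_t = w @ dlnp                             # weighted mean price change

e = rng.normal(0.0, sigma_b, (2, N))        # taste shocks e_{t-1}, e_t
eps = (e[1] - e[0]) - w @ (e[1] - e[0])     # error term of regression (12)
dlns = gamma * (dlnp - pi_t) + eps          # shares, with delta_t = -gamma*pi_t

gamma_hat = (w * dlns * (dlnp - pi_t)).sum() / (w * (dlnp - pi_t)**2).sum()
print(gamma_hat)                            # close to the true gamma = -3
```

The sampling error of γ̂ here is of order σ_β divided by the dispersion of prices times √N, which is why a moderately large N recovers γ well.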
To express the weighted mean of the expected squares of the time period t regression errors in a convenient way, let w̄_t denote Σ_i w_it², and let w̿_t denote a weighted average of the w_it that has weights proportional to w_it p̃_it². (That is, w̿_t = Σ_i w_it²p̃_it² / Σ_i w_it p̃_it².) Furthermore, define ρ_t as the (unweighted) correlation between w_jtp̃_jt and w_{j,t−1}p̃_{j,t−1}. We use the following equalities to make substitutions: Σ_i w_it = 1, Σ_i w_itp̃_it² = 1, Σ_i w_it² = w̄_t, Σ_i w_it²p̃_it² = w̿_t, Σ_i w_itp̃_it = 0, Σ_i w_{i,t−1}p̃_{i,t−1}w_itp̃_it = ρ_t[w̿_{t−1}w̿_t]^0.5. These substitutions give the result:

  [Σ_i w_i1 E(ε̂_i1²)]/σ_β² = 2 − 2w̄_1 − 4λ_1²w̿_1 + 2λ_1²[Σ_{τ=1,…,T} λ_τ²w̿_τ] + 2λ_1(1 − λ_1²)λ_2ρ_2[w̿_1w̿_2]^0.5 − 2λ_1²Σ_{τ=3,…,T} λ_{τ−1}λ_τρ_τ[w̿_{τ−1}w̿_τ]^0.5.  (A52)

When 1 < t < T, the expression for E(ε̂_it²) contains autocorrelations between both periods t−1 and t and periods t and t+1. Its derivation is as follows:

  E[ε̂_it²]/σ_β² = 2[1 − w_it(1 + λ_t²p̃_it²)]² + 2Σ_{j≠i} w_jt²(1 + λ_t²p̃_itp̃_jt)² + 2λ_t²p̃_it²{Σ_{τ≠t} λ_τ²[Σ_j w_jτ²p̃_jτ²]}
  + 2[1 − w_it(1 + λ_t²p̃_it²)]λ_tp̃_it[λ_{t−1}w_{i,t−1}p̃_{i,t−1} + λ_{t+1}w_{i,t+1}p̃_{i,t+1}]
  − 2Σ_{j≠i} w_jt(1 + λ_t²p̃_itp̃_jt)λ_tp̃_it[λ_{t−1}w_{j,t−1}p̃_{j,t−1} + λ_{t+1}w_{j,t+1}p̃_{j,t+1}]
  − 2λ_t²p̃_it²{Σ_{τ≠t,t+1} λ_{τ−1}λ_τ[Σ_j (w_{j,τ−1}p̃_{j,τ−1})(w_{j,τ}p̃_{j,τ})]}.  (A53)

Consequently, for 1 < t < T,

  [Σ_i w_it E(ε̂_it²)]/σ_β² = 2 − 2w̄_t − 4λ_t²w̿_t + 2λ_t²[Σ_τ λ_τ²w̿_τ] + 2λ_{t−1}λ_t(1 − λ_t²)ρ_t[w̿_{t−1}w̿_t]^0.5 + 2λ_t(1 − λ_t²)λ_{t+1}ρ_{t+1}[w̿_tw̿_{t+1}]^0.5 − 2λ_t²Σ_{τ≠t,t+1} λ_{τ−1}λ_τρ_τ[w̿_{τ−1}w̿_τ]^0.5.  (A54)

Using the fact that Σ_t λ_t² = 1, the sum over all time periods is:

  E[Σ_tΣ_i w_it ε̂_it²]/σ_β² = 2T − 2Σ_t w̄_t − 2Σ_t λ_t²w̿_t + 2Σ_{t=2,…,T} λ_{t−1}λ_tρ_t[w̿_{t−1}w̿_t]^0.5.  (A55)

The sum of squared errors Σ_tΣ_i w_it ε̂_it² divided by the right side of (A55) therefore has an expected value of σ_β², which is the result in Proposition 3′.

Proof of Proposition 4

In this section we denote the vector of log price changes ∆ln p_i by p, and assume that these have a random error term e_p, which may be heteroskedastic. That is,

  p = µ_p + e_p.  (A56)
Denote the variance of p_i by σ_i² and denote the estimate of this variance by s_i². Although an assumption that p has a positive covariance with w is appealing because positive shocks to b_i raise equilibrium market prices, it adds excessive complexity to the expression for var π. (See Mood, Graybill and Boes, 1974, p. 180.) Hence, for the sake of simplicity, we will assume that the shocks to prices are independent of the error term for preferences. We continue to assume that the taste parameters are distributed as in (A17), now written as β_τ = β + e_τ, for τ = t−1, t. To obtain an estimator for the variance, we use the following linear approximation for w:

  w ≈ µ_w + (∂w/∂β_{t−1})e_{t−1} + (∂w/∂β_t)e_t ≈ µ_w + Ge_w,  (A57)

where µ_w denotes the value of w when β_{t−1} = β_t = β, where ∂w/∂β_{t−1} and ∂w/∂β_t are evaluated at β_{t−1} = β_t = β and are estimated by G = 0.5[diag(w) − ww′], and where e_w ≡ e_{t−1} + e_t. We then have,

  π_SV = w′p ≈ (µ_w + Ge_w)′(µ_p + e_p).  (A58)

Using the independence assumption to eliminate the expected values of cross-products of error terms, we have:

  E[π_SV − E(π_SV)]² ≈ E(µ_w′e_p)² + E(e_w′G′µ_p)² + E(e_w′G′e_p)².  (A59)

To obtain an estimator for the first term on the right side of (A59), substitute w for µ_w and s_i² for σ_i²:

  E(µ_w′e_p)² = Σ_{i=1}^N µ_wi²σ_i² ≈ Σ_{i=1}^N w_i²s_i².  (A60)

Similarly, to obtain an estimator for E(e_w′G′µ_p)², substitute p for µ_p and s_β² for σ_β²:
  E(e_w′G′µ_p)² = ½σ_β² Σ_{i=1}^N w_i²[µ_pi − (Σ_{j=1}^N w_jµ_pj)]² ≈ ½s_β² Σ_{i=1}^N w_i²(p_i − π_SV)².  (A61)

To estimate the third term, note that E(e_w′G′e_p)² = E(e_p′Ge_we_w′G′e_p) = 2σ_β²E(e_p′GG′e_p). Letting g denote a vector equal to the main diagonal of GG′ and using the independence assumption to set the expected value of the cross-product terms equal to zero gives,

  E(e_p′GG′e_p) = [σ_1² σ_2² … σ_N²] g.  (A62)

Letting w² denote the vector of the w_i² and noting that diag(w)w = w², GG′ equals ¼[diag(w)² − w²w′ − w(w²)′ + (w′w)ww′]. Hence the i-th element of g equals ¼w_i²(1 + w′w − 2w_i) and,

  E(e_p′GG′e_p) = ¼ Σ_{i=1}^N σ_i²w_i²(1 + w′w − 2w_i).  (A63)

Therefore,

  E(e_p′Ge_we_w′G′e_p) = ½σ_β² Σ_{i=1}^N σ_i²w_i²(1 + w′w − 2w_i).  (A64)

Finally, adding together the estimators for the three terms gives:

  E[π_SV − E(π_SV)]² ≈ Σ_i µ_wi²σ_i² + ½σ_β² Σ_i w_i²(p_i − π)² + ½σ_β² Σ_i σ_i²w_i²(1 + w′w − 2w_i).  (A65)

We can estimate this expression by:

  var π_SV ≈ Σ_i w_i²s_i² + ½s_β² Σ_i w_i²(p_i − π)² + ½s_β² Σ_i s_i²w_i²(1 + w′w − 2w_i)
           = Σ_i w_i²s_i² + s_ε² Σ_i w_i²(p_i − π)²/[4(1 − w′w)] + s_ε² Σ_i s_i²w_i²(1 + w′w − 2w_i)/[4(1 − w′w)],  (A66)

where the expression substituted for s_β² comes from Proposition 3. Note that equation (A66) is only approximately unbiased, because the responses of demand to price disturbances add a (presumably small) additional component to the variances of
the w_i that is ignored in (A66). Note also that in the special case where all prices have the same distribution, so that s_i² = s_p²/(1 − w′w), the variance of the index may be estimated as:

  var π_SV ≈ s_p²(w′w)/(1 − w′w) + s_ε²s_p²(w′w)/[4(1 − w′w)] + s_ε²s_p²(w′w)(1 + w′w)/[4(1 − w′w)²] − s_ε²s_p² Σ_{i=1}^N w_i³/[2(1 − w′w)²].  (A67)

Proof of Proposition 5

Taking the log of the unit-cost ratio, we obtain,

  ln c(p_t, α_t) − ln c(p_{t−1}, α_{t−1}) = Σ_{i=1}^N ½(α_{it−1} + α_it) ln(p_it/p_{it−1}) + ½[Σ_{i=1}^N Σ_{j=1}^N γ_ij ln p_it ln p_jt − Σ_{i=1}^N Σ_{j=1}^N γ_ij ln p_{it−1} ln p_{jt−1}]
  = Σ_{i=1}^N ½(α_{it−1} + α_it) ln(p_it/p_{it−1}) + ½ Σ_{i=1}^N Σ_{j=1}^N γ_ij(ln p_it + ln p_{it−1})(ln p_jt − ln p_{jt−1})
  = Σ_{i=1}^N ½(s_{it−1} + s_it) ln(p_it/p_{it−1}),  (A68)

where the second line follows by using the translog formula in (9), the third line by simple algebra (together with the symmetry γ_ij = γ_ji), and the final line follows from the share formula in (10).

Proof of Proposition 6

Proved in the main text.

Proof of Proposition 7
We now assume that ln p_it = p*_it + u_it, where u_it is an error term with E(u_it) = 0 and u_it independent of u_jt and u_{i,t−1}. Define µ_p as the vector with elements p*_it − p*_{i,t−1} and v as the vector with elements u_it − u_{i,t−1}. We assume that the i-th element of v has variance σ_i². Then, letting Γ represent the matrix of the γ_ij, from equation (10),

  w̄ = α + ½Γ(ln p_t + ln p_{t−1}) + ½(ε_t + ε_{t−1}) = α + ½Γ(p*_t + p*_{t−1} + u_t + u_{t−1}) + ½(ε_t + ε_{t−1}) = µ_w + e,  (A69)

where e ≡ ½[Γ(u_t + u_{t−1}) + (ε_t + ε_{t−1})], E(ε_tε_t′) = Ω, and E(ε_tε_{t−1}′) = ρΩ. Let Σ denote E[(u_t + u_{t−1})(u_t + u_{t−1})′], where the main diagonal of Σ equals the σ_i² and its off-diagonal elements equal 0 because the u_it are assumed to be independently distributed. We also assume that (u_t + u_{t−1}) is independent of ε_t and ε_{t−1}. Hence, E(ee′) = ¼ΓΣΓ′ + ½(1 + ρ)Ω. The Törnqvist index, which we denote by π, may be written as:

  π = (µ_w + e)′(µ_p + v) = µ_w′µ_p + µ_w′v + µ_p′e + e′v.  (A70)

Note that E[(u_t − u_{t−1})(ε_t + ε_{t−1})′] = 0 and that E(e′v) = 0. (E(e′v) = E{½[(u_t + u_{t−1})′Γ′ + (ε_t + ε_{t−1})′](u_t − u_{t−1})}, which equals 0 since E[γ_ij(u_it + u_{i,t−1})(u_jt − u_{j,t−1})] = 0 for i ≠ j and γ_ii E[(u_it + u_{i,t−1})(u_it − u_{i,t−1})] = 0.) It follows that,

  E(π) = µ_w′µ_p.  (A71)

In addition, because the cross-products of the terms in (A70) have expected values of 0, we have

  E(π²) − [E(π)]² = E(µ_w′vv′µ_w) + E(µ_p′ee′µ_p) + E[(e′v)²].  (A72)
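For concreteness, the Törnqvist index π appearing in (A70) is simply the average-share weighted geometric mean of the price relatives. The shares and prices in the sketch below are purely illustrative numbers:

```python
import numpy as np

p0 = np.array([1.00, 2.50, 0.80])     # prices, period t-1
p1 = np.array([1.10, 2.40, 0.95])     # prices, period t
s0 = np.array([0.30, 0.50, 0.20])     # expenditure shares, period t-1
s1 = np.array([0.35, 0.45, 0.20])     # expenditure shares, period t

wbar = 0.5 * (s0 + s1)                # Tornqvist weights: average shares
log_pi = wbar @ np.log(p1 / p0)       # log index, the w'p form used in (A70)
level_pi = np.exp(log_pi)
print(level_pi)                       # ~ 1.05
```

Because it is a weighted geometric mean, the index in levels always lies between the smallest and largest price relative.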
We can substitute the following expressions for the terms in (A72):

  E(µ_w′vv′µ_w) = Σ_{i=1}^N µ_wi²σ_i²  (A73)

  E[µ_p′ee′µ_p] = ¼µ_p′ΓΣΓ′µ_p + ½(1 + ρ)µ_p′Ωµ_p.  (A74)

To evaluate E[(e′v)²], note that E(e_iv_ie_jv_j) = 0 for i ≠ j because v_i is independent of the other terms in this product. Also, E[(e_iv_i)²] = E{[½Σ_j γ_ij(u_jt + u_{j,t−1})(u_it − u_{i,t−1}) + ½(ε_it + ε_{i,t−1})(u_it − u_{i,t−1})]²} = ¼σ_i²[Σ_j γ_ij²σ_j²] + ½σ_i²(1 + ρ)Ω_ii. Hence

  Σ_{i=1}^N E[(e_iv_i)²] = ¼ Σ_i σ_i²[Σ_j γ_ij²σ_j²] + ½(1 + ρ) Σ_i σ_i²Ω_ii,  (A75)

where Ω_ii represents the elements on the main diagonal of Ω. Substituting (A73)-(A75) into (A72), we have:

  var(π) = Σ_i µ_wi²σ_i² + ¼µ_p′ΓΣΓ′µ_p + ½(1 + ρ)µ_p′Ωµ_p + ¼ Σ_i σ_i²[Σ_j γ_ij²σ_j²] + ½(1 + ρ) Σ_i Ω_iiσ_i².  (A76)

To estimate var(π) using the expression in (A76), estimate µ_wi by w̄_i, and estimate σ_i² by s_i², where s_i² may be the sample variance for the log changes in the individual prices collected for item i. In addition, µ_p can be estimated by ∆ln p, and Γ, ρ and Ω can be estimated from the multiperiod version of regression (12), which may take the form of regression (15). Substituting Σ_iΣ_j (∆ln p_i)(∆ln p_j)[Σ_k γ_ikγ_jks_k²] for µ_p′ΓΣΓ′µ_p, an estimator for var(π) is, then:

  var(π) ≈ Σ_i w̄_i²s_i² + ¼ Σ_iΣ_j (∆ln p_i)(∆ln p_j)[Σ_k γ̂_ikγ̂_jks_k²]
  + ½(1 + ρ̂)(∆ln p)′Ω̂(∆ln p) + ¼ Σ_i s_i²[Σ_j γ̂_ij²s_j²] + ½(1 + ρ̂) Σ_i Ω̂_iis_i².  (A77)

In the case where all prices have the same trend and variance, we can estimate σ_i² by s_p²/(1 − Σ_i w̄_i²) for all i. Then (A77) becomes:

  var(π) ≈ s_p² Σ_i w̄_i²/(1 − Σ_i w̄_i²) + ¼[s_p²/(1 − Σ_i w̄_i²)](∆ln p)′Γ̂Γ̂′(∆ln p) + ½(1 + ρ̂)(∆ln p)′Ω̂(∆ln p) + ¼[s_p²/(1 − Σ_i w̄_i²)]²[Σ_iΣ_j γ̂_ij²] + ½[s_p²/(1 − Σ_i w̄_i²)](1 + ρ̂)[Σ_i Ω̂_ii],  (A77′)

where (∆ln p)′Γ̂Γ̂′(∆ln p) can be expressed as Σ_iΣ_j (∆ln p_i)(∆ln p_j)[Σ_k γ̂_ikγ̂_jk].
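As a closing illustration, once the regression estimates are in hand the estimator (A77) is purely mechanical to evaluate. Every input below (Γ̂, Ω̂, ρ̂, the s_i², the average shares and the price changes) is a hypothetical stand-in for estimates from the share regressions; Γ̂ is kept symmetric with rows summing to zero, as the translog restrictions require:

```python
import numpy as np

N = 3
dlnp = np.array([0.05, -0.02, 0.03])          # log price changes
wbar = np.array([0.40, 0.35, 0.25])           # average shares
s2 = np.array([0.004, 0.003, 0.005])          # price sampling variances s_i^2
Gamma_hat = np.array([[-0.10,  0.06,  0.04],
                      [ 0.06, -0.09,  0.03],
                      [ 0.04,  0.03, -0.07]]) # symmetric, rows sum to 0
Omega_hat = 0.0001 * np.eye(N)                # taste-shock covariance estimate
rho_hat = 0.2

term1 = (wbar**2 * s2).sum()                                  # first term of (A77)
term2 = 0.25 * dlnp @ Gamma_hat @ np.diag(s2) @ Gamma_hat.T @ dlnp
term3 = 0.5 * (1 + rho_hat) * dlnp @ Omega_hat @ dlnp
term4 = 0.25 * (s2 * (Gamma_hat**2 @ s2)).sum()
term5 = 0.5 * (1 + rho_hat) * (np.diag(Omega_hat) * s2).sum()
var_pi = term1 + term2 + term3 + term4 + term5
print(var_pi)                                 # a small positive number
```

Each of the five terms is nonnegative here, so the estimated variance is dominated by the direct price-sampling term when the taste-shock components are small.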