Online Appendix for Demand Fluctuations in the Ready-Mix Concrete Industry

Allan Collard-Wexler
NYU Stern and NBER

November 29, 2012

Contents

A  Stochastic Algorithm
   A.1  Discrete Action Stochastic Algorithm: Termination Criteria
B  Modified DASA to Compute the Gamma Function
C  Market Fixed Effects
   C.1  Conditional Choice Probability Estimation
   C.2  Alternative Market Categories from Market Fixed Effects
D  Identification of Fixed Costs, Sunk Costs and Scrap Values
E  Simulated Indirect Inference Estimation
   E.1  Consistency Proof
F  Serial Correlation
G  Price Data
H  Additional Tables and Figures

A  Stochastic Algorithm

To compute the strategies associated with a Nash Equilibrium of the dynamic game, I adapt the stochastic algorithm of Pakes and McGuire (2001) to the discrete action setup used in this paper, since the state space has up to 1.4 million states.[1] I define the hit counter, denoted h(a, x), as the number of times the location l = (a, x) has been visited by the algorithm. The hit counter is important since it allows me to keep track of the precision of the computation of W(a, x) and Ψ[a | x] using the Discrete Action Stochastic Algorithm (henceforth DASA). Given reward and transition cost functions r(·) and τ(·), as well as a demand transition matrix D, the DASA computes a solution to the dynamic game, characterized by the choice-specific value function W(a, x) and the conditional choice probabilities Ψ.

Algorithm: Discrete Action Stochastic Algorithm (DASA). An iteration k of the DASA follows these steps:

1. Start in a location l^k = {a_i^k, x^k}, with values for W^k, Ψ^k and h^k in memory.

2. Draw an action profile for the other players a_{-i}^k ~ Ψ^k[a_{-i}^k | x^k]. Given the action profile a^k = {a_i^k, a_{-i}^k}, draw a state in the next period x^{k+1}:

      x^{k+1} | a^k ~ D[M^{k+1} | M^k] · ι(x^{k+1} | a^k, x^k),                                   (1)

   where ι(x^{k+1} | a^k, x^k) is the updating function, which updates each firm's state based on the firm's action and the firm's largest size in the past.[2]

3. Increment the hit counter (how often the state-action pair has been visited): h^{k+1}(a_i^k, x^k) = h^k(a_i^k, x^k) + 1.

4. Compute the value R of the action as:

      R = r(a_i^k, x^{k+1}) − τ(a_i^k, x^k) + β Σ_{j∈A} W^k(j, x^{k+1}) Ψ^k[j | x^{k+1}] + β E(ε | x^{k+1}, Ψ^k),     (2)

   where E(ε | x^{k+1}, Ψ^k) = γ − Σ_{j∈A} ln(Ψ^k[j | x^{k+1}]) Ψ^k[j | x^{k+1}] and γ is Euler's constant.

5. Update the W-function:

      W^{k+1}(a_i^k, x^k) = α[a_i^k, x^k] R + (1 − α[a_i^k, x^k]) W^k(a_i^k, x^k),                 (3)

   where α = 1 / h^{k+1}(a_i^k, x^k).[3]

6. Update the policy function Ψ for state x^k:

      Ψ^{k+1}[a_i^k | x^k] = exp(W^{k+1}(a_i^k, x^k)) / Σ_{j∈A} exp(W^{k+1}(j, x^k))              (4)

   for all actions a_i^k ∈ A.

7. Draw a new action a_i^{k+1} ~ Ψ^{k+1}[· | x^{k+1}].

8. Check the stopping rule.[4] If it is not satisfied, update the current location to l^{k+1} = {a_i^{k+1}, x^{k+1}}, increment k to k + 1 and return to step 1.

The stopping rule for this algorithm is based on Fershtman and Pakes (2012); it compares the W-function to a simulated average, based on the rewards from steps 2 and 4, for states that are recurrent. If the W-function is exact, then the squared difference between these two objects (weighted by the ergodic distribution) can be accounted for by simulation error. The stopping rule is presented in the next section.

The initial values of W in the stochastic algorithm are important, since if I initialize W(a, x) with too high a value, the algorithm might get trapped at this state. To find initial values of W, I use value iteration in which I simulate the expectation via Monte Carlo.

[1] There are 10 firms, 7 possible states per firm and, in the most complex model, 50 demand states. I reduce the size of the state space to 1.4 million states by using the assumption of exchangeability described by Gowrisankaran (1999).

[2] Later in the paper I make the firm's previous state relevant to the transition cost. Specifically, if the firm's state is x_i^k = {x_i^{C,k}, x_i^{P,k}}, i.e. the current size x_i^{C,k} and the largest size in the past x_i^{P,k}, then the updating function ι(x_i^{k+1} | a_i^k, x_i^k) is: x_i^{k+1} = {a_i^k, a_i^k} if a_i^k ≥ x_i^{P,k}, and x_i^{k+1} = {a_i^k, x_i^{P,k}} if a_i^k < x_i^{P,k}.

[3] The main problems with the stochastic algorithm are: (1) making sure the entire state space is searched, (2) ensuring fast learning about the W-function at the start of the algorithm, and (3) making sure that the convergence properties of the algorithm are satisfied. First, I initialize the starting W using fairly high values so that the algorithm visits all states before lowering the estimate of W. Second, at the start of the algorithm I use α = 1/√h^{k+1}(a_i^k, x^k) to ensure that initially inaccurate W's get updated quickly. As well, I reset the hit counter after 20 million iterations to ensure that the first rounds of updates are down-weighted. Third, in the final stage of the algorithm I switch to the α = 1/h^{k+1}(a_i^k, x^k) update rule, which satisfies the convergence properties of stochastic approximation algorithms described in Powell (2007, page 216). However, in the context of a game this condition on α may not be enough to guarantee convergence.

[4] Since the stopping rule is computationally intensive relative to a single iteration of the DASA, it is better to check the stopping rule only every several million iterations.
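The following is a minimal Python sketch of one DASA iteration (steps 2-7 above), intended only to make the bookkeeping concrete. The array layout (W, Ψ and the hit counter indexed by [action, state]), the helper draw_next_state that bundles the rivals' action draws with the demand transition, and the reward callables r and tau are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329  # Euler's constant, used in the expected logit error


def dasa_iteration(a, x, W, Psi, h, r, tau, draw_next_state, beta, rng):
    """One DASA update at location (a, x).

    W[j, s]  : choice-specific values;  Psi[j, s] : conditional choice probabilities;
    h[j, s]  : hit counter;  r, tau : reward and transition-cost functions;
    draw_next_state(a, x, Psi, rng) : simulates x' by drawing rivals' actions from Psi
    and the demand state from the transition matrix (both bundled here for brevity).
    """
    # Step 2: simulate next period's state given the chosen action and rivals' play.
    x_next = draw_next_state(a, x, Psi, rng)

    # Step 3: increment the hit counter and set the learning rate alpha = 1/h.
    h[a, x] += 1
    alpha = 1.0 / h[a, x]

    # Step 4: one-draw estimate of the value of action a, including the expected
    # logit error E(eps | x', Psi) = gamma - sum_j ln(Psi[j|x']) Psi[j|x'].
    continuation = W[:, x_next] @ Psi[:, x_next]
    e_eps = EULER_GAMMA - np.sum(np.log(Psi[:, x_next]) * Psi[:, x_next])
    R = r(a, x_next) - tau(a, x) + beta * continuation + beta * e_eps

    # Step 5: stochastic-approximation update of the W-function (eq. (3)).
    W[a, x] = alpha * R + (1.0 - alpha) * W[a, x]

    # Step 6: logit policy update at state x (eq. (4)); subtracting the max is an
    # equivalent, numerically stable way of forming exp(W)/sum(exp(W)).
    expW = np.exp(W[:, x] - W[:, x].max())
    Psi[:, x] = expW / expW.sum()

    # Step 7: draw the firm's next action from the updated policy at the new state.
    a_next = rng.choice(W.shape[0], p=Psi[:, x_next])
    return a_next, x_next
```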

A.1  Discrete Action Stochastic Algorithm: Termination Criteria

The stopping rule is based on the fact that if I have the correct W-function, then it satisfies the Bellman equation. However, it is computationally expensive to calculate the W-function exactly; instead, I can approximate the value function using forward simulation. Consider the locations R ⊂ A × X defined as the state-action pairs visited in the last 1 million iterations (I keep a hit counter that tracks only the last 1 million iterations, denoted rh(l)).

Algorithm: Fershtman-Pakes Stopping Rule (FPStop). For all locations l = {a, x} ∈ A × X which have been visited in the last 1 million iterations:

1. Compute an approximation to the W-function using a one-step forward simulation (denoted W̃). For q = 1, ..., Q:

   (a) Draw a state tomorrow x_q given the location l = {a, x}.

   (b) Get rewards:

          R_q = r(a, x_q, θ) − τ(a, x, θ) + β Σ_{j∈A} W(j, x_q) Ψ[j | x_q] + β (γ − Σ_{j∈A} ln(Ψ[j | x_q]) Ψ[j | x_q]).     (5)

   (c) Compute the approximation to the W-function:

          W̃(l) = (1/Q) Σ_{q=1}^{Q} R_q.                                                    (6)

2. Compute the difference in value functions, weighted by the recent hit counter rh:

          τ = (1 / Σ_l rh(l)) Σ_l rh(l) (W̃(l) − W(l))².                                    (7)

If the test statistic τ is small enough, then we can argue that we have a good approximation. In practice I require the recent-hit-counter-weighted R² between W̃(l) and W(l) to exceed a fixed threshold. This usually happens after as little as 50 million iterations, and it is usually more efficient to simply run the DASA for 150 million iterations (i.e. 15 minutes), which leads to a W-function that satisfies the FPStop criterion. Furthermore, in this application only a relatively small set of state-action pairs (where the action is not 0) is visited in the last 1 million iterations, so the ergodic class R is quite small compared to the size of the entire state space.
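A sketch of the FPStop check in Python, assuming the current values W, the recent hit counter rh and a one-step forward simulator for the rewards in eq. (5) are available as Python objects; the default number of draws Q and the dictionary-based storage are placeholders.

```python
import numpy as np


def fpstop_statistic(recurrent, W, rh, simulate_R, Q=1000, rng=None):
    """Fershtman-Pakes style convergence check over recently visited locations.

    recurrent          : iterable of locations l = (a, x) visited in the recent window;
    rh[l], W[l]        : recent hit counter and current choice-specific values;
    simulate_R(l, rng) : one forward-simulated draw of the reward in eq. (5).
    Returns the hit-weighted squared gap (the tau statistic in eq. (7)) and the
    hit-weighted R^2 between W and its simulated approximation W_tilde.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    locs = list(recurrent)
    weights = np.array([rh[l] for l in locs], dtype=float)
    weights /= weights.sum()
    W_vals = np.array([W[l] for l in locs])

    # Step 1: approximate W by averaging Q one-step forward simulations per location.
    W_tilde = np.array([np.mean([simulate_R(l, rng) for _ in range(Q)]) for l in locs])

    # Step 2: hit-weighted squared difference between the two objects.
    tau_stat = np.sum(weights * (W_tilde - W_vals) ** 2)

    # Hit-weighted R^2, the practical stopping criterion described in the text.
    w_mean = np.sum(weights * W_vals)
    r2 = 1.0 - tau_stat / np.sum(weights * (W_vals - w_mean) ** 2)
    return tau_stat, r2
```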

B  Modified DASA to Compute the Gamma Function

I use a modified DASA to compute the Γ function. The two differences are that (i) I shut down the policy function update in the DASA, and (ii) I compute the net present value of the components of rewards rather than the rewards themselves (which would require me to have information on the parameters θ).[5]

Algorithm: Γ-Compute Discrete Action Stochastic Algorithm (GC-DASA). An iteration k of the GC-DASA is given by the following steps:

1. Start in a location l^k = {a_i^k, x^k} with values for Γ^k and h^k in memory.

2. Draw an action profile for the other players a_{-i}^k from the estimated choice probabilities P̂[· | x^k], and a state in the next period x^{k+1} given the action profile a^k:

      x^{k+1} | a^k ~ D̂[M^{k+1} | M^k] · ι(x^{k+1} | a^k, x^k),                                   (9)

   where ι(x^{k+1} | a^k, x^k) is the updating function, which updates each firm's state based on the firm's action and the firm's largest size in the past.

3. Increment the hit counter (how often the state-action pair has been visited): h^{k+1}(a_i^k, x^k) = h^k(a_i^k, x^k) + 1.

4. Compute the i-th component of payoffs, R_i, of the action a_i^k as:

      R_i = r_i(a_i^k, x^{k+1}) + β Σ_{j∈A} Γ^{k,i}(j, x^{k+1}) P̂[j | x^{k+1}].                   (10)

5. Update the Γ-function:

      Γ^{k+1,i}(a_i^k, x^k) = α R_i + (1 − α) Γ^{k,i}(a_i^k, x^k),                                (11)

   where α = 1 / h^{k+1}(a_i^k, x^k).

6. Update the current location to l^{k+1} = {a_i^k, x^{k+1}}.

The stopping rule is Fershtman and Pakes (2012)'s.

[5] I could have computed the Γ^{P,μ} using forward simulation, i.e.:

      Γ^{P,μ}(a, s) ≈ (1/K) Σ_{k=1}^{K} Σ_{t=0}^{∞} β^t ρ(a_{tk}, x_{tk}),                        (8)

where the sequence of states x_{tk} can be simulated using the demand transition process D̂ and the choice probabilities for firms P̂. However, given the number of states and the 4 actions, I would need to do this forward simulation 1.4 million times the number of simulation draws K.
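A sketch of one GC-DASA update in Python. Γ is stored here as an array indexed by [action, state, payoff component], the estimated policy P̂ is frozen, and the transition simulator is treated as given; these layout choices are assumptions made for illustration.

```python
import numpy as np


def gc_dasa_iteration(a, x, Gamma, h, r_components, draw_next_state, P_hat, beta, rng):
    """One GC-DASA update of the payoff-component values Gamma at location (a, x).

    Gamma[j, s, :]      : discounted sum of each payoff component for action j in state s;
    P_hat[j, s]         : estimated (frozen) conditional choice probabilities;
    r_components(a, x') : vector of flow payoff components (not yet multiplied by theta).
    """
    # Draw rivals' actions from P_hat and the next state from the estimated transitions.
    x_next = draw_next_state(a, x, P_hat, rng)

    # Increment the hit counter and set the learning rate alpha = 1/h.
    h[a, x] += 1
    alpha = 1.0 / h[a, x]

    # Component-by-component analogue of eq. (10): flow components plus discounted
    # continuation components averaged over the frozen policy P_hat at the next state.
    continuation = np.tensordot(P_hat[:, x_next], Gamma[:, x_next, :], axes=1)
    R = r_components(a, x_next) + beta * continuation

    # Stochastic-approximation update (eq. (11)); note the policy is NOT updated here.
    Gamma[a, x, :] = alpha * R + (1.0 - alpha) * Gamma[a, x, :]

    # Move on by drawing the firm's next action from the frozen policy (one way to
    # traverse the recurrent class without updating Psi).
    a_next = rng.choice(Gamma.shape[0], p=P_hat[:, x_next])
    return a_next, x_next
```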

C  Market Fixed Effects

C.1  Conditional Choice Probability Estimation

In the main model, I use a market categories model which is meant to mimic the inclusion of market fixed effects. These market fixed effects are critical to the estimation of the model, since persistent market-level differences in profitability lead to upward bias in the effect of competition. This bias, especially when it induces positive effects of competition, leads to very aberrant industry dynamics, such as a market flipping between 0 and 10 plants due to a positive externality from competition. The goal of this section is to motivate the use of market category effects based on the average number of firms in a market over time, and to explain why other plausible corrections for market fixed effects, such as using average construction employment or the number of plants in a pre-period, do not give the right answer.

I consider the following different specifications of the market category effects:

a) No Market Effects.

b) Average Number of Firms in the Market (rounded to the nearest integer). In the main estimates of the model, I use the average number of firms in the market rounded to the nearest integer. However, this approach suffers from an endogeneity problem. To put it most clearly, consider the following dynamic, two-firm model:

      a_{it} = α a_{-it} + β a_{it−1} + ε_{it}.                                                   (12)

   If I include a_{it+1} in the above regression, then I am including an endogenous regressor, since a_{it+1} is a function of a_{it}, which in turn depends on ε_{it}, and more broadly on the entire history of ε_{it'} for t' < t.

c) Average Number of Firms in the Market in the years before this one (rounded to the nearest integer). If instead I include the lag a_{it−2}, this is not an endogenous regressor, since there is no dependence on a_{it−2} except through a_{it−1}, which is already included in the regression. The only issue with using the average number of firms in the market in previous years is that a market can switch categories over time, which makes for a more difficult state space to deal with; this is the reason that I do not use these market category controls in the main part of the paper.

d) Average Number of Firms in the pre-1983 period, with estimation on the post-1983 data. Notice that for this model, I am using the early period to condition on the number of firms in the market. This is a version of model c), but the pre-period on which I condition does not change within a market.

e) Average Construction Employment. Here I classify markets by the average level of construction employment over the sample. This is an exogenous classification scheme, since it does not depend on what ready-mix concrete firms are doing.

f) Market Fixed Effects (Conditional Logit).

Table 1 presents estimates from the binary logit model of entry and exit for specifications (a)-(f). I have chosen the binary logit model since it allows me to use the conditional logit with market fixed effects.[6] Column (a) shows estimates without market category controls (henceforth referred to as no market effects), column (f) shows estimates with market fixed effects (henceforth referred to as market fixed effects), and columns (b)-(e) show different market category controls.

Columns (b) and (c), i.e. with market controls based on the average number of plants and the average number of plants in the periods before this one, are similar to the market fixed effect estimates in column (f). Likewise, columns (d) and (e) show estimates that are more similar to the no market effects estimates in column (a). The effect of past plant size on activity is fairly similar across all of these estimates, with smaller effects of plant size in the market fixed effect specifications (f), (b) and (c) than in the no market effect specifications (a), (d) and (e): unobserved heterogeneity between markets is loaded onto variables indicating state dependence, such as past plant size. The effect of log construction employment is higher in the no market effect models (a), (d) and (e) than in the market fixed effect estimates. These higher effects of demand are due to the fact that firms are far more likely to react to cross-sectional differences in demand (which are more likely to be persistent) than to year-to-year changes in demand. Likewise, the effect of the second competitor (which is representative of the effect of competition more broadly) is weaker in the no market effect columns (a), (d) and (e) than in the market fixed effect columns (f), (b) and (c). This is indicative of the fact that unobserved differences in the profitability of a market are correlated with the number of plants in the market.

There are two main conclusions from the table that are relevant for my choice of market categories. First, the market categories based on either the average number of firms (b) or the average number of firms in all periods before today (c) do a good job of mimicking true market fixed effects. However, using categories based on the number of firms before 1983 (d), or using information about the average level of construction demand (e), does not replicate the market fixed effect estimates, and in fact mimics not having any market controls whatsoever. Second, while it is true that using the average number of firms over time conditions on an endogenous variable, I can equally easily use the lagged number of firms, which does not condition on an endogenous variable, and obtain virtually identical results. Thus the issue of endogeneity is of limited practical importance in the use of the average number of firms over time as a grouping.

[6] Technically, I could also use a multinomial conditional logit, but the number of categories I would need to condition on becomes fairly large. As well, I am not presenting marginal effects here, since the conditional logit does not estimate the market fixed effects.

Dependent Variable: Activity           (a)       (b)       (c)        (d)        (e)       (f) Cond. Logit

Log County Construction Employment   0.133***       **       ***   0.129***   0.099***
                                      (0.011)  (0.011)  (0.010)    (0.015)    (0.019)   (0.023)
First Competitor                          ***      ***      ***        ***        ***       ***
                                      (0.052)  (0.051)  (0.048)    (0.066)    (0.052)   (0.043)
Second Competitor                         ***      ***      ***
                                      (0.036)  (0.037)  (0.036)    (0.047)    (0.037)   (0.030)
Third Competitor                          ***      ***      ***
                                      (0.044)  (0.044)  (0.043)    (0.058)    (0.044)   (0.036)
Log Competitors above                     ***      ***      ***
                                      (0.029)  (0.028)  (0.028)    (0.040)    (0.029)   (0.025)
Small                                5.889***  5.703***  5.720***  5.977***   5.887***  5.585***
                                      (0.037)  (0.035)  (0.035)    (0.047)    (0.037)   (0.025)
Small, Medium in Past                5.665***  5.388***  5.393***  5.707***   5.657***  5.220***
                                      (0.048)  (0.045)  (0.045)    (0.057)    (0.048)   (0.033)
Small, Large in Past                 4.866***  4.636***  4.643***  4.944***   4.865***  4.450***
                                      (0.065)  (0.063)  (0.062)    (0.075)    (0.065)   (0.041)
Medium                               7.503***  7.292***  7.315***  7.696***   7.495***  7.234***
                                      (0.057)  (0.055)  (0.055)    (0.075)    (0.057)   (0.050)
Medium, Large in Past                7.511***  7.237***  7.251***  7.585***   7.503***  7.122***
                                      (0.080)  (0.079)  (0.079)    (0.094)    (0.081)   (0.074)
Large                                7.671***  7.446***  7.450***  7.724***   7.676***  7.436***
                                      (0.056)  (0.054)  (0.054)    (0.068)    (0.056)   (0.050)

Market Classification Variable:
   (b) Average Number of Plants
   (c) Lagged Average Plants
   (d) Before 1983 Average Plants
   (e) Construction Employment

Category                                  ***  1.118***  0.225***   0.132**
                                      (0.036)  (0.032)  (0.062)    (0.049)
Category                                  ***  1.836***  0.348***   0.199**
                                      (0.050)  (0.047)  (0.058)    (0.061)
Category                                  ***  2.424***  0.482***    0.169*
                                      (0.063)  (0.062)  (0.061)    (0.082)
Constant                                  ***      ***      ***        ***        ***
                                      (0.065)  (0.066)  (0.062)    (0.089)    (0.090)

Observations; Markets; Log-Likelihood; χ².
(Standard errors are clustered by market.)

Table 1: Market Effects in the Binomial Logit Regression of Entry and Exit
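As an illustration of how the market classification variables behind columns (b)-(e) of Table 1 can be built from a market-year panel, here is a short pandas/statsmodels sketch; the column names (market, year, n_plants, constr_emp, active) and the helper names are hypothetical, and the panel is assumed to be sorted by market and year.

```python
import pandas as pd
import statsmodels.api as sm


def add_market_categories(df):
    g = df.groupby("market")
    # (b) average number of plants in the market over the whole sample, rounded.
    df["cat_avg_plants"] = g["n_plants"].transform("mean").round()
    # (c) average number of plants in all years before the current one, rounded.
    df["cat_lag_avg_plants"] = g["n_plants"].transform(
        lambda s: s.shift(1).expanding().mean()).round()
    # (d) average number of plants in the pre-1983 period only.
    pre83 = df[df["year"] < 1983].groupby("market")["n_plants"].mean().round()
    df["cat_pre83_plants"] = df["market"].map(pre83)
    # (e) average construction employment over the sample.
    df["cat_constr_emp"] = g["constr_emp"].transform("mean")
    return df


def logit_with_categories(df, covariates, category):
    # Binary logit of activity on covariates plus market-category dummies, mimicking
    # one column of Table 1 (the paper clusters standard errors by market).
    X = pd.get_dummies(df[covariates + [category]], columns=[category], drop_first=True)
    X = sm.add_constant(X.astype(float))
    return sm.Logit(df["active"].astype(float), X).fit(disp=False)
```

For the conditional logit in column (f), the market fixed effects are conditioned out rather than estimated, which is why the procedure in Section C.2 below is needed to recover groupings based on ξ̂_m.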

C.2  Alternative Market Categories from Market Fixed Effects

In this section, I present an alternative procedure for constructing market categories based on the values of the market fixed effect. Consider the binary logit model:

      y_mt = 1(α X_mt + ξ_m > ε_mt).

I can construct market categories based on estimates of the market fixed effect ξ_m, using the following procedure:

1. Step 1: Run a conditional logit (with market fixed effects) on the number of active plants to recover the parameters α̂, i.e. everything except the market fixed effect ξ_m. Note that we can get α̂ without the incidental parameters problem by using a conditional logit.

2. Step 2: Create the variable Z_mt^{α̂} = α̂ X_mt, the part of the covariates that is not the market fixed effect.

3. Step 3: Run the logit on the model:

      y_mt = 1(Z_mt^{α̂} + ξ_m > ε_mt),

   which can be done separately for each market in the data. Note that this means that I am estimating market fixed effects ξ̂_m (for which I need the number of time periods to be large).

4. Step 4: Use the estimated ξ̂_m to form groups of markets.

The estimated ξ̂_m has the distribution shown in Table 2.

Table 2: Distribution of the Market Fixed Effect ξ̂_m, by percentile.

Table 3 shows the binary logit results on activity, where columns IV, V, VI and VII show fixed effects constructed by grouping ξ̂_m into 4, 10, 20 and 40 categories (where each category contains the same number of markets). Notice that using these fixed effect categories yields results similar to column II (fixed effects) and column III (market categories). However, to match the market category effects in column III, and in particular to get similar effects of log competitors above 4, I need at least 10 market groups. In the estimates reported in the paper, I use 20 market categories of μ̂_m.

Dependent Variable: Activity            I      II (FE)  III (μ)     IV       V       VI      VII

Log County Construction Employment   (0.01)   (0.02)   (0.01)   (0.01)   (0.01)   (0.01)   (0.01)
First Competitor                     (0.05)   (0.04)   (0.05)   (0.05)   (0.05)   (0.05)   (0.05)
Second Competitor                    (0.04)   (0.03)   (0.04)   (0.04)   (0.04)   (0.04)   (0.04)
Third Competitor                     (0.04)   (0.04)   (0.04)   (0.04)   (0.04)   (0.04)   (0.04)
Log Competitors above                (0.03)   (0.03)   (0.03)   (0.03)   (0.03)   (0.03)   (0.03)
Small                                (0.04)   (0.03)   (0.04)   (0.03)   (0.03)   (0.03)   (0.03)
Small, Medium in Past                (0.05)   (0.03)   (0.05)   (0.05)   (0.05)   (0.05)   (0.05)
Small, Large in Past                 (0.07)   (0.04)   (0.06)   (0.07)   (0.07)   (0.07)   (0.07)
Medium                               (0.06)   (0.05)   (0.06)   (0.06)   (0.06)   (0.06)   (0.06)
Medium, Large in Past                (0.08)   (0.07)   (0.08)   (0.08)   (0.08)   (0.08)   (0.08)
Large                                (0.06)   (0.05)   (0.05)   (0.06)   (0.06)   (0.06)   (0.06)

Market Classification Variable:
   column IV:  4 Fixed Effect Groups
   column V:   10 Fixed Effect Groups
   column VI:  20 Fixed Effect Groups
   column VII: 40 Fixed Effect Groups

Observations; Markets; Log-Likelihood; χ². (Standard errors in parentheses.)

Table 3: Binary logit regressions of the decision to have an active plant, with market fixed effects and market category effects.
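A sketch of Steps 2-4 of the procedure in Section C.2, taking the conditional-logit slope estimates α̂ from Step 1 as given. The per-market fixed effect is estimated by a one-parameter logit that holds the index Z α̂ fixed as an offset, and markets are then binned into equal-sized groups of ξ̂_m; the data layout and column names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize_scalar


def estimate_xi_for_market(y, z_alpha):
    """Step 3: logit for one market with the index Z*alpha_hat as a fixed offset,
    so that only the market intercept xi_m is estimated."""
    def neg_loglik(xi):
        p = 1.0 / (1.0 + np.exp(-(z_alpha + xi)))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return minimize_scalar(neg_loglik, bounds=(-20, 20), method="bounded").x


def market_groups_from_fixed_effects(df, alpha_hat, covariates, n_groups=20):
    df = df.copy()
    # Step 2: the part of the index that is not the market fixed effect.
    df["z_alpha"] = df[covariates].to_numpy() @ alpha_hat
    # Step 3: estimate xi_m market by market (needs many time periods per market).
    xi = df.groupby("market").apply(
        lambda g: estimate_xi_for_market(g["active"].to_numpy(), g["z_alpha"].to_numpy()))
    # Step 4: bin markets into n_groups equal-sized groups of xi_hat.
    groups = pd.qcut(xi, q=n_groups, labels=False, duplicates="drop")
    return xi, groups
```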

D  Identification of Fixed Costs, Sunk Costs and Scrap Values

To show the intuition behind the identification of fixed costs, sunk costs and scrap values, I use a simplified model. Suppose that I have variable profits V(X), where X are time-invariant profit shifters, fixed costs f, entry costs ψ and exit (scrap) values φ. Then the entry and exit rules in a stationary environment are:

      Enter iff:  Σ_{t=0}^{∞} β^t (V(X) − f) = (V(X) − f) / (1 − β) ≥ ψ,
      Exit iff:   Σ_{t=0}^{∞} β^t (V(X) − f) = (V(X) − f) / (1 − β) < φ.

In this case it is clear that f, ψ and φ are not linearly independent. Next, add a future exit rate δ (which at this point is generated by shocks to the exit value φ + ε that I do not want to put in this simple model). This adjusts these equations to:

      Enter iff:  (V(X) − f) / (1 − β(1 − δ)) + φ / (1 − βδ) ≥ ψ,
      Exit iff:   (V(X) − f) / (1 − β(1 − δ)) + φ / (1 − βδ) < φ.

Again, we have the same collinearity problem as before. However, if future exit rates are different in different markets (say due to differences in future demand shocks, such as a market at the top demand level versus one at the bottom demand level), then we have a δ(X) which depends on the state X. This allows us to separately identify f and φ in the exit equation, given that we know V(X) and δ(X):

      (V(X) − f) / (1 − β(1 − δ(X))) + φ / (1 − βδ(X)) < φ.

Now, given that we know f̂ and φ̂, the entry equation becomes:

      (V(X) − f̂) / (1 − β(1 − δ(X))) + φ̂ / (1 − βδ(X)) ≥ ψ.

So formally I can separately identify f, φ and ψ. What makes this difficult in practice is that I need enough variation in δ(X) for this to work, and this variation is not very important either in Monte Carlo simulations or in the data.
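A small numerical illustration of the argument above: when δ(X) varies across markets, f and φ enter the exit condition with linearly independent coefficients, so the two parameters can be separated. The discount factor and exit rates below are made-up values used only for illustration.

```python
import numpy as np

beta = 0.95
delta = np.array([0.03, 0.12])   # future exit rates in two markets with different demand

# In the exit condition (V(X) - f)/(1 - beta(1 - delta)) + phi/(1 - beta*delta) < phi,
# the coefficients multiplying f and phi are:
coef_f = -1.0 / (1.0 - beta * (1.0 - delta))
coef_phi = 1.0 / (1.0 - beta * delta)
A = np.column_stack([coef_f, coef_phi])

# A nonzero determinant means the two exit conditions pin down f and phi separately;
# if delta were the same in both markets the rows would coincide and the determinant
# would be exactly zero, which is the collinearity problem described above.
print(np.linalg.det(A))
```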

E  Simulated Indirect Inference Estimation

The simulated indirect inference estimator, whose criterion appears in equation (19) below, uses the choice probabilities Ψ(a | x, Γ, θ) as the outcome vector, i.e. ỹ_n = Ψ(a | x_n, Γ, θ). Typically, one would instead sample outcomes y_n from the choice probabilities Ψ(a | x, Γ, θ). I can show that using the ỹ_n is equivalent to sampling actions as the number of simulated actions tends to infinity. Denote the outcome vectors y_n^s as:

      y_n^s = ( 1(a_n^s = small), 1(a_n^s = medium), 1(a_n^s = large) )',                          (13)

where the action a_n^s ~ Ψ(· | x_n, Γ, θ) is drawn from the choice probabilities Ψ. The simulation draws are indexed by s = 1, ..., S. The β^S(θ) coefficient is estimated using the outcome vectors {y_n^s}_{s=1,...,S; n}. The criterion function using S simulation draws of actions is thus:

      Q^S(θ) = (β̂ − β(θ))' W (β̂ − β(θ)).                                                          (14)

E.1  Consistency Proof

In this section I show conditions under which the procedure used in this paper is a consistent estimator of θ. Specifically, I show the conditions that need to be satisfied for Proposition 1 on page S89 of Gourieroux and Monfort (1993), which deals with the consistency of indirect inference estimators, to apply.

Define the criterion function used to compute β(θ) (for a given value of θ) as:

      S_{N,K}(β, θ) = Σ_{n=1}^{N} (1/K) Σ_{k=1}^{K} { [1(a_n^k = small) − Z_n β^s]² + [1(a_n^k = medium) − Z_n β^m]² + [1(a_n^k = large) − Z_n β^l]² },     (15)

where N denotes the number of observations and K denotes the number of simulation draws used to draw actions a_n^k from the policy function ψ(a_n | x_n, θ, Γ(P̂_N, D̂_N)). Note that S_{N,K}(β, θ) is the criterion used in OLS estimation, just the sum of squared errors.

The first step is to show that I can replace draws of a_n^k with the actual policy function ψ, or in other words that S_{N,K}(β, θ) →_{a.s.} S_{N,∞}(β, θ) uniformly as K → ∞.

Theorem 1  As the number of simulation draws K tends to infinity, S_{N,K}(β, θ) →_{a.s.} S_{N,∞}(β, θ) uniformly.

Proof: I show the proof using only the choice to be small, to lighten the notation, but the proof extends to as many actions as needed:

      S_{N,K}(β, θ) = Σ_{n=1}^{N} (1/K) Σ_{k=1}^{K} [1(a_n^k = small) − Z_n β^s]²
                    = Σ_{n=1}^{N} (Z_n β^s)² + Σ_{n=1}^{N} (1/K) Σ_{k=1}^{K} 1(a_n^k = small)² − 2 Σ_{n=1}^{N} Z_n β^s (1/K) Σ_{k=1}^{K} 1(a_n^k = small).     (16)

As K → ∞, (1/K) Σ_{k=1}^{K} 1(a_n^k = small) → ψ(a_n = small | x_n, θ, Γ(P̂, D̂)) almost surely, since this is just an average, and, because the square of an indicator equals the indicator itself, (1/K) Σ_{k=1}^{K} 1(a_n^k = small)² converges to the same limit. Thus I can rewrite S_{N,∞}(β, θ) as:

      S_{N,∞}(β, θ) = Σ_{n=1}^{N} (Z_n β^s)² − 2 Σ_{n=1}^{N} Z_n β^s ψ(a_n = small | x_n, θ, Γ(P̂, D̂)) + Σ_{n=1}^{N} ψ(a_n = small | x_n, θ, Γ(P̂, D̂)),     (17)

which differs from Σ_{n=1}^{N} [ψ(a_n = small | x_n, θ, Γ(P̂, D̂)) − Z_n β^s]² only by Σ_n ψ_n(1 − ψ_n), a term that does not depend on β and therefore does not affect the minimizer β(θ). This is why using ỹ_n = Ψ(a | x_n, Γ, θ) directly as the outcome vector delivers the same auxiliary parameters as sampling a large number of actions.

Second, I need to show that S_{N,∞}(β, θ) →_{a.s.} S_{0,∞}(β, θ) as N → ∞. The first condition is that the linear probability estimator is consistent, which is just an outcome of the OLS estimator being a consistent estimator, a standard result. However, I am not using the true Γ_0(P_0, D_0) but an estimate Γ(P̂, D̂), due to sampling error in the conditional choice probabilities P and the demand transition process D, as well as approximation error in the computation of Γ. The CCPs satisfy P̂_N → P_0, since I am using a consistent estimator of the CCPs, namely a parametric multinomial logit, which is consistent by the usual arguments for M-estimators. Likewise, D̂_N → D_0 as N → ∞, since I am using a consistent estimator of D, namely a bin estimator where the number of bins is fixed as N → ∞.

The next point is to show that Γ^L(P_0, D_0) → Γ_0(P_0, D_0) as the number of iterations L in the DASA goes to infinity. It is difficult to show convergence of the DASA, since to my knowledge there is no proof of the convergence of algorithms that compute the solutions to games (in contrast to single-agent problems). However, the Fershtman and Pakes (2012) convergence criterion can be used to check the convergence of the DASA, and I can send the tolerance of the Fershtman-Pakes criterion to 0 as N → ∞.[7] The convergence of Γ^L(P_0, D_0) → Γ(P_0, D_0) implies the convergence of S_{N,K}(β, θ) → S_{∞,∞}(β, θ) as K → ∞ and N → ∞. This satisfies assumption (A2) of indirect inference.

Assumption (A3) of indirect inference requires that:

      β(θ) = argmin_β S_{∞,∞}(β, θ)                                                               (18)

be a continuous function and have a unique value. Continuity is an outcome of the OLS structure of S, while uniqueness holds if Z_n is full rank and the dimension of β is smaller than the dimension of Z_n. The final condition, (A4), requires that β(θ) be one-to-one and have full rank. I assume this condition, but note that the dimension of β is larger than the dimension of θ, and I have checked that β(θ) is full rank in the estimation of the model.

[7] Notice that since there is a full-support shock ε to the payoffs of all actions, Γ is computed correctly on the entire state space S, since the set of recurrent points is the entire state space, i.e. S = R. The DASA used to compute Γ is a version of the Q-learning algorithm; consistency proofs for the single-agent (non-game) version are provided in Propositions 5.5 and 5.6 of Bertsekas and Tsitsiklis (1996), which give conditions under which this computation of Γ converges with probability one to Γ_0. These conditions are (1) that policies are proper, i.e. there is a positive probability that a firm will exit after t periods, which is true in this context due to the full support of the shock distribution for each action, including the choice to exit; and (2) that for improper policies, there is a negative infinite value of W for at least one state. Unfortunately, there is to my knowledge no proof which shows the convergence of the Q-learning algorithm in the context of a game.

Since conditions (A1), (A2), (A3) and (A4) are satisfied, θ̂, defined as the minimizer of:

      Q(θ) = (β̂ − β(θ))' W (β̂ − β(θ)),                                                            (19)

is a consistent estimator of θ as N → ∞.
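A compact sketch of the estimator in Python: the auxiliary parameters β(θ) come from OLS regressions of the outcome vectors on Z, and θ̂ minimizes the quadratic form in eq. (19). Passing the choice probabilities Ψ(a | x, Γ, θ) directly as the outcome matrix corresponds to the ỹ_n construction in Section E; the function and argument names are placeholders.

```python
import numpy as np


def auxiliary_beta(y, Z):
    """OLS auxiliary parameters: one regression per action (small/medium/large).

    y : N x 3 matrix of outcome vectors (sampled indicators, or the choice
        probabilities Psi themselves); Z : N x dim(Z) matrix of covariates.
    """
    return np.linalg.lstsq(Z, y, rcond=None)[0]          # shape: dim(Z) x 3


def indirect_inference_criterion(theta, beta_hat, Z, choice_probs, W_mat):
    """Q(theta) = (beta_hat - beta(theta))' W (beta_hat - beta(theta)), as in eq. (19)."""
    y_tilde = choice_probs(theta)                        # N x 3 matrix of Psi(a|x, Gamma, theta)
    beta_theta = auxiliary_beta(y_tilde, Z)
    diff = (beta_hat - beta_theta).ravel(order="F")
    return diff @ W_mat @ diff
```

The outer estimation step would then minimize indirect_inference_criterion over θ, holding β̂ (the auxiliary parameters estimated on the actual data) and the weighting matrix W fixed.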

F  Serial Correlation

The assumption that the unobserved states ε_t^a are i.i.d. logit errors implies the following assumption:

Assumption 2 (Serial Independence)  Unobserved states are serially independent, i.e. Pr(ε_t | ε_k) = Pr(ε_t) for k ≠ t.

Serial independence of the unobserved components of a firm's profitability is violated by any form of persistent productivity difference between firms, or by long-term reputations of ready-mix concrete operators. Note that in the context of a dynamic game, unobserved states are a first-order problem, since the size of the firm-level state x_t is severely restricted by the difficulty of keeping track of the joint distribution of the states of all firms.

I simulate the age profile of exit using the exit and size changes in Table ??, which captures what the age profile of exit would look like in the absence of selection on an unobserved state. With a serially correlated unobserved state, as plants age their exit rate falls, due to the effect of selecting out plants with a bad unobserved state. Figure 1 shows the exit hazard by age in the data and in the simulated data. Both the data and the simulation have the same average exit rate of about 6%, but the data has a somewhat steeper decline in exit rates over time: a plant aged 20 years has an exit rate of about 3.5% in the data, while the simulated data yields an exit rate of about 5.2%. This is consistent with most models of industry dynamics with a serially correlated unobserved state, such as the active or passive learning models of Pakes and Ericson (1998) and Jovanovic (1982), but it is a small effect compared to other industries, such as restaurants, where we would worry more about unobserved states. I do not deal with serial correlation, and both the estimates and the counterfactuals will be contaminated by this problem.

Figure 1: Exit hazard and plant age (exit rate, in percent, plotted against plant age; model simulation versus data). The data predicts a slightly steeper decline of the exit hazard with age.
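A stylized Python simulation of the selection mechanism described above: each plant gets a permanent unobserved profitability draw, plants with worse draws exit at a higher rate, and the surviving cohort therefore displays a declining exit hazard with age even though each plant's own hazard is constant. All parameter values are invented for illustration and are not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_plants, max_age = 200_000, 30

# Permanent unobserved state and a plant-specific, age-invariant exit probability
# (centered so that the cross-sectional average exit rate is roughly 6%).
xi = rng.normal(size=n_plants)
exit_prob = 1.0 / (1.0 + np.exp(2.7 + 0.5 * xi))

alive = np.ones(n_plants, dtype=bool)
hazard_by_age = []
for age in range(1, max_age + 1):
    exits = alive & (rng.random(n_plants) < exit_prob)
    hazard_by_age.append(exits.sum() / alive.sum())
    alive &= ~exits

# The hazard falls with age because high-exit-probability plants are selected out,
# which is the qualitative pattern in the data that a serially independent
# unobserved state cannot generate.
print([round(h, 3) for h in (hazard_by_age[0], hazard_by_age[9], hazard_by_age[19], hazard_by_age[29])])
```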

G  Price Data

The Census Bureau does not generally collect price data; this job is left to the Bureau of Economic Analysis and the Bureau of Labor Statistics. However, following Syverson (2004a), we can generate prices using the following equation:

      p_it(c) = s_it(c) / q_it(c),                                                                (20)

which is just sales of the commodity divided by the quantity sold. While these prices may be good indicators of price dispersion (the application Syverson considers), they are particularly poor measures of actual plant prices, with an interquartile range of over 2 log points (the third quartile is 100 times bigger than the first price quartile). This is probably because of how measurement error in the numerator, and especially in the denominator, interact.

To reduce the impact of imputed data and measurement error on the dispersion of prices, I apply a version of Syverson (2004b)'s procedures:

1. Hot imputes in the data are identified as prices that are essentially identical to the price of some other observation, i.e.

      |p_it − p_jt| below a small tolerance for some i and j in the data.                         (21)

   I drop all prices that are hot imputes. Notice that this procedure will also eliminate cold imputes, defined as prices which equal the modal price in the current year.

2. I trim the data by dropping observations that are less than 1/5 of, or more than 5 times, the median price for the current year.

The deflated data is computed as p_it^D = p_it / cpi_t, where I normalize the cpi in 1977 to be equal to 1 (i.e. cpi_t = raw cpi_t / raw cpi_1977). This eliminates differences in the price level across time, but does not incorporate differences in prices between regions.
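A pandas sketch of the price construction and cleaning steps above. The column names (sales, quantity, year), the CPI series format, and the exact duplicate-price rule used to flag hot imputes are illustrative stand-ins for the procedure in the text.

```python
import pandas as pd


def clean_prices(df, cpi):
    """df : plant-year commodity data with sales and quantity; cpi : Series indexed by year."""
    df = df.copy()
    # Unit values: sales of the commodity divided by quantity sold (eq. (20)).
    df["price"] = df["sales"] / df["quantity"]

    # 1. Drop hot imputes: prices that coincide with another observation's price
    #    (this also removes cold imputes, which equal the modal price in the year).
    dup = df.duplicated(subset=["year", "price"], keep=False)
    df = df[~dup]

    # 2. Trim observations below 1/5 of, or above 5 times, the year's median price.
    med = df.groupby("year")["price"].transform("median")
    df = df[(df["price"] >= med / 5) & (df["price"] <= 5 * med)]

    # Deflate by the CPI, normalized so that the 1977 value equals one.
    cpi_norm = cpi / cpi.loc[1977]
    df["price_deflated"] = df["price"] / df["year"].map(cpi_norm)
    return df
```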

H  Additional Tables and Figures

Table 4: The number of Births, Deaths and Continuers is fairly stable over the last 25 years (columns: Year, Birth, Continuer, Death).

References

Bertsekas, D., and J. Tsitsiklis (1996): Neuro-Dynamic Programming. Athena Scientific.

Fershtman, C., and A. Pakes (2012): "Dynamic Games with Asymmetric Information: A Framework for Empirical Work," Quarterly Journal of Economics.

Gourieroux, C., and A. Monfort (1993): "Indirect Inference," Journal of Applied Econometrics, 8, S85-S118.

Gowrisankaran, G. (1999): "Efficient Representation of State Spaces for Some Dynamic Models," Journal of Economic Dynamics and Control, 23(8).

Jovanovic, B. (1982): "Selection and the Evolution of Industry," Econometrica, 50(3).

Pakes, A., and R. Ericson (1998): "Empirical Implications of Alternative Models of Firm and Industry Dynamics," Journal of Economic Theory, 79(1).

Pakes, A., and P. McGuire (2001): "Stochastic Algorithms, Symmetric Markov Perfect Equilibrium, and the Curse of Dimensionality," Econometrica, 69(5).

Powell, W. B. (2007): Approximate Dynamic Programming: Solving the Curses of Dimensionality, Wiley Series in Probability and Statistics. John Wiley and Sons.

Syverson, C. (2004a): "Market Structure and Productivity: A Concrete Example," Journal of Political Economy, 112(6).

Syverson, C. (2004b): "Product Substitutability and Productivity Dispersion," Review of Economics and Statistics, 86(2).


More information

Comparison of Regression Lines

Comparison of Regression Lines STATGRAPHICS Rev. 9/13/2013 Comparson of Regresson Lnes Summary... 1 Data Input... 3 Analyss Summary... 4 Plot of Ftted Model... 6 Condtonal Sums of Squares... 6 Analyss Optons... 7 Forecasts... 8 Confdence

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

DO NOT OPEN THE QUESTION PAPER UNTIL INSTRUCTED TO DO SO BY THE CHIEF INVIGILATOR. Introductory Econometrics 1 hour 30 minutes

DO NOT OPEN THE QUESTION PAPER UNTIL INSTRUCTED TO DO SO BY THE CHIEF INVIGILATOR. Introductory Econometrics 1 hour 30 minutes 25/6 Canddates Only January Examnatons 26 Student Number: Desk Number:...... DO NOT OPEN THE QUESTION PAPER UNTIL INSTRUCTED TO DO SO BY THE CHIEF INVIGILATOR Department Module Code Module Ttle Exam Duraton

More information

Sampling Theory MODULE VII LECTURE - 23 VARYING PROBABILITY SAMPLING

Sampling Theory MODULE VII LECTURE - 23 VARYING PROBABILITY SAMPLING Samplng heory MODULE VII LECURE - 3 VARYIG PROBABILIY SAMPLIG DR. SHALABH DEPARME OF MAHEMAICS AD SAISICS IDIA ISIUE OF ECHOLOGY KAPUR he smple random samplng scheme provdes a random sample where every

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

2016 Wiley. Study Session 2: Ethical and Professional Standards Application

2016 Wiley. Study Session 2: Ethical and Professional Standards Application 6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton

More information

4.3 Poisson Regression

4.3 Poisson Regression of teratvely reweghted least squares regressons (the IRLS algorthm). We do wthout gvng further detals, but nstead focus on the practcal applcaton. > glm(survval~log(weght)+age, famly="bnomal", data=baby)

More information

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers

Psychology 282 Lecture #24 Outline Regression Diagnostics: Outliers Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.

More information

January Examinations 2015

January Examinations 2015 24/5 Canddates Only January Examnatons 25 DO NOT OPEN THE QUESTION PAPER UNTIL INSTRUCTED TO DO SO BY THE CHIEF INVIGILATOR STUDENT CANDIDATE NO.. Department Module Code Module Ttle Exam Duraton (n words)

More information

Chapter 11: Simple Linear Regression and Correlation

Chapter 11: Simple Linear Regression and Correlation Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests

More information

Gaussian Mixture Models

Gaussian Mixture Models Lab Gaussan Mxture Models Lab Objectve: Understand the formulaton of Gaussan Mxture Models (GMMs) and how to estmate GMM parameters. You ve already seen GMMs as the observaton dstrbuton n certan contnuous

More information

Hila Etzion. Min-Seok Pang

Hila Etzion. Min-Seok Pang RESERCH RTICLE COPLEENTRY ONLINE SERVICES IN COPETITIVE RKETS: INTINING PROFITILITY IN THE PRESENCE OF NETWORK EFFECTS Hla Etzon Department of Technology and Operatons, Stephen. Ross School of usness,

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Estimation: Part 2. Chapter GREG estimation

Estimation: Part 2. Chapter GREG estimation Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information