Online Appendix for "Managerial Attention and Worker Performance" by Marina Halac and Andrea Prat

This Online Appendix contains the proof of our result for the undiscounted limit discussed in Section 2 of the paper, the proof of Proposition 2, details for the discussion of the forward-looking agent case described in Section 4, and an analysis of discontinuous equilibria.

A1 Undiscounted limit

Denote $a_\infty \equiv \lim_{\tau\to\infty} a_\tau$. We prove the following result:

Proposition A1. Fix any set of parameters $(\lambda,\mu,b)$ and consider the limit as $r$ goes to zero. Let $F < \overline{F}$ so that the equilibrium of Proposition 1 exists. As $F$ approaches $\overline{F}$, $a_\infty$ vanishes, so the probability of returning to high performance goes to zero as time passes without recognition.

Proof. Consider the value of recognition $\Pi_\tau$ for $\tau \le \tau^*$, given by equation (26). In the limit as $r \to 0$, we have
$$\Pi_\tau = \mu b\,x_\tau\big(F + V^L_\infty\big) + \lambda\mu b\int_0^\tau x_t\,dt + \mu^2 b\,x_\infty\,\frac{a_\infty}{\lambda}, \tag{A1}$$
where $a_\infty = r V^L_\infty$. Using (27) and substituting with (22),
$$a_\infty^2 = \frac{\lambda\mu b\,x_\infty}{F} - \mu^2 b\,x_\infty(1-x_\infty)\,F. \tag{A2}$$
Substituting (22) and (A2) in (A1) and differentiating with respect to $\tau$ yields
$$\dot\Pi_\tau = -\big(\mu^2 b\,x_\tau + \lambda\big)\Pi_\tau + \lambda\Big[\mu b(\overline{x} - x_\tau) + \mu b\int_0^\tau x_t\,dt - F x_\tau\Big].$$
Solving this differential equation with initial condition $\Pi_0 = 0$ gives that for $\tau \le \tau^*$,
$$\Pi_\tau = \int_0^\tau \lambda\Big[\mu b(\overline{x} - x_s) + \mu b\int_0^s x_t\,dt - F x_s\Big]\,e^{-\int_s^\tau(\mu^2 b\,x_i + \lambda)\,di}\,ds. \tag{A3}$$
Following the same steps as in the proof of Proposition 1, an equilibrium is a value of $\tau^* \in (0,\infty)$ such that (i) $a_\infty \ge 0$ and (ii) $\mu a_{\tau^*}\Pi_{\tau^*} = \lambda F$. For condition (i), note that the right-hand side of (A2) is increasing in $x_\infty$ and
$$\frac{\partial x_\infty}{\partial\tau^*} = -\lambda x_\infty - x_\infty^2(1-x_\infty)\mu^2 b < 0,$$
so $a_\infty$ is decreasing in $\tau^*$. Note also that the value of $\tau^*$ that makes $a_\infty = 0$ is finite. Hence, making the dependence of $a_\infty$ on $\tau^*$ explicit, $a_\infty(\tau^*) \ge 0$ is equivalent to $\tau^* \le \tau_{\max}$ for $\tau_{\max}$ defined by $a_\infty(\tau_{\max}) = 0$. Note that $\tau_{\max}$ is a continuous and differentiable function of parameters. Using (A3), condition (ii) is equivalent to
$$\overline{x}\int_0^{\tau^*}\lambda\Big[\mu b(\overline{x} - x_s) + \mu b\int_0^s x_t\,dt - F x_s\Big]\,e^{-\int_s^{\tau^*}(\mu^2 b\,x_i+\lambda)\,di}\,ds = \frac{\lambda F}{\mu^2 b}. \tag{A4}$$
Denote the left-hand side of (A4) by $\mathrm{lhs}(\tau^*)$ and the right-hand side by $\mathrm{rhs}$. We show that if $\tau^*$ is an equilibrium, then $\mathrm{lhs}(\tau^*)$ is strictly increasing at $\tau^*$. Differentiating $\mathrm{lhs}(\tau^*)$ with respect to $\tau^*$, substituting with $\partial\overline{x}/\partial\tau^* = -\lambda\overline{x} - \overline{x}^2(1-\overline{x})\mu^2 b$ and (A4), and canceling and rearranging terms yields
$$\frac{\partial\,\mathrm{lhs}(\tau^*)}{\partial\tau^*} = \lambda\overline{x}\,(\mu b + F)\big[\lambda\overline{x} + \overline{x}^2(1-\overline{x})\mu^2 b\big] + \lambda\mu b\,\overline{x}\int_0^{\tau^*} e^{-\int_s^{\tau^*}(\mu^2 b\,x_i+\lambda)\,di}\,ds > 0.$$
Since both $\mathrm{lhs}(\tau^*)$ and $\mathrm{rhs}$ are continuous, this implies that the equilibrium threshold time is unique: there exists a unique point where $\mathrm{lhs}(\tau^*) = \mathrm{rhs}$. Moreover, the fact that the derivative of $\mathrm{lhs}(\tau^*)$ is bounded away from zero allows us to apply the implicit function theorem and obtain that the equilibrium is continuous (in fact differentiable) in the parameters. Hence, given an original equilibrium with $\tau^* < \tau_{\max}$, a new equilibrium with $\tau^* < \tau_{\max}$ exists for any local change of parameters.

We next show that increasing $F$ reduces $a_\infty$. Together with the result above, this implies that starting from any given continuous equilibrium with investment, one can increase $F$ until $a_\infty$ becomes arbitrarily close to zero in equilibrium. Note that for a fixed $\tau^*$, $\mathrm{rhs}$ increases when $F$ increases whereas $\mathrm{lhs}(\tau^*)$ decreases pointwise. Therefore, the point $\tau^*$ at which $\mathrm{lhs}(\tau^*) = \mathrm{rhs}$ increases when $F$ increases. Note that $x_\infty$ depends on $F$ only through $\tau^*$, and $\partial\tau^*/\partial F > 0$ implies
$\partial x_\infty/\partial F < 0$. Hence, using (A2),
$$\frac{\partial a_\infty^2}{\partial F} = \Big[\frac{\lambda\mu b}{F} - \mu^2 b(1-2x_\infty)F\Big]\frac{\partial x_\infty}{\partial F} - \frac{\lambda\mu b\,x_\infty}{F^2} - \mu^2 b\,x_\infty(1-x_\infty) < 0.$$
Q.E.D.

A2 Proof of Proposition 2

The case with $\mu > \sigma$ is analogous to that studied in Proposition 1 and thus omitted. Consider $\mu \le \sigma$. The construction of the equilibrium is simple. Given a threshold time $\hat b \in (0,\infty)$, the law of motion for the agent's beliefs on $[0,\hat b]$ is given by (9). The solution to this differential equation with initial condition $x_0 = 1$ yields the agent's beliefs $x_\tau$ and the agent's effort $a_\tau = (\mu b - \sigma b)\,x_\tau$ for $\tau \in [0,\hat b]$. Since the right-hand side of (9) is Lipschitz continuous in $x_\tau$, it follows from the Picard-Lindelöf theorem that there exists a solution and it is unique. Note that this solution does not depend on $\hat b$, and that both $x_\tau$ and $a_\tau$ are decreasing for all $\tau < \hat b$. Denote the values at $\hat b$ by $\hat x \equiv x_{\hat b}$ and $\hat a \equiv a_{\hat b}$; we omit the dependence on $\hat b$ in what follows. As explained in the text, (10) implies that the agent's effort must be constant for all $\tau \ge \hat b$. Therefore, given continuity of $x_\tau$, the agent's beliefs and effort must be $x_\tau = \hat x$ and $a_\tau = \hat a$ for all $\tau \ge \hat b$. Moreover, given these constant values, the principal's investment is also pinned down: setting $\dot x_\tau = 0$ in (8), we obtain that for all $\tau \ge \hat b$, $q_\tau$ must be equal to
$$\hat q = \frac{\lambda\hat x}{1-\hat x} + \hat x\big[\mu\hat a + \sigma(1-\hat a)\big].$$

Consider now the claim that any continuous equilibrium with positive investment must take this form. First, note that this equilibrium is the unique continuous equilibrium with positive investment where, at each time, the principal either is indifferent or does not have incentives to invest. This follows from (10), which implies that in any continuous equilibrium, the agent's effort and the principal's value of recognition, $\Pi_\tau$, must be constant for all times $\tau \ge \tilde\tau$ if the principal is indifferent between investing and not investing at $\tilde\tau$. Next, consider continuous equilibria in which the principal has strict incentives to invest over some time interval. By the same reasoning as in the proof of Proposition 1, there exists $\bar\tau > 0$ such that the principal has strict incentives to invest at $\tau \in [0,\bar\tau]$. However, this requires $[\mu a_\tau + \sigma(1-a_\tau)]\,\Pi_\tau \ge (\lambda+r)F$, which cannot be satisfied since $\Pi_0 = 0$. Thus, a continuous
equilibrium in which the principal has strict incentives to invest does not exist, and the claim follows.

Finally, we prove the claim in fn. 18 of the paper. As explained above, the solution to (9) (with initial condition $x_0 = 1$) uniquely determines $x_\tau$, and thus $a_\tau = (\mu b - \sigma b)\,x_\tau$, for $\tau \le \hat b$, independently of the value of $\hat b$. Moreover, note that for any given $\hat b$, the values of $a_\tau$, $V^L_\tau$ and $V^H_\tau$ are pinned down for $\tau \ge \hat b$ — these values are $a_\tau = \hat a$, $V^L_\tau = \hat a/r$, and $V^H_\tau = V^L_\tau + F$ — and as a result the values of $V^L_\tau$ and $V^H_\tau$ are also pinned down for $\tau \le \hat b$:
$$V^L_\tau = \int_\tau^{\hat b} e^{-r(s-\tau)}\,a_s\,ds + e^{-r(\hat b-\tau)}\,\frac{\hat a}{r},$$
$$V^H_\tau = \int_\tau^{\hat b} e^{-(\lambda+r)(s-\tau)-\int_\tau^s[\mu a_z + \sigma(1-a_z)]\,dz}\Big(a_s + \lambda V^L_s + [\mu a_s + \sigma(1-a_s)]\,V^H_0\Big)\,ds + e^{-(\lambda+r)(\hat b-\tau)-\int_\tau^{\hat b}[\mu a_z+\sigma(1-a_z)]\,dz}\Big(\frac{\hat a}{r} + F\Big). \tag{A5}$$
Using (A5), it follows that for any given $\hat b$, the value of recognition at $\hat b$, $\Pi_{\hat b} = V^H_0 - V^H_{\hat b}$, is given by
$$\Pi_{\hat b} = \int_0^{\hat b} e^{-(\lambda+r)s-\int_0^s[\mu a_z+\sigma(1-a_z)]\,dz}\Big\{(a_s - \hat a) + \lambda\big(V^L_s - V^L_{\hat b}\big) + [\mu a_s + \sigma(1-a_s)]\,\Pi_{\hat b} - (\lambda+r)F\Big\}\,ds.$$
It is immediate to verify that $\Pi_{\hat b}$ is strictly increasing in $\hat b$ and is thus bounded above by $\lim_{\hat b\to\infty}\Pi_{\hat b}$, which is finite. Note that $\mu\hat a + \sigma(1-\hat a)$ is also strictly increasing in $\hat b$ and is bounded above by $\sigma$. Therefore, it follows that there exists $\hat F > 0$ such that a time $\hat b$ at which (10) is satisfied (i.e., $\Pi_{\hat b}\,[\mu a_{\hat b} + \sigma(1-a_{\hat b})] = (\lambda+r)F$) exists if and only if $F \le \hat F$, and such a time $\hat b$ is unique. Given the construction and claims above, this proves that a continuous equilibrium with positive investment exists if and only if $F$ is small enough, and such an equilibrium is unique.

A3 Details for Section 4

In this section, we describe how we solve numerically the case of a forward-looking agent discussed in Section 4.

Agent's problem. The agent's expected payoff at $\tau = 0$ is
$$U_0 = \int_0^\infty e^{-r\tau}(1-R_\tau)\Big[\mu x_\tau a_\tau(b + U_0) - \frac{a_\tau^2}{2}\Big]\,d\tau, \tag{BC}$$
where $R_\tau \equiv 1 - e^{-\int_0^\tau \mu x_s a_s\,ds}$. The agent's optimization problem is
$$\max_{\{a_\tau\}} \; U_0 = \int_0^\infty e^{-r\tau}(1-R_\tau)\Big[\mu x_\tau a_\tau(b+U_0) - \frac{a_\tau^2}{2}\Big]\,d\tau$$
subject to
$$\dot x_\tau = -\lambda x_\tau - x_\tau(1-x_\tau)\mu a_\tau + (1-x_\tau)q_\tau, \tag{A6}$$
$$\dot R_\tau = (1-R_\tau)\mu a_\tau x_\tau, \tag{A7}$$
$$x_0 = 1,\quad R_0 = 0. \tag{BC1}$$
Note that given the principal's equilibrium strategy, the agent faces a relatively simple single-agent experimentation problem where the evolution of the underlying state (the principal's type) depends only on recognition, as the principal's investment is only a function of the time that has passed since recognition (and her type). The agent's actions affect both the payoff process and the learning process, and the forward-looking agent takes this into account when choosing effort. In particular, we solve for the agent's optimal sequence of efforts taking into account how his beliefs will evolve and affect effort choices depending on the efforts he chooses. This computation ensures that no deviation (including double deviations) is profitable for the agent.

For multipliers $\gamma_{1\tau}$, $\gamma_{2\tau}$, the Hamiltonian is:
$$H = \int_0^\infty\Big\{e^{-r\tau}(1-R_\tau)\Big[\mu x_\tau a_\tau(b+U_0) - \frac{a_\tau^2}{2}\Big] + \gamma_{1\tau}\dot x_\tau + \gamma_{2\tau}\dot R_\tau\Big\}\,d\tau.$$
The first-order conditions with respect to $a_\tau$, $x_\tau$ and $R_\tau$ yield
$$0 = e^{-r\tau}(1-R_\tau)\big[\mu x_\tau(b+U_0) - a_\tau\big] - \gamma_{1\tau}x_\tau(1-x_\tau)\mu + \gamma_{2\tau}(1-R_\tau)\mu x_\tau,$$
$$-\dot\gamma_{1\tau} = e^{-r\tau}(1-R_\tau)\mu a_\tau(b+U_0) - \gamma_{1\tau}\big[\lambda + (1-2x_\tau)\mu a_\tau + q_\tau\big] + \gamma_{2\tau}(1-R_\tau)\mu a_\tau,$$
$$-\dot\gamma_{2\tau} = -e^{-r\tau}\Big[\mu x_\tau a_\tau(b+U_0) - \frac{1}{2}a_\tau^2\Big] - \gamma_{2\tau}\mu a_\tau x_\tau.$$
Replacing with $m_{1\tau} = \gamma_{1\tau}e^{r\tau}$ and $m_{2\tau} = \gamma_{2\tau}e^{r\tau}$,
$$0 = (1-R_\tau)\big[\mu x_\tau(b+U_0) - a_\tau\big] - m_{1\tau}x_\tau(1-x_\tau)\mu + m_{2\tau}(1-R_\tau)\mu x_\tau, \tag{A8}$$
$$-\dot m_{1\tau} + rm_{1\tau} = (1-R_\tau)\mu a_\tau(b+U_0) - m_{1\tau}\big[\lambda + (1-2x_\tau)\mu a_\tau + q_\tau\big] + m_{2\tau}(1-R_\tau)\mu a_\tau, \tag{A9}$$
$$-\dot m_{2\tau} + rm_{2\tau} = -\Big[\mu x_\tau a_\tau(b+U_0) - \frac{1}{2}a_\tau^2\Big] - m_{2\tau}\mu a_\tau x_\tau. \tag{A10}$$
The transversality conditions on $m_1$ and $m_2$ are
$$\lim_{\tau\to\infty} e^{-r\tau}m_{1\tau} = \lim_{\tau\to\infty} e^{-r\tau}m_{2\tau} = 0. \tag{BC2}$$
Given the principal's investment $q_\tau$, the agent's effort and beliefs are determined by equations (A6)-(A10) and boundary conditions (BC)-(BC2).

Equilibrium dynamics and smooth pasting. We consider an equilibrium with threshold time $\tau^* \in (0,\infty)$ so that the principal does not invest at $\tau < \tau^*$ and she mixes between investing and not investing at $\tau \ge \tau^*$. As in the case of a myopic agent, we have that for $\tau < \tau^*$,
$$\dot\Delta_\tau = (\lambda+r)\Delta_\tau - \mu a_\tau\Pi_\tau, \tag{A11}$$
$$\Delta_\tau = V^H_\tau - V^L_\tau, \tag{A12}$$
$$\dot V^L_\tau = -a_\tau + rV^L_\tau, \tag{A13}$$
where $\Pi_\tau \equiv V^H_0 - V^H_\tau$ is the principal's value of recognition, with boundary conditions
$$\Delta_{\tau^*} = F,\quad \Pi_{\tau^*} = \bar\Pi,\quad\text{and}\quad V^L_{\tau^*} = \bar V^L, \tag{BC3}$$
where $\bar\Pi = \frac{(\lambda+r)F}{\mu a_{\tau^*}}$ and $\bar V^L$ is derived below.

For $\tau \ge \tau^*$, $\Delta_\tau = F$, so the system is
$$\mu a_\tau\Pi_\tau = (\lambda+r)F, \tag{A14}$$
$$\Delta_\tau = V^H_\tau - V^L_\tau, \tag{A15}$$
$$\dot V^L_\tau = -a_\tau + rV^L_\tau, \tag{A16}$$
with boundary conditions
$$\Delta_{\tau^*} = F,\quad \Pi_{\tau^*} = \bar\Pi,\quad\text{and}\quad V^L_{\tau^*} = \bar V^L. \tag{BC4}$$
The value of $\bar V^L$ is obtained from smooth pasting: we require that $a_\tau$ and $x_\tau$ be continuously differentiable at $\tau^*$. This gives
$$\bar V^L = \frac{a_{\tau^*}}{r} = \frac{1}{r}\left[\frac{(r+\lambda)F}{\mu} + \frac{\dot a_{\tau^*}\,\mu^2}{(\lambda+r)F}\right].$$
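To illustrate the agent-side system numerically, the sketch below (Python) integrates (A6)-(A7) by forward Euler with $q_\tau = 0$ and accumulates the payoff in (BC). As a simplification, the full first-order conditions (A8)-(A10) are replaced by the static effort rule $a = \mu x (b + U_0)$, and the fixed point is closed in the single unknown $U_0$ by bisection — a one-dimensional preview of the search described in the next subsection. The effort rule and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def implied_payoff(U0, mu=0.5, b=1.0, lam=0.1, r=0.2, T=100.0, dt=0.01):
    """Integrate (A6)-(A7) with q = 0 by forward Euler and return the payoff
    implied by the guess U0 via (BC). The static rule a = mu*x*(b + U0)
    replaces the first-order conditions (A8)-(A10) (an assumption)."""
    x, R, U, t = 1.0, 0.0, 0.0, 0.0
    while t < T:
        a = mu * x * (b + U0)                                # static effort rule
        flow = (1.0 - R) * (mu * x * a * (b + U0) - 0.5 * a * a)
        U += math.exp(-r * t) * flow * dt                    # accumulate (BC)
        x_new = x + (-lam * x - x * (1.0 - x) * mu * a) * dt  # (A6) with q = 0
        R_new = R + (1.0 - R) * mu * a * x * dt               # (A7)
        x, R, t = x_new, R_new, t + dt
    return U

def solve_U0(lo=0.0, hi=50.0, tol=1e-6):
    """Bisection on the fixed-point condition implied_payoff(U0) = U0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if implied_payoff(mid) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

U0_star = solve_U0()
```

The full procedure instead searches on a grid over the four initial values (the value of recognition at $0$, $U_0$, $m_{1,0}$, $m_{2,0}$) and additionally enforces the transversality conditions (BC2).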
Solving for the equilibrium. We find values $(\Delta_0, U_0, m_{1,0}, m_{2,0})$ such that equations (A6)-(A16) and boundary conditions (BC)-(BC4) are satisfied. Begin by fixing a set of initial values $(\Delta_0, U_0, m_{1,0}, m_{2,0})$. We proceed as follows:

1. Solve the agent's problem for $\tau < \tau^*$. Given $(\Delta_0, U_0, m_{1,0}, m_{2,0})$ and initial conditions (BC1), and setting $q_\tau = 0$, we can solve (A6)-(A10) on $[0,\tau^*]$. We obtain $a_\tau$, $x_\tau$, $R_\tau$, $m_{1\tau}$, and $m_{2\tau}$ for $\tau < \tau^*$.

2. Solve the system characterizing equilibrium dynamics for $\tau \ge \tau^*$. We solve (A14)-(A16) given the boundary conditions (BC4). We obtain $a_\tau$, $V^L_\tau$, $V^H_\tau$, and $\Pi_\tau$ for $\tau \ge \tau^*$.

3. Solve for the agent's beliefs and the principal's investment for $\tau \ge \tau^*$. We obtain the beliefs $x_\tau$ on $[\tau^*,\infty)$ by inputting the effort path $a_\tau$ obtained in step 2 into the agent's problem (A6)-(A10). Then, having solved for $x_\tau$ and $a_\tau$, we can solve for the investment $q_\tau$ on $[\tau^*,\infty)$. We obtain $q_\tau$, $x_\tau$, $R_\tau$, $m_{1\tau}$, and $m_{2\tau}$ for $\tau \ge \tau^*$.

4. Solve the system characterizing equilibrium dynamics for $\tau < \tau^*$. We solve (A11)-(A13) given boundary conditions (BC3). Note that the value of $V^H_0$ is unknown here, but we can solve the system because $a_\tau$ is pinned down at this point. We obtain $V^L_\tau$, $V^H_\tau$, and $\Delta_\tau$ for $\tau < \tau^*$.

5. Compare solutions to initial values. Having solved for all variables, compute now the resulting values of the value of recognition and the agent's expected payoff at time $\tau = 0$, which we denote by $\tilde\Delta_0$ and $\tilde U_0$ respectively, and the limits $\lim_{\tau\to\infty} e^{-r\tau}m_{1\tau}$ and $\lim_{\tau\to\infty} e^{-r\tau}m_{2\tau}$. If, given initial values $(\Delta_0, U_0, m_{1,0}, m_{2,0})$, we obtain $\tilde\Delta_0 = \Delta_0$, $\tilde U_0 = U_0$, $\lim_{\tau\to\infty} e^{-r\tau}m_{1\tau} = 0$, and $\lim_{\tau\to\infty} e^{-r\tau}m_{2\tau} = 0$, then we have found an equilibrium. Otherwise we change the initial values, searching on a grid of $(\Delta_0, U_0, m_{1,0}, m_{2,0})$, until these four conditions are satisfied up to some precision target.

A4 Discontinuous equilibria

Consider the setting of Section 1. Our analysis in the paper restricted attention to equilibria in which the agent's belief as a function of the time since recognition, $x_\tau$, is continuous. In this section, we study equilibria in which this belief can jump. Because such equilibria can in principle take many arbitrary forms, we focus on a simple class of discontinuous equilibria that are stationary.
We show that the principal prefers the continuous equilibrium characterized in Proposition 1 to any discontinuous equilibrium in this class.
We define a stationary discontinuous equilibrium as an equilibrium in which the principal does not invest except at countably many points $\tau_1, \tau_2, \ldots$ such that, for all $n \in \mathbb{N} = \{1,2,\ldots\}$, (i) $\tau_{n+1} = \tau_n + \delta$ for some $\delta > 0$, and (ii) the principal invests with an atom of probability $\pi > 0$ at $\tau_n$. Denote the set of times at which the principal invests by $J = \{\tau_1, \tau_1+\delta, \tau_1+2\delta, \ldots\}$. $\tau_1$, $\delta$, and $\pi$ are such that for some $\bar x^- < \bar x^+ \le 1$, the agent's belief that the principal is a high type satisfies $x_{\tau^-} = \bar x^-$ and $x_{\tau^+} = \bar x^+$ for all $\tau \in J$. Let $\bar\tau \equiv \min\{\tau : x_\tau = \bar x^+\}$; note that $\tau_1 = \bar\tau + \delta$. Figure A1 depicts a discontinuous equilibrium. (While the scale makes it difficult to see, the values of all variables shown in the figure are strictly positive at all $\tau$.) At each time $\tau \in J$ at which the principal invests, the agent's belief $x_\tau$ jumps from $\bar x^-$ to $\bar x^+$, and so effort $a_\tau$ jumps from $a^- = \mu b\,\bar x^-$ to $a^+ = \mu b\,\bar x^+$. At all other times $\tau \notin J$, the evolution of $x_\tau$ is given by the law of motion (3), the same one that describes the agent's belief over $[0,\tau^*]$ in the continuous equilibrium. Note that since the principal has incentives to invest only at the instants $\tau \in J$, she must be indifferent between investing and not investing at these times.¹ Also, by construction, the low type's and high type's expected payoffs are the same at each $\tau \in J \cup \{\bar\tau\}$, and hence the principal is also indifferent at $\bar\tau$. Analogous to (6) and (7), it follows that $\Delta_\tau = F$ and $\mu a_{\tau^+}\Pi_\tau = (\lambda+r)F$ at all $\tau \in J \cup \{\bar\tau\}$, where $\Delta_\tau = V^H_\tau - V^L_\tau$ and $\Pi_\tau = V^H_0 - V^H_\tau$ is the value of recognition.

It is worth noting that in any stationary discontinuous equilibrium, $\delta$ must be bounded from below by a strictly positive value.² Although the smooth-pasting condition need not be satisfied in a discontinuous equilibrium, roughly speaking the intuition is related to that for smooth pasting in the continuous equilibrium: if $\delta$ is too small, the principal's indifference between investing and not investing when she invests would imply that she has strict incentives to invest at a previous point. Thus, as discussed in the paper, an equilibrium where the agent's belief is constant from (approximately) the time at which the principal starts investing does not exist.

Comparing with the continuous equilibrium of Proposition 1, we find:³

Proposition A2.
The principal's expected payoff at $\tau = 0$, $V^H_0$, is higher in the continuous equilibrium of Proposition 1 than in any stationary discontinuous equilibrium.

¹ If the principal had strict incentives to invest at $\tau \in J$, she would invest over a time interval $[\tau - \varepsilon, \tau + \varepsilon]$ for some $\varepsilon > 0$.

² To prove this, we can show that $\bar x^+ < 1$, which implies that if $\pi$ (and thus $\delta$) were to go to zero, then $\tau_1 - \bar\tau$ and $\bar x^+ - \bar x^-$ would go to zero. However, in this limit, the discontinuous equilibrium would yield a higher payoff for the principal at $\tau = 0$, $V^H_0$, than the continuous equilibrium of Proposition 1, contradicting Proposition A2 below. A formal proof of the claim that $\bar x^+ < 1$ is available from the authors upon request.

³ A welfare analysis of the agent is uninteresting because the agent is myopic. A myopic agent is indifferent between the continuous and discontinuous equilibria at time $\tau = 0$; at any other time, he prefers the equilibrium that induces higher effort.
[Figure A1: six panels plotting x, a, ph, Y, Rec, and L against time.]

Figure A1: Dynamics in the continuous equilibrium (solid lines) and the discontinuous equilibrium (dashed lines). Parameters are the same as in Figure 1. Rec is the unconditional instantaneous probability of recognition, given by $\mu x_\tau a_\tau$. The vertical lines indicate the times $\tau^*$ and $\bar\tau$.
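The sawtooth belief path in Figure A1 can be reproduced with a small simulation. Between investment dates the belief decays under an assumed explicit form of the law of motion (3), $\dot x = -\lambda x - \mu^2 b\,x^2(1-x)$ (i.e., with myopic effort $a = \mu b x$ substituted in), and whenever it reaches $\bar x^-$ the investment atom sends it back up to $\bar x^+$. All parameter values and the jump levels are illustrative, not those of Figure 1.

```python
# Illustrative simulation of the discontinuous-equilibrium belief path.
# Assumed decay law: x' = -lam*x - mu^2 * b * x^2 * (1 - x)  (effort a = mu*b*x).
# Whenever x falls to x_minus, the belief jumps back to x_plus (investment date).

def belief_path(lam=0.1, mu=0.5, b=1.0, x_minus=0.4, x_plus=0.6,
                T=30.0, dt=0.01):
    xs, x, t = [], 1.0, 0.0
    while t < T:
        if x <= x_minus:               # investment atom: belief jumps up
            x = x_plus
        x += (-lam * x - mu**2 * b * x**2 * (1.0 - x)) * dt
        xs.append(x)
        t += dt
    return xs

path = belief_path()
jumps = sum(1 for i in range(1, len(path)) if path[i] - path[i - 1] > 0.1)
```

The simulated path decays from $x_0 = 1$, hits $\bar x^-$ at $\tau_1$, and thereafter cycles between $\bar x^+$ and $\bar x^-$ at the constant spacing $\delta$, matching the stationary structure described above.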
Proof. Using superscripts $d$ and $c$ to denote variables in the discontinuous equilibrium and the continuous equilibrium respectively, we have $x^d_\tau = x^c_\tau$ and $a^d_\tau = a^c_\tau$ at all $\tau \le \min\{\tau^*, \tau_1\}$, and $x^d_{\tau+\delta} = x^d_\tau$ and $a^d_{\tau+\delta} = a^d_\tau$ for all $\tau \in J$. As for the principal's incentives, as noted, indifference implies $\Delta^d_\tau = F$ and $\mu a^d_\tau\Pi^d_\tau = (\lambda+r)F$ at each $\tau \in J \cup \{\bar\tau\}$, where $\Pi^i_\tau \equiv V^{Hi}_0 - V^{Hi}_\tau$ for $i = d, c$, and at any $\tau \notin J \cup \{\bar\tau\}$ we must have $\Delta^d_\tau < F$. Finally, we will use the fact that in the continuous equilibrium, $\mu a^c_\tau\Pi^c_\tau \le (\lambda+r)F$ for all $\tau$. This follows from the proof of Proposition 1, where we show that the equilibrium threshold time is such that the left-hand side of (32) is less than the right-hand side at all $\tau$.

We now proceed by proving two claims.

Claim 1. If $\bar\tau > \tau^*$, then $V^{Hd}_0 \le V^{Hc}_0$.

Proof of Claim 1. Suppose by contradiction that $\bar\tau > \tau^*$ and $V^{Hd}_0 > V^{Hc}_0$. Note that $\bar\tau > \tau^*$ implies
$$\mu a^d_{\bar\tau}\Pi^d_{\bar\tau} = (\lambda+r)F \ge \mu a^c_{\bar\tau}\Pi^c_{\bar\tau},$$
where $a^d_{\bar\tau} < a^c_{\bar\tau}$. Therefore,
$$\Pi^d_{\bar\tau} = V^{Hd}_0 - V^{Hd}_{\bar\tau} > V^{Hc}_0 - V^{Hc}_{\bar\tau} = \Pi^c_{\bar\tau}. \tag{A17}$$
Now note that we can write
$$V^{Hd}_0 = \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s \mu a^d_e\,de}\Big(a^d_s + \lambda V^{Ld}_s + \mu a^d_s\,V^{Hd}_0\Big)\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a^d_e\,de}\,V^{Hd}_{\bar\tau}$$
$$< \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s \mu a^c_e\,de}\Big[a^c_s + \lambda\Big(\int_s^{\bar\tau} e^{-r(e-s)}a^c_e\,de + e^{-r(\bar\tau-s)}V^{Ld}_{\bar\tau}\Big) + \mu a^c_s\,V^{Hd}_0\Big]\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a^c_e\,de}\,V^{Hd}_{\bar\tau},$$
where the inequality follows from the fact that $a^d_s = a^c_s$ for $s \in [0,\tau^*]$, $a^d_s < a^c_s$ for $s \in (\tau^*,\bar\tau]$, and $\Pi^d_s > 0$ for $s \in (0,\bar\tau]$. It then follows that
$$V^{Hd}_0 - V^{Hc}_0 < \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s\mu a^c_e\,de}\Big[\lambda e^{-r(\bar\tau-s)}\big(V^{Ld}_{\bar\tau} - V^{Lc}_{\bar\tau}\big) + \mu a^c_s\big(V^{Hd}_0 - V^{Hc}_0\big)\Big]\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a^c_e\,de}\big(V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}\big). \tag{A18}$$
Note that $V^{Ld}_{\bar\tau} = V^{Hd}_{\bar\tau} - F$ and $V^{Lc}_{\bar\tau} = V^{Hc}_{\bar\tau} - F$; hence, substituting,
$$V^{Hd}_0 - V^{Hc}_0 < \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s\mu a^c_e\,de}\Big[\lambda e^{-r(\bar\tau-s)}\big(V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}\big) + \mu a^c_s\big(V^{Hd}_0 - V^{Hc}_0\big)\Big]\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a^c_e\,de}\big(V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}\big). \tag{A19}$$
Recall that by the contradiction assumption, $V^{Hd}_0 - V^{Hc}_0 > 0$. But then (A19) requires $V^{Hd}_0 - V^{Hc}_0 < V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}$, contradicting (A17).⁴ ∥

Claim 2. If $\bar\tau \le \tau^*$, then $V^{Hd}_0 \le V^{Hc}_0$.

Proof of Claim 2. Suppose by contradiction that $\bar\tau \le \tau^*$ and $V^{Hd}_0 > V^{Hc}_0$. Note that $\bar\tau \le \tau^*$ implies
$$\mu a^d_{\bar\tau}\Pi^d_{\bar\tau} = (\lambda+r)F \ge \mu a^c_{\bar\tau}\Pi^c_{\bar\tau}.$$
Note that $a^d_\tau = a^c_\tau$ for $\tau \in [0,\bar\tau]$. Hence, we obtain
$$\Pi^d_{\bar\tau} = V^{Hd}_0 - V^{Hd}_{\bar\tau} \ge V^{Hc}_0 - V^{Hc}_{\bar\tau} = \Pi^c_{\bar\tau}. \tag{A20}$$
Now note that given $a^d_\tau = a^c_\tau$ for $\tau \in [0,\bar\tau]$, we can write
$$V^{Hd}_0 - V^{Hc}_0 = \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s\mu a^c_e\,de}\Big[\lambda\big(V^{Ld}_s - V^{Lc}_s\big) + \mu a^c_s\big(V^{Hd}_0 - V^{Hc}_0\big)\Big]\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a^c_e\,de}\big(V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}\big). \tag{A21}$$
For any $s \le \bar\tau$,
$$V^{Ld}_s - V^{Lc}_s = e^{-r(\bar\tau-s)}\big(V^{Ld}_{\bar\tau} - V^{Lc}_{\bar\tau}\big) \le e^{-r(\bar\tau-s)}\big(V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}\big),$$
where the last inequality follows from the fact that $V^{Ld}_{\bar\tau} = V^{Hd}_{\bar\tau} - F$ whereas $V^{Lc}_{\bar\tau} \ge V^{Hc}_{\bar\tau} - F$.

⁴ To see why (A19) requires $V^{Hd}_0 - V^{Hc}_0 < V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}$, divide both sides by $V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}$ under the assumption that $V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau} > 0$:
$$\frac{V^{Hd}_0 - V^{Hc}_0}{V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}} < \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s\mu a^c_e\,de}\Big[\lambda e^{-r(\bar\tau-s)} + \mu a^c_s\,\frac{V^{Hd}_0 - V^{Hc}_0}{V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}}\Big]\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a^c_e\,de}.$$
The claim follows from the fact that $e^{-r(\bar\tau-s)} \le 1$ and
$$1 = \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s\mu a^c_e\,de}\,\big[r + \lambda + \mu a^c_s\big]\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a^c_e\,de}.$$
Hence, substituting $V^{Ld}_s - V^{Lc}_s$ in (A21), we obtain
$$V^{Hd}_0 - V^{Hc}_0 \le \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s\mu a^c_e\,de}\Big[\lambda e^{-r(\bar\tau-s)}\big(V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}\big) + \mu a^c_s\big(V^{Hd}_0 - V^{Hc}_0\big)\Big]\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a^c_e\,de}\big(V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}\big). \tag{A22}$$
Recall that by the contradiction assumption, $V^{Hd}_0 - V^{Hc}_0 > 0$. But then (A22) requires $V^{Hd}_0 - V^{Hc}_0 < V^{Hd}_{\bar\tau} - V^{Hc}_{\bar\tau}$, contradicting (A20).⁵ ∥ Q.E.D.

⁵ This can be verified following steps analogous to those in fn. 4.
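The discounting identity used in fn. 4 — that the survival-weighted hazard density integrates to one, $1 = \int_0^{\bar\tau} e^{-(\lambda+r)s-\int_0^s\mu a_e\,de}\,[r+\lambda+\mu a_s]\,ds + e^{-(\lambda+r)\bar\tau-\int_0^{\bar\tau}\mu a_e\,de}$ — holds for any effort path and can be checked numerically. The effort path and parameter values below are arbitrary illustrative choices.

```python
import math

# Numerical check (midpoint rule) of the identity in fn. 4:
#   1 = int_0^tb e^{-(lam+r)s - int_0^s mu*a_e de} (r + lam + mu*a_s) ds
#       + e^{-(lam+r)*tb - int_0^tb mu*a_e de}
# for an arbitrary positive effort path a_s; all values are illustrative.

def check_identity(lam=0.1, r=0.2, mu=0.5, tb=5.0, n=100_000):
    a = lambda s: 0.3 * math.exp(-0.05 * s)   # arbitrary effort path
    ds = tb / n
    total, cum_a = 0.0, 0.0                   # cum_a ~ int_0^s mu*a_e de
    for i in range(n):
        s = (i + 0.5) * ds                    # interval midpoint
        cum_a_mid = cum_a + mu * a(s) * (ds / 2.0)
        weight = math.exp(-(lam + r) * s - cum_a_mid)
        total += weight * (r + lam + mu * a(s)) * ds
        cum_a += mu * a(s) * ds
    total += math.exp(-(lam + r) * tb - cum_a)
    return total
```

The returned value is $1$ up to discretization error; the identity is what lets the argument in fn. 4 interpret the right-hand side of (A19) as a weighted average.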