OPTIMAL ADAPTIVE TESTING: INFORMATIVENESS AND INCENTIVES


RAHUL DEB AND COLIN STEWART

MARCH 29, 2016

ABSTRACT. We introduce a learning framework in which a principal seeks to determine the ability of a strategic agent. The principal assigns a test consisting of a finite sequence of tasks. The test is adaptive: each task that is assigned can depend on the agent's past performance. The probability of success on a task is jointly determined by the agent's privately known ability and an unobserved effort level that he chooses to maximize the probability of passing the test. We identify a simple monotonicity condition under which the principal always employs the most (statistically) informative task in the optimal adaptive test. Conversely, whenever the condition is violated, we show that there are cases in which the principal strictly prefers to use less informative tasks. We discuss the implications of our results for task assignment in organizations with the aim of determining suitable candidates for promotion.

1. INTRODUCTION

In this paper, we introduce a learning framework in which a principal seeks to determine the privately known ability of a strategic agent. Our exercise is primarily motivated by the problem of a manager choosing task assignments for her workers in order to determine suitable candidates for promotion. Task rotation is one important way in which firms learn about worker ability (see, for instance, Meyer (1994) and Ortega (2001)). This is because workers differ in their abilities across tasks (Gibbons and Waldman (2004) refer to this as task-specific human capital) and so, in particular, the workers' performance on different tasks provides varying amounts of information to the manager about their ability.1 However, workers who are privately informed about their ability can, through unobservable actions, affect the outcomes on their assigned tasks and thereby affect the information that the manager receives.
For instance, the incentives for an employee seeking promotion to exert effort (or strategically shirk) on any given task depend on how performance affects subsequent task assignments, and ultimately his probability of promotion (see, for example, DeVaro and Gürtler, 2015). When do managers need to worry about such strategic behavior and, conversely, when can they maximize learning by assigning more informative tasks and expect workers to avoid strategic shirking?

DEPARTMENT OF ECONOMICS, UNIVERSITY OF TORONTO. E-mail addresses: rahul.deb@utoronto.ca, colinbstewart@gmail.com. This paper was previously circulated with the shorter title "Optimal Adaptive Testing." We would like to thank Heski Bar-Isaac, Dirk Bergemann, Rohan Dutta, Amanda Friedenberg, Sean Horan, Johannes Hörner, Marcin Peski, John Quah, Phil Reny, Larry Samuelson, Ed Schlee, Balazs Szentes and various seminar and conference participants for helpful comments and suggestions. Rya Sciban and Young Wu provided excellent research assistance. We are very grateful for the financial support provided by the Social Sciences and Humanities Research Council of Canada.

1 Learning a worker's ability by observing their performance on differentially informative tasks is a problem that goes back to at least Prescott and Visscher (1980).

Using this motivating example as a point of departure, our aim is to develop a new dynamic learning model that simultaneously features adverse selection, moral hazard and no transfers. It is the presence of all three of these aspects that makes our framework distinct from the previous literature. Our model fits numerous other applications. Examples include interviewing to determine whether a candidate is suitable for a job opening, or standardized testing with the aim of uncovering a student's ability. As in the task assignment problem, in these scenarios, information is obtained by observing the agent's performance over a sequence of questions, and the principal's choice of which question to assign may depend on the agent's past performance. Additionally, the agent can, to an extent, control the path of questioning by strategic responses.

At a more abstract level, our exercise builds on the classic "sequential choice of experiments" problem in statistics (see, for instance, Chapter 14 of DeGroot, 2005). In this problem, a researcher who wants to learn about an unknown parameter has at her disposal a collection of experiments, each of which is associated with a different distribution of signals about the parameter. In one formulation, the principal can run a fixed number of experiments, and chooses each experiment sequentially only after observing the outcome of the preceding one. A key result in this literature pertains to the case in which one experiment is more informative, in the sense of Blackwell (1953), than all others available to the researcher. In this case, the optimal strategy is independent of the history and simply involves repeatedly drawing from the most informative experiment. We refer to this as Blackwell's result (see Corollary 4.4 in DeGroot, 1962). We introduce strategic behavior into this framework and ask: how does strategic behavior by the agent affect the optimal choice of experiments? Specifically, does Blackwell's result carry over? Following the literature on standardized testing, we refer to the optimal task assignment problem that we study as an adaptive testing problem.
The principal has a fixed number of time periods (for instance, the tenure clock in academic institutions or the duration of an interview) over which to evaluate the agent, and a finite collection of different tasks. The agent's probability of success on a particular task depends on his ability (or type) and his choice of action (or effort), neither of which is directly observable to the principal. For instance, the agent may deliberately choose actions that lead to failure if doing so leads to future paths of tasks that are more likely to make him look better. Higher actions correspond to a greater probability of success. The principal first commits to a test. The test begins by assigning the agent a task. Upon seeing the assigned task, the agent chooses his effort level. Depending on the realized success or failure on the first task, the test assigns another task to the agent in the next period, and the agent again chooses his effort. The test continues in this way, with the assigned task in each period possibly depending on the entire history of previous successes and failures. At the end of a fixed number of periods, the test issues a verdict indicating whether the agent passes or fails (is promoted or not) given the history of tasks and the agent's performance. The principal's goal is to pass the agent if and only if his type belongs to a particular set (which we refer to as the set of "good" types). As in Meyer (1994), the principal's objective is deliberately restricted to learning alone by assuming that there are no payoffs associated with task completion. The agent seeks to maximize the probability with which he passes the test.

Our main goal is to understand the effect of the agent's strategic effort choice on learning. Hence, we assume passing the test is the only incentive driving the worker, as this allows us to focus purely on learning (as otherwise, agents would also try to maximize payments received). Baker, Jensen, and Murphy (1988) provide a justification for this by observing that, in numerous organizations, promotion is the only means used for providing incentives. Additionally, for the same reason, we abstract away from cost-saving incentives by assuming that all effort levels have the same cost for the agent.

A natural benchmark is the optimal test under the assumption that the agent always chooses the highest effort. Given this strategy, designing the optimal test is essentially a special case of the sequential choice of experiments problem, which can in principle be solved by backward induction (although qualitative properties of the solution are hard to obtain except in the simplest of cases). We refer to this benchmark solution as the optimal non-strategic test (ONST). In our strategic environment, Blackwell's result does not hold in general (see Example 2). Our main result (Theorem 2) shows that it does hold if a property we refer to as group monotonicity is satisfied, namely, if there does not exist a task at which some bad type has higher ability than some good type. If group monotonicity holds, then it is optimal for the principal always to assign the most informative task and for the agent always to choose the highest effort (in particular, the optimal test coincides with the ONST). We provide a partial converse (Theorem 3) to this result, which indicates that whenever a task violates group monotonicity, there is an environment that includes that task in which always assigning the most informative task is not optimal for the principal.

Our results suggest that, in organizations with limited task breadth (which implies that good workers perform better at all tasks for given levels of effort), managers can optimally learn by assigning the most informative tasks.
By contrast, in organizations which require more task-specific specialization by employees, managers should be concerned about strategic behavior by workers affecting learning. (Prasad (2009) and Ferreira and Sah (2012) are recent examples of models where workers can be either generalists or specialists.) Similarly, strategic responses must be factored into evaluations of job candidates when they differ in their breadth and level of specialization (such as interviews for academic positions).

In a static setting, the intuition behind our main result is straightforward. Since all types can choose not to succeed on the assigned task, the principal can learn about the agent's type only if success is rewarded with a higher probability of passing the test. In that case, all types choose the highest effort since doing so maximizes the probability of success. Group monotonicity then ensures that good types have a higher probability of passing than do bad types. Since strategic behavior plays no role, assigning the most informative task is optimal for the principal. The dynamic setting is complicated by the fact that the agent must consider how his performance on each task affects the subsequent tasks that will be assigned; he may have an incentive to perform poorly on a task if doing so makes the remainder of the test easier, and thereby increases the ultimate probability of passing. For example, in job interviews, despite it reflecting badly on him, an interviewee may want to deliberately feign ignorance on a topic, fearing that the line of questioning that would otherwise follow would be more damaging. Milgrom and Roberts (1992)

(see Chapter 7) document strategic shirking in organizations where an employee's own past performance is used as a benchmark for evaluation. In our model, workers are not judged relative to their past performance; however, strategic choices of effort can be used to influence future task assignments and, ultimately, the likelihood of promotion.

It is worth stressing that in our model, even with group monotonicity, there are cases in which some types choose not to succeed on certain tasks in the optimal test (see Example 4). If, however, there is one task q that is more informative than the others, then this turns out not to be an issue. Given any test that, at some histories, assigns tasks other than q, we show that one can recursively replace each of those tasks with q together with a randomized continuation test in a way that does not make the principal worse off. While this procedure resembles Blackwell's garbling in the statistical problem, in our case one must be careful to consider how each such change affects the agent's incentives; group monotonicity ensures that any change in the agent's strategy resulting from these modifications to the test can only improve the principal's payoff.

In Section 6, we consider optimal testing when tasks are not comparable in terms of informativeness. We show that, under group monotonicity, the ONST is optimal when the agent has only two types (Theorem 4). However, when there are more than two types, this result does not hold: Example 4 shows that even if high effort is always optimal for the agent in the ONST, the principal may be able to do better by inducing some types to shirk. Example 5 and the examples in Appendix B demonstrate a wealth of possibilities (even with group monotonicity). Section 7 shows that our main results continue to hold if the principal can offer the agent a menu of tests (Theorem 5), and if she lacks the power to commit to a test.

Related Literature

Our model and results are related to several distinct strands of the literature.
The literature on career concerns (beginning with Holmström, 1999) is similar in spirit to our model in that the market is trying to learn about an agent's unknown ability by observing his output. Like our model, standard signal jamming models feature moral hazard; however, unlike our model, there is no asymmetric information between the agent and the market regarding the agent's ability, and monetary incentives are provided using contracts. In addition, these models typically do not involve task assignment by a principal. Perhaps the closest related work in this literature is Dewatripont, Jewitt, and Tirole (1999). They provide conditions under which the market may prefer a less informative monitoring technology (relating the agent's actions to performance variables) to a more informative one, and vice versa. More broadly, while more information is always beneficial in a non-strategic single-agent setting, it can sometimes be detrimental in multi-agent environments. Examples include oligopolies (Mirman, Samuelson, and Schlee, 1994) and elections (Ashworth, de Mesquita, and Friedenberg, 2015). While more information is never harmful to the principal in our setting (since she could always choose to ignore it), our focus is on whether less informative tasks can be used to alter the agent's strategy in a way that generates more information.

Our model provides a starting point for studying how managers assign tasks when they benefit from learning about workers' abilities (for instance, to determine their suitability for important

projects). Unlike our setting, dynamic contracting is often modeled with pure moral hazard, where the principal chooses bonus payments in order to generate incentives to exert costly effort (see, for instance, Rogerson, 1985; Holmström and Milgrom, 1987). However, there are a few recent exceptions that feature both adverse selection and moral hazard. The work of Gerardi and Maestri (2012) and Halac, Kartik, and Liu (2016) differs from ours in focus. In these papers, the principal's goal is to learn an unknown state of the world (not the agent's type) and they characterize the optimal transfer schedule for a single task (whereas we study optimal task allocation when promotions are the only means to provide incentives). Gershkov and Perry (2012) also consider a model with transfers but, in their setting, the principal is concerned primarily with matching the complexity of the tasks (which are not assigned by the principal and are instead drawn independently in each period) and the quality of the agent.

The literature on testing forecasters (for surveys, see Foster and Vohra, 2011; Olszewski, 2015) shares with our model the aim of designing a test to uncover the type of a strategic agent (an "expert"). In that literature, the expert makes probabilistic forecasts about an unknown stochastic process, and the principal seeks to determine whether the expert knows the true probabilities or is completely ignorant. Our model differs in a number of ways; in particular, the principal assigns tasks, and the agent chooses an unobservable action that affects the true probabilities.

Finally, our work is related to the literature on multi-armed bandit problems (an overview can be found in Bergemann and Välimäki, 2006), in which a principal chooses in each period which arm to pull, just as, in our model, she chooses which task to assign and learns from the resulting outcome. The main trade-off is between maximizing short-term payoffs and the long-term gains from learning.
Our model can be thought of as a first step toward understanding bandit problems in which a strategic agent can manipulate the information received by the decision-maker.

2. MODEL

A principal (she) is trying to learn the private type of an agent (he) by observing his performance on a sequence of tasks over T periods.2 At each period t ∈ {1, ..., T}, she assigns the agent a task q_t from a finite set Q of available tasks. We interpret two identical tasks q_t = q_t' assigned at time periods t ≠ t' as two different tasks of the same difficulty; the agent being able to succeed on one of the tasks does not imply that he is sure to be able to succeed on the other. Faced with a task q_t ∈ Q, the agent chooses an effort level a_t ∈ [0, 1]; actions in the interior of the interval may be interpreted as randomization between 0 and 1. All actions have the same cost, which we normalize to zero.3 We refer to a_t = 1 as full effort, and any a_t < 1 as shirking. Depending on the agent's ability and effort choice, he may either succeed (s) or fail (f) on a given task. This outcome is observed by both the principal and the agent.

2 Note that T is exogenously fixed. If the principal could choose T, she would always (weakly) prefer it to be as large as possible. Thus, an equivalent alternate interpretation is that the principal has up to T periods to test the agent.

3 We make the assumption of identical costs across actions to focus purely on learning, as it ensures that strategic action choices are not muddied by cost-saving incentives.

Type Space: The agent's ability (which stays constant over time) is captured by his privately known type θ_i : Q → (0, 1), which belongs to a finite set Θ = {θ_1, ..., θ_I}.4 In period t, the probability of a success on a task q_t when the agent chooses effort a_t is a_t θ_i(q_t). The type determines the highest probability of success on each task, obtained when the agent chooses full effort. Zero effort implies sure failure.5 Note that, as is common in dynamic moral hazard models, the agent's probability of success on a given task is independent of events that occur before t (such as him having faced the same task before).

Before period 1, the principal announces and commits to an (adaptive) test. The test determines which task is assigned in each period depending on the agent's performance so far, and the final verdict given the history at the end of period T.

Histories: At the beginning of period t, h_t denotes a nonterminal public history (or simply a history) up to that point. Such a history lists the tasks faced by the agent and the corresponding successes or failures in periods 1, ..., t−1. The set of (nonterminal) histories is denoted by H = ∪_{t=1,...,T} (Q × {s, f})^{t−1}. We write H_{T+1} = (Q × {s, f})^T for the set of terminal histories. Similarly, h_t^A denotes a history for the agent describing his information before choosing an effort level in period t. In addition to the information contained in the history h_t, h_t^A also contains the task he currently faces.6 Thus the set of all histories for the agent is given by H^A = ∪_{t=1,...,T} (Q × {s, f})^{t−1} × Q. For example, h_3 = {(q_1, s), (q_2, f)} is the history at the beginning of period 3 in which the agent succeeded on task q_1 in the first period and failed on task q_2 in the second. The corresponding history h_3^A = {(q_1, s), (q_2, f), q_3} also includes the task in period 3.

Deterministic Tests: A deterministic test (T, V) consists of functions T : H → Q and V : H_{T+1} → {0, 1}. Given a history h_t at the beginning of period t, the task q_t assigned to the agent is T(h_t). The probability that the agent passes the test given any terminal history h_{T+1} is V(h_{T+1}).
Tests: A (random) test ρ is a distribution over deterministic tests. As mentioned above, the principal commits to the test in advance. Before period 1, a deterministic test is drawn according to ρ and assigned to the agent. The agent knows ρ but does not observe which deterministic test is realized. He can, however, update as the test proceeds based on the sequence of tasks that have been assigned so far. Note that even if the agent is facing a deterministic test, since the tasks he will face can depend on his stochastic performance so far in the test, he may not be able to perfectly predict which tasks he will face in subsequent periods.

Strategies: A strategy for type θ_i is given by a mapping σ_i : H^A → [0, 1] from histories for the agent to effort choices; given a history h_t^A in period t, the effort in period t is a_t = σ_i(h_t^A). We denote the profile of strategies by σ = (σ_1, ..., σ_I).

4 The restriction that θ_i(q) is never equal to 0 or 1 simplifies some arguments but is not necessary for any of our results.

5 The agent's ability to fail for sure is not essential as none of our results are affected by making the lowest possible effort strictly positive.

6 By not including the agent's actions in h_t^A we are implicitly excluding the possibility that the agent conditions his effort on his own past choices. Allowing for this would only complicate the notation and make no difference for our results.
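The primitives above lend themselves to a compact computational sketch. The following Python snippet (the two-period test, type names, and all numbers are hypothetical illustrations, not taken from the paper) represents a deterministic test as a task-assignment function and a verdict function, and computes a type's passing probability by recursion over histories using the success probability a_t · θ_i(q_t):

```python
# Hypothetical two-period, two-task, two-type example.
# theta[i][q] is the probability of success on task q under full effort.
theta = {
    "good": {"q": 0.8, "qp": 0.7},
    "bad":  {"q": 0.4, "qp": 0.5},
}
T = 2

def task_fn(history):
    # Deterministic test T(.): assign q first; repeat q after a success,
    # switch to the second task qp after a failure.
    if not history or history[-1][1] == "s":
        return "q"
    return "qp"

def verdict(history):
    # V(.): pass (1) iff the agent succeeded at least once.
    return 1 if any(outcome == "s" for _, outcome in history) else 0

def pass_prob(type_name, strategy, history=()):
    """Expected passing probability of a type playing `strategy`, a map
    from agent histories (past outcomes plus current task) to effort."""
    if len(history) == T:
        return verdict(history)
    q = task_fn(history)
    p = strategy(history + (q,)) * theta[type_name][q]  # a_t * theta_i(q_t)
    return (p * pass_prob(type_name, strategy, history + ((q, "s"),))
            + (1 - p) * pass_prob(type_name, strategy, history + ((q, "f"),)))

full_effort = lambda h_a: 1.0
print(pass_prob("good", full_effort))  # 0.94 under these numbers
print(pass_prob("bad", full_effort))   # 0.70
```

The same recursion evaluates any strategy profile, so it can also be used to search for a type's best response to a given test.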

Agent's Payoff: Regardless of the agent's type, his goal is to pass the test. Accordingly, faced with a deterministic test (T, V), the payoff of the agent at any terminal history h_{T+1} is the probability with which he passes, which is given by the verdict V(h_{T+1}). Given a test ρ, we denote by u_i(h; ρ, σ_i) the expected payoff of type θ_i when using strategy σ_i conditional on reaching history h ∈ H.

Principal's Beliefs: The principal's prior belief about the agent's type is given by (π_1, ..., π_I), with π_i being the probability the principal assigns to type θ_i (thus π_i ≥ 0 and ∑_{i=1}^I π_i = 1). Similarly, for any h ∈ H ∪ H_{T+1}, π(h) = (π_1(h), ..., π_I(h)) denotes the principal's beliefs at history h. We assume that each of these beliefs is consistent with Bayes' rule given the agent's strategy; in particular, at the history h_1 = ∅, (π_1(h_1), ..., π_I(h_1)) = (π_1, ..., π_I).

Principal's Payoff: The principal partitions the set of types Θ into disjoint subsets of good types {θ_1, ..., θ_{i*}} and bad types {θ_{i*+1}, ..., θ_I}, where i* ∈ {1, ..., I−1}. At any terminal history h_{T+1}, she gets a payoff of 1 if the agent passes and has a good type, −1 if the agent passes and has a bad type, and 0 if the agent fails. Therefore, her expected payoff from a deterministic test (T, V) is given by

    E_{h_{T+1}} [ ( ∑_{i=1}^{i*} π_i(h_{T+1}) − ∑_{i=i*+1}^{I} π_i(h_{T+1}) ) V(h_{T+1}) ],

where the distribution over terminal histories depends on both the test and the agent's strategy.7

One might expect the principal to receive different payoffs depending on the exact type of the agent, not only whether the type is good or bad. All of our results extend to the more general model in which she receives a payoff of γ_i from passing type θ_i, and a payoff normalized to 0 from failing any type. Assuming without loss of generality that the types are ordered so that γ_i ≥ γ_{i+1} for each i, the cutoff i* dividing good and bad types then satisfies γ_i ≥ 0 if i ≤ i* and γ_i ≤ 0 if i > i*.
The principal's problem with these more general payoffs and prior π is equivalent to the original problem with prior π̃ given by π̃_i = |γ_i| π_i / ∑_{j=1}^I |γ_j| π_j. Since our results are independent of the prior, this transformation allows us to reduce the problem to the simple binary payoff for passing the agent described above.

Optimal Test: The principal chooses and commits to a test that maximizes her payoff subject to the agent choosing his strategy optimally. Facing a test ρ, we write σ_i* to denote an optimal strategy for type θ_i, that is, a strategy satisfying σ_i* ∈ argmax_{σ_i} u_i(h_1; ρ, σ_i). Note that this implicitly requires the agent to play optimally at all histories occurring with positive probability given the strategy. Given her prior, the principal solves

    max_ρ E_{h_{T+1}} [ V(h_{T+1}) ( ∑_{i=1}^{i*} π_i(h_{T+1}) − ∑_{i=i*+1}^{I} π_i(h_{T+1}) ) ],

where the expectation is taken over terminal histories (the distribution of which depends on the test, ρ, and the strategies σ* = (σ_1*, ..., σ_I*)), and the beliefs are updated from the prior using Bayes'

7 As in Meyer (1994), we want to focus on the principal's optimal learning problem. This is why we abstract away from payoffs associated with task completion.
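The reduction from general passing payoffs γ_i to the binary model can be sketched numerically. This assumes the absolute-value weighting |γ_i| π_i normalized to sum to one (my reconstruction of the garbled formula above); the payoffs and prior are illustrative:

```python
def transformed_prior(gamma, prior):
    """Reweight the prior so that general payoffs gamma_i (>= 0 for good
    types, <= 0 for bad types) reduce to the +1/-1/0 payoff model.
    Assumes the weights |gamma_i| * pi_i, normalized to sum to one."""
    weights = [abs(g) * p for g, p in zip(gamma, prior)]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative: three types with payoffs 2, 1, -1 from passing.
print(transformed_prior([2.0, 1.0, -1.0], [0.3, 0.2, 0.5]))
```

With these numbers the transformed prior is (0.6, 0.2, 0.5)/1.3, so the bad type's weight shrinks relative to the first good type, reflecting the larger payoff at stake for that type.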

rule (wherever possible). To keep the notation simple, we do not explicitly condition the principal's beliefs π on the agent's strategy. An equivalent and convenient way to represent the principal's problem is to state it in terms of the agent's payoffs as

    max_ρ [ ∑_{i=1}^{i*} π_i v_i(ρ) − ∑_{i=i*+1}^{I} π_i v_i(ρ) ],    (1)

where v_i(ρ) := u_i(h_1; ρ, σ_i*) is the expected payoff type θ_i receives from choosing an optimal strategy in the test ρ. Note in particular that whenever some type of the agent has multiple optimal strategies, the principal is indifferent about which one he employs.

3. BENCHMARK: THE OPTIMAL NON-STRATEGIC TEST

Our main goal is to understand how strategic effort choice by the agent affects the principal's ability to learn his type. Thus a natural benchmark is the statistical problem in which the agent is assumed to choose full effort at every history. Formally, in this benchmark, the principal solves the problem

    max_{T,V} E_{h_{T+1}} [ V(h_{T+1}) ( ∑_{i=1}^{i*} π_i(h_{T+1}) − ∑_{i=i*+1}^{I} π_i(h_{T+1}) ) ],

where the distribution over terminal histories is determined by the test (T, V) together with the full-effort strategy σ_i^N(h^A) = 1 for all h^A ∈ H^A for every i. We refer to the solution (T^N, V^N) to this problem as the optimal non-strategic test (ONST). Notice that we have restricted attention to deterministic tests; we argue below that this is without loss.

In principle, it is straightforward to solve for the ONST by backward induction. The principal can first choose the optimal task at all period-T histories and beliefs, along with the optimal verdicts corresponding to the resulting terminal histories. Formally, consider any history h_T at the beginning of period T with beliefs π(h_T).
The principal chooses the task T(h_T) and verdicts V({h_T, (T(h_T), s)}) and V({h_T, (T(h_T), f)}) so that

    (T(h_T), V({h_T, (T(h_T), s)}), V({h_T, (T(h_T), f)}))
        ∈ argmax_{(q_T, v_s, v_f)} [ v_s ( ∑_{i=1}^{i*} θ_i(q_T) π_i(h_T) − ∑_{i=i*+1}^{I} θ_i(q_T) π_i(h_T) )
                                   + v_f ( ∑_{i=1}^{i*} (1 − θ_i(q_T)) π_i(h_T) − ∑_{i=i*+1}^{I} (1 − θ_i(q_T)) π_i(h_T) ) ].

The terms in the maximization are the expected payoffs to the principal when the agent succeeds and fails, respectively, at task q_T. The probability of success is based on all types choosing action a_T = 1. Note that the payoff is linear in the verdicts, so that even if randomization of verdicts is allowed, the optimal choice can always be taken to be either 0 or 1. Moreover, there is no benefit in randomizing tasks: if two tasks yield the same expected payoff, the principal can choose either one.
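The period-T step just described can be sketched directly: since the objective is linear in the verdicts, each verdict is set to 1 exactly when its coefficient is positive. A minimal Python illustration (the two-type, two-task numbers are hypothetical):

```python
def best_final_period(theta, prior, good):
    """Period-T step of the ONST backward induction: pick the task q_T and
    deterministic verdicts (v_s, v_f) maximizing the principal's expected
    payoff under full effort. theta[i][q] is a success probability,
    prior[i] the current belief pi_i(h_T), good the set of good types."""
    sign = lambda i: 1 if i in good else -1
    tasks = next(iter(theta.values())).keys()
    best = None
    for q in tasks:
        # Coefficients on the pass-after-success / pass-after-failure verdicts.
        cs = sum(sign(i) * theta[i][q] * prior[i] for i in theta)
        cf = sum(sign(i) * (1 - theta[i][q]) * prior[i] for i in theta)
        # Linearity in the verdicts: set each to 1 iff its coefficient is > 0.
        payoff = max(cs, 0.0) + max(cf, 0.0)
        if best is None or payoff > best[0]:
            best = (payoff, q, int(cs > 0), int(cf > 0))
    return best

theta = {1: {"q": 0.9, "qp": 0.6}, 2: {"q": 0.3, "qp": 0.5}}
print(best_final_period(theta, {1: 0.5, 2: 0.5}, good={1}))
```

With these numbers the first task separates the types better, so the sketch selects it and passes the agent only after a success. Iterating the same computation over earlier periods, with beliefs updated by Bayes' rule after each outcome, yields the full backward induction.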

Once tasks in period T and verdicts have been determined, it remains to derive the tasks in period T−1 and earlier. At any history h_{T−1}, the choice of task will determine the beliefs corresponding to success and failure respectively. In either case, the principal's payoff as a function of those beliefs has already been determined above. Hence the principal simply chooses the task that maximizes her expected payoff. This process can be continued all the way to period 1 to determine the optimal sequence of tasks. At each step, by the same argument as in period T, there is no benefit from randomization. Since the principal may be indifferent between tasks at some history and between verdicts at some terminal history, the ONST need not be unique.

This problem is an instance of the general sequential choice of experiments problem from statistics that we describe in the introduction. The same backward induction procedure can be applied to (theoretically) solve this more general problem. However, it is typically very difficult to explicitly characterize or to describe qualitative properties of the solution, even in relatively simple special cases that fit within our setting (Bradt and Karlin, 1956).

4. INFORMATIVENESS

Although the sequential choice of experiments problem is difficult to solve in general, there is a prominent special case that allows for a simple solution: the case in which one task is more Blackwell informative than the others.

Blackwell Informativeness: We say that a task q is more Blackwell informative than another task q' if there are numbers α, α' ∈ [0, 1] such that

    θ_i(q') = α θ_i(q) + α' (1 − θ_i(q))  for every i = 1, ..., I,    (2)

that is, the I × 2 matrix of outcome probabilities for q', with rows (θ_i(q'), 1 − θ_i(q')), equals the corresponding matrix for q multiplied by the stochastic matrix with rows (α, 1 − α) and (α', 1 − α'). This is the classic notion of informativeness. Essentially, it says that q is more informative than q' if the latter can be obtained by adding noise to (or "garbling") the former. Note that Blackwell informativeness is a partial order; it is possible for two tasks not to be ranked in terms of Blackwell informativeness.
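With two outcomes per task, condition (2) pins down α and α' from any two types with distinct success probabilities; the remaining types then serve as a consistency check. A small Python sketch (the probability vectors are illustrative):

```python
def blackwell_garbling(p, p2, tol=1e-9):
    """Test condition (2): is the task with success probabilities p2 a
    garbling of the one with probabilities p? Returns (alpha, alpha2)
    with p2[i] = alpha*p[i] + alpha2*(1 - p[i]) for all i, or None.
    Requires p[0] != p[1] so the first two types pin down the solution."""
    d = (p2[0] - p2[1]) / (p[0] - p[1])   # equals alpha - alpha2
    alpha2 = p2[0] - d * p[0]
    alpha = alpha2 + d
    fits = all(abs(alpha * a + alpha2 * (1 - a) - b) <= tol
               for a, b in zip(p, p2))
    if fits and -tol <= alpha <= 1 + tol and -tol <= alpha2 <= 1 + tol:
        return alpha, alpha2
    return None

# Illustrative: the second vector is a garbling of the first with
# alpha = .1 and alpha' = .6, so each entry equals .6 - .5 * theta_i(q).
print(blackwell_garbling([0.9, 0.5, 0.2], [0.15, 0.35, 0.5]))  # ~ (0.1, 0.6)
print(blackwell_garbling([0.9, 0.5, 0.2], [0.8, 0.3, 0.9]))    # None
```

The second call returns None because no single (α, α') pair in [0, 1]² reproduces all three success probabilities, illustrating that Blackwell informativeness is only a partial order.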
A seminal result due to Blackwell (1953) is that, in any static decision problem, regardless of the decision-maker's preferences, she is always better off with information from a more Blackwell informative experiment than from a less informative one. This result carries over to the sequential setting: if there is one experiment that is more Blackwell informative than every other, then it is optimal for the decision-maker always to use that experiment (see DeGroot, 2005). Since the ONST is a special case of this more general problem, if there is a task q that is the most Blackwell informative, then T^N(h) = q at all h ∈ H. The following is the formal statement of Blackwell's result applied to our context.

Theorem 1 (Blackwell 1953). Suppose there is a task q that is more Blackwell informative than all other tasks q' ∈ Q. Then there is an ONST in which the task q is assigned at every history.

In our setting, it is possible to strengthen this result because the principal's payoff takes a special form; Blackwell informativeness is a stronger property than what is needed to guarantee that the

ONST features only a single task. We use the term informativeness (without the additional Blackwell qualifier) to describe the weaker property appropriate for our setting.

Informativeness: Let

    θ_G(q, π) = ∑_{i ≤ i*} π_i θ_i(q) / ∑_{i ≤ i*} π_i

be the probability, given beliefs π, that success is observed on task q conditional on the agent being a good type, under the assumption that the agent chooses full effort. Similarly, let

    θ_B(q, π) = ∑_{i > i*} π_i θ_i(q) / ∑_{i > i*} π_i

be the corresponding probability of success conditional on the agent being a bad type. We say that a task q is more informative than another task q' if, for all beliefs π, there are numbers α(π), α'(π) ∈ [0, 1] such that

    θ_G(q', π) = α(π) θ_G(q, π) + α'(π) (1 − θ_G(q, π))  and
    θ_B(q', π) = α(π) θ_B(q, π) + α'(π) (1 − θ_B(q, π)).    (3)

To see that Blackwell informativeness is the stronger of these two notions, note that any α and α' that satisfy (2) must also satisfy (3) for every belief π. The following example consisting of three types and two tasks shows that the converse need not hold.

Example 1. Suppose there are three types (I = 3), and two tasks, Q = {q, q'}. Success probabilities if the agent chooses full effort are as follows:

           q      q'
    θ_1    –      –
    θ_2    –      –
    θ_3    –      –         (4)

The first column corresponds to the probability θ_i(q) of success on task q, and the second column to that on task q'. If i* = 2 (so that types θ_1 and θ_2 are good types), q is more informative than q'. Intuitively, this is because performance on task q is better at differentiating θ_3 from θ_1 and θ_2. However, if i* = 1, then q is no longer more informative than q'. This is because performance on task q' is better at differentiating θ_1 from θ_2. Thus, if the principal's beliefs assign high probabilities to θ_1 and θ_2, she can benefit more from task q', whereas if her beliefs assign high probability to types θ_1 and θ_3, she can benefit more from q. Since Blackwell informativeness is independent of the cutoff i*, neither q nor q' is more Blackwell informative than the other.
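The group-level success probabilities θ_G and θ_B behind definition (3) can be computed directly from the primitives. A short sketch with hypothetical numbers (the entries of table (4) are not reproduced here):

```python
def group_success_probs(theta, prior, i_star):
    """theta_G(q, pi) and theta_B(q, pi): belief-weighted success
    probabilities conditional on the good group (types 0..i_star-1) and
    the bad group, under full effort. theta[i][q] as before."""
    def conditional(indices, q):
        mass = sum(prior[i] for i in indices)
        return sum(prior[i] * theta[i][q] for i in indices) / mass
    good, bad = range(i_star), range(i_star, len(prior))
    return (lambda q: conditional(good, q)), (lambda q: conditional(bad, q))

# Illustrative three-type example with i* = 2.
theta = [{"q": 0.9}, {"q": 0.6}, {"q": 0.2}]
theta_G, theta_B = group_success_probs(theta, [0.3, 0.2, 0.5], i_star=2)
print(theta_G("q"), theta_B("q"))  # 0.78 and 0.2 under these numbers
```

Because θ_G and θ_B depend on the beliefs π, condition (3) must hold for every belief, which is why informativeness (unlike Blackwell informativeness) depends on the cutoff i*.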
Although weaker than Blackwell's condition (2), informativeness is still a partial order, and in many cases no element of Q is more informative than all others. However, when there exists a most informative task, our main result shows that Blackwell's result continues to hold for the design of the optimal test in our setting, even when the agent is strategic, provided that a natural monotonicity condition is satisfied. A key difficulty in extending the result is that informativeness is defined independently of the agent's actions and, as the examples in Appendix B demonstrate, in some cases the principal can benefit from strategic behavior by the agent.

5. INFORMATIVENESS AND OPTIMALITY

5.1. The Optimal Test

The following example shows that strategic behavior by the agent can cause Blackwell's result to fail in our model.

Example 2. Suppose there are three types (I = 3) and one period (T = 1), with i* = 2. There are two tasks, Q = {q, q'}, with success probabilities given by the following matrix:

           q      q'
    θ_1    –      –
    θ_2    –      –
    θ_3    –      –

The principal's prior belief is (π_1, π_2, π_3) = (.3, .2, .5). Note that task q is more Blackwell informative than q'.8 If the agent was not strategic, the optimal test would assign task q and verdicts V{(q, s)} = 0 and V{(q, f)} = 1. In this case, all types would choose a_1 = 0, yielding the principal a payoff of 0 (which is the same payoff she would get from choosing either task and V{(q, s)} = V{(q, f)} = 0). Can the principal do better? Assigning task q and reversing the verdicts makes a_1 = 1 a best response for all types of the agent but would result in a negative payoff for the principal. Instead, it is optimal for the principal to assign task q' along with verdicts V{(q', s)} = 1 and V{(q', f)} = 0. Full effort is a best response for all types and this yields a positive payoff.

Notice that in the last example, the types are not ordered in terms of their abilities on the tasks the principal can assign. In particular, for each task, a bad type can succeed with higher probability than some good type. This feature turns out to play an important role in determining whether Blackwell's result holds; our main theorem shows that the following condition is sufficient for Blackwell's result to carry over to our model.

Group Monotonicity: We say that group monotonicity holds if, for every task q ∈ Q, θ_i(q) ≥ θ_j(q) whenever i ≤ i* < j.

This assumption says that the two groups are ordered in terms of ability in a way that is independent of the task: good types are always at least as likely to succeed as bad ones when full effort is chosen.

The proof of our main result builds on a key lemma that, under the assumption of group monotonicity, provides a simple characterization of informativeness which dispenses with the unknown α(·) and α'(·), and is typically easier to verify than the original definition.

Lemma 1. Suppose group monotonicity holds.
Then a task q is more informative than q′ if and only if

θi(q)/θj(q) ≥ θi(q′)/θj(q′)  and  (1 − θj(q))/(1 − θi(q)) ≥ (1 − θj(q′))/(1 − θi(q′))

for all i ≤ i* and j > i*.

Intuitively, a task is more informative if there is a higher relative likelihood that the agent has a good type conditional on a success, and a bad type conditional on a failure. Using this lemma, it is now straightforward to verify that q is more informative than q′ in the type space (4) when i* = 2 but not when i* = 1. We are now in a position to state our main result.

⁸The corresponding values of α and ᾱ in equation (2) are .1 and .6, respectively.
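Lemma 1 reduces the informativeness comparison to a finite family of likelihood-ratio inequalities, which makes it easy to check mechanically. The following sketch uses made-up success probabilities (not those of the paper's examples) and assumes all probabilities are interior so that the ratios are well defined:

```python
# Check the Lemma 1 characterization of informativeness under group
# monotonicity: q is more informative than q' iff, for every good type
# i <= i* and bad type j > i*,
#   theta_i(q)/theta_j(q) >= theta_i(q')/theta_j(q')              (success ratio)
#   (1-theta_j(q))/(1-theta_i(q)) >= (1-theta_j(q'))/(1-theta_i(q'))  (failure ratio)
# All numbers below are hypothetical illustrations.

def more_informative(theta_q, theta_qp, i_star):
    """theta_q[i] = success probability of type i+1 on q under full effort."""
    n = len(theta_q)
    for i in range(i_star):            # good types 1..i*
        for j in range(i_star, n):     # bad types i*+1..I
            if theta_q[i] / theta_q[j] < theta_qp[i] / theta_qp[j]:
                return False
            if (1 - theta_q[j]) / (1 - theta_q[i]) < (1 - theta_qp[j]) / (1 - theta_qp[i]):
                return False
    return True

# Hypothetical three-type example with i* = 2 (types θ1 and θ2 good):
theta_q  = [0.9, 0.7, 0.3]   # task q
theta_qp = [0.8, 0.7, 0.5]   # task q'
print(more_informative(theta_q, theta_qp, i_star=2))
```

Note that the comparison is asymmetric: with these numbers q is more informative than q′, while reversing the arguments fails the success-ratio test.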

Theorem 2. Suppose that there is a task q* that is more informative than every other task q ∈ Q, and group monotonicity holds. Then any ONST is an optimal test. In particular, it is optimal for the principal to assign task q* at all histories and the full-effort strategy σN is optimal for the agent.

This result states that the principal cannot enhance learning by inducing strategic shirking through the choice of tasks, a strategy that helps her in Examples 4 and 5. If the principal assigns only the most informative task, it follows from Lemma 2 that she should assign the same verdicts as in the ONST, and the full-effort strategy is optimal for the agent.

While superficially similar, there are critical differences between Theorem 2 and Blackwell's result (Theorem 1). In the latter, where the agent is assumed to always choose the full-effort strategy, the optimality of using the most Blackwell informative task q* can be shown constructively by garbling. To see this, suppose that at some history h in the ONST, the principal assigns a task q′ ≠ q*, and let α and ᾱ denote the corresponding values solving equation (2). In this case, the principal can replace task q′ with q* and appropriately randomize the continuation tests to achieve the same outcome. More specifically, at the history {h, (q*, s)}, she can choose the continuation test following {h, (q′, s)} with probability ᾱ and, with the remaining probability 1 − ᾱ, choose the continuation test following {h, (q′, f)}. A similar randomization using α can be done at the history {h, (q*, f)}.

This construction is not sufficient to yield the result when the agent is strategic. In this case, replacing the task q′ by q* and garbling can alter incentives in a way that changes the agent's optimal strategy, and consequently, the principal's payoff. To see this, suppose that full effort is optimal for some type θi at hA = (h, q′). This implies that the agent's expected probability of passing the test is higher in the continuation test following {h, (q′, s)} than in the continuation test following {h, (q′, f)}.
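The outcome-equivalence of this garbling for a nonstrategic agent can be checked numerically. The sketch below assumes that equation (2) takes the form θi(q′) = ᾱ·θi(q*) + α·(1 − θi(q*)) (our reading of the garbling condition), borrows the α, ᾱ values from footnote 8 purely for illustration, and uses hypothetical continuation values:

```python
# Garbling sketch: replacing q' with q* and randomizing the continuation
# tests leaves a nonstrategic agent's pass probability unchanged.
# Hypothesized form of equation (2): theta_i(q') = abar*theta + alpha*(1 - theta),
# where theta = theta_i(q*). All specific numbers are illustrative.

alpha, abar = 0.1, 0.6   # borrowed from footnote 8 for illustration
w_s, w_f = 0.9, 0.3      # pass probs in the continuations after (q', s) and (q', f)

for theta in (0.2, 0.5, 0.9):                        # hypothetical theta_i(q*)
    theta_prime = abar * theta + alpha * (1 - theta)  # theta_i(q') via (2)
    # original test at h: assign q', then the corresponding continuation
    p_original = theta_prime * w_s + (1 - theta_prime) * w_f
    # modified test: assign q*, then randomize over the two continuations
    p_modified = (theta * (abar * w_s + (1 - abar) * w_f)
                  + (1 - theta) * (alpha * w_s + (1 - alpha) * w_f))
    assert abs(p_original - p_modified) < 1e-12
```

The equivalence concerns only the outcome distribution under full effort; it says nothing about incentives, which is exactly where the construction breaks down for a strategic agent.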
Now suppose the principal replaces task q′ by q* and garbles the continuation tests as described above. Type θi may no longer find full effort to be optimal. In particular, if α > ᾱ, then zero effort will be optimal after the change since failure on task q* gives a higher likelihood of obtaining the continuation test that he is more likely to pass. Therefore, the simple garbling argument does not imply Theorem 2. Instead, the proof exploits the structure of informativeness in our particular context captured by Lemma 1, which, when coupled with a backward induction argument, enables us to verify that the continuation tests can be garbled in a way that does not adversely affect incentives.

In the non-strategic benchmark model, Blackwell's result can be strengthened to eliminate less informative tasks even if there is no most informative task. More precisely, if q, q′ ∈ Q are such that q is more informative than q′, then there exists an ONST in which q′ is not assigned at any history (and thus any ONST for the set of tasks Q \ {q′} is also an ONST for the set of tasks Q). The intuition behind this result is essentially the same as for Blackwell's result: whenever a test assigns task q′, replacing it with q and suitably garbling the continuation tests yields the same joint distribution of types and verdicts.

In the strategic setting, this more general result does not hold. For example, there exist cases with one bad type in which zero effort is optimal for the bad type in the first period and full effort is strictly optimal for at least one good type; one such case is described in Example 7 in Appendix B. Letting q denote the task assigned in the first period, adding any task q̃ to the set Q that is easier than q and assigning q̃ instead of q does not change the optimal actions for any type; doing so only increases the payoffs of any type that strictly prefers full effort. Since only good types have this preference, such a change increases the principal's payoff. If, in addition, q̃ is more informative than q, then the optimal test for the set of tasks Q ∪ {q̃} is strictly better for the principal than that for the set Q, which implies that q̃ must be assigned with positive probability at some history, and the generalization of Blackwell's result fails.

5.2. On the Structure of the Model

While Theorem 2 may seem intuitive, as Example 2 indicates, it does rely on group monotonicity. The following partial converse to Theorem 2 extends the logic of Example 2 to show that, in a sense, group monotonicity is necessary for Blackwell's result to hold in the strategic setting.

Theorem 3. Suppose q is such that θi(q) < θj(q) for some i and j such that i ≤ i* < j. Then there exist q′ and π such that q is more Blackwell informative than q′, and for each test length T, if Q = {q, q′}, no optimal test assigns task q at every history h ∈ H.

The idea behind this result is that, if θi(q) < θj(q) and the test always assigns q, type j can pass with at least as high a probability as can type i. When the principal assigns high prior probability to these two types, she is better off assigning a task q′ (at least at some histories) for which θi(q′) > θj(q′) (and such a less Blackwell informative q′ always exists) in order to advantage the good type.

The next example demonstrates that, even if group monotonicity holds, Blackwell's result can also break down if we alter the structure of the agent's payoffs. When all types choose full effort, success on a task increases the principal's belief that the type is good. Not surprisingly, if some types prefer to fail the test, this can give them an incentive to shirk in a way that overturns Blackwell's result.

Example 3. Suppose there are two types (I = 2), one good and one bad, and one period (T = 1).
The principal has two tasks, Q = {q, q′}, with success probabilities given by the following matrix:

[success probability matrix: rows θ1, θ2; columns q, q′]

The principal's prior belief is (π1, π2) = (.5, .5). Compared to the main model, suppose that the principal's payoffs are the same, but the agent's payoffs are type-dependent: type θ1 prefers a verdict of 1 to 0, while type θ2 has the opposite preference. One interpretation is that verdicts represent promotions to different departments. The principal wants to promote type θ1 to the position corresponding to verdict 1 and type θ2 to the position corresponding to verdict 0, a preference that the agents share. Task q is trivially more Blackwell informative than task q′ since the performance on task q′ (conditional on full effort) conveys no information.⁹ Faced with a nonstrategic agent, the optimal test would assign task q and verdicts V({(q, s)}) = 1 and V({(q, f)}) = 0. Faced with a strategic agent,

⁹The corresponding α and ᾱ in equation (2) are both .9.

the optimal test is to assign task q′ and verdicts V({(q′, s)}) = 1 and V({(q′, f)}) = 0. In each of these tests, type θ1 will choose a1 = 1 and type θ2 will choose a1 = 0. Thus the probability with which θ2 gets verdict 0 remains the same but the probability with which θ1 gets verdict 1 is higher with the easier task q′.

6. NON-COMPARABLE TASKS

In many cases, tasks cannot be ordered by informativeness. What can we say about the design of the optimal test and its relationship to the ONST in general? The next result shows that, when group monotonicity holds, any ONST is an optimal test when there are only two types (I = 2); for strategic actions to play an important role, there must be at least three types.

Theorem 4. Suppose group monotonicity holds. If I = 2, any ONST is an optimal test and makes the full-effort strategy σN optimal for the agent.

To see why the strategy σN is optimal for the agent in some optimal test, suppose there is an optimal test in which the good type strictly prefers to shirk at some history hA. This implies that his expected payoff following a failure on the current task at hA is higher than that following a success. Now suppose the principal altered the test by replacing the continuation test following a success with that following a failure (including replacing the corresponding verdicts). This would make full effort optimal for both types since the continuation test no longer depends on success or failure at hA. Since the good type chose zero effort before the change, there is no effect on his payoff. Similarly, the bad type's payoff cannot increase: if he strictly preferred full effort before the change then he is made worse off, and otherwise his payoff is also unchanged. Therefore, this change cannot lower the principal's payoff. A similar argument applies to histories where the bad type prefers to shirk (in which case we can replace the continuation test following a failure with that following a success). Such a construction can be used inductively at all histories where there is shirking.¹⁰ Given this argument, Theorem 4 follows if σN is optimal in every ONST.
This can be seen using a similar argument to that above, except for the case in which both types strictly prefer to shirk at some history. However, it turns out that this case cannot happen when the continuation tests after both outcomes are chosen optimally.

When there are more than two types, even if group monotonicity holds, there need not be an optimal test in which the full-effort strategy is optimal. The following example shows that, even if the full-effort strategy σN is optimal in some ONST, the optimal test may differ; the principal can sometimes benefit from distorting the test relative to the ONST so as to induce shirking by some types.

Example 4. Suppose there are three types (I = 3) and three periods (T = 3), with i* = 2 (so that types θ1 and θ2 are good types). There are two tasks, Q = {q, q′}, and the success probabilities are

¹⁰The discussion has ignored the effect of a change following a given period-t history on the effort choices at all periods t′ < t; indeed, earlier actions might change. However, it is straightforward to argue that if a type's payoff goes down at a given history after such a change, the (optimal) payoff is also lower at the beginning of the test.

given by the following matrix:

[success probability matrix: rows θ1, θ2, θ3; columns q, q′]

[FIGURE 1. An ONST for Example 4. The level of a node corresponds to the time period. Inner nodes indicate the task assigned at the corresponding history, while the leaves indicate the verdicts. For instance, the rightmost node at level 3 corresponds to the period-3 history h3 = {(q, f), (q, f)} and the task assigned by the test at this history is TN(h3) = q. The verdicts following this history are 0 whether he succeeds or fails at this task.]

Note that the types are ranked in terms of ability (in particular, group monotonicity holds), and the tasks are ranked in terms of difficulty. The principal's prior belief is (π1, π2, π3) = (.06, .44, .5). The ONST (TN, VN) is represented by the tree in Figure 1. The ONST always assigns the task q. The agent passes the test if he succeeds at least twice in the three periods. Intuitively, the principal assigns a low prior probability to type θ1, and so designs the test to distinguish between types θ2 and θ3, for which q is better than q′. Given that only a single task is used, group monotonicity implies that the optimal verdicts feature a cutoff number of successes required to pass.¹¹ If the principal commits to this test, then the full-effort strategy is optimal for the agent: failure on the task assigned in any period has no effect on the tasks assigned in the future, and merely decreases the probability of passing.

Is this test optimal when the agent is strategic? Consider instead the deterministic test (T, V) described by the tree in Figure 2. This alternate test differs from the ONST in several ways. The agent now faces task q′ instead of q both in period 1 and at the period-2 history following a success. In addition, the agent can pass only at two of the terminal histories. We will argue that this test yields a higher payoff to the principal despite σN being an optimal strategy for the agent in test (TN, VN).

¹¹Note that the ONST is not unique in this case since the principal can assign either of the two tasks (keeping the verdicts the same) at histories {(q, s), (q, s)} and {(q, f), (q, f)}.
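Since the ONST assigns a single task with a cutoff of two successes, each type's pass probability under full effort is a binomial tail probability, and the payoff comparison that follows reduces to a few lines of arithmetic. The sketch below uses hypothetical per-type success probabilities on q, chosen only to be consistent with the payoff differences reported in the example (the example's own matrix is not reproduced here), together with the priors (.06, .44, .5):

```python
from math import comb

# Pass probability under the ONST of Figure 1: task q every period,
# pass iff at least `cutoff` successes in T periods under full effort.
def pass_prob(p, T=3, cutoff=2):
    return sum(comb(T, k) * p**k * (1 - p)**(T - k) for k in range(cutoff, T + 1))

# Hypothetical success probabilities on q (not taken from the paper's matrix):
theta_q = {1: 0.5, 2: 0.5, 3: 0.4}
onst_payoffs = {i: pass_prob(p) for i, p in theta_q.items()}

# Payoffs in the alternative test (T, V): theta_1 passes for sure, while
# theta_2 and theta_3 shirk twice and then need one success at q.
alt_payoffs = {1: 1.0, 2: theta_q[2], 3: theta_q[3]}

dv = {i: alt_payoffs[i] - onst_payoffs[i] for i in (1, 2, 3)}
pi = {1: 0.06, 2: 0.44, 3: 0.5}
change = pi[1] * dv[1] + pi[2] * dv[2] - pi[3] * dv[3]
print(dv, round(change, 6))   # change is positive: shirking helps the principal
```

Under these assumed probabilities the differences come out to (.5, 0, .048) and the principal's payoff change to .006, matching the figures reported in the example.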

[FIGURE 2. An optimal deterministic test for Example 4.]

By definition, (T, V) can only yield a higher payoff for the principal than does (TN, VN) if at least one type of the agent chooses to shirk at some history. This is indeed the case. Since type θ1 succeeds at task q′ for sure conditional on choosing full effort, he will choose at = 1 in each period and pass with probability 1. However, types θ2 and θ3 both prefer at = 0 in periods t = 1, 2. Following a success in period 1, two further successes are required at task q′ to get a passing verdict. In contrast, by choosing zero effort in the first two periods, the history {(q′, f), (q, f)} can be reached with probability 1, after which the agent needs only a single success at task q to pass. Consequently, this shirking strategy yields a higher payoff for types θ2 and θ3. The differences in payoffs for the three types in (T, V) relative to (TN, VN) are

Δv1 = v1(T, V) − v1(TN, VN) = 1 − .5 = .5,
Δv2 = v2(T, V) − v2(TN, VN) = .5 − .5 = 0, and
Δv3 = v3(T, V) − v3(TN, VN) = .4 − .352 = .048.

The change in the principal's payoff is

π1Δv1 + π2Δv2 − π3Δv3 = .06 × .5 + .44 × 0 − .5 × .048 = .006 > 0,

which implies that (TN, VN) is not the optimal test. In particular, the principal can benefit from the fact that the agent can choose his actions strategically.

The next example shows that the full-effort strategy is not always optimal in an ONST. In response, the principal may be able to improve on the ONST with a different test, even one that induces the same strategy for the agent.

Example 5. Suppose there are three types (I = 3) and three periods (T = 3), with i* = 2. The principal has two different tasks, Q = {q, q′}, and the success probabilities are as follows:

[success probability matrix: rows θ1, θ2, θ3; columns q, q′]

[FIGURE 3. An ONST for Example 5.]

The principal's prior belief is (π1, π2, π3) = (.5, .1, .4). Figure 3 depicts an ONST (TN, VN) for this environment. The intuition for the optimality of this test is as follows. The principal has a low prior probability that the agent's type is θ2. Task q is effective at distinguishing between types θ1 and θ3 as, loosely speaking, their ability difference is larger on that task. If there is a success on q, it greatly increases the belief that the type is θ1, and the principal will assign q again. Conversely, if there is a failure on task q (in any period), then the belief assigns zero probability to the agent having type θ1. The principal then instead switches to task q′, which is more effective than q at distinguishing between types θ2 and θ3. Since θ3 has very low ability on q′, a success on this task is a strong signal that the agent's type is not θ3, in which case the test issues a pass verdict.

Note that the full-effort strategy σN is not optimal for type θ2: he prefers to choose action 0 in period 1 and action 1 thereafter. This is because his expected payoff at the history h2 = {(q, s)} is u2(h2; TN, VN, σ2N) = .16, which is lower than his expected payoff u2(h2′; TN, VN, σ2N) = .2775 at the history h2′ = {(q, f)}. Therefore, this example demonstrates that the full-effort strategy is not always optimal for the agent in an ONST.¹² The ability of the agent to behave strategically benefits the principal since θ2 is a good type.

An optimal deterministic test (T, V) is depicted in Figure 4. Note that this test is identical to (TN, VN) except that the verdict at the terminal history {(q, s), (q, s), (q, f)} is 0 as opposed to 1. In this test, types θ1 and θ3 choose the full-effort strategy and type θ2 chooses action 0 in period 1 and action 1 subsequently. Note that the expected payoff of type θ1 remains unchanged relative to the ONST but that of type θ3 is strictly lower.
The payoff of type θ2 is identical to what he receives from optimal play in (TN, VN). Thus the payoff for the principal from the test (T, V) is higher than that from (TN, VN).

The examples in Appendix B illustrate a range of possibilities for both the optimal test and the ONST. Group monotonicity implies that, under the assumption that the agent chooses the full-effort strategy, success on each task raises the principal's belief that the agent's type is good. Nonetheless, because of the adaptive nature of the test, failure on a task can make the remainder of the test easier for some types, as shown by Example 5. Relative to choosing σN, strategic behavior by the agent can either help the principal (as in Example 5) or hurt her (as in Example 6). Further, in some cases the full-effort strategy is optimal in the optimal deterministic test but not in the ONST. Finally, unlike the ONST, for which it suffices to restrict to deterministic tests, there are cases in which there is no deterministic optimal test for the principal when the agent is strategic. Example 7 illustrates one case in which randomizing a verdict strictly benefits the principal, and another in which a test that randomizes tasks is strictly better than any that does not.

¹²Although the ONST is not unique, there is no ONST in this case for which σN is optimal.

[FIGURE 4. An optimal deterministic test for Example 5.]

7. DISCUSSION

7.1. Menus of Tests

We have so far ignored the possibility that the principal can offer a menu of tests and allow the agent to choose which test to take. While this is not typically observed in the applications we mentioned in the introduction, it may seem natural from a theoretical perspective. Formally, in this case, the principal offers a menu of M tests {ρk}, k = 1, …, M, and each type θi of the agent chooses a test ρk that maximizes his expected payoff vi(ρk). Although a nontrivial menu could in principle help to screen the different types, our main result still holds.

Theorem 5. Suppose there is a task q* that is more informative than every other task q ∈ Q. Then for any ONST, there is an optimal menu consisting only of that test.

Proof. In the proof of Theorem 2, we show that any test can be replaced by one where the most informative task q* is assigned at all histories and appropriate verdicts can be chosen so that the payoffs of the good types (weakly) increase and those of the bad types (weakly) decrease. Applying this change to every test in a menu must also increase good types' payoffs while decreasing those of bad types. Thus we can restrict attention to menus in which every test assigns task q* at every history.
But then the proof of Lemma 2 shows that replacing any test that is not an ONST with an ONST makes any good type that chooses that test better off and any bad type worse off. Therefore, by the expression for the principal's payoff in (1), replacing every test in the menu with any given ONST cannot make the principal worse off.
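All of the agent-side calculations in the examples above are instances of backward induction on the test tree: at each history, a type compares full effort against shirking (which fails for sure). A minimal sketch, with a made-up two-period test rather than one from the paper:

```python
# Backward-induction sketch of the agent's problem: an adaptive test is
# a binary tree of tasks with 0/1 verdicts at the leaves. Each type
# chooses effort in {0, 1} at every history to maximize the probability
# of a pass verdict; zero effort fails for sure, full effort succeeds
# with probability theta_i(q). Structure and numbers are hypothetical.

def value(node, theta):
    """Maximal pass probability for a type with success probs `theta`."""
    if isinstance(node, int):            # leaf: verdict 0 or 1
        return node
    task, succ, fail = node              # internal node of the test tree
    v_s, v_f = value(succ, theta), value(fail, theta)
    p = theta[task]
    full_effort = p * v_s + (1 - p) * v_f
    zero_effort = v_f                    # shirking guarantees failure
    return max(full_effort, zero_effort)

# Two-period test: task 'q' first; after a success assign 'q' again,
# after a failure switch to 'qp' (cf. the adaptive tests in Section 6).
test = ('q', ('q', 1, 0), ('qp', 1, 0))
good = {'q': 0.9, 'qp': 0.6}
bad = {'q': 0.3, 'qp': 0.8}
print(value(test, good), value(test, bad))
```

With these numbers the bad type shirks at the root, because failure routes him to a task he is better at; this is the same force at work in Example 5.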

Published as: Rahul Deb and Colin Stewart, "Optimal adaptive testing: Informativeness and incentives," Theoretical Economics 13 (2018), 1233–1274.

More information

Multicolor Sunflowers

Multicolor Sunflowers Multicolor Sunflower Dhruv Mubayi Lujia Wang October 19, 2017 Abtract A unflower i a collection of ditinct et uch that the interection of any two of them i the ame a the common interection C of all of

More information

CHAPTER 6. Estimation

CHAPTER 6. Estimation CHAPTER 6 Etimation Definition. Statitical inference i the procedure by which we reach a concluion about a population on the bai of information contained in a ample drawn from that population. Definition.

More information

Chapter 4. The Laplace Transform Method

Chapter 4. The Laplace Transform Method Chapter 4. The Laplace Tranform Method The Laplace Tranform i a tranformation, meaning that it change a function into a new function. Actually, it i a linear tranformation, becaue it convert a linear combination

More information

Beta Burr XII OR Five Parameter Beta Lomax Distribution: Remarks and Characterizations

Beta Burr XII OR Five Parameter Beta Lomax Distribution: Remarks and Characterizations Marquette Univerity e-publication@marquette Mathematic, Statitic and Computer Science Faculty Reearch and Publication Mathematic, Statitic and Computer Science, Department of 6-1-2014 Beta Burr XII OR

More information

Avoiding Forbidden Submatrices by Row Deletions

Avoiding Forbidden Submatrices by Row Deletions Avoiding Forbidden Submatrice by Row Deletion Sebatian Wernicke, Jochen Alber, Jen Gramm, Jiong Guo, and Rolf Niedermeier Wilhelm-Schickard-Intitut für Informatik, niverität Tübingen, Sand 13, D-72076

More information

Performance Evaluation

Performance Evaluation Performance Evaluation 95 (206) 40 Content lit available at ScienceDirect Performance Evaluation journal homepage: www.elevier.com/locate/peva Optimal cheduling in call center with a callback option Benjamin

More information

ON THE APPROXIMATION ERROR IN HIGH DIMENSIONAL MODEL REPRESENTATION. Xiaoqun Wang

ON THE APPROXIMATION ERROR IN HIGH DIMENSIONAL MODEL REPRESENTATION. Xiaoqun Wang Proceeding of the 2008 Winter Simulation Conference S. J. Maon, R. R. Hill, L. Mönch, O. Roe, T. Jefferon, J. W. Fowler ed. ON THE APPROXIMATION ERROR IN HIGH DIMENSIONAL MODEL REPRESENTATION Xiaoqun Wang

More information

into a discrete time function. Recall that the table of Laplace/z-transforms is constructed by (i) selecting to get

into a discrete time function. Recall that the table of Laplace/z-transforms is constructed by (i) selecting to get Lecture 25 Introduction to Some Matlab c2d Code in Relation to Sampled Sytem here are many way to convert a continuou time function, { h( t) ; t [0, )} into a dicrete time function { h ( k) ; k {0,,, }}

More information

Lecture 10 Filtering: Applied Concepts

Lecture 10 Filtering: Applied Concepts Lecture Filtering: Applied Concept In the previou two lecture, you have learned about finite-impule-repone (FIR) and infinite-impule-repone (IIR) filter. In thee lecture, we introduced the concept of filtering

More information

Question 1 Equivalent Circuits

Question 1 Equivalent Circuits MAE 40 inear ircuit Fall 2007 Final Intruction ) Thi exam i open book You may ue whatever written material you chooe, including your cla note and textbook You may ue a hand calculator with no communication

More information

5. Fuzzy Optimization

5. Fuzzy Optimization 5. Fuzzy Optimization 1. Fuzzine: An Introduction 135 1.1. Fuzzy Memberhip Function 135 1.2. Memberhip Function Operation 136 2. Optimization in Fuzzy Environment 136 3. Fuzzy Set for Water Allocation

More information

arxiv: v1 [math.mg] 25 Aug 2011

arxiv: v1 [math.mg] 25 Aug 2011 ABSORBING ANGLES, STEINER MINIMAL TREES, AND ANTIPODALITY HORST MARTINI, KONRAD J. SWANEPOEL, AND P. OLOFF DE WET arxiv:08.5046v [math.mg] 25 Aug 20 Abtract. We give a new proof that a tar {op i : i =,...,

More information

Lecture 7: Testing Distributions

Lecture 7: Testing Distributions CSE 5: Sublinear (and Streaming) Algorithm Spring 014 Lecture 7: Teting Ditribution April 1, 014 Lecturer: Paul Beame Scribe: Paul Beame 1 Teting Uniformity of Ditribution We return today to property teting

More information

MATEMATIK Datum: Tid: eftermiddag. A.Heintz Telefonvakt: Anders Martinsson Tel.:

MATEMATIK Datum: Tid: eftermiddag. A.Heintz Telefonvakt: Anders Martinsson Tel.: MATEMATIK Datum: 20-08-25 Tid: eftermiddag GU, Chalmer Hjälpmedel: inga A.Heintz Telefonvakt: Ander Martinon Tel.: 073-07926. Löningar till tenta i ODE och matematik modellering, MMG5, MVE6. Define what

More information

Z a>2 s 1n = X L - m. X L = m + Z a>2 s 1n X L = The decision rule for this one-tail test is

Z a>2 s 1n = X L - m. X L = m + Z a>2 s 1n X L = The decision rule for this one-tail test is M09_BERE8380_12_OM_C09.QD 2/21/11 3:44 PM Page 1 9.6 The Power of a Tet 9.6 The Power of a Tet 1 Section 9.1 defined Type I and Type II error and their aociated rik. Recall that a repreent the probability

More information

Secretary problems with competing employers

Secretary problems with competing employers Secretary problem with competing employer Nicole Immorlica 1, Robert Kleinberg 2, and Mohammad Mahdian 1 1 Microoft Reearch, One Microoft Way, Redmond, WA. {nickle,mahdian}@microoft.com 2 UC Berkeley Computer

More information

List Coloring Graphs

List Coloring Graphs Lit Coloring Graph February 6, 004 LIST COLORINGS AND CHOICE NUMBER Thomaen Long Grotzch girth 5 verion Thomaen Long Let G be a connected planar graph of girth at leat 5. Let A be a et of vertice in G

More information

Suggestions - Problem Set (a) Show the discriminant condition (1) takes the form. ln ln, # # R R

Suggestions - Problem Set (a) Show the discriminant condition (1) takes the form. ln ln, # # R R Suggetion - Problem Set 3 4.2 (a) Show the dicriminant condition (1) take the form x D Ð.. Ñ. D.. D. ln ln, a deired. We then replace the quantitie. 3ß D3 by their etimate to get the proper form for thi

More information

SHEAR STRENGTHENING OF RC BEAMS WITH NSM CFRP LAMINATES: EXPERIMENTAL RESEARCH AND ANALYTICAL FORMULATION. S. J. E. Dias 1 and J. A. O.

SHEAR STRENGTHENING OF RC BEAMS WITH NSM CFRP LAMINATES: EXPERIMENTAL RESEARCH AND ANALYTICAL FORMULATION. S. J. E. Dias 1 and J. A. O. SHEAR STRENGTHENING OF RC BEAMS WITH NSM CFRP LAMINATES: EXPERIMENTAL RESEARCH AND ANALYTICAL FORMULATION S. J. E. Dia 1 and J. A. O. Barro 2 1 Aitant Pro., ISISE, Dep. o Civil Eng., Univ. o Minho, Azurém,

More information

Linear Momentum. calculate the momentum of an object solve problems involving the conservation of momentum. Labs, Activities & Demonstrations:

Linear Momentum. calculate the momentum of an object solve problems involving the conservation of momentum. Labs, Activities & Demonstrations: Add Important Linear Momentum Page: 369 Note/Cue Here NGSS Standard: HS-PS2-2 Linear Momentum MA Curriculum Framework (2006): 2.5 AP Phyic 1 Learning Objective: 3.D.1.1, 3.D.2.1, 3.D.2.2, 3.D.2.3, 3.D.2.4,

More information

μ + = σ = D 4 σ = D 3 σ = σ = All units in parts (a) and (b) are in V. (1) x chart: Center = μ = 0.75 UCL =

μ + = σ = D 4 σ = D 3 σ = σ = All units in parts (a) and (b) are in V. (1) x chart: Center = μ = 0.75 UCL = Our online Tutor are available 4*7 to provide Help with Proce control ytem Homework/Aignment or a long term Graduate/Undergraduate Proce control ytem Project. Our Tutor being experienced and proficient

More information

Computers and Mathematics with Applications. Sharp algebraic periodicity conditions for linear higher order

Computers and Mathematics with Applications. Sharp algebraic periodicity conditions for linear higher order Computer and Mathematic with Application 64 (2012) 2262 2274 Content lit available at SciVere ScienceDirect Computer and Mathematic with Application journal homepage: wwweleviercom/locate/camwa Sharp algebraic

More information

Alternate Dispersion Measures in Replicated Factorial Experiments

Alternate Dispersion Measures in Replicated Factorial Experiments Alternate Diperion Meaure in Replicated Factorial Experiment Neal A. Mackertich The Raytheon Company, Sudbury MA 02421 Jame C. Benneyan Northeatern Univerity, Boton MA 02115 Peter D. Krau The Raytheon

More information

Non-myopic Strategies in Prediction Markets

Non-myopic Strategies in Prediction Markets Non-myopic Strategie in Prediction Market Stanko Dimitrov Department of Indutrial and Operation Engineering Univerity of Michigan, 205 Beal Avenue, Ann Arbor, MI 4809-27, USA dimitro@umich.edu Rahul Sami

More information

Stochastic Neoclassical Growth Model

Stochastic Neoclassical Growth Model Stochatic Neoclaical Growth Model Michael Bar May 22, 28 Content Introduction 2 2 Stochatic NGM 2 3 Productivity Proce 4 3. Mean........................................ 5 3.2 Variance......................................

More information

The Impact of Imperfect Scheduling on Cross-Layer Rate. Control in Multihop Wireless Networks

The Impact of Imperfect Scheduling on Cross-Layer Rate. Control in Multihop Wireless Networks The mpact of mperfect Scheduling on Cro-Layer Rate Control in Multihop Wirele Network Xiaojun Lin and Ne B. Shroff Center for Wirele Sytem and Application (CWSA) School of Electrical and Computer Engineering,

More information

Memoryle Strategie in Concurrent Game with Reachability Objective Λ Krihnendu Chatterjee y Luca de Alfaro x Thoma A. Henzinger y;z y EECS, Univerity o

Memoryle Strategie in Concurrent Game with Reachability Objective Λ Krihnendu Chatterjee y Luca de Alfaro x Thoma A. Henzinger y;z y EECS, Univerity o Memoryle Strategie in Concurrent Game with Reachability Objective Krihnendu Chatterjee, Luca de Alfaro and Thoma A. Henzinger Report No. UCB/CSD-5-1406 Augut 2005 Computer Science Diviion (EECS) Univerity

More information

Savage in the Market 1

Savage in the Market 1 Savage in the Market 1 Federico Echenique Caltech Kota Saito Caltech January 22, 2015 1 We thank Kim Border and Chri Chamber for inpiration, comment and advice. Matt Jackon uggetion led to ome of the application

More information

Approximating discrete probability distributions with Bayesian networks

Approximating discrete probability distributions with Bayesian networks Approximating dicrete probability ditribution with Bayeian network Jon Williamon Department of Philoophy King College, Str and, London, WC2R 2LS, UK Abtract I generalie the argument of [Chow & Liu 1968]

More information

Imperfect Signaling and the Local Credibility Test

Imperfect Signaling and the Local Credibility Test Imperfect Signaling and the Local Credibility Tet Hongbin Cai, John Riley and Lixin Ye* Abtract In thi paper we tudy equilibrium refinement in ignaling model. We propoe a Local Credibility Tet (LCT) which

More information

In presenting the dissertation as a partial fulfillment of the requirements for an advanced degree from the Georgia Institute of Technology, I agree

In presenting the dissertation as a partial fulfillment of the requirements for an advanced degree from the Georgia Institute of Technology, I agree In preenting the diertation a a partial fulfillment of the requirement for an advanced degree from the Georgia Intitute of Technology, I agree that the Library of the Intitute hall make it available for

More information

Lecture 8: Period Finding: Simon s Problem over Z N

Lecture 8: Period Finding: Simon s Problem over Z N Quantum Computation (CMU 8-859BB, Fall 205) Lecture 8: Period Finding: Simon Problem over Z October 5, 205 Lecturer: John Wright Scribe: icola Rech Problem A mentioned previouly, period finding i a rephraing

More information

A categorical characterization of relative entropy on standard Borel spaces

A categorical characterization of relative entropy on standard Borel spaces MFPS 2017 A categorical characterization o relative entropy on tandard Borel pace Nicola Gagné 1,2 School o Computer Science McGill Univerity Montréal, Québec, Canada Prakah Panangaden 1,3 School o Computer

More information

DIFFERENTIAL EQUATIONS Laplace Transforms. Paul Dawkins

DIFFERENTIAL EQUATIONS Laplace Transforms. Paul Dawkins DIFFERENTIAL EQUATIONS Laplace Tranform Paul Dawkin Table of Content Preface... Laplace Tranform... Introduction... The Definition... 5 Laplace Tranform... 9 Invere Laplace Tranform... Step Function...

More information

Changes in Fresh and Saltwater Movement in a Coastal Aquifer by Land Surface Alteration

Changes in Fresh and Saltwater Movement in a Coastal Aquifer by Land Surface Alteration Firt International Conerence on Saltwater Intruion and Coatal Aquier Monitoring, Modeling, and Management. Eaouira, Morocco, April 3 5, 1 Change in Freh and Saltwater Movement in a Coatal Aquier by Land

More information

DYNAMIC MODELS FOR CONTROLLER DESIGN

DYNAMIC MODELS FOR CONTROLLER DESIGN DYNAMIC MODELS FOR CONTROLLER DESIGN M.T. Tham (996,999) Dept. of Chemical and Proce Engineering Newcatle upon Tyne, NE 7RU, UK.. INTRODUCTION The problem of deigning a good control ytem i baically that

More information

An Inequality for Nonnegative Matrices and the Inverse Eigenvalue Problem

An Inequality for Nonnegative Matrices and the Inverse Eigenvalue Problem An Inequality for Nonnegative Matrice and the Invere Eigenvalue Problem Robert Ream Program in Mathematical Science The Univerity of Texa at Dalla Box 83688, Richardon, Texa 7583-688 Abtract We preent

More information

ORIGINAL ARTICLE Electron Mobility in InP at Low Electric Field Application

ORIGINAL ARTICLE Electron Mobility in InP at Low Electric Field Application International Archive o Applied Science and Technology Volume [] March : 99-4 ISSN: 976-488 Society o Education, India Webite: www.oeagra.com/iaat.htm OIGINAL ATICLE Electron Mobility in InP at Low Electric

More information

arxiv: v3 [quant-ph] 23 Nov 2011

arxiv: v3 [quant-ph] 23 Nov 2011 Generalized Bell Inequality Experiment and Computation arxiv:1108.4798v3 [quant-ph] 23 Nov 2011 Matty J. Hoban, 1, 2 Joel J. Wallman, 3 and Dan E. Browne 1 1 Department of Phyic and Atronomy, Univerity

More information

EC381/MN308 Probability and Some Statistics. Lecture 7 - Outline. Chapter Cumulative Distribution Function (CDF) Continuous Random Variables

EC381/MN308 Probability and Some Statistics. Lecture 7 - Outline. Chapter Cumulative Distribution Function (CDF) Continuous Random Variables EC38/MN38 Probability and Some Statitic Yanni Pachalidi yannip@bu.edu, http://ionia.bu.edu/ Lecture 7 - Outline. Continuou Random Variable Dept. of Manufacturing Engineering Dept. of Electrical and Computer

More information

A Bluffer s Guide to... Sphericity

A Bluffer s Guide to... Sphericity A Bluffer Guide to Sphericity Andy Field Univerity of Suex The ue of repeated meaure, where the ame ubject are teted under a number of condition, ha numerou practical and tatitical benefit. For one thing

More information

MAE140 Linear Circuits Fall 2012 Final, December 13th

MAE140 Linear Circuits Fall 2012 Final, December 13th MAE40 Linear Circuit Fall 202 Final, December 3th Intruction. Thi exam i open book. You may ue whatever written material you chooe, including your cla note and textbook. You may ue a hand calculator with

More information

Control Systems Analysis and Design by the Root-Locus Method

Control Systems Analysis and Design by the Root-Locus Method 6 Control Sytem Analyi and Deign by the Root-Locu Method 6 1 INTRODUCTION The baic characteritic of the tranient repone of a cloed-loop ytem i cloely related to the location of the cloed-loop pole. If

More information

Online scheduling of jobs with favorite machines

Online scheduling of jobs with favorite machines Online cheduling o job with avorite machine Cong Chen 1, Paolo Penna, and Yineng Xu 1,3 arxiv:181.01343v1 [c.ds] 4 Dec 018 1 School o Management, Xi an Jiaotong Univerity, Xi an, China Department o Computer

More information

The Laplace Transform (Intro)

The Laplace Transform (Intro) 4 The Laplace Tranform (Intro) The Laplace tranform i a mathematical tool baed on integration that ha a number of application It particular, it can implify the olving of many differential equation We will

More information

66 Lecture 3 Random Search Tree i unique. Lemma 3. Let X and Y be totally ordered et, and let be a function aigning a ditinct riority in Y to each ele

66 Lecture 3 Random Search Tree i unique. Lemma 3. Let X and Y be totally ordered et, and let be a function aigning a ditinct riority in Y to each ele Lecture 3 Random Search Tree In thi lecture we will decribe a very imle robabilitic data tructure that allow inert, delete, and memberhi tet (among other oeration) in exected logarithmic time. Thee reult

More information

The Use of MDL to Select among Computational Models of Cognition

The Use of MDL to Select among Computational Models of Cognition The Ue of DL to Select among Computational odel of Cognition In J. yung, ark A. Pitt & Shaobo Zhang Vijay Balaubramanian Department of Pychology David Rittenhoue Laboratorie Ohio State Univerity Univerity

More information

Bayesian-Based Decision Making for Object Search and Characterization

Bayesian-Based Decision Making for Object Search and Characterization 9 American Control Conference Hyatt Regency Riverfront, St. Loui, MO, USA June -, 9 WeC9. Bayeian-Baed Deciion Making for Object Search and Characterization Y. Wang and I. I. Huein Abtract Thi paper focue

More information

TRIPLE SOLUTIONS FOR THE ONE-DIMENSIONAL

TRIPLE SOLUTIONS FOR THE ONE-DIMENSIONAL GLASNIK MATEMATIČKI Vol. 38583, 73 84 TRIPLE SOLUTIONS FOR THE ONE-DIMENSIONAL p-laplacian Haihen Lü, Donal O Regan and Ravi P. Agarwal Academy of Mathematic and Sytem Science, Beijing, China, National

More information

ALLOCATING BANDWIDTH FOR BURSTY CONNECTIONS

ALLOCATING BANDWIDTH FOR BURSTY CONNECTIONS SIAM J. COMPUT. Vol. 30, No. 1, pp. 191 217 c 2000 Society for Indutrial and Applied Mathematic ALLOCATING BANDWIDTH FOR BURSTY CONNECTIONS JON KLEINBERG, YUVAL RABANI, AND ÉVA TARDOS Abtract. In thi paper,

More information

Improving the Efficiency of a Digital Filtering Scheme for Diabatic Initialization

Improving the Efficiency of a Digital Filtering Scheme for Diabatic Initialization 1976 MONTHLY WEATHER REVIEW VOLUME 15 Improving the Efficiency of a Digital Filtering Scheme for Diabatic Initialization PETER LYNCH Met Éireann, Dublin, Ireland DOMINIQUE GIARD CNRM/GMAP, Météo-France,

More information

White Rose Research Online URL for this paper: Version: Accepted Version

White Rose Research Online URL for this paper:   Version: Accepted Version Thi i a repoitory copy of Identification of nonlinear ytem with non-peritent excitation uing an iterative forward orthogonal leat quare regreion algorithm. White Roe Reearch Online URL for thi paper: http://eprint.whiteroe.ac.uk/107314/

More information

Research on sound insulation of multiple-layer structure with porous material and air-layer

Research on sound insulation of multiple-layer structure with porous material and air-layer Reearch on ound inulation o multiple-layer tructure with porou material and air-layer Guoeng Bai 1 ; Pei Zhan; Fuheng Sui; Jun Yang Key Laboratory o Noie and Vibration Reearch Intitute o Acoutic Chinee

More information

Lecture 10: Recursive Contracts and Endogenous Market Incompleteness

Lecture 10: Recursive Contracts and Endogenous Market Incompleteness Lecture 0: Recurive Contract and Endogenou Market Incompletene Florian Scheuer Why are the market for inuring againt idioyncratic rik imperfect/miing? Methodologically: recurive contract ( dynamic programing

More information

GNSS Solutions: What is the carrier phase measurement? How is it generated in GNSS receivers? Simply put, the carrier phase

GNSS Solutions: What is the carrier phase measurement? How is it generated in GNSS receivers? Simply put, the carrier phase GNSS Solution: Carrier phae and it meaurement for GNSS GNSS Solution i a regular column featuring quetion and anwer about technical apect of GNSS. Reader are invited to end their quetion to the columnit,

More information

Predicting the Performance of Teams of Bounded Rational Decision-makers Using a Markov Chain Model

Predicting the Performance of Teams of Bounded Rational Decision-makers Using a Markov Chain Model The InTITuTe for ytem reearch Ir TechnIcal report 2013-14 Predicting the Performance of Team of Bounded Rational Deciion-maer Uing a Marov Chain Model Jeffrey Herrmann Ir develop, applie and teache advanced

More information