Crowdsourced Judgement Elicitation with Endogenous Proficiency


Anirban Dasgupta    Arpita Ghosh

Abstract

Crowdsourcing is now widely used to replace judgement or evaluation by an expert authority with an aggregate evaluation from a number of non-experts, in applications ranging from rating and categorizing online content all the way to evaluation of student assignments in massively open online courses (MOOCs) via peer grading. A key issue in these settings, where direct monitoring of both effort and accuracy is infeasible, is incentivizing agents in the crowd to put in effort to make good evaluations, as well as to truthfully report their evaluations. We study the design of mechanisms for crowdsourced judgement elicitation when workers strategically choose both their reports and the effort they put into their evaluations. This leads to a new family of information elicitation problems with unobservable ground truth, where an agent's proficiency (the probability with which she correctly evaluates the underlying ground truth) is endogenously determined by her strategic choice of how much effort to put into the task. Our main contribution is a simple, new mechanism for binary information elicitation for multiple tasks when agents have endogenous proficiencies, with the following properties: (i) Exerting maximum effort followed by truthful reporting of observations is a Nash equilibrium. (ii) This is the equilibrium with maximum payoff to all agents, even when agents have different maximum proficiencies, can use mixed strategies, and can choose a different strategy for each of their tasks. Our information elicitation mechanism requires only minimal bounds on the priors, asks agents to only report their own evaluations, and does not require any conditions on a diverging number of agent reports per task to achieve its incentive properties. The main idea behind our mechanism is to use the presence of multiple tasks and ratings to estimate a reporting statistic to identify and penalize low-effort agreement: the mechanism rewards agents for agreeing with another reference agent's report on the same task, but also penalizes blind agreement by subtracting out this statistic term, designed so that agents obtain rewards only when they put effort into their observations.

1 Introduction

Crowdsourcing, where a problem or task is broadcast to a crowd of potential participants for solution, is used for an increasingly wide variety of tasks on the Web. One particularly common application of crowdsourcing is in the context of making evaluations, or judgements: when the number of evaluations required is too large for a single expert, a solution is to replace the expert by an evaluation aggregated from a crowd, recruited on an online crowdsourcing platform such as Amazon Mechanical Turk. Crowdsourced judgement elicitation is now used for a number of applications such as image classification and labeling, judging the quality of online content and identifying abusive or adult content, and most recently for peer grading in online education, where Massively Open Online Courses (MOOCs) with enrollment in the hundreds of thousands crowdsource the problem of evaluating assignments submitted by students back to the class itself. While one issue in the context of crowdsourcing evaluations is how best to aggregate the evaluations obtained from the crowd, there is also a key question of eliciting the best possible evaluations from the crowd in the first place. The problem of designing incentive mechanisms for such crowdsourced judgement elicitation scenarios has two aspects.
First, suppose each worker has already evaluated, or formed a judgement on, the tasks allocated to her. Since the ground truth for each task is unknown to the system, a natural solution is to reward workers based on other workers' reports for the same task (this being the only available source of information about this ground truth) [1].

Yahoo! Labs, Sunnyvale, CA, anirban@yahoo-inc.com. Cornell University, Ithaca, NY, arpitaghosh@cornell.edu.

[1] It is of course infeasible for a requester to monitor every worker's performance on her task, since this would be a problem of the same scale as simply performing all the tasks herself. We also note that a naive approach of randomly checking some subset of evaluations, either via inserting tasks with known responses, or via random checking by the requester, turns out to be very wasteful of effort at the scale necessary to achieve the right incentives.

The problem of designing rewards to incentivize agents to truthfully report their observation, rather than, for example, a report that is more likely to agree with other agents' reports, is an information elicitation problem with unobservable ground truth. Information elicitation has recently been addressed in the literature in the context of eliciting opinions online (such as user opinions about products, or experiences with service providers); see Section 1.1. However, in those settings, agents (users) have already formed an opinion after receiving their signal (for example, a user who buys a product forms an opinion about it after buying it), so agents only need to be incentivized to incur the cost to report this opinion, and to find it more profitable to report their opinions truthfully than to report a different opinion. In the crowdsourcing settings we consider, however, the user does not have such a pre-formed, or experiential, opinion anyway, but rather forms a judgement as part of her task; further, the accuracy of this judgement depends on whether or not the agent puts in effort into it (for instance, a worker evaluating whether images contain objectionable content could put in no effort and declare all images to be clean, or put in effort into identifying which images are actually appropriate; a similar choice applies in other contexts like peer grading). A key issue in these crowdsourced judgement elicitation scenarios is therefore incentivizing effort [2], that is, ensuring that agents make the best judgements that they possibly can (in addition, of course, to ensuring that they then truthfully report this observation). This leads to a new kind of information elicitation problem where an agent's proficiency now depends on her effort choice, and so is endogenous and unknown to the system: even if an agent's maximum proficiency is known, the actual proficiency with which she performs a task is an endogenous, strategic choice and therefore cannot be assumed as fixed or given. A mechanism for information elicitation in this setting should make it most beneficial, if not the only beneficial strategy, for agents to not just report their observations truthfully, but to also make the best observations they can in the first place. Also, it is even more important now to ensure that the payoffs from all agents always blindly reporting the same observation (for instance, declaring all content to be good) are strictly smaller than the payoffs from truthfully reporting what was actually observed, since declaring all tasks to be of some predecided type requires no effort and therefore incurs no cost, whereas actually putting in effort into making observations will incur a nonzero cost. Finally, unlike mechanisms designed for settings where a large audience is being polled for its opinion about a single event, a mechanism here must retain its incentive properties even when there are only a few reports per task; this is because it can be infeasible, due to either monetary or effort constraints, to solicit reports from a large number of agents for each task. (For example, the number of tasks in peer grading scales linearly with the number of agents, limiting the number of reports available for each task since each student can only grade a few assignments; similarly, the total cost to the requester in crowdsourcing platforms such as Amazon Mechanical Turk scales linearly with the number of workers reporting on each task.)
How can we elicit the best possible evaluations from agents whose proficiency of evaluation depends on their strategically chosen effort, when the ground truth as well as the effort levels of agents are unobservable to the mechanism?

Our Contributions. We introduce a model for information elicitation with endogenous proficiency, where an agent's strategic choice of whether or not to put in effort into a task endogenously determines her proficiency (the probability of correctly evaluating the ground truth) for that task. We focus on the design of mechanisms for binary information elicitation, i.e., when the underlying ground truth is binary (corresponding to eliciting good or bad ratings). While generalizing to an arbitrary underlying type space is an immediate direction for further work, we note that a number of interesting judgement and evaluation tasks, for example identifying adult content or correctness evaluation, are indeed binary; also, even very recent literature providing improved mechanisms for information elicitation (e.g. [17, 19]), as well as experimental work on the performance of elicitation mechanisms [16, 5], focuses on models with binary ground truth. Our main contribution is a simple, new mechanism for binary information elicitation for multiple tasks when agents have endogenous proficiencies. Our mechanism has the following incentive properties. (i) Exerting maximum effort followed by truthful reporting of observations is a Nash equilibrium. (ii) This is the equilibrium with maximum payoff to all agents, even when agents have different maximum proficiencies, can use mixed strategies, and can choose a different strategy for each of their tasks. Showing that full-effort truthtelling leads to the maximum reward amongst all equilibria (including those involving mixed strategies) requires arguing about the rewards to agents in all possible equilibria that may arise.

[2] We thank David Evans (VP Education, Udacity) for pointing out this issue in the context of peer-grading applications: while students might put in their best efforts on grading screening assignments to ensure they demonstrate the minimum proficiency required to be allowed to grade, how can we be sure that they will continue to work with the same proficiency when grading homeworks outside of this screening set?

To do this, we use a matrix representation of strategies where every strategy can be written as a convex combination of basis strategies, so that maximizing a function over the set of all possible strategies is equivalent to a maximization over the space of coefficients in this convex combination. This representation lets us show that the reward to an agent over all possible strategy choices (by herself and other agents), and therefore over all equilibria, is maximized when all agents use the strategy of full-effort truthful reporting. (iii) Suppose there is some positive probability, however small, that there is some trusted agent for each task who will report on that task truthfully with proficiency greater than half. The equilibrium where all agents put in full effort and report truthfully on all their tasks is essentially the only equilibrium of our mechanism, even if the mechanism does not know the identity of the trusted agents. We note that our mechanism requires only minimal bounds on the priors and imposes no conditions on a diverging number of agent reports per task to achieve its incentive properties; to the best of our knowledge, previous mechanisms for information elicitation do not provide all these guarantees simultaneously, even when proficiency is not an endogenously determined choice (see Section 1.1 for a discussion). The main idea behind our mechanism M is the following. With just one task, it is difficult to distinguish between agreement arising from high-effort observations of the same ground truth, and blind agreement achieved by the low-effort strategy of always making the same report. We use the presence of multiple tasks and ratings to distinguish between these two scenarios and appropriately reward or penalize agents to incentivize high effort: M rewards an agent i for her report on task j agreeing with another reference agent r_j(i)'s report on the same task, but also penalizes blind agreement by subtracting out a statistic term corresponding to the part of i and r_j(i)'s agreement on task j that is to be expected anyway given their reporting statistics estimated from other tasks. This statistic term is chosen so that there is no benefit to making reports that are independent of the ground truth; the incentive properties of the mechanism follow from this property that agents obtain positive rewards only when they put effort into their evaluations.

1.1 Related Work

The problem of designing incentives for crowdsourced judgement elicitation is closely related to the growing literature on information elicitation mechanisms. The key difference between this literature (discussed in greater detail below) and our work is that agents in the settings motivating past work have opinions that are experientially formed anyway, independent of, and outside of, any mechanisms to elicit opinions, so that agents only need be incentivized to participate and truthfully report these opinions. In contrast, agents in the crowdsourcing settings we study do not have such experientially formed opinions to report: an agent makes a judgement only because it is part of her task, expending effort to form her judgement, and therefore must be incentivized to both expend this effort and then to truthfully report her evaluation. There are also other differences in terms of the models and guarantees in previous mechanisms for information elicitation; we discuss this literature below.
The peer-prediction method, introduced by Miller, Resnick and Zeckhauser [12], is a mechanism for the information elicitation problem for general outcome spaces where truthful reporting is a Nash equilibrium, using proper scoring rules to reward agents for reports that are predictive of other agents' reports. The main difference between our mechanism and [12], as well as other mechanisms based on the peer prediction method [8, 9, 18, 10, 19], is in the model of agent proficiency. In peer-prediction models, while agents can decide whether to incur the cost to participate (i.e., submit their opinion), an agent's proficiency (the distribution of opinions or evaluations conditional on ground truth) is exogenously determined (and common to all agents, and in most models, known to the center). That is, an agent might not participate at all, but if she does participate she is assumed to have some known proficiency. In contrast, in our setting, agents can choose not just whether or not to participate, but also endogenously determine their proficiency conditional on participating through their effort choice. Thus, while peer prediction mechanisms do need to incentivize agents to participate (by submitting a report), they then know the proficiency of agents who do submit a report, and therefore can, and do, dispense rewards that crucially use knowledge of this proficiency. In contrast, even agents who do submit reports in our setting cannot be assumed to be using their maximum proficiency to make their evaluations, and therefore cannot be rewarded based on any assumed level of proficiency. Additionally, truthtelling, while an equilibrium, is not necessarily the maximum-reward equilibrium in these existing peer-prediction mechanisms: [6] shows that for the mechanisms in [12, 8], the strategies of always reporting good or always reporting bad both constitute Nash equilibria, at least one of which generates higher payoff than truthtelling.

Such blind strategy equilibria can be eliminated and honest reporting made the unique Nash equilibrium by designing the payments as in [10], but this needs agents to be restricted to pure reporting strategies, and requires full knowledge of the prior and conditional probability distributions to compute the rewards. The Bayesian Truth Serum (BTS) [15] is another mechanism for information elicitation with unobservable ground truth. BTS does not use the knowledge of a common prior to compute rewards, but rather collects two reports from each agent: an information report, which is the agent's own observation, as well as a prediction report, which is the agent's prediction about the distribution of information reports from the population, and uses these to compute rewards such that truthful reporting is the highest-reward Nash equilibrium of the BTS mechanism. In addition to the key difference of exogenous versus endogenous proficiencies discussed above, an important limitation of BTS in the crowdsourcing setting is that it requires the number of agents n reporting on a task to diverge to ensure its incentive properties. This n → ∞ requirement is infeasible in our setting due to the scaling of cost with number of reports, as discussed in the introduction. [17] provides a robust BTS mechanism (RBTS) that works even for small populations (again in the same non-endogenous proficiency model as BTS and peer prediction mechanisms), and also ensures payments are positive, making the mechanism ex-post individually rational in contrast to BTS. However, the RBTS mechanism does not retain the property of truthtelling being the highest-reward Nash equilibrium; indeed, the blind agreement equilibrium via constant reports achieves the maximum possible reward in RBTS, whereas truthtelling might in fact lead to lower rewards. There is also work on information elicitation in conducting surveys and online polling [11, 7], both of which are not quite appropriate for our crowdsourcing setting. The mechanism in [11] is weakly incentive compatible (agents are indifferent between lying and truthtelling), while [7] presents an online mechanism that is not incentive compatible in the sense that we use, and potentially requires a large (constant) number of agents to converge to the true result. For other work on information elicitation, albeit in settings very different from ours, see [3, 2, 14]. We note also that we model settings where there is indeed a notion of a ground truth, albeit unobservable, so that proficient agents who put in effort are more likely than not to correctly observe this ground truth. Peer-prediction methods as well as the Bayesian truth serum are designed for settings where there may be no underlying ground truth at all, and the mechanism only seeks to elicit agents' true observations (whatever they are), which means that some agents might be in the minority even when they truthfully report their observation; this makes the peer prediction setting harder along the dimension of inducing truthful reports, but easier along the dimension of not needing to incentivize agents to choose to exert effort to make high-proficiency observations. We note that our problem can also be cast as a version of a principal-agent problem with a very large number of agents, although the principal cannot directly observe an agent's output as in standard models. While there is a vast literature in economics on the principal-agent problem, too large to describe here (see, e.g., [1] and references therein), none of this literature, to the best of our knowledge, addresses our problem.
Finally, there is also a large orthogonal body of work on the problem of learning unknown (but exogenous) agent proficiencies, as well as on the problem of optimally combining reports from agents with differing proficiencies to come up with the best aggregate evaluation in various models and settings. These problems of learning exogenous agent proficiencies and optimally aggregating agent reports are orthogonal to our problem of providing incentives to agents with endogenous, effort-dependent proficiencies to elicit the best possible evaluations from them.

2 Model

We now present a simple abstraction of the problem of designing mechanisms for crowdsourced judgement elicitation settings where agents' proficiencies are determined by strategic effort choice.

Tasks. There are m tasks, or objects, j = 1, ..., m, where each task has some underlying true quality, or type, X_j. This true type X_j is unknown to the system. We assume that the types are binary-valued: X_j is either H (or 1, corresponding to high quality) or L (or 0, for low quality) for all j; we use 1 and H (resp. 0 and L) interchangeably throughout for convenience. The prior probabilities of H and L for all tasks are denoted by P[H] and P[L]. We assume throughout that max(P[H], P[L]) < 1, i.e., that there is at least some uncertainty in the underlying qualities of the objects.

Agents. There are n workers or agents i = 1, ..., n who noisily evaluate, or form judgements on, the qualities of objects. We say agent i performs task j if i evaluates object j. An agent i's judgement on task j is denoted by X̂_ij ∈ {0, 1}, where X̂_ij is 0 if i evaluates j to be of type L and X̂_ij is 1 if i evaluates it to be H. Having made an evaluation X̂_ij, an agent can choose to report any value X_ij ∈ {0, 1}, either based on, or independent of, her actual evaluation X̂_ij.

We denote the set of tasks performed by an agent i by J(i), and let I(j) denote the set of agents who perform task j. We will assume for notational simplicity that |J(i)| = D and |I(j)| = T for all agents i and tasks j.

Proficiency. An agent's proficiency at a task is the probability with which she correctly evaluates its true type or quality. We assume that an agent's proficiency is an increasing function of the effort she puts into making her evaluation. Let e_ij denote agent i's effort level for task j; we assume for simplicity that effort is binary-valued, e_ij ∈ {0, 1}. Putting in 0 effort has cost c_ij(0) = 0, whereas putting in full effort has cost c_ij(1) > 0 (we note that our results also extend to a linear model with continuous effort, where e_ij ∈ [0, 1] and the probability p_i(e_ij) of correctly observing X_j as well as the cost c_i(e_ij) increase linearly with e_ij). An agent who puts in zero effort makes evaluations with proficiency p_ij(0) = 1/2 and does no better than random guessing, i.e., Pr(X̂_ij = X_j | e_ij = 0) = 1/2. An agent who puts in full effort e_ij = 1 attains her maximum proficiency, Pr(X̂_ij = X_j | e_ij = 1) = p_ij(1) = p_i. Note that this maximum proficiency p_i can be different for different agents, modeling agents with different abilities, and need not be known to the center. We assume that the maximum proficiency p_i > 1/2 for all i; this minimum requirement on agent ability can be ensured in online crowdsourcing settings by prescreening workers on a representative set of tasks. (Amazon Mechanical Turk, for instance, offers the ability to prescreen workers [13, 4], whereas in peer-grading applications such as on Coursera, students are given a set of pre-graded assignments to measure their grading abilities prior to grading their peers, the results of which can be used as a prescreen.) We note that our results also extend easily to the case where the maximum proficiency of an agent depends on whether the object is of type H or L, i.e., the probabilities of correctly observing the ground truth when putting in full effort are different for different ground truths, Pr(X̂_ij = X_j | X_j = H) ≠ Pr(X̂_ij = X_j | X_j = L) (of course, different agents can continue to have different maximum proficiencies).

Strategies. Agents strategically choose both their effort levels and reports on each task to maximize their total utility, which is the difference between the reward received for their reports and the cost incurred in making evaluations. Formally, an agent i's strategy is a vector of D tuples [(e_ij, f_ij)], specifying her effort level e_ij as well as the function f_ij she uses to map her actual evaluation X̂_ij into her report X_ij for each of her tasks. Note that since an agent's proficiency on a task p_ij is a function of her strategically chosen effort e_ij, the proficiency of agent i for task j is endogenous in our model. For a single task, we use the notation (1, X) to denote the choice of full effort e_ij = 1 and truthfully reporting one's evaluation (i.e., f_ij is the identity function X_ij = X̂_ij), (1, X^c) to denote full effort followed by inverting one's evaluation, and (0, r) to denote the choice of exerting no effort (e_ij = 0) and simply reporting the outcome of a random coin toss with probability r of returning H. We use [(1, X)] to denote the strategy of using full effort and truthtelling on all of an agent's tasks, and similarly [(1, X^c)] and [(0, r)] for the other strategies.
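To make the model concrete, here is a minimal simulation sketch of the setup above. It is purely illustrative and not from the paper; the prior P[H] = 0.6 and the proficiency values are assumptions.

    import random

    P_H = 0.6  # assumed prior P[H]; the model requires max(P[H], P[L]) < 1

    def draw_type():
        """Draw a task's true type X_j: 1 for H, 0 for L."""
        return 1 if random.random() < P_H else 0

    def evaluate(x_j, effort, p_max):
        """Agent's noisy evaluation of X_j: proficiency is 1/2 at zero
        effort (random guessing) and p_max > 1/2 at full effort."""
        proficiency = p_max if effort == 1 else 0.5
        return x_j if random.random() < proficiency else 1 - x_j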
Mechanisms. A mechanism in this setting takes as input the set of all received reports X_ij and computes a reward for each agent based on her reports, as well as possibly the reports of other agents. Note that the mechanism has no access [3] to the underlying true qualities X_j for any task, and so cannot use the X_j to determine agents' rewards. A set of effort levels and reporting functions [(e_ij, f_ij)] is a full-information Nash equilibrium of a mechanism if no agent i can strictly improve her expected utility by choosing either a different effort level ê_ij, or a different function f̂_ij to map her evaluation X̂_ij into her report X_ij. Here, the expectation is over the randomness in agents' noisy evaluation of the underlying ground truth, as well as any randomness in the mechanism. We will be interested in designing mechanisms for which it is (i) an equilibrium for all agents to put in full effort and report their evaluations truthfully on all tasks, i.e., use strategies [(1, X)], and (ii) for which [(1, X)] is the maximum utility (if not unique) equilibrium. We emphasize here that we do not address the problem of how to optimally aggregate the T reports X_ij for task j into a final estimate of X_j, which is an orthogonal problem requiring application-specific modeling; our only goal is to elicit the best possible judgements to aggregate, by ensuring that agents find it most profitable to put in maximum effort into their evaluations and then report these evaluations truthfully.

[3] Crowdsourcing is typically used precisely in scenarios where the number of tasks is too large for the principal (or a set of trusted agents chosen by the principal) to carry out herself, so it is at best feasible to verify the ground truth for a tiny fraction of all tasks, which fraction turns out to be inadequate (a formal statement is omitted here) to incentivize effort using knowledge of the X_j.

3 Mechanism

The main idea behind our mechanism M is the following. Recall that a mechanism does not have access to the true qualities X_j, and therefore must compute rewards for agents that do not rely on directly observing X_j.

Since the only source of information about X_j comes from the reports X_ij, a natural solution is to reward based on some form of agreement between different agents reporting on j, similar to the peer-prediction setting [12]. However, an easy way for agents to achieve perfect agreement with no effort is to always report H (or L). With just one task, it is difficult for a mechanism to distinguish between the scenario where agents achieve agreement by making accurate, high-effort evaluations of the same ground truth, and the low-effort scenario where agents achieve agreement by always reporting H, especially if P[H] is high. However, in our setting, we have the benefit of multiple tasks and ratings, which could potentially be used to distinguish between these two strategies and appropriately reward agents to incentivize high effort. M uses the presence of multiple ratings to subtract out a statistic term B_ij from the agreement score, chosen so that there is no benefit to making reports that are independent of X_j: roughly speaking, M rewards an agent i for her report on task j agreeing with another reference agent r_j(i)'s report on the same task, but only beyond what would be expected if i and r_j(i) were randomly tossing coins with their respective empirical frequencies of heads. Let d denote the number of other reports made by i and r_j(i) that are used in the computation of this statistic term B_ij based on the observed frequency of heads for each pair (i, j). We use M_d to denote the version of M which uses d other reports from each of i and r_j(i) to compute B_ij. To completely specify M_d, we also need to specify a reference rater r_j(i) as well as this set of d (non-overlapping) tasks performed by i and r_j(i), for which we use the following notation. (We require these other tasks to be non-overlapping so that the reports for these tasks, X_ik and X_{r_j(i)l}, are independent [4], which is necessary to achieve the incentive properties of M_d.)

[4] We assume that co-raters' identities are kept unknown to agents, so there is no collusion between i and r_j(i).

Definition 1 (S_ij, S_{r_j(i)j}). Consider agent i and task j ∈ J(i), and a reference rater r_j(i). Given a value of d (1 ≤ d ≤ D − 1), let S_ij and S_{r_j(i)j} be sets of d non-overlapping tasks other than task j, performed by i and r_j(i) respectively, i.e., S_ij ⊆ J(i) \ {j}, S_{r_j(i)j} ⊆ J(r_j(i)) \ {j}, S_ij ∩ S_{r_j(i)j} = ∅, |S_ij| = |S_{r_j(i)j}| = d.

A mechanism M_d is completely specified by reference raters r_j(i) and the sets S_ij and S_{r_j(i)j}, and rewards agents as defined below. Note that M_d only uses agents' reports X_ij to compute rewards, and not their maximum proficiencies p_i, which therefore need not be known to the system.

Definition 2 (Mechanism M_d). M_d computes an agent i's reward R_ij for her report X_ij ∈ {0, 1} on task j by comparing against a reference rater r_j(i)'s report X_{r_j(i)j} for j, as follows:

    R_ij = A_ij − B_ij, where                                              (1)

    A_ij = X_ij X_{r_j(i)j} + (1 − X_ij)(1 − X_{r_j(i)j}), and

    B_ij = (Σ_{k ∈ S_ij} X_ik / d)(Σ_{l ∈ S_{r_j(i)j}} X_{r_j(i)l} / d)
         + (1 − Σ_{k ∈ S_ij} X_ik / d)(1 − Σ_{l ∈ S_{r_j(i)j}} X_{r_j(i)l} / d),   (2)

where the sets S_ij and S_{r_j(i)j} in B_ij are as in Definition 1. The final reward to an agent i is βR_i, where R_i = Σ_{j ∈ J(i)} R_ij and β is simply a non-negative scaling parameter that is chosen based on agents' costs of effort.

The first term A_ij in R_ij is an agreement reward, and is 1 when i and r_j(i) agree on their report, i.e., when X_ij = X_{r_j(i)j} = 1 or when X_ij = X_{r_j(i)j} = 0. The second term B_ij is the statistic term which, roughly speaking, deducts from the agreement reward whatever part of i and r_j(i)'s agreement on task j is to be expected anyway given their reporting statistics, i.e., the relative frequencies with which they report H and L.
This deduction is what gives M its nice incentive properties: while M rewards agents for agreement via A_ij, M also penalizes blind agreement that agents achieve without effort, by subtracting out the B_ij term corresponding to the expected frequency of agreement if i and r_j(i) were randomly choosing reports according to their estimated means. For example, suppose all agents were to always report H. Then A_ij is always 1, but B_ij = 1 as well, so that the net reward is 0; similarly, if agents chose their reports according to a random cointoss, even one with the correct bias P[H], the value of A_ij is exactly equal to B_ij in expectation, since there is no correlation between the reports for a particular task, again leading to a reward of 0. The reward function R_ij is designed so that it only rewards agents when they put in effort into their evaluations, which leads to the desirable incentive properties of M. (We note that there are other natural statistics which might incentivize agents away from low-effort reports, e.g., rewarding reports which collectively have an empirical mean close to P[H], or for variance. However, it turns out that appropriately balancing the agreement term (which is necessary to ensure agents cannot simply report according to a cointoss with bias P[H]) with a term penalizing blind agreement, to simultaneously ensure that [(1, X)] is both an equilibrium and the most desirable equilibrium, is hard to accomplish.)
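For concreteness, the reward of Definition 2 can be computed directly from the report vectors. The sketch below is an illustration of equations (1) and (2), not the authors' code; the choice of reference rater and of the sets S_ij, S_{r_j(i)j} (Definition 1) is left to the caller.

    def reward(i_reports, ref_reports, j, S_i, S_ref):
        """R_ij = A_ij - B_ij for agent i's report on task j, scored
        against reference rater r_j(i)'s report on the same task.

        i_reports, ref_reports: dicts mapping task -> report in {0, 1}.
        S_i, S_ref: disjoint sets of d other tasks (Definition 1) used
        to estimate each rater's empirical frequency of reporting H."""
        d = len(S_i)
        assert d == len(S_ref) and d >= 1

        # Agreement term A_ij: 1 if the two reports on task j match, else 0.
        A = (i_reports[j] * ref_reports[j]
             + (1 - i_reports[j]) * (1 - ref_reports[j]))

        # Statistic term B_ij: the agreement expected from the two raters'
        # empirical frequencies of H alone, estimated on disjoint tasks.
        f_i = sum(i_reports[k] for k in S_i) / d
        f_r = sum(ref_reports[l] for l in S_ref) / d
        B = f_i * f_r + (1 - f_i) * (1 - f_r)

        return A - B

As in the text, two raters who blindly report H everywhere get A = 1 but also B = 1 on every task, for a net reward of 0.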

There are two natural choices for the parameter d, i.e., how many reports of i and r_j(i) to include for estimating the statistic term that we subtract from the agreement score in R_ij [5]. (i) In M_{D−1}, we set d = D − 1 and include all reports of agents i and r_j(i), except those on their common task j. Here, the non-overlap requirement for sets S_ij and S_{r_j(i)j} says that an agent i and her reference rater r_j(i) for task j have only that task j in common. (ii) In M_1, we set d = 1, i.e., subtract away the correlation between the reports of i and r_j(i) on exactly one other non-overlapping task. In M_1, the non-overlap condition only requires that for each agent-task pair, there is a reference agent r_j(i) available who has rated one other task that is different from the remaining tasks rated by i, a condition that is much easier to satisfy than that in M_{D−1}. In Section 4, we will see that M_1 will require that the choices of (j, j'), where {j'} = S_ij is the task used in the statistic term of i's reward for task j, are such that each task j' performed by i is used exactly once to determine R_ij for j ≠ j'. Note that this is always feasible, for instance by using task j + 1 in the statistic term for task j for j = 1, ..., D − 1, and task 1 for task D.

[5] Understanding the effect of the parameter d in our mechanisms, which appears irrelevant to the mechanism's behavior when agents are risk-neutral, is an interesting open question.
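The cyclic choice of statistic-term tasks for M_1 just described is a one-liner; the sketch below is illustrative only (indexing an agent's tasks as 0, ..., D − 1 is an assumed convention).

    def cyclic_statistic_tasks(D):
        """For M_1: use task (j + 1) mod D in the statistic term for
        task j, so each task appears in exactly one other task's S_ij."""
        return {j: {(j + 1) % D} for j in range(D)}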
4 Analyzing M_d

In this section, we analyze equilibrium behavior in M_d. We begin with some notation and preliminaries.

4.1 Preliminaries

Recall that proficiency is the probability of correctly evaluating the true quality. We use p[H] (respectively p[L]) to denote the probability that an agent observes H (respectively L) when making evaluations with proficiency p, i.e., the probability that X̂_ij = H is p[H] = p P[H] + (1 − p) P[L]. Similarly, q[H], q[L] and p_i[H], p_i[L] correspond to the probabilities of seeing H and L when making evaluations with proficiencies q and p_i respectively.

Matrix representation of strategies. We will frequently need to consider the space of all possible strategies an agent may use in the equilibrium analysis of M_d. While the choice of effort level e_ij in an agent's strategy [(e_ij, f_ij)] is easily described (there are only two possible effort levels, 1 and 0), the space of functions f_ij through which an agent can map her evaluation X̂_ij into her report X_ij is much larger. For instance, an agent could choose f_ij corresponding to making an evaluation, performing a Bayesian update of her prior on X_j, and choosing the report with the higher posterior probability. We now discuss a way to represent strategies that will allow us to easily describe the set of all reporting functions f_ij. An agent i's evaluation X̂_ij can also be written as a two-dimensional vector o_ij ∈ R², where o_ij = [1 0]^T if i observes a H, and o_ij = [0 1]^T if i observes a L, where a^T denotes the transpose of a. For the purpose of analyzing M_d, any choice of reporting function f_ij can then be described via a 2 × 2 matrix

    M_ij = [ x      1 − y ]
           [ 1 − x    y   ],

where x is the probability with which i chooses to report H after observing H, i.e., x = Pr(X_ij = H | X̂_ij = H), and similarly y = Pr(X_ij = L | X̂_ij = L). Observe that the choice of effort e_ij affects only o_ij and its correctness, or correlation with the (vector representing the) actual quality X_j, and the choice of reporting function f_ij only affects M_ij.

Any reporting matrix M_ij of the form above can be written as a convex combination of four matrices, one for each of the f_ij corresponding to (i) truthful reporting (X_ij = X̂_ij), (ii) inverting (X_ij = X̂_ij^c), and (iii, iv) always reporting H or L independent of one's evaluation (X_ij = H and X_ij = L respectively):

    M_X = [ 1 0 ],  M_X^c = [ 0 1 ],  M_H = [ 1 1 ],  M_L = [ 0 0 ]
          [ 0 1 ]           [ 1 0 ]         [ 0 0 ]         [ 1 1 ].

That is, M_ij = α_1 M_X + α_2 M_X^c + α_3 M_H + α_4 M_L, where α_1 = x − α_3, α_2 = 1 − y − α_3, and α_3 = x − y and α_4 = 0 if x ≥ y, and α_3 = 0 and α_4 = y − x if y > x. It is easily verified that α_i ≥ 0 and Σ_i α_i = 1, so that this is a convex combination. Since all possible reporting strategies f_ij can be described by appropriately choosing the values of x ∈ [0, 1] and y ∈ [0, 1] in M_ij, every reporting function f_ij can be written as a convex combination of these four matrices. The agent's final report X_ij is then described by the vector M_ij o_ij ∈ R², where the first entry is the probability that i reports H, i.e., X_ij = H, and the second entry is the probability that she reports X_ij = L. The expected reward of agent i for task j can therefore be written using the matrix-vector representation (where ^T denotes transpose and 1 is the vector of all ones) as

    E[R_ij] = E[ (M_{r_j(i)j} o_{r_j(i)j})^T M_ij o_ij + (1 − M_{r_j(i)j} o_{r_j(i)j})^T (1 − M_ij o_ij) ]
            − [ (M_{r_j(i)j} E[o_{r_j(i)j}])^T M_ij E[o_ij] + (1 − M_{r_j(i)j} E[o_{r_j(i)j}])^T (1 − M_ij E[o_ij]) ],

which is linear in M_ij. So the payoff from an arbitrary reporting function f_ij can be written as the corresponding linear combination of the payoffs from each of the basis functions (corresponding to M_X, M_X^c, M_H and M_L) constituting f_ij = M_ij. We will use this to argue that it is adequate to consider deviations to each of the remaining basis reporting functions and show that they yield strictly lower reward, to establish that [(1, X)] is an equilibrium of M_d.
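This decomposition is easy to verify numerically; the following sketch (illustrative only) recovers the coefficients (α_1, α_2, α_3, α_4) for an arbitrary (x, y) and checks both the reconstruction and the convexity conditions.

    # Basis reporting strategies: truthful, inverting, always-H, always-L.
    BASES = {
        "X":  [[1, 0], [0, 1]],
        "Xc": [[0, 1], [1, 0]],
        "H":  [[1, 1], [0, 0]],
        "L":  [[0, 0], [1, 1]],
    }

    def decompose(x, y):
        """Coefficients of M_ij = [[x, 1-y], [1-x, y]] as a convex
        combination of the four basis matrices, as in the text."""
        a3 = max(x - y, 0.0)   # weight on M_H
        a4 = max(y - x, 0.0)   # weight on M_L
        a1 = x - a3            # weight on M_X
        a2 = 1 - y - a3        # weight on M_X^c
        coeffs = {"X": a1, "Xc": a2, "H": a3, "L": a4}

        # Sanity checks: a convex combination that reconstructs M_ij.
        assert min(coeffs.values()) >= 0
        assert abs(sum(coeffs.values()) - 1) < 1e-12
        target = [[x, 1 - y], [1 - x, y]]
        for r in range(2):
            for s in range(2):
                entry = sum(c * BASES[k][r][s] for k, c in coeffs.items())
                assert abs(entry - target[r][s]) < 1e-12
        return coeffs

    print(decompose(0.9, 0.7))  # e.g. weights 0.7 on M_X, 0.1 on M_X^c, 0.2 on M_H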
Equivalent strategies. For the equilibrium analysis, we will use the following simple facts. (i) The strategy (0, X) (i.e., using zero effort but truthfully reporting one's evaluation) is equivalent to the strategy (0, r) with r = 1/2, i.e., to the strategy of putting in no effort and randomly reporting H or L, independent of the evaluation X̂_ij, with probability 1/2 each. (ii) The strategy (1, r) is equivalent to the strategy (0, r), since the report X_ij in both cases is completely independent of the evaluation X̂_ij and therefore of e_ij.

Cost of effort. While agents do incur a higher cost when using e_ij = 1 as compared to e_ij = 0, we will not need to explicitly deal with the cost in the equilibrium analysis: if the reward from using a strategy where e_ij = 1 is strictly greater than the reward from any strategy with e_ij = 0, the rewards R_ij can always be scaled appropriately using the factor β (in Definition 2) to ensure that the net utility (reward minus cost) is strictly greater as well. We remark here that bounds on this scaling factor β could be estimated empirically without requiring knowledge of the priors: estimating the cost of effort c_ij from the maximum proficiencies obtained from a pre-screening (Section 2), by conducting a series of trials with increasing rewards and then using individual rationality to estimate the cost of effort from observed proficiencies in these trials.

Individual rationality and non-negativity of payments. The expected payments made by our mechanism to each agent are always nonnegative in the full-effort truthful reporting equilibrium, i.e., when all agents use strategies [(1, X)]. To ensure that the payments are also non-negative for every instance (of the tasks and reports) and not only in expectation, note that it suffices to add 1 to the payments currently specified, since the penalty term B_ij in the reward R_ij is bounded above by 1. We also note that individual rationality can be achieved by using a value of β large enough to ensure that the net utility βR_ij − c(1) remains non-negative for all values of P[H]: while the expected payment R_ij does go to zero as P[H] tends to 1 (i.e., in the limit of vanishing uncertainty, as the underlying ground truth is more and more likely to always be H (or always be L)), as long as there is some bound ε > 0 such that max{P[H], P[L]} ≤ 1 − ε, a simple calculation can be used to determine a value β*(ε) such that the resulting mechanism with β = β*(ε) leads to nonnegative utilities for all agents in the full-effort truth-telling Nash equilibrium of M_d.

4.2 Equilibrium analysis

We now analyze the equilibria of M_d. Throughout, we index the tasks J(i) corresponding to agent i by j ∈ {1, ..., D}. First, to illustrate the idea behind the mechanism, we prove the simpler result that [(1, X)] is an equilibrium of M_d when agents all have equal proficiency p_i = p, and are restricted to choosing one common strategy for all their tasks.

Proposition 3. Suppose all agents have the same maximum proficiency p, and are restricted to choosing the same strategy for each of their tasks. Then, all agents choosing [(1, X)] is an equilibrium of M_d for all d, if p > 1/2.

Proof. Consider an agent i, and suppose all other agents use the strategy (1, X) on all their tasks. As discussed in the preliminaries, an agent's reward is linear in her reports fixing the strategies of other agents, so it will be enough to show that there is no beneficial deviation to (1, X^c), or (0, r) for any r ∈ [0, 1], to establish an equilibrium (as noted earlier, the choice of effort level is irrelevant when reporting X_ij according to the outcome of a random coin toss independent of the observed value of X̂_ij). The reward to agent i when she uses strategy [(1, X)] is

    E[R_i((1, X))] = Σ_{j∈J(i)} E[ X_ij X_{r_j(i)j} + (1 − X_ij)(1 − X_{r_j(i)j}) ]
                   − E[ (Σ_{k∈S_ij} X_ik / d)(Σ_{l∈S_{r_j(i)j}} X_{r_j(i)l} / d)
                      + (1 − Σ_{k∈S_ij} X_ik / d)(1 − Σ_{l∈S_{r_j(i)j}} X_{r_j(i)l} / d) ]
                   = D [ p² + (1 − p)² − (p[H]² + (1 − p[H])²) ]
                   = 2D (p − p[H])(p − p[L]) = 2D (2p − 1)² P[H] P[L],

which is strictly positive if p > 1/2 and min(P[H], P[L]) > 0 (as assumed throughout), where we use p − p[H] = (2p − 1)P[L] and p − p[L] = (2p − 1)P[H]. The expected reward from deviating to (1, X^c), when other agents are using (1, X), is

    E[R_i((1, X^c))] = D ( 2p(1 − p) − 2p[H](1 − p[H]) ) = −2D (p − p[H])(p − p[L]).

Therefore, the expected reward from deviating to (1, X^c) is negative, and strictly smaller than the reward from (1, X), if p > 1/2. Finally, suppose agent i deviates to playing (0, r), i.e., reporting the outcome of a random coin toss with bias r as her evaluation of X_j. Her expected reward from using this strategy when other agents play according to (1, X) is

    E[R_i((0, r))] = D ( r p[H] + (1 − r)(1 − p[H]) − ( r p[H] + (1 − r)(1 − p[H]) ) ) = 0.

(In fact, if either agent reports ratings on her tasks by tossing a random coin with any probability r ∈ [0, 1], independent of the underlying true realization of X_j, the expected reward to agent i is 0.) Therefore, if p > 1/2, deviating from (1, X) leads to a strict decrease in reward to agent i. Hence, the rewards R_ij can always be scaled appropriately to ensure that [(1, X)] is an equilibrium of M_d for any values of the costs c_i.
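As a quick sanity check on this calculation, the per-task reward under full-effort truthtelling, E[R_ij] = 2(2p − 1)² P[H] P[L], can be estimated by simulation. The sketch below is illustrative (parameter values assumed) and uses the d = 1 variant:

    import random

    def noisy(x, p):
        """Report x correctly with probability p, flipped otherwise."""
        return x if random.random() < p else 1 - x

    def mc_reward(p=0.8, P_H=0.6, trials=200_000):
        """Monte Carlo estimate of E[R_ij] when all agents play (1, X),
        with d = 1; compare with 2 * (2p - 1)**2 * P_H * (1 - P_H)."""
        total = 0.0
        for _ in range(trials):
            x_j = 1 if random.random() < P_H else 0          # common task
            A = 1 if noisy(x_j, p) == noisy(x_j, p) else 0   # agreement term
            # one disjoint statistic task per rater, independent ground truths
            f_i = noisy(1 if random.random() < P_H else 0, p)
            f_r = noisy(1 if random.random() < P_H else 0, p)
            B = f_i * f_r + (1 - f_i) * (1 - f_r)
            total += A - B
        return total / trials

    print(mc_reward())  # ≈ 2 * 0.6**2 * 0.6 * 0.4 = 0.1728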

We will now move on to proving our main equilibrium result for M_d, where agents can have different maximum proficiencies, as well as possibly use different strategies for different tasks. We begin with a technical lemma and a definition.

Lemma 4. Let f_α(p, q) = pq + (1 − p)(1 − q) − α( p[H] q[H] + (1 − p[H])(1 − q[H]) ). If α ≤ 1, (i) f_α(p, q) is strictly increasing in p if q > 1/2, and strictly increasing in q if p > 1/2. (ii) f_α(p, q) is nonnegative if p, q ≥ 1/2, and positive if p, q > 1/2. (iii) Denote f(p, q) ≡ f_1(p, q). Then f(p, q) = f(q, p) = f(1 − p, 1 − q). Also f(p, 1 − q) = f(1 − p, q) = −f(p, q).

Proof. Recall that p[H] = p P[H] + (1 − p) P[L], and similarly for q[H]. Then

    f_α(p, q) = p(2q − 1) + (1 − q) − α[ (p P[H] + (1 − p) P[L])(2q[H] − 1) + (1 − q[H]) ]
              = p [ (2q − 1) − α(P[H] − P[L])(2q[H] − 1) ] + K_p
              = p (2q − 1)(1 − α(P[H] − P[L])²) + K_p,

where K_p is a term that does not depend on p, and we use 2q[H] − 1 = (2q − 1)(P[H] − P[L]) in the last step. Note that |P[H] − P[L]| < 1 if max(P[H], P[L]) < 1, so that 1 − α(P[H] − P[L])² > 0 if α ≤ 1. Therefore, f_α(p, q) is linear in p with strictly positive coefficient when q > 1/2 and α ≤ 1. An identical argument can be used for q, since f_α(p, q) can be written as a linear function of q exactly as for p:

    f_α(p, q) = q (2p − 1)(1 − α(P[H] − P[L])²) + K_q.

This proves the first claim. For nonnegativity of f_α(p, q) on p ∈ [1/2, 1], we simply argue that f_1(p, q) is increasing in q when p ∈ [1/2, 1], and 0 at q = 1/2. So for any q ≥ 1/2, f_1(p, q) ≥ 0 for any p ∈ [1/2, 1]. But f_α(p, q) is decreasing in α, so f_α(p, q) is nonnegative for any α ≤ 1 as well. The final claims about f(p, q) and f(1 − p, q) can be verified just by substituting the definitions of p[H] and q[H], and from symmetry in p and q.
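A quick numeric spot-check of the lemma's claims (illustrative; the prior P[H] = 0.6 and the grid of proficiencies are assumptions):

    # Spot-check Lemma 4 on a grid of proficiencies p, q in (1/2, 1].
    P_H, P_L = 0.6, 0.4

    def obs(p):
        """p[H]: probability of observing H at proficiency p."""
        return p * P_H + (1 - p) * P_L

    def f(p, q, alpha=1.0):
        return (p * q + (1 - p) * (1 - q)
                - alpha * (obs(p) * obs(q) + (1 - obs(p)) * (1 - obs(q))))

    grid = [i / 100 for i in range(51, 101)]
    # (ii) positivity for p, q > 1/2, and (iii) f(p, 1 - q) = -f(p, q)
    assert all(f(p, q) > 0 for p in grid for q in grid)
    assert all(abs(f(p, 1 - q) + f(p, q)) < 1e-12 for p in grid for q in grid)
    # (i) monotonicity in p, sampled
    assert all(f(0.8, q) > f(0.7, q) > f(0.6, q) for q in grid)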
Definition 5 (T_ij, d_ij). Let T_ij be the set of all tasks j' ≠ j such that j ∈ S_ij', i.e., T_ij is the set of tasks j' for which i's report on task j is used to compute the statistic term of i's reward R_ij' for task j'. We use d_ij = |T_ij| to denote the number of such tasks j'.

Our main equilibrium result states that under a mild set of conditions on the choice of reference raters r_j(i) and sets T_ij, exerting full effort and reporting truthfully on all tasks is an equilibrium of M_d even when agents have different maximum proficiencies and can choose a different strategy for each task (for instance, an agent could choose to shirk effort on some tasks and put in effort on the others). The main idea behind this result can be understood from the proof of Proposition 3 above, where all agents had the same maximum proficiency p_i = p and were restricted to using the same strategy for each task. There, the payoff from using [(1, X)] is exactly f(p, p), where f is as defined in Lemma 4, while the payoff from playing [(0, r)] is 0 (independent of other agents' strategies); the payoff from deviating to [(1, X^c)] when other agents play [(1, X)] is −f(p, p). Since f(p, p) > 0 for p > 1/2 and increases with p, it is a best response for every agent to attain maximum proficiency and truthfully report her evaluation. Extending the argument when agents can have both different maximum proficiencies p_i and use different strategy choices for each task requires more care, and these generalizations are what necessitate the conditions on the task assignment in Theorem 6 below. We note that these conditions on M_d arise because of the generalization to both differing abilities p_i and being allowed to choose a different strategy for each task: if either generalization is waived, i.e., if agents can choose different strategies per task but all have equal ability (p_i = p), or agents can have different abilities p_i but are restricted to choosing the same strategy for all their tasks, [(1, X)] can be shown to be an equilibrium of M_d even without imposing these conditions.

Theorem 6. Suppose p_i > 1/2 for all i, and for each agent i, for each task j ∈ J(i), (i) d_ij = d, and (ii) E[p_{r_j(i)}] = E_{j_l ∈ T_ij}[p_{r_{j_l}(i)}] ≡ p̄_i, where the expectation is over the randomness in the assignment of reference raters to tasks and the sets T_ij. Then, [(1, X)] is an equilibrium of M_d.

The first condition in Theorem 6, d_ij = d, says that each task j performed by an agent i must contribute to computing the reward via the statistic term for exactly d other tasks in J(i), where d is the number of reports used to compute the empirical frequency of H reports by i in the statistic term. The second condition, E[p_{r_j(i)}] = E_{j_l ∈ T_ij}[p_{r_{j_l}(i)}], says that an agent i should expect the average proficiency of her reference rater r_j(i) to be equal for all the tasks that she performs, i.e., agent i should not be able to identify any particular task where her reference raters are, on average, worse than the reference raters for her other tasks (intuitively, this can lead to agent i shirking effort on this task being a profitable deviation).

The first condition holds for each of the two specific mechanisms M_1 and M_{D−1}, and the second condition can be satisfied, essentially, by a randomization of the agents before assignment, as described in Section 5. We now prove the result.

Proof. Consider agent i, and suppose all other agents use strategy [(1, X)], i.e., put in full effort with truthtelling on all their tasks. It will be enough to consider pure strategy deviations, and show that there is no beneficial deviation to (1, X^c) or (0, r) for any r ∈ [0, 1], on any single task or subset of tasks. First, consider a particular assignment of reference raters r_j(i) and the sets S_ij (and therefore T_ij). The total expected reward to agent i from all her D tasks in this assignment, when other agents all play according to [(1, X)], is

    E[R_i] = Σ_{j∈J(i)} { E[ X_ij X_{r_j(i)j} + (1 − X_ij)(1 − X_{r_j(i)j}) ]
                        − [ (Σ_{k∈S_ij} E[X_ik] / d) p_{r_j(i)}[H]
                          + (1 − Σ_{k∈S_ij} E[X_ik] / d)(1 − p_{r_j(i)}[H]) ] }
           = Σ_{j∈J(i)} { E[ X_ij X_{r_j(i)j} + (1 − X_ij)(1 − X_{r_j(i)j}) ]
                        − [ (Σ_{k∈S_ij} E[X_ik] / d)(2p_{r_j(i)}[H] − 1) + (1 − p_{r_j(i)}[H]) ] }
           = Σ_{j∈J(i)} { E[ X_ij X_{r_j(i)j} + (1 − X_ij)(1 − X_{r_j(i)j}) ]
                        − (E[X_ij] / d) Σ_{j_l ∈ T_ij} (2p_{r_{j_l}(i)}[H] − 1) − (1 − p_{r_j(i)}[H]) },

where the expectation is over any randomness in the strategy of i, as well as randomness in i and r_j(i)'s evaluations for each task j, and we rearrange to collect the X_ij terms in the last step. Now, agent i can receive different reference raters and task sets S_ij in different assignments. So to compute her expected reward, agent i will also take an expectation over the randomness in the assignment of reference raters to tasks and the sets S_ij, which appear in the summation above via T_ij. Recall the condition that E[p_{r_j(i)}] = E_{j_l ∈ T_ij}[p_{r_{j_l}(i)}] ≡ p̄_i. Using this condition and taking the expectation over the randomness in the assignments of r_j(i) and S_ij, the expected reward of i is

    E[R_i] = Σ_{j∈J(i)} [ E[ X_ij X_{r_j(i)j} + (1 − X_ij)(1 − X_{r_j(i)j}) ]
                        − (d_ij / d) E[X_ij] (2p̄_i[H] − 1) − (1 − p̄_i[H]) ],

where p̄_i[H] = E[p_{r_j(i)}[H]] = E_{j_l ∈ T_ij}[p_{r_{j_l}(i)}[H]]. The expected reward to agent i, when she makes evaluations with proficiency q_j for task j and truthfully reports these evaluations (X_ij = X̂_ij), is then

    E[R_i] = Σ_{j∈J(i)} [ q_j p̄_i + (1 − q_j)(1 − p̄_i) − (d_ij / d) q_j[H] (2p̄_i[H] − 1) − (1 − p̄_i[H]) ]
           = Σ_{j∈J(i)} [ q_j p̄_i + (1 − q_j)(1 − p̄_i)
                        − (d_ij / d)( q_j[H] p̄_i[H] + (1 − q_j[H])(1 − p̄_i[H]) )
                        − (1 − d_ij / d)(1 − p̄_i[H]) ]
           = Σ_{j∈J(i)} [ f_{d_ij/d}(q_j, p̄_i) + (d_ij / d − 1)(1 − p̄_i[H]) ],   (3)

where the expectation is taken over randomness in all agents' evaluations, as well as over randomness in the choices of r_j(i) and S_ij. We can now show that choosing full effort and truthtelling on all tasks is a best response, when all other agents use [(1, X)], if d_ij = d. First, by Lemma 4, f_{d_ij/d}(q_j, p̄_i) is increasing in q_j provided d_ij/d ≤ 1, so agent i should choose full effort to achieve her maximum proficiency p_i on all tasks. Next, note that in terms of the expected reward, using proficiency q_j and reporting X^c is equivalent to using proficiency 1 − q_j and reporting X. So again by Lemma 4, deviating to X^c, i.e., (1 − q_j), on any task is strictly dominated by X for q_j > 1/2 and p̄_i > 1/2. Finally, if agent i chooses f_ij as the function which reports the outcome of a random cointoss with probability r of H for any task j, the component of E[R_i] contributed by the term corresponding to X_ij becomes

    E[ X_ij X_{r_j(i)j} + (1 − X_ij)(1 − X_{r_j(i)j}) ]
        − (d_ij / d)( E[X_ij] p_{r_j(i)}[H] + (1 − E[X_ij])(1 − p_{r_j(i)}[H]) )
    = r p_{r_j(i)}[H] + (1 − r)(1 − p_{r_j(i)}[H]) − ( r p_{r_j(i)}[H] + (1 − r)(1 − p_{r_j(i)}[H]) ) = 0,

which is strictly smaller than the reward from f_ij = X in (3) if q_j > 1/2 and d_ij/d ≥ 1, since f(q_j, p̄_i) is strictly positive when q_j, p̄_i > 1/2 by Lemma 4. Since we need d_ij/d ≤ 1 to ensure that (1, X^c) is not a profitable deviation, and d_ij/d ≥ 1 to ensure that (0, r) is not a profitable deviation, requiring d_ij = d simultaneously satisfies both conditions. Therefore, if d_ij = d, deviating from (1, X) on any task j leads to a strict decrease in reward to agent i. Since the total reward to agent i can be decomposed into the sum of D terms which each depend only on the report X_ij, and therefore on the strategy for the single task j, any deviation from [(1, X)] for any single task or subset of tasks strictly decreases i's expected reward. Therefore, the rewards R_i = Σ_{j∈J(i)} R_ij can always be scaled appropriately to ensure that [(1, X)] is an equilibrium of M_d.

Other equilibria. While [(1, X)] is an equilibrium, M_d can have other equilibria as well: for instance, the strategy [(0, r)], where all agents report the outcome of a random cointoss with bias r on each task, is also an equilibrium of M_d for all r ∈ [0, 1], albeit with 0 reward to each agent. In fact, as we show in the next theorem, no equilibrium,


More information

Implicit Differentiation

Implicit Differentiation Implicit Differentiation Thus far, the functions we have been concerne with have been efine explicitly. A function is efine explicitly if the output is given irectly in terms of the input. For instance,

More information

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control 19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior

More information

Chapter 6: Energy-Momentum Tensors

Chapter 6: Energy-Momentum Tensors 49 Chapter 6: Energy-Momentum Tensors This chapter outlines the general theory of energy an momentum conservation in terms of energy-momentum tensors, then applies these ieas to the case of Bohm's moel.

More information

Acute sets in Euclidean spaces

Acute sets in Euclidean spaces Acute sets in Eucliean spaces Viktor Harangi April, 011 Abstract A finite set H in R is calle an acute set if any angle etermine by three points of H is acute. We examine the maximal carinality α() of

More information

Lecture 6 : Dimensionality Reduction

Lecture 6 : Dimensionality Reduction CPS290: Algorithmic Founations of Data Science February 3, 207 Lecture 6 : Dimensionality Reuction Lecturer: Kamesh Munagala Scribe: Kamesh Munagala In this lecture, we will consier the roblem of maing

More information

u!i = a T u = 0. Then S satisfies

u!i = a T u = 0. Then S satisfies Deterministic Conitions for Subspace Ientifiability from Incomplete Sampling Daniel L Pimentel-Alarcón, Nigel Boston, Robert D Nowak University of Wisconsin-Maison Abstract Consier an r-imensional subspace

More information

Balancing Expected and Worst-Case Utility in Contracting Models with Asymmetric Information and Pooling

Balancing Expected and Worst-Case Utility in Contracting Models with Asymmetric Information and Pooling Balancing Expecte an Worst-Case Utility in Contracting Moels with Asymmetric Information an Pooling R.B.O. erkkamp & W. van en Heuvel & A.P.M. Wagelmans Econometric Institute Report EI2018-01 9th January

More information

Integration Review. May 11, 2013

Integration Review. May 11, 2013 Integration Review May 11, 2013 Goals: Review the funamental theorem of calculus. Review u-substitution. Review integration by parts. Do lots of integration eamples. 1 Funamental Theorem of Calculus In

More information

Robustness and Perturbations of Minimal Bases

Robustness and Perturbations of Minimal Bases Robustness an Perturbations of Minimal Bases Paul Van Dooren an Froilán M Dopico December 9, 2016 Abstract Polynomial minimal bases of rational vector subspaces are a classical concept that plays an important

More information

Expected Value of Partial Perfect Information

Expected Value of Partial Perfect Information Expecte Value of Partial Perfect Information Mike Giles 1, Takashi Goa 2, Howar Thom 3 Wei Fang 1, Zhenru Wang 1 1 Mathematical Institute, University of Oxfor 2 School of Engineering, University of Tokyo

More information

MEASURES WITH ZEROS IN THE INVERSE OF THEIR MOMENT MATRIX

MEASURES WITH ZEROS IN THE INVERSE OF THEIR MOMENT MATRIX MEASURES WITH ZEROS IN THE INVERSE OF THEIR MOMENT MATRIX J. WILLIAM HELTON, JEAN B. LASSERRE, AND MIHAI PUTINAR Abstract. We investigate an iscuss when the inverse of a multivariate truncate moment matrix

More information

Lectures - Week 10 Introduction to Ordinary Differential Equations (ODES) First Order Linear ODEs

Lectures - Week 10 Introduction to Ordinary Differential Equations (ODES) First Order Linear ODEs Lectures - Week 10 Introuction to Orinary Differential Equations (ODES) First Orer Linear ODEs When stuying ODEs we are consiering functions of one inepenent variable, e.g., f(x), where x is the inepenent

More information

Quantum mechanical approaches to the virial

Quantum mechanical approaches to the virial Quantum mechanical approaches to the virial S.LeBohec Department of Physics an Astronomy, University of Utah, Salt Lae City, UT 84112, USA Date: June 30 th 2015 In this note, we approach the virial from

More information

CUSTOMER REVIEW FEATURE EXTRACTION Heng Ren, Jingye Wang, and Tony Wu

CUSTOMER REVIEW FEATURE EXTRACTION Heng Ren, Jingye Wang, and Tony Wu CUSTOMER REVIEW FEATURE EXTRACTION Heng Ren, Jingye Wang, an Tony Wu Abstract Popular proucts often have thousans of reviews that contain far too much information for customers to igest. Our goal for the

More information

Lecture 6: Calculus. In Song Kim. September 7, 2011

Lecture 6: Calculus. In Song Kim. September 7, 2011 Lecture 6: Calculus In Song Kim September 7, 20 Introuction to Differential Calculus In our previous lecture we came up with several ways to analyze functions. We saw previously that the slope of a linear

More information

Tutorial on Maximum Likelyhood Estimation: Parametric Density Estimation

Tutorial on Maximum Likelyhood Estimation: Parametric Density Estimation Tutorial on Maximum Likelyhoo Estimation: Parametric Density Estimation Suhir B Kylasa 03/13/2014 1 Motivation Suppose one wishes to etermine just how biase an unfair coin is. Call the probability of tossing

More information

VI. Linking and Equating: Getting from A to B Unleashing the full power of Rasch models means identifying, perhaps conceiving an important aspect,

VI. Linking and Equating: Getting from A to B Unleashing the full power of Rasch models means identifying, perhaps conceiving an important aspect, VI. Linking an Equating: Getting from A to B Unleashing the full power of Rasch moels means ientifying, perhaps conceiving an important aspect, efining a useful construct, an calibrating a pool of relevant

More information

A Sketch of Menshikov s Theorem

A Sketch of Menshikov s Theorem A Sketch of Menshikov s Theorem Thomas Bao March 14, 2010 Abstract Let Λ be an infinite, locally finite oriente multi-graph with C Λ finite an strongly connecte, an let p

More information

SYNCHRONOUS SEQUENTIAL CIRCUITS

SYNCHRONOUS SEQUENTIAL CIRCUITS CHAPTER SYNCHRONOUS SEUENTIAL CIRCUITS Registers an counters, two very common synchronous sequential circuits, are introuce in this chapter. Register is a igital circuit for storing information. Contents

More information

MATHEMATICAL REPRESENTATION OF REAL SYSTEMS: TWO MODELLING ENVIRONMENTS INVOLVING DIFFERENT LEARNING STRATEGIES C. Fazio, R. M. Sperandeo-Mineo, G.

MATHEMATICAL REPRESENTATION OF REAL SYSTEMS: TWO MODELLING ENVIRONMENTS INVOLVING DIFFERENT LEARNING STRATEGIES C. Fazio, R. M. Sperandeo-Mineo, G. MATHEMATICAL REPRESENTATION OF REAL SYSTEMS: TWO MODELLING ENIRONMENTS INOLING DIFFERENT LEARNING STRATEGIES C. Fazio, R. M. Speraneo-Mineo, G. Tarantino GRIAF (Research Group on Teaching/Learning Physics)

More information

Non-Linear Bayesian CBRN Source Term Estimation

Non-Linear Bayesian CBRN Source Term Estimation Non-Linear Bayesian CBRN Source Term Estimation Peter Robins Hazar Assessment, Simulation an Preiction Group Dstl Porton Down, UK. probins@stl.gov.uk Paul Thomas Hazar Assessment, Simulation an Preiction

More information

Diophantine Approximations: Examining the Farey Process and its Method on Producing Best Approximations

Diophantine Approximations: Examining the Farey Process and its Method on Producing Best Approximations Diophantine Approximations: Examining the Farey Process an its Metho on Proucing Best Approximations Kelly Bowen Introuction When a person hears the phrase irrational number, one oes not think of anything

More information

Spurious Significance of Treatment Effects in Overfitted Fixed Effect Models Albrecht Ritschl 1 LSE and CEPR. March 2009

Spurious Significance of Treatment Effects in Overfitted Fixed Effect Models Albrecht Ritschl 1 LSE and CEPR. March 2009 Spurious Significance of reatment Effects in Overfitte Fixe Effect Moels Albrecht Ritschl LSE an CEPR March 2009 Introuction Evaluating subsample means across groups an time perios is common in panel stuies

More information

Calculus and optimization

Calculus and optimization Calculus an optimization These notes essentially correspon to mathematical appenix 2 in the text. 1 Functions of a single variable Now that we have e ne functions we turn our attention to calculus. A function

More information

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz

A note on asymptotic formulae for one-dimensional network flow problems Carlos F. Daganzo and Karen R. Smilowitz A note on asymptotic formulae for one-imensional network flow problems Carlos F. Daganzo an Karen R. Smilowitz (to appear in Annals of Operations Research) Abstract This note evelops asymptotic formulae

More information

Multi-View Clustering via Canonical Correlation Analysis

Multi-View Clustering via Canonical Correlation Analysis Keywors: multi-view learning, clustering, canonical correlation analysis Abstract Clustering ata in high-imensions is believe to be a har problem in general. A number of efficient clustering algorithms

More information

This module is part of the. Memobust Handbook. on Methodology of Modern Business Statistics

This module is part of the. Memobust Handbook. on Methodology of Modern Business Statistics This moule is part of the Memobust Hanbook on Methoology of Moern Business Statistics 26 March 2014 Metho: Balance Sampling for Multi-Way Stratification Contents General section... 3 1. Summary... 3 2.

More information

FLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS. 1. Introduction

FLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS. 1. Introduction FLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS ALINA BUCUR, CHANTAL DAVID, BROOKE FEIGON, MATILDE LALÍN 1 Introuction In this note, we stuy the fluctuations in the number

More information

THE GENUINE OMEGA-REGULAR UNITARY DUAL OF THE METAPLECTIC GROUP

THE GENUINE OMEGA-REGULAR UNITARY DUAL OF THE METAPLECTIC GROUP THE GENUINE OMEGA-REGULAR UNITARY DUAL OF THE METAPLECTIC GROUP ALESSANDRA PANTANO, ANNEGRET PAUL, AND SUSANA A. SALAMANCA-RIBA Abstract. We classify all genuine unitary representations of the metaplectic

More information

Relative Entropy and Score Function: New Information Estimation Relationships through Arbitrary Additive Perturbation

Relative Entropy and Score Function: New Information Estimation Relationships through Arbitrary Additive Perturbation Relative Entropy an Score Function: New Information Estimation Relationships through Arbitrary Aitive Perturbation Dongning Guo Department of Electrical Engineering & Computer Science Northwestern University

More information

Axiometrics: Axioms of Information Retrieval Effectiveness Metrics

Axiometrics: Axioms of Information Retrieval Effectiveness Metrics Axiometrics: Axioms of Information Retrieval Effectiveness Metrics ABSTRACT Ey Maalena Department of Maths Computer Science University of Uine Uine, Italy ey.maalena@uniu.it There are literally ozens most

More information

A Robust Bayesian Truth Serum for Small Populations

A Robust Bayesian Truth Serum for Small Populations Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence A Robust Bayesian Truth Serum for Small Populations Jens Witkowski Department of Computer Science Albert-Ludwigs-Universität Freiburg,

More information

The total derivative. Chapter Lagrangian and Eulerian approaches

The total derivative. Chapter Lagrangian and Eulerian approaches Chapter 5 The total erivative 51 Lagrangian an Eulerian approaches The representation of a flui through scalar or vector fiels means that each physical quantity uner consieration is escribe as a function

More information

Introduction to variational calculus: Lecture notes 1

Introduction to variational calculus: Lecture notes 1 October 10, 2006 Introuction to variational calculus: Lecture notes 1 Ewin Langmann Mathematical Physics, KTH Physics, AlbaNova, SE-106 91 Stockholm, Sween Abstract I give an informal summary of variational

More information

A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks

A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks A PAC-Bayesian Approach to Spectrally-Normalize Margin Bouns for Neural Networks Behnam Neyshabur, Srinah Bhojanapalli, Davi McAllester, Nathan Srebro Toyota Technological Institute at Chicago {bneyshabur,

More information

Homework 3 - Solutions

Homework 3 - Solutions Homework 3 - Solutions The Transpose an Partial Transpose. 1 Let { 1, 2,, } be an orthonormal basis for C. The transpose map efine with respect to this basis is a superoperator Γ that acts on an operator

More information

Math 1B, lecture 8: Integration by parts

Math 1B, lecture 8: Integration by parts Math B, lecture 8: Integration by parts Nathan Pflueger 23 September 2 Introuction Integration by parts, similarly to integration by substitution, reverses a well-known technique of ifferentiation an explores

More information

Lower bounds on Locality Sensitive Hashing

Lower bounds on Locality Sensitive Hashing Lower bouns on Locality Sensitive Hashing Rajeev Motwani Assaf Naor Rina Panigrahy Abstract Given a metric space (X, X ), c 1, r > 0, an p, q [0, 1], a istribution over mappings H : X N is calle a (r,

More information

Necessary and Sufficient Conditions for Sketched Subspace Clustering

Necessary and Sufficient Conditions for Sketched Subspace Clustering Necessary an Sufficient Conitions for Sketche Subspace Clustering Daniel Pimentel-Alarcón, Laura Balzano 2, Robert Nowak University of Wisconsin-Maison, 2 University of Michigan-Ann Arbor Abstract This

More information

arxiv: v4 [math.pr] 27 Jul 2016

arxiv: v4 [math.pr] 27 Jul 2016 The Asymptotic Distribution of the Determinant of a Ranom Correlation Matrix arxiv:309768v4 mathpr] 7 Jul 06 AM Hanea a, & GF Nane b a Centre of xcellence for Biosecurity Risk Analysis, University of Melbourne,

More information

Discrete Mathematics

Discrete Mathematics Discrete Mathematics 309 (009) 86 869 Contents lists available at ScienceDirect Discrete Mathematics journal homepage: wwwelseviercom/locate/isc Profile vectors in the lattice of subspaces Dániel Gerbner

More information

PDE Notes, Lecture #11

PDE Notes, Lecture #11 PDE Notes, Lecture # from Professor Jalal Shatah s Lectures Febuary 9th, 2009 Sobolev Spaces Recall that for u L loc we can efine the weak erivative Du by Du, φ := udφ φ C0 If v L loc such that Du, φ =

More information

Introduction to the Vlasov-Poisson system

Introduction to the Vlasov-Poisson system Introuction to the Vlasov-Poisson system Simone Calogero 1 The Vlasov equation Consier a particle with mass m > 0. Let x(t) R 3 enote the position of the particle at time t R an v(t) = ẋ(t) = x(t)/t its

More information

The Exact Form and General Integrating Factors

The Exact Form and General Integrating Factors 7 The Exact Form an General Integrating Factors In the previous chapters, we ve seen how separable an linear ifferential equations can be solve using methos for converting them to forms that can be easily

More information

Entanglement is not very useful for estimating multiple phases

Entanglement is not very useful for estimating multiple phases PHYSICAL REVIEW A 70, 032310 (2004) Entanglement is not very useful for estimating multiple phases Manuel A. Ballester* Department of Mathematics, University of Utrecht, Box 80010, 3508 TA Utrecht, The

More information

Equations of lines in

Equations of lines in Roberto s Notes on Linear Algebra Chapter 6: Lines, planes an other straight objects Section 1 Equations of lines in What ou nee to know alrea: The ot prouct. The corresponence between equations an graphs.

More information

I m Doing as Well as I Can: Modeling People as Rational Finite Automata

I m Doing as Well as I Can: Modeling People as Rational Finite Automata I m Doing as Well as I Can: Moeling People as Rational Finite Automata Joseph Y Halpern Rafael Pass Lior Seeman Computer Science Dept Cornell University Ithaca, NY E-mail: halpern rafael lseeman@cscornelleu

More information

Analyzing Tensor Power Method Dynamics in Overcomplete Regime

Analyzing Tensor Power Method Dynamics in Overcomplete Regime Journal of Machine Learning Research 18 (2017) 1-40 Submitte 9/15; Revise 11/16; Publishe 4/17 Analyzing Tensor Power Metho Dynamics in Overcomplete Regime Animashree Ananumar Department of Electrical

More information

The Principle of Least Action

The Principle of Least Action Chapter 7. The Principle of Least Action 7.1 Force Methos vs. Energy Methos We have so far stuie two istinct ways of analyzing physics problems: force methos, basically consisting of the application of

More information

3.7 Implicit Differentiation -- A Brief Introduction -- Student Notes

3.7 Implicit Differentiation -- A Brief Introduction -- Student Notes Fin these erivatives of these functions: y.7 Implicit Differentiation -- A Brief Introuction -- Stuent Notes tan y sin tan = sin y e = e = Write the inverses of these functions: y tan y sin How woul we

More information

Multi-View Clustering via Canonical Correlation Analysis

Multi-View Clustering via Canonical Correlation Analysis Kamalika Chauhuri ITA, UC San Diego, 9500 Gilman Drive, La Jolla, CA Sham M. Kakae Karen Livescu Karthik Sriharan Toyota Technological Institute at Chicago, 6045 S. Kenwoo Ave., Chicago, IL kamalika@soe.ucs.eu

More information

Improving Estimation Accuracy in Nonrandomized Response Questioning Methods by Multiple Answers

Improving Estimation Accuracy in Nonrandomized Response Questioning Methods by Multiple Answers International Journal of Statistics an Probability; Vol 6, No 5; September 207 ISSN 927-7032 E-ISSN 927-7040 Publishe by Canaian Center of Science an Eucation Improving Estimation Accuracy in Nonranomize

More information

Calculus of Variations

Calculus of Variations Calculus of Variations Lagrangian formalism is the main tool of theoretical classical mechanics. Calculus of Variations is a part of Mathematics which Lagrangian formalism is base on. In this section,

More information

Calculus in the AP Physics C Course The Derivative

Calculus in the AP Physics C Course The Derivative Limits an Derivatives Calculus in the AP Physics C Course The Derivative In physics, the ieas of the rate change of a quantity (along with the slope of a tangent line) an the area uner a curve are essential.

More information

. Using a multinomial model gives us the following equation for P d. , with respect to same length term sequences.

. Using a multinomial model gives us the following equation for P d. , with respect to same length term sequences. S 63 Lecture 8 2/2/26 Lecturer Lillian Lee Scribes Peter Babinski, Davi Lin Basic Language Moeling Approach I. Special ase of LM-base Approach a. Recap of Formulas an Terms b. Fixing θ? c. About that Multinomial

More information

Parameter estimation: A new approach to weighting a priori information

Parameter estimation: A new approach to weighting a priori information Parameter estimation: A new approach to weighting a priori information J.L. Mea Department of Mathematics, Boise State University, Boise, ID 83725-555 E-mail: jmea@boisestate.eu Abstract. We propose a

More information

Zachary Scherr Math 503 HW 3 Due Friday, Feb 12

Zachary Scherr Math 503 HW 3 Due Friday, Feb 12 Zachary Scherr Math 503 HW 3 Due Friay, Feb 1 1 Reaing 1. Rea sections 7.5, 7.6, 8.1 of Dummit an Foote Problems 1. DF 7.5. Solution: This problem is trivial knowing how to work with universal properties.

More information

Quantum Mechanics in Three Dimensions

Quantum Mechanics in Three Dimensions Physics 342 Lecture 20 Quantum Mechanics in Three Dimensions Lecture 20 Physics 342 Quantum Mechanics I Monay, March 24th, 2008 We begin our spherical solutions with the simplest possible case zero potential.

More information

Proof of SPNs as Mixture of Trees

Proof of SPNs as Mixture of Trees A Proof of SPNs as Mixture of Trees Theorem 1. If T is an inuce SPN from a complete an ecomposable SPN S, then T is a tree that is complete an ecomposable. Proof. Argue by contraiction that T is not a

More information

Closed and Open Loop Optimal Control of Buffer and Energy of a Wireless Device

Closed and Open Loop Optimal Control of Buffer and Energy of a Wireless Device Close an Open Loop Optimal Control of Buffer an Energy of a Wireless Device V. S. Borkar School of Technology an Computer Science TIFR, umbai, Inia. borkar@tifr.res.in A. A. Kherani B. J. Prabhu INRIA

More information

Lecture 2: Correlated Topic Model

Lecture 2: Correlated Topic Model Probabilistic Moels for Unsupervise Learning Spring 203 Lecture 2: Correlate Topic Moel Inference for Correlate Topic Moel Yuan Yuan First of all, let us make some claims about the parameters an variables

More information

Lecture 2 Lagrangian formulation of classical mechanics Mechanics

Lecture 2 Lagrangian formulation of classical mechanics Mechanics Lecture Lagrangian formulation of classical mechanics 70.00 Mechanics Principle of stationary action MATH-GA To specify a motion uniquely in classical mechanics, it suffices to give, at some time t 0,

More information

Computing Derivatives

Computing Derivatives Chapter 2 Computing Derivatives 2.1 Elementary erivative rules Motivating Questions In this section, we strive to unerstan the ieas generate by the following important questions: What are alternate notations

More information

An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback

An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback Journal of Machine Learning Research 8 07) - Submitte /6; Publishe 5/7 An Optimal Algorithm for Banit an Zero-Orer Convex Optimization with wo-point Feeback Oha Shamir Department of Computer Science an

More information

Logarithmic spurious regressions

Logarithmic spurious regressions Logarithmic spurious regressions Robert M. e Jong Michigan State University February 5, 22 Abstract Spurious regressions, i.e. regressions in which an integrate process is regresse on another integrate

More information

Unit vectors with non-negative inner products

Unit vectors with non-negative inner products Unit vectors with non-negative inner proucts Bos, A.; Seiel, J.J. Publishe: 01/01/1980 Document Version Publisher s PDF, also known as Version of Recor (inclues final page, issue an volume numbers) Please

More information

On the Aloha throughput-fairness tradeoff

On the Aloha throughput-fairness tradeoff On the Aloha throughput-fairness traeoff 1 Nan Xie, Member, IEEE, an Steven Weber, Senior Member, IEEE Abstract arxiv:1605.01557v1 [cs.it] 5 May 2016 A well-known inner boun of the stability region of

More information

Conservation Laws. Chapter Conservation of Energy

Conservation Laws. Chapter Conservation of Energy 20 Chapter 3 Conservation Laws In orer to check the physical consistency of the above set of equations governing Maxwell-Lorentz electroynamics [(2.10) an (2.12) or (1.65) an (1.68)], we examine the action

More information

Convergence of Random Walks

Convergence of Random Walks Chapter 16 Convergence of Ranom Walks This lecture examines the convergence of ranom walks to the Wiener process. This is very important both physically an statistically, an illustrates the utility of

More information

Regular tree languages definable in FO and in FO mod

Regular tree languages definable in FO and in FO mod Regular tree languages efinable in FO an in FO mo Michael Beneikt Luc Segoufin Abstract We consier regular languages of labele trees. We give an effective characterization of the regular languages over

More information

Diagonalization of Matrices Dr. E. Jacobs

Diagonalization of Matrices Dr. E. Jacobs Diagonalization of Matrices Dr. E. Jacobs One of the very interesting lessons in this course is how certain algebraic techniques can be use to solve ifferential equations. The purpose of these notes is

More information

Definitions and Proofs

Definitions and Proofs Giving Advice vs. Making Decisions: Transparency, Information, and Delegation Online Appendix A Definitions and Proofs A. The Informational Environment The set of states of nature is denoted by = [, ],

More information

Similarity Measures for Categorical Data A Comparative Study. Technical Report

Similarity Measures for Categorical Data A Comparative Study. Technical Report Similarity Measures for Categorical Data A Comparative Stuy Technical Report Department of Computer Science an Engineering University of Minnesota 4-92 EECS Builing 200 Union Street SE Minneapolis, MN

More information