A Fully Sequential Elimination Procedure for Indifference-zone Ranking and Selection with Tight Bounds on Probability of Correct Selection


Peter I. Frazier
School of Operations Research and Information Engineering, Cornell University, Ithaca, NY 14853, pf98@cornell.edu

We consider the indifference-zone (IZ) formulation of the ranking and selection problem with independent normal samples. In this problem, we must use stochastic simulation to select the best among several noisy simulated systems, with a statistical guarantee on solution quality. Existing IZ procedures sample excessively in problems with many alternatives, in part because loose bounds on probability of correct selection lead them to deliver solution quality much higher than requested. Consequently, existing IZ procedures are seldom considered practical for problems with more than a few hundred alternatives. To overcome this, we present a new sequential elimination IZ procedure, called BIZ (Bayes-inspired Indifference Zone), whose lower bound on worst-case probability of correct selection in the preference zone is tight in continuous time, and nearly tight in discrete time. To the author's knowledge, this is the first sequential elimination procedure with tight bounds on worst-case preference-zone probability of correct selection for more than two alternatives. Theoretical results for the discrete-time case assume that variances are known and have an integer-multiple structure, but the BIZ procedure itself can be used when these assumptions are not met. In numerical experiments, the sampling effort used by BIZ is significantly smaller than that of another leading IZ procedure, the KN procedure of Kim and Nelson (2001), especially on the largest problems tested (2^14 = 16,384 alternatives).

1. Introduction

In the use of simulation, one commonly encounters the problem of selecting the best among several simulated systems, e.g., selecting the method for operating a supply chain with minimum average cost, or selecting the configuration of an assembly line with maximum throughput. The higher-level problem of deciding how many simulation samples to take from each system to best support this selection of the best is called the ranking and selection (R&S) problem. Doing well in R&S requires balancing the amount of time spent sampling against the quality of the ultimate selection.

We consider the indifference-zone (IZ) formulation of the R&S problem, in which we wish to correctly select the best alternative with a probability exceeding a user-specified target, whenever the best alternative is sufficiently separated from the others. A sampling procedure having this property is said to satisfy the IZ guarantee. This problem has a rich history, dating to the seminal work of Bechhofer (1954), with early work summarized in the monograph of Bechhofer et al. (1968). Research in the area has been active since that time (see, e.g., Paulson (1964), Fabian (1974), Rinott (1978), Hartmann (1988), Paulson (1994), Nelson et al. (2001), Goldsman et al. (2002), Hong (2006), Andradóttir and Kim (2010)). This large body of research is summarized in Bechhofer et al. (1995), and more recent work is reviewed in Swisher et al. (2003) and Kim and Nelson (2006, 2007).

The goal in designing IZ sampling procedures is to take as few samples as possible while still satisfying the IZ guarantee. While early IZ procedures introduced in Bechhofer (1954), Paulson (1964), Fabian (1974), Rinott (1978), Hartmann (1988, 1991), and Paulson (1994) satisfy the IZ guarantee, they provide a probability of correct selection (PCS) much larger than the user-specified target probability. This is due in part to their use of the Bonferroni inequality, which leads to loose theoretical PCS bounds, and is problematic because it leads them to sample more than necessary. This issue becomes more severe as the number of alternatives grows. More recent procedures developed in Kim and Nelson (2001), Goldsman et al. (2002), and Hong (2006) have better performance, but these procedures continue to use the Bonferroni inequality, causing them to be overly conservative in large problems, in the sense that they sample more than necessary and over-deliver on PCS targets (Branke et al. 2007). Recent procedures in Kim and Dieker (2011) and Dieker and Kim (2012) avoid the Bonferroni inequality when comparing groups of three alternatives, but again require it for more than three alternatives.

In this paper, we develop the Bayes-inspired IZ (BIZ) procedure, a fully sequential elimination procedure that satisfies the IZ guarantee (given assumptions on the sampling variances), is less conservative than existing IZ procedures, and samples less as a consequence.

This procedure does not use the Bonferroni inequality, instead relying on a novel symmetry argument grounded in Bayesian analysis. We assume independent normal samples, and present versions for both continuous and discrete time. The continuous-time BIZ procedure has a tight bound on worst-case preference-zone PCS: the PCS of the least-favorable configuration is exactly equal to the target probability. This is the first sequential elimination procedure with this property for more than two alternatives. The worst-case preference-zone PCS of the discrete-time BIZ procedure is shown in numerical experiments to be extremely close to the target PCS, even for as many as 2^14 = 16,384 alternatives. The discrete-time BIZ procedure generalizes the non-elimination procedure P_B introduced by Bechhofer et al. (1968), and the tight worst-case preference-zone PCS bound presented here also applies to a continuous-time version of P_B.

Our theoretical results (the IZ guarantee, and tightness of the worst-case preference-zone PCS bound) assume that the sampling variances are known, and are either common across alternatives or are integer multiples of a common divisor. The BIZ procedure itself, however, allows both known and unknown sampling variances with arbitrary values, and numerical results suggest that the procedure's performance is robust to deviations of the sampling variances from the structure assumed by the theoretical results.

Although our bound on worst-case preference-zone PCS is tight in continuous time, and nearly tight in discrete time, BIZ's PCS under configurations that are not least favorable can be strictly larger than the target. Thus, in these other configurations, BIZ also over-delivers on PCS. Furthermore, Wang and Kim (2011) show that, for a variant of Paulson's procedure, the contribution to over-delivery from the Bonferroni inequality is smaller than that from the requirement that PCS be no smaller than the target for slippage configurations in the preference zone. However, violating this slippage-configuration requirement would violate the IZ guarantee itself, while our results show that the Bonferroni inequality can be avoided while retaining the IZ guarantee.

Numerical experiments demonstrate that, across a variety of configurations, BIZ's over-delivery is much less than that of a leading IZ procedure, the KN procedure of Kim and Nelson (2001).

The KN family of procedures might be considered state-of-the-art for IZ R&S (Branke et al. 2007), and has been shown to be highly efficient compared to other existing IZ procedures (Malone et al. 2005, Wang and Kim 2011). Thanks to reduced over-delivery, BIZ requires fewer samples than KN on a variety of problems.

Although the PCS bounds presented in this paper are non-Bayesian, the BIZ procedure is derived using a Bayesian approach. This derivation employs a Bayesian prior concentrated on slippage configurations. The proof techniques are reminiscent of results on the relationship between minimax and Bayesian analysis from decision theory (see, e.g., Berger (1985)). Thus, this work connects the IZ formulation with the Bayesian formulation of R&S (see, e.g., Gupta and Miescke (1996), Chick and Inoue (2001), Frazier et al. (2008, 2009), Frazier and Powell (2008), Chick et al. (2010)).

We begin in Section 2 by formally stating the IZ formulation of the R&S problem. We then introduce the BIZ procedure for discrete time in Section 3, first assuming a known common sampling variance in Section 3.1, and then allowing the sampling variances to be heterogeneous and unknown in Section 3.2. Our theoretical results, first that the BIZ procedure satisfies the IZ guarantee when the variance is known and is either common across alternatives or has an integer-multiple structure, and second that it has tight worst-case preference-zone PCS bounds in continuous time, are given in Section 4. To support this analysis, a continuous-time generalization of the discrete-time BIZ procedure is also given. Section 5 gives numerical results, including a comparison with the KN procedure of Kim and Nelson (2001) and the P_B procedure of Bechhofer et al. (1968).

2. Indifference-zone Ranking and Selection

We have k alternative simulated systems, among which we would like to select the best. Samples from system x ∈ {1,...,k} are normally distributed and independent, over time and across alternatives. Let µ_x and σ²_x be the mean and variance of this sampling distribution. Let µ = (µ_1,...,µ_k) and σ² = (σ²_1,...,σ²_k) be the corresponding vectors of sampling means and variances. Together, the pair (µ, σ²) is referred to as a system configuration. Our goal is to observe samples sequentially over time to find which alternative is the best, in the sense of having the largest µ_x.

Let t = 0, 1, 2, ... index time, and let Y_tx be the sum of the samples observed from alternative x by time t, so that (Y_tx : t = 0, 1, ...) is a discrete-time random walk with N(µ_x, σ²_x) increments and Y_0x = 0. For any set A ⊆ {1,...,k}, let Y_tA be the vector (Y_tx : x ∈ A), and let Y_t be the vector Y_t = (Y_t1,...,Y_tk).

Any R&S procedure observes samples over time, either adaptively or deterministically, choosing either to sample all of the alternatives at each time, or only a subset. Based on these samples, the procedure eventually stops sampling and selects an alternative as its estimate of the best. Call the selected alternative x̂. The goal in designing an R&S procedure is to take as few samples as possible, while still accurately selecting the best alternative.

We now define the indifference-zone guarantee, which is a statistical guarantee on the quality of the solution produced by an R&S procedure. First, we define the probability of correct selection as
$$ \mathrm{PCS}(\mu, \sigma^2) = P_{\mu,\sigma^2}\left\{ \hat{x} \in \arg\max_x \mu_x \right\}, $$
where P_{µ,σ²} is the probability measure under which samples from system x have mean µ_x and variance σ²_x. In the common-variance case, when σ²_x = σ² for all x, we abuse notation slightly and write PCS(µ, σ²) and P_{µ,σ²} with the scalar σ² in place of the vector of variances.

Then, we define the preference zone (PZ), parameterized by δ > 0, to be the set
$$ \mathrm{PZ}(\delta) = \left\{ \mu \in \mathbb{R}^k : \mu_{[k]} - \mu_{[k-1]} \ge \delta \right\}, $$
where µ_[k] ≥ µ_[k−1] ≥ ... ≥ µ_[1] are the sorted components of µ. This is the set of system configurations under which the best alternative is better than the second best by at least δ. The complement of the preference zone is called the indifference zone, and is the set of system configurations in which we are indifferent between the best and second-best alternatives. Then, a procedure meets the indifference-zone (IZ) guarantee at P* ∈ (1/k, 1) and δ > 0 if
$$ \mathrm{PCS}(\mu, \sigma^2) \ge P^* \quad \text{for all } \mu \in \mathrm{PZ}(\delta). $$
We assume that P* > 1/k because IZ guarantees for smaller values of P* can be met by choosing x̂ uniformly at random from {1,...,k} without observing any samples. In this definition, whether a procedure meets the IZ guarantee depends upon σ², although procedures that satisfy the IZ guarantee are usually designed to do so for all σ² ∈ R^k_{++}.
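To make these definitions concrete, the following is a minimal added sketch (not from the paper; the `procedure` interface and function names are illustrative assumptions) of the sampling model, a preference-zone membership test, and a Monte Carlo estimate of PCS for an arbitrary selection procedure.

```python
# Minimal sketch of the sampling model and of PZ(delta) / PCS(mu, sigma2).
# Illustrative only: the `procedure` interface and names are not from the paper.
import numpy as np

def in_preference_zone(mu, delta):
    """True if the best mean exceeds the second best by at least delta."""
    mu_sorted = np.sort(mu)
    return mu_sorted[-1] - mu_sorted[-2] >= delta

def estimate_pcs(procedure, mu, sigma2, n_reps=1000, seed=0):
    """Monte Carlo estimate of PCS: the fraction of replications in which
    `procedure(sample)` returns the index of a best alternative, where
    `sample(x)` draws one N(mu_x, sigma2_x) observation."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    sd = np.sqrt(np.broadcast_to(np.asarray(sigma2, dtype=float), mu.shape))
    best = {int(i) for i in np.flatnonzero(mu == mu.max())}
    correct = 0
    for _ in range(n_reps):
        sample = lambda x: rng.normal(mu[x], sd[x])
        correct += int(procedure(sample)) in best
    return correct / n_reps
```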

3. The Bayes-inspired IZ (BIZ) Procedure

In this section we define the Bayes-inspired IZ procedure (BIZ) and summarize the theoretical results shown later in Section 4. This procedure is developed using a Bayesian motivation, although the PCS bounds and IZ guarantee that we show are non-Bayesian. We first define a version for known common variance in Section 3.1, and then generalize to unknown and/or heterogeneous variances in Section 3.2. The BIZ procedure as described in this section operates in discrete time. In support of theoretical analysis, Section 4 provides generalizations of this discrete-time procedure that may operate in discrete or continuous time.

3.1. The BIZ Procedure with Known Common Variance

We first define the Bayes-inspired indifference-zone (BIZ) procedure for the case of common known variance, when the σ²_x are all equal to the constant σ². In Section 4, we show that this procedure satisfies the IZ guarantee, with tight bounds on worst-case preference-zone PCS in continuous time. Below, in Section 3.2, we generalize to heterogeneous and/or unknown sampling variances.

BIZ is an elimination procedure. It maintains a list of alternatives that are in contention, and at each point in time it takes one sample from each alternative in this set. Initially, all alternatives are in contention, and over time, as samples are observed, alternatives are eliminated. Once an alternative is eliminated, it may not come back into contention, and will not be considered for selection when sampling stops. When all but one alternative has been eliminated, this remaining alternative is selected as the best. In contrast with non-elimination procedures, which sample every alternative at every time, elimination procedures may eliminate bad alternatives quickly to reduce sampling effort.

BIZ is parameterized by the values P* ∈ (1/k, 1) and δ > 0 for which we desire an IZ guarantee, and a parameter c satisfying
$$ c \in \left[0,\ 1 - (P^*)^{1/(k-1)}\right] \ \text{ if } k > 2, \qquad c = 0 \ \text{ if } k = 2. \tag{1} $$
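As a quick added illustration of the range in (1) (numbers chosen for concreteness, not taken from the paper): with k = 100 alternatives and target P* = 0.9, the upper end of the range is
$$ 1 - (P^*)^{1/(k-1)} = 1 - 0.9^{1/99} \approx 0.00106, $$
so for large k an alternative is eliminated only once the procedure judges it very unlikely to be the best.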

The parameter c determines how aggressively we eliminate alternatives, and its choice is discussed below in Section 3.3. We recommend setting it to its maximum value, 1 − (P*)^{1/(k−1)}. The procedure also depends on the sampling variances σ²_x = σ². For each t, x ∈ {1,...,k}, and subset A ⊆ {1,...,k}, we define a function
$$ q_{tx}(A) = \frac{\exp\left(\delta Y_{tx}/\sigma^2\right)}{\sum_{x' \in A} \exp\left(\delta Y_{tx'}/\sigma^2\right)}. \tag{2} $$
In Section 4, this expression is shown to be equal to a Bayesian posterior probability that alternative x is the best, given Y_t, and given that the best is in the subset of alternatives A. The BIZ procedure for known common sampling variance is then defined by Algorithm 1.

Algorithm 1: BIZ for known common sampling variance, in discrete time
Require: c ∈ [0, 1 − (P*)^{1/(k−1)}], δ > 0, P* ∈ (1/k, 1), common sampling variance σ² > 0.
1: Let A ← {1,...,k}, t ← 0, P̂ ← P*, Y_0x ← 0 for each x.
2: while max_{x∈A} q_tx(A) < P̂ do
3:   while min_{x∈A} q_tx(A) ≤ c do
4:     Let x' ∈ argmin_{x∈A} q_tx(A).
5:     Let P̂ ← P̂ / (1 − q_tx'(A)).
6:     Remove x' from A.
7:   end while
8:   Sample from each x ∈ A and add this sample to Y_tx to obtain Y_{t+1,x}. Then increment t.
9: end while
10: Select x̂ ∈ argmax_{x∈A} Y_tx as our estimate of the best.

The set A is the set of alternatives in contention, and is initially set to contain all of the alternatives in Step 1. Alternatives can be eliminated either in Step 6, or by the final selection in Step 10, which effectively eliminates all remaining alternatives except the one selected. These eliminations are performed based on the current value of (q_tx(A) : x ∈ A) and an adaptively updated threshold P̂.
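A minimal Python sketch of Algorithm 1 follows. It is an illustrative reading of the pseudocode, not a reference implementation from the paper; the function name and the `sample(x)` interface are assumptions.

```python
# Sketch of Algorithm 1 (BIZ, known common variance, discrete time).
# Assumption: `sample(x)` returns one N(mu_x, sigma2) draw; names are illustrative.
import numpy as np

def biz_known_common_variance(sample, k, delta, p_star, sigma2, c=None):
    if c is None:                                   # recommended maximum value of c
        c = 0.0 if k == 2 else 1.0 - p_star ** (1.0 / (k - 1))
    A = list(range(k))                              # alternatives in contention
    y = np.zeros(k)                                 # Y_tx: running sums of samples
    p_hat = p_star                                  # adaptive upper threshold
    while True:
        # eq. (2): q_tx(A) as a softmax of delta * Y_tx / sigma^2, computed stably
        s = delta * y[A] / sigma2
        q = np.exp(s - s.max()); q /= q.sum()
        if q.max() >= p_hat:                        # Step 2 fails: stop and select (Step 10)
            return A[int(np.argmax(y[A]))]
        while q.min() <= c and len(A) > 1:          # Steps 3-7: eliminate weakest alternatives
            i = int(np.argmin(q))
            p_hat /= 1.0 - q[i]                     # Step 5: raise the upper threshold
            A.pop(i)                                # Step 6: remove from contention
            s = delta * y[A] / sigma2
            q = np.exp(s - s.max()); q /= q.sum()
        for x in A:                                 # Step 8: one more sample per survivor
            y[x] += sample(x)
```

The softmax is evaluated after subtracting the largest exponent, which leaves q_tx(A) unchanged but avoids numerical overflow when δY_tx/σ² is large.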

The threshold P̂, which is initially set to P*, can be interpreted as a Bayesian posterior probability of selecting the best that we must achieve to stop sampling.

Motivation: We motivate the BIZ procedure as follows. First, consider elimination resulting from exiting the outer while loop in Step 2 and going to Step 10. Recall that q_tx(A) can be interpreted, in a Bayesian setting, as the posterior probability that x is the best (given that the best is in our contention set). The quantity P̂ is a threshold on the probability of correctly selecting the best that we must achieve to stop sampling. In Step 2, if an alternative exceeds this threshold P̂, then we exit the loop and select it as best in Step 10.

Now, consider elimination resulting from entering the inner while loop in Step 3 and going to Step 7. The quantity min_{x∈A} q_tx(A) is the posterior probability for the alternative in contention that is least likely to be the best. The inner while loop, Step 3, checks whether this minimal posterior probability is below the threshold c, and if it is, eliminates this alternative by removing it from A in Steps 4 to 7. In addition to removing this alternative from A, the threshold P̂ is increased, to account for the fact that we may have incorrectly removed the best alternative from A, and should strengthen the criterion that we must meet in Step 2 to stop. (For example, if P̂ = 0.9 and the eliminated alternative had q_tx(A) = 0.04, the threshold rises to 0.9/0.96 ≈ 0.9375.)

The behavior of this algorithm is illustrated in Figure 1. The example illustrated has k = 4 alternatives. The lower threshold c is plotted as a horizontal line, and the upper threshold P̂ is plotted as a line with jumps. The posterior probability q_tx(A) for each alternative x is also plotted versus time t. The figure uses the additional notation τ_n to indicate the time at which the nth elimination occurs, and Z_n to indicate the alternative eliminated. Starting from time t = 0, the contention set A contains all 4 alternatives and we plot q_tx(A) for each. At time τ_1, min_{x∈A} q_tx(A) hits the lower threshold c and the alternative Z_1 achieving this minimum is eliminated. The values q_tx(A) jump at this time, as an alternative is removed from A. This jump is small for the two alternatives with small posterior probabilities q_tx(A), but is larger for the one alternative with a higher value. The threshold P̂ also jumps. Moving forward from time τ_1, three alternatives remain in A, and we plot q_tx(A) for each. A second alternative is eliminated at time τ_2 when its posterior probability q_tx(A) hits the threshold c.

[Figure 1 appears here: the posterior probabilities q_tx(A_n) and the upper threshold P̂_n plotted against time t, with the lower threshold c shown as a horizontal line, the elimination times marked "(eliminate Z_1)" and "(eliminate Z_2)", and the stopping time marked "Final selection".]

Figure 1: Illustration of the BIZ procedure with k = 4 alternatives. The BIZ procedure follows the posterior probabilities q_tx(A) over time, eliminating alternatives as they hit the lower threshold. Each time an alternative is eliminated, the upper threshold jumps upward. Eventually, an alternative reaches the upper threshold and is selected as the best. (At the same time, the largest posterior probability comes very close to the upper threshold, but does not hit it.)

After time τ_2, posterior probabilities q_tx(A) are plotted for the two remaining alternatives, until an alternative meets the upper threshold P̂ (marked "Final selection"). At this time, the alternative whose q_tx(A) hits the upper threshold is selected as best, and sampling stops.

3.2. The BIZ Procedure with Heterogeneous and Unknown Sampling Variances

While Section 3.1 assumed a common known sampling variance σ²_x = σ², sampling variances are often heterogeneous and unknown in practice. In this section, we generalize the BIZ procedure to handle heterogeneous sampling variances, in both variance-known and variance-unknown settings. When the variances are known and are integer multiples of a common value, this procedure retains the IZ guarantee of the known common-variance BIZ procedure. The continuous-time version of this procedure presented in Section 4.5 also retains the IZ guarantee, with a tight worst-case preference-zone PCS bound. However, in discrete time, when the variances are unknown or lack an integer-multiple structure, we do not have a proof that it satisfies the IZ guarantee. Instead, in this setting, we present the procedure as a heuristic.

The discrete-time BIZ procedure for unknown and/or heterogeneous sampling variances is given below in Alg. 2. Rather than taking only one sample from each alternative in contention for each increment in t, Alg. 2 takes a variable number, storing the number of samples taken as n_tx. We let Z_tx = Y_{n_tx,x} be the sum of all of these samples. The algorithm also maintains an adaptive estimate σ̂²_tx of the sampling variance for alternative x, and is designed to keep n_tx approximately proportional to σ̂²_tx.

Alg. 2 accepts additional parameters beyond those accepted by Alg. 1: an integer n_0, and a collection of integers B_1,...,B_k. Here n_0 is the number of samples to use in a first stage of sampling, for which we recommend the value 100. The parameter B_x governs the number of samples taken from alternative x in each stage. In practice we recommend setting B_x to 1, and we leave it as a free parameter because doing so supports the theoretical analysis in Section 4.5.

Alg. 2 can also be used when the sampling variances are known. In this case, we set n_0 = 0 and replace the estimators σ̂²_tx in Alg. 2 with their known values. If the variances are known and identical, and we set B_1,...,B_k to their recommended values of 1, we recover Alg. 1.

Alg. 2 uses the quantity q̂_tx(A), defined here in terms of another quantity β_t:
$$ \hat{q}_{tx}(A) = \frac{\exp\left(\delta \beta_t Z_{tx}/n_{tx}\right)}{\sum_{x' \in A} \exp\left(\delta \beta_t Z_{tx'}/n_{tx'}\right)}, \qquad \beta_t = \frac{\sum_{x' \in A} n_{tx'}}{\sum_{x' \in A} \hat{\sigma}^2_{tx'}}. \tag{3} $$

Motivation: We motivate Alg. 2 as follows. Consider what would happen if n_tx were exactly proportional to σ̂²_tx and our estimate σ̂²_tx = σ²_x were perfect, so that n_tx = λσ²_x t for some λ > 0. Then consider the stochastic process Y' = (Y'_tx : t = 0, 1, 2, ...), where Y'_tx = Z_tx/(λσ²_x). A straightforward computation shows that this stochastic process is a random walk whose increments are normal with mean µ_x and variance 1/λ. This variance does not depend on x, so to find argmax_x µ_x we may use a common-variance R&S procedure, such as Alg. 1. Alg. 2 is derived by running Alg. 1 on Y' if these idealized conditions are met, or on an approximation to it if they are not. This approximation is Y'_tx ≈ Z_tx/(n_tx/t). Applying (2), but with this approximation of Y'_tx in place of Y_tx and with the variance 1/λ of the increments of Y'_tx in place of σ² (estimating λt by β_t), provides (3).

When, as in the motivating situation described above, σ̂²_tx = σ²_x and n_tx = λσ²_x t for some λ > 0, we have β_t = λt = n_tx/σ²_x, the terms cancel, and Y'_tx is exactly equal to its approximation. Below, in Section 4.5, we analyze special cases in which this occurs and show that, in these situations, Alg. 2 satisfies the IZ guarantee, and does so with a tight bound on worst-case preference-zone PCS in continuous time.

Algorithm 2: Discrete-time implementation of BIZ, for unknown and/or heterogeneous variances
Require: c ∈ [0, 1 − (P*)^{1/(k−1)}], δ > 0, P* ∈ (1/k, 1), an integer n_0 ≥ 0, and strictly positive integers B_1,...,B_k. Recommended choices are c = 1 − (P*)^{1/(k−1)}, B_1 = ... = B_k = 1, and n_0 = 100. If the sampling variances σ²_x are known, replace the estimators σ̂²_tx with the true values σ²_x and set n_0 = 0. To compute q̂_tx(A), use (3).
1: For each x, sample alternative x n_0 times and set n_0x ← n_0. Let Z_0x and σ̂²_0x be the sum and sample variance, respectively, of these samples. Let t ← 0.
2: Let A ← {1,...,k}, P̂ ← P*, t ← 1.
3: while max_{x∈A} q̂_tx(A) < P̂ do
4:   while min_{x∈A} q̂_tx(A) ≤ c do
5:     Let x' ∈ argmin_{x∈A} q̂_tx(A).
6:     Let P̂ ← P̂ / (1 − q̂_tx'(A)).
7:     Remove x' from A.
8:   end while
9:   Let z ∈ argmin_{x∈A} n_tx / σ̂²_tx.
10:  For each x ∈ A, let n_{t+1,x} = ⌈σ̂²_tx (n_tz + B_z) / σ̂²_tz⌉.
11:  For each x ∈ A, if n_{t+1,x} > n_tx, take n_{t+1,x} − n_tx additional samples from alternative x. Let Z_{t+1,x} and σ̂²_{t+1,x} be the sum and sample variance, respectively, of all samples from alternative x thus far.
12:  Increment t.
13: end while
14: Select x̂ ∈ argmax_{x∈A} Z_tx / n_tx as our estimate of the best.
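The following Python sketch mirrors Algorithm 2 for the unknown-variance case with n_0 ≥ 2 (it does not cover the known-variance variant with n_0 = 0). As with the earlier sketch, the `sample(x)` interface and the function name are illustrative assumptions rather than code from the paper.

```python
# Sketch of Algorithm 2 (BIZ with unknown and/or heterogeneous variances, discrete time).
# Assumptions: `sample(x)` returns one noisy sample of system x; n0 >= 2 so that
# sample variances are defined. Names and interface are illustrative.
import math
import numpy as np

def biz_unknown_variance(sample, k, delta, p_star, n0=100, B=None, c=None):
    if c is None:
        c = 0.0 if k == 2 else 1.0 - p_star ** (1.0 / (k - 1))
    if B is None:
        B = [1] * k                                   # recommended B_x = 1
    data = [[sample(x) for _ in range(n0)] for x in range(k)]   # first stage
    n = np.full(k, float(n0))                         # n_tx: samples taken so far
    Z = np.array([sum(d) for d in data])              # Z_tx: running sums
    s2 = np.array([np.var(d, ddof=1) for d in data])  # sigma^2-hat estimates
    A = list(range(k))
    p_hat = p_star

    def q_hat(A):
        # eq. (3): beta_t and the softmax of delta * beta_t * Z_tx / n_tx over A
        beta = n[A].sum() / s2[A].sum()
        s = delta * beta * Z[A] / n[A]
        e = np.exp(s - s.max())
        return e / e.sum()

    while True:
        q = q_hat(A)
        if q.max() >= p_hat:                          # stop: select the largest sample mean
            return A[int(np.argmax(Z[A] / n[A]))]
        while q.min() <= c and len(A) > 1:            # eliminate weakest alternatives
            i = int(np.argmin(q))
            p_hat /= 1.0 - q[i]
            A.pop(i)
            q = q_hat(A)
        z = min(A, key=lambda x: n[x] / s2[x])        # Step 9: least sampled relative to its variance
        targets = {x: math.ceil(s2[x] * (n[z] + B[z]) / s2[z]) for x in A}  # Step 10
        for x in A:                                   # Step 11: sample up to the target
            while n[x] < targets[x]:
                data[x].append(sample(x))
                n[x] += 1
            Z[x] = sum(data[x])
            s2[x] = np.var(data[x], ddof=1)
```

If the variances are known, the paper's recommendation is to set n_0 = 0 and use the known values in place of the estimators; the sketch above would need that minor modification.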

3.3. BIZ Recovers P_B as a Special Case

When c = 0 and the variances are known and common, the discrete-time BIZ procedure is equivalent to the non-elimination procedure P_B introduced by Bechhofer et al. (1968). This can be seen as follows. Because Y_tx is almost surely finite at any fixed t, q_tx(A_0) > 0 almost surely. Thus, in Step 3 of Alg. 1, min_{x∈A} q_tx(A) > 0 = c, the loop from Steps 4 to 7 will never execute, and A = {1,...,k} and P̂ = P*. The resulting procedure is then no longer an elimination procedure, and takes one sample from every alternative at each point in time. It stops and selects the alternative with the largest sample mean at the first time t for which max_{x=1,...,k} q_tx({1,...,k}) ≥ P*. This is exactly the P_B procedure of Bechhofer et al. (1968).

The parameter c determines the trade-off between the number of stages of sampling and the overall number of samples taken. When c = 0 the procedure does no elimination. When c > 0, the procedure eliminates alternatives, with larger c causing more aggressive elimination, decreasing the number of samples and increasing the number of stages. In highly parallel computing environments, and in some biological and agricultural applications, one can evaluate many alternatives simultaneously, and the focus is on minimizing the number of stages. In simulation, however, when the number of alternatives is large compared to the parallelism in one's computing environment, the focus is on minimizing the number of samples taken. In Section 5 we compare the number of samples taken by BIZ with c at its maximum value, 1 − (P*)^{1/(k−1)}, to the number taken by P_B (which is BIZ with c at its minimum value, 0). Setting c = 1 − (P*)^{1/(k−1)} dramatically reduces the expected number of samples taken, particularly when some alternatives are much worse than others. For use in simulation in a non-parallel setting, we recommend c = 1 − (P*)^{1/(k−1)}.
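As an added illustration of this trade-off (not from the paper), the short script below reuses the biz_known_common_variance sketch from Section 3.1 together with a hypothetical counting wrapper to compare the total number of samples taken with c = 0 (the P_B special case) against c at its maximum value, on a small slippage configuration.

```python
# Illustrative comparison of c = 0 (no elimination, i.e. P_B) versus the maximum
# value of c, reusing the biz_known_common_variance sketch from Section 3.1.
import numpy as np

rng = np.random.default_rng(0)
k, delta, sigma2, p_star = 16, 1.0, 100.0, 0.9
mu = np.zeros(k); mu[0] = delta                        # slippage configuration

def run(c):
    count = [0]                                        # total samples taken
    def sample(x):
        count[0] += 1
        return rng.normal(mu[x], np.sqrt(sigma2))
    best = biz_known_common_variance(sample, k, delta, p_star, sigma2, c=c)
    return best, count[0]

for label, c in [("P_B (c = 0)", 0.0), ("BIZ (c = c_max)", 1.0 - p_star ** (1.0 / (k - 1)))]:
    best, total = run(c)
    print(f"{label}: selected {best}, total samples {total}")
```

A single replication is random, so a fair comparison averages over many replications, as in the experiments of Section 5.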

4. Theoretical Analysis

In this section, we present our theoretical results: that IZ guarantees hold for Alg. 1 and, if the variances are known and have a special structure, for Alg. 2; and that continuous-time generalizations of these procedures have tight bounds on worst-case preference-zone PCS.

First, we present a generalization of the R&S problem that allows both continuous-time and discrete-time sampling in Section 4.1. Then, we consider the setting with common known variance: Section 4.2 generalizes Alg. 1 to the continuous-time setting; Section 4.3 presents preliminary definitions and results; and Section 4.4 presents the main theoretical results for the common known variance setting. Section 4.5 considers the setting with heterogeneous known variance, presenting first a continuous-time analogue of Alg. 2, and then theoretical results for both this continuous-time analogue and Alg. 2 itself.

4.1. Generalization to Continuous Time: Observation Process

Although the R&S problem occurs in discrete time in practice, our theoretical analysis relies on a generalization in which observations occur in continuous time, while decisions about eliminating alternatives or stopping to select the best are made at any of a predetermined set of decision points. When this set of decision points is the non-negative integers Z_+ = {0, 1, 2, ...}, we recover the discrete-time BIZ procedure. When it is the non-negative reals R_+ = [0, ∞), we obtain a continuous-time version of BIZ, which we later show has tight worst-case bounds on preference-zone PCS.

Recall from Section 2 that, in discrete time and under P_{µ,σ²}, the sum of all samples from alternative x by time t, given by the stochastic process (Y_tx : t ∈ Z_+), is a discrete-time random walk with N(µ_x, σ²_x) increments. We generalize this by letting (Y_tx : t ∈ R_+) be a Brownian motion under P_{µ,σ²} starting from 0, with drift µ_x and variance σ²_x per unit time, independent across x. This is consistent with the previous definition of Y_tx at integer times t, since (Y_tx : t ∈ Z_+) continues to be a discrete-time random walk with N(µ_x, σ²_x) increments. As before, for each A ⊆ {1,...,k} we let Y_tA = (Y_tx : x ∈ A) and Y_t = (Y_t1,...,Y_tk). We let F = (F_t : t ∈ R_+) be the filtration generated by (Y_t : t ∈ R_+). In the continuous-time setting, we assume that the variances are known. If they are not known, they can be estimated with perfect accuracy from a sample path (Y_tx : 0 ≤ t ≤ ε) for any ε > 0.

4.2. Generalization to Continuous Time: The BIZ Procedure with Common Variance

We now generalize the BIZ procedure for known common variance in discrete time (Alg. 1) to the continuous-time setting. In this section, we assume σ²_x = σ² for all x, with σ² known.

This generalized BIZ procedure includes a parameter T, which is the set of decision points, and is set to either R_+ or Z_+. Setting T = R_+ provides a continuous-time procedure, while setting T = Z_+ recovers the discrete-time procedure Alg. 1.

To define this generalized BIZ procedure, we recursively define a sequence of stopping times 0 = τ_0 ≤ τ_1 ≤ ... ≤ τ_{k−1} ≤ ∞, random variables Z_1,...,Z_{k−1} and P_0, P_1,...,P_{k−1}, and random sets A_0, A_1,...,A_{k−1}. We first define τ_0, A_0 and P_0 as
$$ \tau_0 = 0, \qquad P_0 = P^*, \qquad A_0 = \{1,\ldots,k\}. \tag{4a} $$
Then, for each n = 0, 1, ..., k−2, we define τ_{n+1}, Z_{n+1}, A_{n+1} and P_{n+1} recursively, given τ_n, A_n, and P_n, as
$$ \tau_{n+1} = \inf\left\{ t \in T \cap [\tau_n, \infty) : \min_{x \in A_n} q_{tx}(A_n) \le c \ \text{ or } \ \max_{x \in A_n} q_{tx}(A_n) \ge P_n \right\}, $$
$$ Z_{n+1} \in \arg\min_{x \in A_n} q_{\tau_{n+1},x}(A_n), \qquad A_{n+1} = A_n \setminus \{Z_{n+1}\}, \qquad P_{n+1} = \frac{P_n}{1 - \min_{x \in A_n} q_{\tau_{n+1},x}(A_n)}. \tag{4b} $$
Finally, with these quantities defined, x̂ is the single alternative in A_{k−1},
$$ \hat{x} \in A_{k-1}. \tag{4c} $$

In this definition, the times τ_1,...,τ_{k−1} are the times at which alternatives are eliminated, the random variables Z_1,...,Z_{k−1} are the alternatives eliminated at these times, and A_n is the set of alternatives in contention starting at time τ_n. The random variables P_0,...,P_{k−1} are thresholds that our posterior probability of being best must achieve to allow us to stop sampling. The times at which alternatives are eliminated have a particular structure: initially, elimination occurs because min_{x∈A_n} q_tx(A_n) ≤ c, i.e., because this posterior probability of being best fell below the lower threshold c. Eventually, though, an elimination occurs because max_{x∈A_n} q_tx(A_n) ≥ P_n, i.e., because an alternative's posterior probability of being best exceeded the upper threshold P_n. At this time, Lemma 6 below shows that all alternatives except one are eliminated simultaneously, τ_{n+1} = τ_{n+2} = ... = τ_{k−1}, and that the one alternative remaining in A_{k−1} (which is selected as best) is the one whose q_tx(A_n) satisfied q_tx(A_n) ≥ P_n at time t = τ_{n+1}.

We define a random variable M so that the time at which this occurs is τ_M, and the Mth through the (k−1)st eliminations occur simultaneously in this way.

When T = Z_+, the algorithm defined by (4) is identical to Alg. 1. We see this as follows. The stopping time τ_n is the value of t in Alg. 1 at the time when the nth alternative is eliminated, either explicitly in Step 6 (if n < M), or implicitly in Step 10 (if n ≥ M). For 1 ≤ n < M, at time τ_n we go through the inner while loop (Steps 3 to 7) for the nth time. The alternative chosen for elimination in Step 4 is Z_n, Step 5 takes P̂ from P_{n−1} to P_n, and Step 6 takes A from A_{n−1} to A_n. At time t = τ_M, the condition of the outer while loop checked in Step 2 fails to be satisfied, and Alg. 1 goes to Step 10 and selects argmax_{x∈A} Y_tx = argmax_{x∈A_{M−1}} Y_{τ_M,x}, which Lemma 6 below shows is the same as the selection x̂ made by the procedure defined by (4).

In (4b), the choice among the argmin set for Z_{n+1} does not affect the analysis, but we set Z_{n+1} to be the alternative with the smallest index in that set. When τ_{n+1} = ∞, we choose P_{n+1} = 1 and choose Z_{n+1} uniformly at random from A_n, although again this choice does not affect the analysis because later, in Lemma 4, we show that τ_{n+1} < ∞ almost surely under any P_{µ,σ²}. The claim that each τ_n is a stopping time is justified in Section 4.3, in Lemma 5.

This definition of BIZ in continuous time also provides an extension of P_B to continuous time, obtained by setting c = 0. The resulting procedure can be simplified, as is shown below in Lemma 7, to a procedure that samples from all alternatives, eliminating none, until a selection is made at time τ_1 = τ_2 = ... = τ_{k−1} of x̂ ∈ argmax_{x=1,...,k} Y_{τ_1,x}. This time can be written
$$ \tau_1 = \inf\left\{ t \in T : \max_x \frac{\exp(\delta Y_{tx}/\sigma^2)}{\sum_{x'=1}^{k} \exp(\delta Y_{tx'}/\sigma^2)} \ge P^* \right\}. $$
When T = Z_+, this is the original discrete-time P_B procedure of Bechhofer et al. (1968).

4.3. Preliminaries for the Proofs

In this section, we present definitions and preliminary results that support our main theoretical results for the common variance setting in Section 4.4. We continue to assume that σ²_x = σ² for all x, with σ² known.

We first construct a probability measure Q under which the vector of sampling means is chosen at random according to a prior distribution. To indicate that the vector of sampling means under Q is random, and may differ from the true vector of sampling means µ, we denote it by θ. We emphasize that Q is a mathematical construct that we use to analyze the BIZ procedure, and is different from the true sampling distribution P_{µ,σ²}. We construct Q as follows. Let X* be chosen uniformly at random from among 1,...,k, and let θ_{X*} = δ. Let θ_x = 0 for all x ≠ X*. Configurations in which θ_{X*} exceeds every other component by a common amount δ' > 0 are called slippage configurations in the R&S literature, and slippage configurations in which δ' = δ are often the most difficult configurations under which to select correctly.

We then define a family of probability measures that includes and generalizes Q. For each u ∈ R^k with u_[k] ≠ u_[k−1], define a probability measure Q_u as follows. First, let (R(1),...,R(k)) under Q_u be a uniformly distributed permutation of (1,...,k). Then, let θ_x = u_{R(x)} almost surely under Q_u, and let X* ∈ argmax_x θ_x. (This argmax is unique because u_[k] ≠ u_[k−1].) Given θ, we let each (Y_tx : t ∈ R_+) be an independent Brownian motion under Q_u with drift θ_x and variance σ² per unit time. Defining u = [δ, 0,...,0], we have Q = Q_u, so this definition generalizes the previously defined Q.

The following lemma provides an expression for the posterior probability that a specified alternative x_0 has the largest sampling mean, given the prior Q_u and partial information about the permutation R. Proofs of this and other lemmas may be found in the appendix.

Lemma 1. Suppose σ²_x = σ² > 0 for all x. Let A ⊆ {1,...,k} with x_0 ∈ A. Let u ∈ PZ(δ) and r* ∈ argmax_x u_x. Let R̄ be the (random) set of permutations r satisfying r(x) = R(x) for all x ∉ A. Then,
$$ Q_u\{X^* = x_0 \mid Y_{tA}, (R(x))_{x\notin A}\} = \frac{\displaystyle\sum_{r \in \bar{R}:\, r(x_0)=r^*} \exp\Big(\textstyle\sum_{x\in A} Y_{tx}\, u_{r(x)}/\sigma^2\Big)}{\displaystyle\sum_{r \in \bar{R}} \exp\Big(\textstyle\sum_{x\in A} Y_{tx}\, u_{r(x)}/\sigma^2\Big)}. $$
If R̄ contains no permutation r with r(x_0) = r*, then the numerator in this expression is 0.

Lemma 1 has as a consequence Lemma 2 below, which gives the posterior probability under Q that alternative x has the best sampling mean, given that the best is in a specified set A. This expression is exactly q_tx(A) (with x_0 in place of x), defined earlier in (2).

As discussed in Section 3.1, this interpretation of q_tx(A) motivates the BIZ procedure and, as we will see later, plays an important role in its analysis.

Lemma 2. Suppose σ²_x = σ² > 0 for all x. Let A ⊆ {1,...,k} and x_0 ∈ A. Then,
$$ Q\{X^* = x_0 \mid Y_{tA},\, X^* \in A\} = \frac{\exp(\delta Y_{tx_0}/\sigma^2)}{\sum_{x\in A}\exp(\delta Y_{tx}/\sigma^2)}. $$
The expression is unaffected if we also condition on (R(x))_{x∉A}.

Later, we also use the following monotonicity result. Its proof involves algebraic manipulations of expressions from Lemmas 1 and 2.

Lemma 3. Suppose σ²_x = σ² > 0 for all x. Fix u ∈ PZ(δ), a permutation r_0 of the integers {1,...,k}, and y ∈ R^k. Let A be a non-empty subset of {1,...,k}. Let B denote the event that X* ∈ A, Y_tx = y_x for each x ∈ A, and R(x) = r_0(x) for each x ∉ A. Then
$$ \max_{x \in A} Q_u\{X^* = x \mid B\} \ge \max_{x \in A} Q\{X^* = x \mid B\}, \tag{5} $$
$$ \min_{x \in A} Q_u\{X^* = x \mid B\} \le \min_{x \in A} Q\{X^* = x \mid B\}. \tag{6} $$

Our results also require the following pair of technical lemmas. The first states that, with probability 1, the BIZ procedure takes finitely many samples. Its proof employs a standard geometric decay argument. The second states that the elimination times are stopping times of the filtration generated by the observation process Y, and uses elementary manipulations of events.

Lemma 4. Suppose σ²_x = σ² > 0 for all x. Then τ_n < ∞ a.s. under P_{µ,σ²} for n = 0, 1,...,k−1 and µ ∈ R^k.

Lemma 5. For each n = 0, 1,...,k−1, τ_n defined by (4) is a stopping time of F.

In Section 4.2, we stated that, at the first elimination time τ_{n+1} caused by an alternative's q_tx(A_n) exceeding P_n, all other alternatives are eliminated simultaneously, and this alternative is selected as the best. We describe this behavior more formally in the following lemma, whose statement uses the random variable
$$ M = \inf\left\{ n = 1,\ldots,k-1 : \max_{x \in A_{n-1}} q_{\tau_n, x}(A_{n-1}) \ge P_{n-1} \right\}, $$
so that τ_M is the first time at which we eliminate an alternative because max_{x∈A_{n−1}} q_{τ_n,x}(A_{n−1}) exceeds P_{n−1}.

Lemma 6. Suppose σ²_x = σ² > 0 for all x. Then, for any µ ∈ R^k, the following statements hold almost surely under P_{µ,σ²}:
(a) M ≤ k − 1.
(b) τ_n = τ_M for all n ≥ M, and x̂ ∈ argmax_{x∈A_{M−1}} Y_{τ_M,x}.
(c) If T = R_+, then τ_{M−1} < τ_M.

Lemma 6 allows us to formally state the previously claimed simplification of BIZ when c = 0, from the discussion of the P_B procedure in Section 4.2.

Lemma 7. When c = 0, we have
$$ \tau_1 = \tau_2 = \cdots = \tau_{k-1} = \inf\left\{ t \in T : \max_x \frac{\exp(\delta Y_{tx}/\sigma^2)}{\sum_{x'=1}^{k} \exp(\delta Y_{tx'}/\sigma^2)} \ge P^* \right\} $$
and x̂ ∈ argmax_{x=1,...,k} Y_{τ_1,x}.

We may now state two lemmas, which together constitute the proof of the main result in Section 4.4. These lemmas use CS = {x̂ ∈ argmax_x θ_x, τ_{k−1} < ∞} to denote the event of correct selection. Lemma 8 shows that the non-Bayesian probability of correct selection PCS(u, σ²) is identical to the probability of correct selection under the Bayesian prior Q_u. The proof follows a symmetry, or equalizing, argument.

Lemma 8. Suppose σ²_x = σ² > 0 for all x. Let u ∈ R^k. Then Q_u{CS} = PCS(u, σ²). Furthermore, PCS(u, σ²) is invariant to translations and permutations of u.

Lemma 9 shows that the conditional Bayesian PCS is bounded below by the random variable P_{n−1}, with equality in the case of continuous-time sampling and prior Q.

Lemma 9. Suppose σ²_x = σ² > 0 for all x. Then, for each n = 1,...,k−1 and each u ∈ PZ(δ),
$$ Q_u\{\mathrm{CS} \mid \mathcal{F}_{\tau_{n-1}},\, (R(x))_{x \notin A_{n-1}},\, X^* \in A_{n-1},\, M \ge n\} \ge P_{n-1}. $$
If T = R_+ and u = [δ, 0,...,0], then this inequality holds with equality.
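The Bayesian interpretation in Lemma 2 is the engine behind these results, and it can be checked numerically: under the uniform slippage prior, the posterior computed directly from the normal likelihood of the observed sums matches the closed form (2). The following sketch is an added sanity check, not part of the paper's analysis; all names and numbers are illustrative.

```python
# Sanity check of Lemma 2: with one alternative slipped up by delta (chosen
# uniformly at random) and common variance sigma^2, the posterior probability
# that x is best, given the sums Y_t and that the best lies in A, equals
# exp(delta*Y_tx/sigma^2) normalized over A. Likelihood factors for alternatives
# outside A are identical across the hypotheses in A and cancel.
import numpy as np

k, delta, sigma2, t = 5, 1.0, 100.0, 7
rng = np.random.default_rng(1)
y = rng.normal(0.0, np.sqrt(t * sigma2), size=k)      # arbitrary observed sums Y_t
A = [0, 1, 3]                                         # condition on the best being in A

# direct Bayes: the sums are N(theta_x * t, sigma^2 * t) under each hypothesis
loglik = np.array([-0.5 * np.sum((y - delta * t * (np.arange(k) == x)) ** 2) / (t * sigma2)
                   for x in A])
direct = np.exp(loglik - loglik.max()); direct /= direct.sum()

# closed form from eq. (2) / Lemma 2
s = delta * y[A] / sigma2
closed = np.exp(s - s.max()); closed /= closed.sum()

assert np.allclose(direct, closed)
print(direct, closed)
```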

4.4. Theoretical Results for the Common Variance Setting

Using the preliminary results from the previous section, we now state and prove our main result for common known variances, Theorem 1. The first statement shows that the BIZ procedure satisfies the IZ guarantee in both discrete and continuous time. The second statement in the theorem shows that, in continuous time, the bound on the worst-case preference-zone PCS of the BIZ procedure is tight. This second statement can be interpreted as showing that any slack in the PCS bound for BIZ in discrete time is due to the gap between the time at which the continuous-time procedure would eliminate an alternative and the next integer-valued time.

Theorem 1. Let c ∈ [0, 1 − (P*)^{1/(k−1)}], T ∈ {Z_+, R_+}, δ > 0, P* ∈ (1/k, 1), and σ²_x = σ² > 0 for all x. Then, under the BIZ procedure defined by (4), PCS(µ, σ²) ≥ P* for all µ ∈ PZ(δ). Furthermore, if T = R_+,
$$ \inf_{\mu \in \mathrm{PZ}(\delta)} \mathrm{PCS}(\mu, \sigma^2) = P^*. $$

Proof: Let µ ∈ PZ(δ). Let µ' be a permutation of µ such that µ'_1 ≥ µ'_x for all x. Let u ∈ R^k be given by u_x = µ'_x − µ'_1 + δ. Thus, u_1 = δ and u_x ≤ 0 for all x ≠ 1. Because u is a permutation and translation of µ, Lemma 8 implies
$$ \mathrm{PCS}(\mu, \sigma^2) = Q_u\{\mathrm{CS}\}. \tag{7} $$
Furthermore, u ∈ PZ(δ). Since X* ∈ A_0 and M ≥ 1 with probability 1, and the complement of A_0 is empty, taking n = 1 in Lemma 9 shows
$$ Q_u\{\mathrm{CS} \mid \mathcal{F}_{\tau_0}\} = Q_u\{\mathrm{CS} \mid \mathcal{F}_{\tau_0},\, X^* \in A_0,\, M \ge 1\} \ge P_0 = P^*. \tag{8} $$
Then, the tower property of conditional expectation provides
$$ Q_u\{\mathrm{CS}\} = E_{Q_u}\big[ Q_u\{\mathrm{CS} \mid \mathcal{F}_{\tau_0}\} \big] \ge E_{Q_u}[P_0] = P^*, \tag{9} $$
where E_{Q_u} is the expectation under Q_u. Combining (7) and (9) provides PCS(µ, σ²) ≥ P*.

We have shown that PCS(µ, σ²) ≥ P* for all µ ∈ PZ(δ). This shows that inf_{µ∈PZ(δ)} PCS(µ, σ²) ≥ P*. To see that the infimum is equal to P* when T = R_+, consider µ = u = [δ, 0,...,0]. Lemma 9 shows that, in this case, the inequalities in (8) and (9) are actually equalities. Combining equality in (9) with the equality (7) shows that PCS([δ, 0,...,0], σ²) = P*, implying inf_{µ∈PZ(δ)} PCS(µ, σ²) ≤ P*. This shows that the infimum must in fact equal P*.

The last paragraph of the proof shows that the infimum of the PCS over the preference zone is achieved by the configuration [δ, 0,...,0]. The invariance of the PCS to translations and permutations of the configuration, shown by Lemma 8, implies that the infimum is also attained by any slippage configuration with parameter δ. Thus, these slippage configurations are least favorable for the BIZ procedure with common known variance.

4.5. Theoretical Results for the Heterogeneous Variance Setting

We now discuss the heterogeneous known variance setting. We present a continuous-time procedure that is analogous to the discrete-time Alg. 2. This continuous-time procedure satisfies the IZ guarantee and has tight worst-case preference-zone PCS bounds. We then use this fact to show that Alg. 2 satisfies the IZ guarantee when the variances are known and have a special integer-multiple structure.

Fix a constant λ > 0 and, for each x, let n_x(t) = λσ²_x t. This quantity is the continuous-time analogue of the discrete-time quantity n_tx in Alg. 2, and in the special cases discussed below, n_x(t) = n_tx for all integer times t. Now define a stochastic process (Y'_tx : t ≥ 0) as Y'_tx = Y_{n_x(t),x}/(λσ²_x). A straightforward computation shows that (Y'_tx : t ∈ R_+) is a Brownian motion with drift µ_x and variance 1/λ per unit time, so any algorithm that performs R&S in the continuous-time common-variance case can be run on the modified observation processes Y', and the result is a continuous-time R&S algorithm for the original observation process Y. This was also noted for discrete time in Section 3.2.

With this motivation, the continuous-time BIZ procedure for known heterogeneous variances is obtained by applying the continuous-time BIZ procedure for common sampling variances from (4) to the modified observation process Y'.

More explicitly, this procedure is defined by first setting
$$ \tau_0 = 0, \qquad P_0 = P^*, \qquad A_0 = \{1,\ldots,k\}, \tag{10a} $$
then defining recursively, for n = 0, 1, ..., k−2,
$$ \tau_{n+1} = \inf\left\{ t \in T \cap [\tau_n, \infty) : \min_{x\in A_n} q'_{tx}(A_n) \le c \ \text{ or } \ \max_{x\in A_n} q'_{tx}(A_n) \ge P_n \right\}, $$
$$ Z_{n+1} \in \arg\min_{x\in A_n} q'_{\tau_{n+1},x}(A_n), \qquad A_{n+1} = A_n \setminus \{Z_{n+1}\}, \qquad P_{n+1} = \frac{P_n}{1 - \min_{x\in A_n} q'_{\tau_{n+1},x}(A_n)}, \tag{10b} $$
where q'_{tx}(A) is obtained by substituting Y' in place of Y and 1/λ in place of σ²,
$$ q'_{tx}(A) = \frac{\exp(\lambda \delta Y'_{tx})}{\sum_{x'\in A} \exp(\lambda \delta Y'_{tx'})} = \frac{\exp\left(\delta Y_{n_x(t),x}/\sigma^2_x\right)}{\sum_{x'\in A} \exp\left(\delta Y_{n_{x'}(t),x'}/\sigma^2_{x'}\right)}, \tag{10c} $$
and finally letting the selected alternative x̂ be the single entry in A_{k−1},
$$ \hat{x} \in A_{k-1}. \tag{10d} $$

When the sampling variances are identically equal to σ² across alternatives and λ = 1/σ², we have n_x(t) = t, Y'_tx = Y_tx, q'_{tx}(A) = q_tx(A), and the procedure defined by (10) is identical to (4). The following theorem shows that this procedure satisfies the IZ guarantee, and that its worst-case preference-zone PCS bound is tight in continuous time. This is true even when the sampling variances differ from each other.

Theorem 2. Let c ∈ [0, 1 − (P*)^{1/(k−1)}], T ∈ {Z_+, R_+}, δ > 0, P* ∈ (1/k, 1), and σ²_1 > 0, ..., σ²_k > 0. Then, under the BIZ procedure defined by (10), PCS(µ, σ²) ≥ P* for all µ ∈ PZ(δ). Furthermore, if T = R_+,
$$ \inf_{\mu\in\mathrm{PZ}(\delta)} \mathrm{PCS}(\mu, \sigma^2) = P^*. $$

Proof: Let x̂^{σ²} be the selection decision x̂ defined by (10), with the specified values for P*, δ, c, T, and σ²_1,...,σ²_k. We use the superscript σ² to emphasize that x̂^{σ²} assumes sampling variances σ²_x.

Let x̂^{1/λ} be the selection decision defined by (4) using σ² = 1/λ, and the same specified values of P*, δ, c, and T. The distribution of Y' under P_{µ,σ²} is equal to the distribution of Y under P_{µ,1/λ}. Consequently, the distribution of x̂^{σ²} under P_{µ,σ²} is equal to the distribution of x̂^{1/λ} under P_{µ,1/λ}. This implies that P_{µ,σ²}{x̂^{σ²} ∈ argmax_x µ_x} = P_{µ,1/λ}{x̂^{1/λ} ∈ argmax_x µ_x}. The result then follows from applying Theorem 1 to P_{µ,1/λ}{x̂^{1/λ} ∈ argmax_x µ_x}.

While (10) is directly implementable in continuous time, it is more difficult to apply in discrete time. While one can set T to Z_+ in (10), the resulting procedure is not always implementable in discrete time. The reason is that (10) requires observations of Y_{n_x(t),x} for t ∈ T. If n_x(t) = λσ²_x t can fail to be an integer for some t ∈ T, then these observations may be unavailable in discrete time.

However, if the variances have a special integer-multiple structure, then (10) is implementable in discrete time, and is equivalent to Alg. 2. In particular, suppose the variances σ²_x are known and satisfy σ²_x = a_x σ̄² for some common σ̄² and integers a_1, a_2,...,a_k. If we set λ = 1/σ̄² and T = Z_+, then n_x(t) = a_x t is always an integer for t ∈ Z_+, and all observations of Y_{n_x(t),x} required by (10) are available in discrete time. Furthermore, in this case, (10) is identical to Alg. 2 with parameters B_x = a_x, n_0 = 0, and σ̂²_tx = σ²_x. A direct consequence of this and Theorem 2 is that Alg. 2 satisfies the IZ guarantee in this special case. We have just shown the following corollary to Theorem 2.

Corollary 1. Let σ²_x = a_x σ̄² for all x, where a_x ∈ Z_+ with a_x ≥ 1 and σ̄² > 0. Let c ∈ [0, 1 − (P*)^{1/(k−1)}], δ > 0, and P* ∈ (1/k, 1). Then, under the BIZ procedure for known heterogeneous sampling variances given in Alg. 2 with n_0 = 0, B_x = a_x, and σ̂²_tx = σ²_x for all x, PCS(µ, σ²) ≥ P* for all µ ∈ PZ(δ).

Outside of the common-variance setting, the integer-multiple structure assumed by Corollary 1 is unlikely to appear in practice. Also, in practice one would set B_x to 1, rather than to the values assumed by Corollary 1, to improve the responsiveness of the algorithm and reduce expected sample sizes. Thus, while Corollary 1 provides insight into the behavior of Alg. 2, it is not designed to provide an IZ guarantee that directly applies to how this algorithm is used in practice. Instead, we present Alg. 2 as a heuristic in practical settings, and we use numerical experiments to investigate its statistical properties in the next section.
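The scaling argument underlying Section 4.5 can also be checked numerically in the integer-multiple setting of Corollary 1. The sketch below is an added check with illustrative numbers, not an experiment from the paper: it verifies that the rescaled increments of Y' have mean µ_x and variance 1/λ for every alternative.

```python
# Numerical check of the time change in Section 4.5 for the integer-multiple case:
# with sigma2_x = a_x * sigma_bar2 and lambda = 1/sigma_bar2, the process
# Y'_tx = Y_{n_x(t),x} / (lambda * sigma2_x) with n_x(t) = a_x * t has increments
# of mean mu_x and variance 1/lambda for every x. Numbers below are illustrative.
import numpy as np

rng = np.random.default_rng(2)
sigma_bar2 = 25.0
a = [3, 4, 6]                                    # integer multipliers a_x
mu = [0.5, 0.0, -1.0]
lam = 1.0 / sigma_bar2                           # so that lambda * sigma2_x = a_x is an integer
reps = 200_000
for x, (ax, mux) in enumerate(zip(a, mu)):
    sigma2_x = ax * sigma_bar2
    # one unit of t consumes a_x raw samples; divide their sum by lambda * sigma2_x
    inc = rng.normal(mux, np.sqrt(sigma2_x), size=(reps, ax)).sum(axis=1) / (lam * sigma2_x)
    print(f"x={x}: mean ~ {inc.mean():.3f} (target {mux}), variance ~ {inc.var():.1f} (target {1/lam:.1f})")
```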

5. Numerical Results

We demonstrate the performance of the BIZ procedure in discrete time with maximum elimination (c = 1 − (P*)^{1/(k−1)}) on standard test problems, and compare it to another leading IZ procedure, the KN procedure of Kim and Nelson (2001), first on problems with common known sampling variance, then on problems with common unknown sampling variance, and finally on problems with heterogeneous unknown sampling variance. The KN procedure improves over previously proposed IZ procedures in a number of configurations (Kim and Nelson 2001), and the KN family of procedures has been regarded by Kim and Nelson (2006) and Branke et al. (2007) as state-of-the-art for IZ R&S. Improvements of the original KN procedure, particularly the KN++ procedure of Goldsman et al. (2002) and the KVP and UVP procedures of Hong (2006), offer better performance than KN in some settings with unknown and/or heterogeneous sampling variance, but KN remains one of the best existing IZ procedures.

In problems with known variance, we modify KN from its original version in Kim and Nelson (2001) to take advantage of knowing the variance. Where the original procedure uses estimates of the variance, the modified procedure uses the actual value. The modified procedure also uses the tighter constant (h')² = 2cη in place of the parameter h² from Kim and Nelson (2001), where c = 1 and η satisfies g(η) = (1/2) exp(-2η) = (1 - P*)/(k - 1). We set n_0 = 2. This modified procedure is the same as one of the procedures studied in Wang and Kim (2011) and, when used in common-variance configurations, the same as Paulson's procedure (Paulson 1964). In problems where the sampling variance is unknown, we use KN as originally described in Kim and Nelson (2001), with c = 1 and n_0 = 100.

We also compare to the P_B procedure of Bechhofer et al. (1968), which is BIZ with no elimination, as described in Section 3.3. In our figures, we denote the P_B procedure by BKS, the initials of the authors of Bechhofer et al. (1968).

We examine both the PCS and the expected total number of samples taken, denoted E[N].

We emphasize that N counts the total number of samples taken, and so a procedure without elimination has E[N] = kE[τ_{k−1}], while a procedure with elimination has E[N] ≤ kE[τ_{k−1}]. Rather than plotting E[N] directly, we plot the expected number of samples taken divided by the number of alternatives, E[N]/k. This normalizes E[N] and clarifies performance trends.

Figure 2 shows the performance of KN, BKS, and BIZ (Algorithm 1) under three different configurations with common known variance, described in more detail below. Each row shows performance under a different configuration versus the number of alternatives k. Left-hand panels show E[N]/k and right-hand panels show PCS, obtained using 10,000 independent replications.

SC: Row 1 of Figure 2 shows performance under a slippage configuration (SC), in which µ_1 = δ and µ_x = 0 for x ≠ 1. For many procedures, including BIZ and KN, a slippage configuration with parameter δ is the configuration in the preference zone in which correctly selecting the best is most difficult, and it is often used as a test case to better understand the behavior of R&S procedures. Here, δ = 1, σ²_x = σ² = 100, and P* = 0.9. Experiments were performed at k = 2, 3,...,8, and then at integral powers of 2 up to k = 2^14 = 16,384.

PCS under this SC is shown in the right-hand panel of Row 1. We know from their IZ guarantees that PCS for all three procedures is bounded below by the target probability P* = 0.9. Apparent deviations below 0.9 are due to estimation error: standard errors for the PCS reported for BIZ and BKS are approximately 0.003. Under KN, which has a loose worst-case preference-zone PCS bound and which over-delivers on PCS for large problems, PCS quickly rises away from P* as the number of alternatives grows. In contrast, under BIZ and BKS, PCS remains close to P*. The proximity of PCS to the target shows that, although the lower bound on worst-case preference-zone PCS given in Theorem 1 is no longer tight as we move from continuous to discrete time, the bound remains nearly tight in discrete time, at least in the settings tested.

E[N]/k under this SC is shown in the left-hand panel of Row 1. Points plotted have standard error less than 2 for KN and BIZ, and less than 7 for BKS. As the number of alternatives grows large, KN begins taking a very large number of samples, while the number of samples taken by BIZ grows at a much slower rate. For the largest problem considered, k = 16,384, KN takes approximately 1.4 × 10^6 ...


Problem 3. Give an example of a sequence of continuous functions on a compact domain converging pointwise but not uniformly to a continuous function Problem 3. Give an example of a sequence of continuous functions on a compact domain converging pointwise but not uniformly to a continuous function Solution. If we does not need the pointwise limit of

More information

Seong-Hee Kim A. B. Dieker. Georgia Institute of Technology 765 Ferst Dr NW Atlanta, GA 30332, USA

Seong-Hee Kim A. B. Dieker. Georgia Institute of Technology 765 Ferst Dr NW Atlanta, GA 30332, USA Proceedings of the 011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. SELECTING THE BEST BY COMPARING SIMULATED SYSTEMS IN A GROUP OF THREE Seong-Hee

More information

Fiedler s Theorems on Nodal Domains

Fiedler s Theorems on Nodal Domains Spectral Graph Theory Lecture 7 Fiedler s Theorems on Nodal Domains Daniel A Spielman September 9, 202 7 About these notes These notes are not necessarily an accurate representation of what happened in

More information

14 Random Variables and Simulation

14 Random Variables and Simulation 14 Random Variables and Simulation In this lecture note we consider the relationship between random variables and simulation models. Random variables play two important roles in simulation models. We assume

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Upper Bounds on the Bayes-Optimal Procedure for Ranking & Selection with Independent Normal Priors. Jing Xie Peter I. Frazier

Upper Bounds on the Bayes-Optimal Procedure for Ranking & Selection with Independent Normal Priors. Jing Xie Peter I. Frazier Proceedings of the 2013 Winter Simulation Conference R. Pasupathy, S.-H. Kim, A. Tol, R. Hill, and M. E. Kuhl, eds. Upper Bounds on the Bayes-Optimal Procedure for Raning & Selection with Independent Normal

More information

Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication)

Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication) Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication) Nikolaus Robalino and Arthur Robson Appendix B: Proof of Theorem 2 This appendix contains the proof of Theorem

More information

Properties of an infinite dimensional EDS system : the Muller s ratchet

Properties of an infinite dimensional EDS system : the Muller s ratchet Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population

More information

On the Asymptotic Validity of Fully Sequential Selection Procedures for Steady-State Simulation

On the Asymptotic Validity of Fully Sequential Selection Procedures for Steady-State Simulation On the Asymptotic Validity of Fully Sequential Selection Procedures for Steady-State Simulation Seong-Hee Kim School of Industrial & Systems Engineering Georgia Institute of Technology Barry L. Nelson

More information

The concentration of the chromatic number of random graphs

The concentration of the chromatic number of random graphs The concentration of the chromatic number of random graphs Noga Alon Michael Krivelevich Abstract We prove that for every constant δ > 0 the chromatic number of the random graph G(n, p) with p = n 1/2

More information

Tractable Sampling Strategies for Ordinal Optimization

Tractable Sampling Strategies for Ordinal Optimization Submitted to Operations Research manuscript Please, provide the manuscript number!) Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the

More information

Chris Bishop s PRML Ch. 8: Graphical Models

Chris Bishop s PRML Ch. 8: Graphical Models Chris Bishop s PRML Ch. 8: Graphical Models January 24, 2008 Introduction Visualize the structure of a probabilistic model Design and motivate new models Insights into the model s properties, in particular

More information

CONTROLLED SEQUENTIAL BIFURCATION: A NEW FACTOR-SCREENING METHOD FOR DISCRETE-EVENT SIMULATION

CONTROLLED SEQUENTIAL BIFURCATION: A NEW FACTOR-SCREENING METHOD FOR DISCRETE-EVENT SIMULATION ABSTRACT CONTROLLED SEQUENTIAL BIFURCATION: A NEW FACTOR-SCREENING METHOD FOR DISCRETE-EVENT SIMULATION Hong Wan Bruce Ankenman Barry L. Nelson Department of Industrial Engineering and Management Sciences

More information

Mean-Variance Utility

Mean-Variance Utility Mean-Variance Utility Yutaka Nakamura University of Tsukuba Graduate School of Systems and Information Engineering Division of Social Systems and Management -- Tennnoudai, Tsukuba, Ibaraki 305-8573, Japan

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Lecture 3. 1 Terminology. 2 Non-Deterministic Space Complexity. Notes on Complexity Theory: Fall 2005 Last updated: September, 2005.

Lecture 3. 1 Terminology. 2 Non-Deterministic Space Complexity. Notes on Complexity Theory: Fall 2005 Last updated: September, 2005. Notes on Complexity Theory: Fall 2005 Last updated: September, 2005 Jonathan Katz Lecture 3 1 Terminology For any complexity class C, we define the class coc as follows: coc def = { L L C }. One class

More information

Algorithms for pattern involvement in permutations

Algorithms for pattern involvement in permutations Algorithms for pattern involvement in permutations M. H. Albert Department of Computer Science R. E. L. Aldred Department of Mathematics and Statistics M. D. Atkinson Department of Computer Science D.

More information

Probabilistic Bisection Search for Stochastic Root Finding

Probabilistic Bisection Search for Stochastic Root Finding Probabilistic Bisection Search for Stochastic Root Finding Rolf Waeber Peter I. Frazier Shane G. Henderson Operations Research & Information Engineering Cornell University, Ithaca, NY Research supported

More information

Common Knowledge and Sequential Team Problems

Common Knowledge and Sequential Team Problems Common Knowledge and Sequential Team Problems Authors: Ashutosh Nayyar and Demosthenis Teneketzis Computer Engineering Technical Report Number CENG-2018-02 Ming Hsieh Department of Electrical Engineering

More information

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming

Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Yuval Filmus April 4, 2017 Abstract The seminal complete intersection theorem of Ahlswede and Khachatrian gives the maximum cardinality of

More information

Economics of Networks Social Learning

Economics of Networks Social Learning Economics of Networks Social Learning Evan Sadler Massachusetts Institute of Technology Evan Sadler Social Learning 1/38 Agenda Recap of rational herding Observational learning in a network DeGroot learning

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

CS261: A Second Course in Algorithms Lecture #12: Applications of Multiplicative Weights to Games and Linear Programs

CS261: A Second Course in Algorithms Lecture #12: Applications of Multiplicative Weights to Games and Linear Programs CS26: A Second Course in Algorithms Lecture #2: Applications of Multiplicative Weights to Games and Linear Programs Tim Roughgarden February, 206 Extensions of the Multiplicative Weights Guarantee Last

More information

E-Companion to Fully Sequential Procedures for Large-Scale Ranking-and-Selection Problems in Parallel Computing Environments

E-Companion to Fully Sequential Procedures for Large-Scale Ranking-and-Selection Problems in Parallel Computing Environments E-Companion to Fully Sequential Procedures for Large-Scale Ranking-and-Selection Problems in Parallel Computing Environments Jun Luo Antai College of Economics and Management Shanghai Jiao Tong University

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Lyapunov Stability Theory

Lyapunov Stability Theory Lyapunov Stability Theory Peter Al Hokayem and Eduardo Gallestey March 16, 2015 1 Introduction In this lecture we consider the stability of equilibrium points of autonomous nonlinear systems, both in continuous

More information

Probabilistic Graphical Models Homework 2: Due February 24, 2014 at 4 pm

Probabilistic Graphical Models Homework 2: Due February 24, 2014 at 4 pm Probabilistic Graphical Models 10-708 Homework 2: Due February 24, 2014 at 4 pm Directions. This homework assignment covers the material presented in Lectures 4-8. You must complete all four problems to

More information

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains

Markov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time

More information

Inference in Bayesian Networks

Inference in Bayesian Networks Andrea Passerini passerini@disi.unitn.it Machine Learning Inference in graphical models Description Assume we have evidence e on the state of a subset of variables E in the model (i.e. Bayesian Network)

More information

PERFORMANCE OF VARIANCE UPDATING RANKING AND SELECTION PROCEDURES

PERFORMANCE OF VARIANCE UPDATING RANKING AND SELECTION PROCEDURES Proceedings of the 2005 Winter Simulation Conference M. E. Kuhl, N. M. Steiger, F. B. Armstrong, and J. A. Joines, eds. PERFORMANCE OF VARIANCE UPDATING RANKING AND SELECTION PROCEDURES Gwendolyn J. Malone

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

An Optimization-Based Heuristic for the Split Delivery Vehicle Routing Problem

An Optimization-Based Heuristic for the Split Delivery Vehicle Routing Problem An Optimization-Based Heuristic for the Split Delivery Vehicle Routing Problem Claudia Archetti (1) Martin W.P. Savelsbergh (2) M. Grazia Speranza (1) (1) University of Brescia, Department of Quantitative

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

1 The linear algebra of linear programs (March 15 and 22, 2015)

1 The linear algebra of linear programs (March 15 and 22, 2015) 1 The linear algebra of linear programs (March 15 and 22, 2015) Many optimization problems can be formulated as linear programs. The main features of a linear program are the following: Variables are real

More information

MTH 2032 Semester II

MTH 2032 Semester II MTH 232 Semester II 2-2 Linear Algebra Reference Notes Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2 ii Contents Table of Contents

More information

Module 1. Probability

Module 1. Probability Module 1 Probability 1. Introduction In our daily life we come across many processes whose nature cannot be predicted in advance. Such processes are referred to as random processes. The only way to derive

More information

How Much Evidence Should One Collect?

How Much Evidence Should One Collect? How Much Evidence Should One Collect? Remco Heesen October 10, 2013 Abstract This paper focuses on the question how much evidence one should collect before deciding on the truth-value of a proposition.

More information

Final. Introduction to Artificial Intelligence. CS 188 Spring You have approximately 2 hours and 50 minutes.

Final. Introduction to Artificial Intelligence. CS 188 Spring You have approximately 2 hours and 50 minutes. CS 188 Spring 2014 Introduction to Artificial Intelligence Final You have approximately 2 hours and 50 minutes. The exam is closed book, closed notes except your two-page crib sheet. Mark your answers

More information

Minimax risk bounds for linear threshold functions

Minimax risk bounds for linear threshold functions CS281B/Stat241B (Spring 2008) Statistical Learning Theory Lecture: 3 Minimax risk bounds for linear threshold functions Lecturer: Peter Bartlett Scribe: Hao Zhang 1 Review We assume that there is a probability

More information

2.1 Convergence of Sequences

2.1 Convergence of Sequences Chapter 2 Sequences 2. Convergence of Sequences A sequence is a function f : N R. We write f) = a, f2) = a 2, and in general fn) = a n. We usually identify the sequence with the range of f, which is written

More information

Exercises. Template for Proofs by Mathematical Induction

Exercises. Template for Proofs by Mathematical Induction 5. Mathematical Induction 329 Template for Proofs by Mathematical Induction. Express the statement that is to be proved in the form for all n b, P (n) forafixed integer b. 2. Write out the words Basis

More information

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i :=

P i [B k ] = lim. n=1 p(n) ii <. n=1. V i := 2.7. Recurrence and transience Consider a Markov chain {X n : n N 0 } on state space E with transition matrix P. Definition 2.7.1. A state i E is called recurrent if P i [X n = i for infinitely many n]

More information

SEQUENTIAL ALLOCATIONS THAT REDUCE RISK FOR MULTIPLE COMPARISONS. Stephen E. Chick Koichiro Inoue

SEQUENTIAL ALLOCATIONS THAT REDUCE RISK FOR MULTIPLE COMPARISONS. Stephen E. Chick Koichiro Inoue Proceedings of the 998 Winter Simulation Conference D.J. Medeiros, E.F. Watson, J.S. Carson and M.S. Manivannan, eds. SEQUENTIAL ALLOCATIONS THAT REDUCE RISK FOR MULTIPLE COMPARISONS Stephen E. Chick Koichiro

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Proofs for Large Sample Properties of Generalized Method of Moments Estimators

Proofs for Large Sample Properties of Generalized Method of Moments Estimators Proofs for Large Sample Properties of Generalized Method of Moments Estimators Lars Peter Hansen University of Chicago March 8, 2012 1 Introduction Econometrica did not publish many of the proofs in my

More information

4 Nonlinear Equations

4 Nonlinear Equations 4 Nonlinear Equations lecture 27: Introduction to Nonlinear Equations lecture 28: Bracketing Algorithms The solution of nonlinear equations has been a motivating challenge throughout the history of numerical

More information

The Boundary Problem: Markov Chain Solution

The Boundary Problem: Markov Chain Solution MATH 529 The Boundary Problem: Markov Chain Solution Consider a random walk X that starts at positive height j, and on each independent step, moves upward a units with probability p, moves downward b units

More information

MATH 521, WEEK 2: Rational and Real Numbers, Ordered Sets, Countable Sets

MATH 521, WEEK 2: Rational and Real Numbers, Ordered Sets, Countable Sets MATH 521, WEEK 2: Rational and Real Numbers, Ordered Sets, Countable Sets 1 Rational and Real Numbers Recall that a number is rational if it can be written in the form a/b where a, b Z and b 0, and a number

More information

SUPPLEMENTARY MATERIAL FOR FAST COMMUNITY DETECTION BY SCORE. By Jiashun Jin Carnegie Mellon University

SUPPLEMENTARY MATERIAL FOR FAST COMMUNITY DETECTION BY SCORE. By Jiashun Jin Carnegie Mellon University SUPPLEMENTARY MATERIAL FOR FAST COMMUNITY DETECTION BY SCORE By Jiashun Jin Carnegie Mellon University In this supplement we propose some variants of SCORE and present the technical proofs for the main

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

PUTNAM TRAINING NUMBER THEORY. Exercises 1. Show that the sum of two consecutive primes is never twice a prime.

PUTNAM TRAINING NUMBER THEORY. Exercises 1. Show that the sum of two consecutive primes is never twice a prime. PUTNAM TRAINING NUMBER THEORY (Last updated: December 11, 2017) Remark. This is a list of exercises on Number Theory. Miguel A. Lerma Exercises 1. Show that the sum of two consecutive primes is never twice

More information

Change-point models and performance measures for sequential change detection

Change-point models and performance measures for sequential change detection Change-point models and performance measures for sequential change detection Department of Electrical and Computer Engineering, University of Patras, 26500 Rion, Greece moustaki@upatras.gr George V. Moustakides

More information

c i r i i=1 r 1 = [1, 2] r 2 = [0, 1] r 3 = [3, 4].

c i r i i=1 r 1 = [1, 2] r 2 = [0, 1] r 3 = [3, 4]. Lecture Notes: Rank of a Matrix Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk 1 Linear Independence Definition 1. Let r 1, r 2,..., r m

More information

Some Fixed-Point Results for the Dynamic Assignment Problem

Some Fixed-Point Results for the Dynamic Assignment Problem Some Fixed-Point Results for the Dynamic Assignment Problem Michael Z. Spivey Department of Mathematics and Computer Science Samford University, Birmingham, AL 35229 Warren B. Powell Department of Operations

More information

Conditions for Robust Principal Component Analysis

Conditions for Robust Principal Component Analysis Rose-Hulman Undergraduate Mathematics Journal Volume 12 Issue 2 Article 9 Conditions for Robust Principal Component Analysis Michael Hornstein Stanford University, mdhornstein@gmail.com Follow this and

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

THE STRUCTURE OF RAINBOW-FREE COLORINGS FOR LINEAR EQUATIONS ON THREE VARIABLES IN Z p. Mario Huicochea CINNMA, Querétaro, México

THE STRUCTURE OF RAINBOW-FREE COLORINGS FOR LINEAR EQUATIONS ON THREE VARIABLES IN Z p. Mario Huicochea CINNMA, Querétaro, México #A8 INTEGERS 15A (2015) THE STRUCTURE OF RAINBOW-FREE COLORINGS FOR LINEAR EQUATIONS ON THREE VARIABLES IN Z p Mario Huicochea CINNMA, Querétaro, México dym@cimat.mx Amanda Montejano UNAM Facultad de Ciencias

More information

SELECTING THE NORMAL POPULATION WITH THE SMALLEST COEFFICIENT OF VARIATION

SELECTING THE NORMAL POPULATION WITH THE SMALLEST COEFFICIENT OF VARIATION SELECTING THE NORMAL POPULATION WITH THE SMALLEST COEFFICIENT OF VARIATION Ajit C. Tamhane Department of IE/MS and Department of Statistics Northwestern University, Evanston, IL 60208 Anthony J. Hayter

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Lecture 5: Counting independent sets up to the tree threshold

Lecture 5: Counting independent sets up to the tree threshold CS 7535: Markov Chain Monte Carlo Algorithms Fall 2014 Lecture 5: Counting independent sets up to the tree threshold Lecturer: Richard Brooks and Rakshit Trivedi Date: December 02 In this lecture, we will

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Chapter One. The Real Number System

Chapter One. The Real Number System Chapter One. The Real Number System We shall give a quick introduction to the real number system. It is imperative that we know how the set of real numbers behaves in the way that its completeness and

More information

Subspace Identification

Subspace Identification Chapter 10 Subspace Identification Given observations of m 1 input signals, and p 1 signals resulting from those when fed into a dynamical system under study, can we estimate the internal dynamics regulating

More information

CONSISTENCY OF SEQUENTIAL BAYESIAN SAMPLING POLICIES

CONSISTENCY OF SEQUENTIAL BAYESIAN SAMPLING POLICIES CONSISTENCY OF SEQUENTIAL BAYESIAN SAMPLING POLICIES PETER I. FRAZIER AND WARREN B. POWELL Abstract. We consider Bayesian information collection, in which a measurement policy collects information to support

More information

Chapter 1. Poisson processes. 1.1 Definitions

Chapter 1. Poisson processes. 1.1 Definitions Chapter 1 Poisson processes 1.1 Definitions Let (, F, P) be a probability space. A filtration is a collection of -fields F t contained in F such that F s F t whenever s

More information

Finite-Horizon Statistics for Markov chains

Finite-Horizon Statistics for Markov chains Analyzing FSDT Markov chains Friday, September 30, 2011 2:03 PM Simulating FSDT Markov chains, as we have said is very straightforward, either by using probability transition matrix or stochastic update

More information

The integers. Chapter 3

The integers. Chapter 3 Chapter 3 The integers Recall that an abelian group is a set A with a special element 0, and operation + such that x +0=x x + y = y + x x +y + z) =x + y)+z every element x has an inverse x + y =0 We also

More information

8. Prime Factorization and Primary Decompositions

8. Prime Factorization and Primary Decompositions 70 Andreas Gathmann 8. Prime Factorization and Primary Decompositions 13 When it comes to actual computations, Euclidean domains (or more generally principal ideal domains) are probably the nicest rings

More information

Finite Fields: An introduction through exercises Jonathan Buss Spring 2014

Finite Fields: An introduction through exercises Jonathan Buss Spring 2014 Finite Fields: An introduction through exercises Jonathan Buss Spring 2014 A typical course in abstract algebra starts with groups, and then moves on to rings, vector spaces, fields, etc. This sequence

More information

The Threshold Algorithm

The Threshold Algorithm Chapter The Threshold Algorithm. Greedy and Quasi Greedy Bases We start with the Threshold Algorithm: Definition... Let be a separable Banach space with a normalized M- basis (e i,e i ):i2n ; we mean by

More information

Production Policies for Multi-Product Systems with Deteriorating. Process Condition

Production Policies for Multi-Product Systems with Deteriorating. Process Condition Production Policies for Multi-Product Systems with Deteriorating Process Condition Burak Kazaz School of Business, University of Miami, Coral Gables, FL 3324. bkazaz@miami.edu Thomas W. Sloan College of

More information

Fully Sequential Selection Procedures with Control. Variates

Fully Sequential Selection Procedures with Control. Variates Fully Sequential Selection Procedures with Control Variates Shing Chih Tsai 1 Department of Industrial and Information Management National Cheng Kung University No. 1, University Road, Tainan City, Taiwan

More information