Proceedings of the 2016 Winter Simulation Conference T. M. K. Roeder, P. I. Frazier, R. Szechtman, E. Zhou, T. Huschka, and S. E. Chick, eds.


THE EMPIRICAL LIKELIHOOD APPROACH TO SIMULATION INPUT UNCERTAINTY

Henry Lam
Department of Industrial and Operations Engineering
University of Michigan
1205 Beal Ave.
Ann Arbor, MI 48109, USA

Huajie Qian
Department of Mathematics
University of Michigan
2074 East Hall, 530 Church Street
Ann Arbor, MI 48109, USA

ABSTRACT

We study the empirical likelihood method in constructing statistically accurate confidence bounds for stochastic simulation under nonparametric input uncertainty. The approach is based on positing a pair of distributionally robust optimizations, with a suitably averaged divergence constraint over the uncertain input distributions, and calibrated with a χ²-quantile to provide asymptotic coverage guarantees. We present the theory giving rise to the constraint and the calibration. We also analyze the performance of our stochastic optimization algorithm. We numerically compare our approach with existing standard methods such as the bootstrap.

1 INTRODUCTION

Stochastic simulation often relies on input distributions that are not fully known but only observed via a finite sample of input data. The quantification of the errors in performance analyses due to the misspecification of input models is known as input uncertainty. This paper focuses particularly on situations where there are no specific assumptions on the parametric form of the input models, or nonparametric input uncertainty. One major goal in this context is to construct a confidence interval (CI) for the target performance measure that accounts for input model errors on top of simulation errors. This paper proposes an optimization-based method to construct such a CI. The upper and lower bounds of the CI are obtained as the optimal values of a pair of optimization programs posited over the space of input probability distributions.
These programs can be interpreted as finding the upper and lower worst-case performance measures, subject to a constraint that the unknown input distributions lie within a neighborhood of the empirical distributions. We show that, by choosing the right size of this neighborhood and the appropriate nonparametric distance to measure its size, these worst-case optimizations lead to asymptotically exact confidence bounds for covering the true performance measure. We develop these statistical guarantees by using the empirical likelihood (EL) method, which can be viewed as a nonparametric analog of the maximum likelihood theory. Our approach resembles, but also should be contrasted with, the literature on distributionally robust optimization (DRO) (e.g., Delage and Ye 2010, Ben-Tal et al. 2013, Goh and Sim 2010). This literature aims to find worst-case stochastic performance measures (or optimizes decisions over the worst case) subject to so-called uncertainty sets that represent the information or uncertainty about an underlying probability distribution. One common constraint is a nonparametric neighborhood ball around a baseline model, typically corresponding to the modeler's best guess of the truth, measured by some nonparametric distance such as a φ-divergence (Pardo 2005). When this ball contains the true distribution, the optimal values of the worst-case optimizations will cover the true measure. Thus, it is argued that one should calibrate the ball size by estimating the divergence between the data and the baseline distribution, and consequently translate the confidence of this estimation into a confidence bound for the performance measure. In practice, this

implies calibration using a goodness-of-fit χ²-quantile, with the degree of freedom corresponding to the effective number of categories in a discrete distribution, or divergence estimation that involves estimating a functional of the density. These methods could run into the statistical challenges in divergence estimation, and also potentially a loss of coverage accuracy in translating the uncertainty set into a CI. One main contribution of this paper is thus a new way of interpreting these DROs via the EL method. Through this connection we locate the precise nonparametric distance one should use in constructing valid CIs under nonparametric input uncertainty. Our distance deviates from those in the literature, as it consists of a weighted average of a collection of divergences, each applied on an independent input model needed in the simulation. We also show that the size of the nonparametric neighborhood can be taken as a χ²-quantile, with degree of freedom exactly one, independent of the number of input models and the continuity of these random variates. This is in sharp contrast with the proposed calibration methods in the DRO literature. In general, the optimization programs we impose are simulation-based. As another contribution, we investigate the properties of mirror descent stochastic approximation (MDSA) (Nemirovski et al. 2009) used to locally solve these programs. This simulation-based optimization framework provides an alternate approach to quantifying nonparametric input uncertainty compared to the current major technique of bootstrap resampling. The latter is a sampling approach that repeatedly generates new empirical distributions to drive the simulation runs. On a high level, our approach trades the computational load in resampling with the optimization routine needed in the iterations of the SA algorithm. We investigate this tradeoff, and compare the performances of the obtained CIs and their vulnerability to the simulation noise. The rest of this paper is organized as follows.
Section 2 lays out our optimization programs and explains them using the EL method. Section 3 presents and analyzes our MDSA algorithm. Section 4 shows some numerical results and a comparison with the bootstrap.

2 EMPIRICAL-LIKELIHOOD-BASED CONFIDENCE INTERVAL

We consider a performance measure in the form

    Z(P_1,...,P_m) = E_{P_1,...,P_m}[h(X_1,...,X_m)],    (1)

where each X_i = (X_i(1),...,X_i(T_i)) is a sequence of T_i i.i.d. random variables under an independent input model (or distribution) P_i, and T_i is a deterministic run length. The function h, mapping from X_1^{T_1} × ... × X_m^{T_m} to R, is assumed computable given the inputs X_i's. Our premise is that each P_i is unknown, but i.i.d. data X_{i,1},...,X_{i,n_i} are available. The true value of (1) is therefore unknown even under abundant simulation runs. Our goal is to find a (1 − α)-level CI for the true performance measure. Our approach is the following. For model i, we consider the probability simplex of the weights w_i = (w_{i,1},...,w_{i,n_i}) over the support set {X_{i,1},...,X_{i,n_i}}. We consider the pair of optimization programs

    L_α / U_α := min / max  Z(w_1,...,w_m)
    subject to   -2 Σ_{i=1}^m Σ_{j=1}^{n_i} log(n_i w_{i,j}) ≤ χ²_{1,1−α}
                 Σ_{j=1}^{n_i} w_{i,j} = 1, for all i
                 w_{i,j} ≥ 0, for all i, j    (2)

where χ²_{1,1−α} is the 1 − α quantile of the chi-square distribution with degree of freedom one, and each w_i in Z(w_1,...,w_m) should be interpreted as the distribution putting weight w_{i,j} on X_{i,j}, for j = 1,...,n_i. The formulation (2) can be interpreted as the worst-case optimizations over the independent input distributions subject to a weighted-average divergence. Note that the quantity -(1/n_i) Σ_{j=1}^{n_i} log(n_i w_{i,j}) is

the Burg-entropy divergence (Ben-Tal et al. 2013) between the probability weights w_i and the uniform weights. Thus, letting n = Σ_{i=1}^m n_i be the total sample size, we have

    -(1/n) Σ_{i=1}^m Σ_{j=1}^{n_i} log(n_i w_{i,j}) = Σ_{i=1}^m (n_i/n) ( -(1/n_i) Σ_{j=1}^{n_i} log(n_i w_{i,j}) ),

which can be viewed as a weighted average of the individual Burg-entropy divergences imposed on the different input models, each weight being n_i/n. The first constraint in (2) imposes a bound χ²_{1,1−α}/(2n) on this averaged divergence.

2.1 Empirical Likelihood Theory for Sum of Means

We justify the formulation (2) using the EL method. First proposed by Owen (1988), this method can be viewed as a nonparametric counterpart of maximum likelihood theory. Analogous to Wilks' Theorem (Cox and Hinkley 1979) in maximum likelihood theory, which states the convergence of the so-called logarithmic likelihood ratio to a χ²-distribution, the nonparametric profile likelihood ratio in EL converges similarly. This will be the key to obtaining our formulation (2). We describe the EL method. Given m independent sets of data {X_{i,1},...,X_{i,n_i}}, we define the nonparametric likelihood, in terms of the probability weights w_i for the i-th input model, to be Π_{j=1}^{n_i} w_{i,j}. The multi-sample likelihood is Π_{i=1}^m Π_{j=1}^{n_i} w_{i,j}. It can be shown, by a simple convexity argument, that the uniform weights w_{i,j} = 1/n_i for each model maximize this likelihood, giving the value Π_{i=1}^m (1/n_i)^{n_i}. Moreover, the uniform weights still maximize it even if one allows putting weight outside the support of the data, in which case Σ_j w_{i,j} < 1 for some i, making the likelihood even smaller. Therefore, the uniform weights can be viewed as the nonparametric maximum likelihood estimate. To proceed, we need to define a parameter of interest that is determined by the input models. Inference about this parameter is carried out based on the so-called profile nonparametric likelihood ratio, which is the maximum likelihood ratio between all weights that empirically produce a given value of the parameter, and the uniform weights (i.e. the MLE).
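As a concrete illustration of the profile likelihood computation, the single-sample statistic -2 log R(μ) (the m = 1 case, treated in Theorem 2 below) reduces to a one-dimensional root-find for the Lagrange multiplier of the mean constraint. The sketch below is our own illustration under that standard reduction, not the authors' code; the helper name `el_stat` is hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def el_stat(x, mu):
    """-2 log R(mu) for a single-sample mean (m = 1), via the standard
    Lagrange-multiplier reduction w_j = 1 / (n (1 + lam * (x_j - mu)))."""
    d = np.asarray(x, dtype=float) - mu
    if d.min() >= 0.0 or d.max() <= 0.0:
        return float("inf")   # mu outside the convex hull of the data: R(mu) = 0
    # lam must keep every 1 + lam * d_j positive; bracket the root just inside
    lo = -1.0 / d.max() * (1 - 1e-10)
    hi = -1.0 / d.min() * (1 - 1e-10)
    g = lambda lam: np.sum(d / (1.0 + lam * d))   # first-order condition in lam
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))        # equals -2 sum_j log(n w_j)
```

The statistic is zero at the sample mean and grows as μ moves away from it, which is what the χ²₁ calibration exploits.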
In our case the parameter of interest is the performance measure Z(P_1,...,P_m), which is however possibly nonlinear in the P_i's and hence not easy to work with. Fortunately, it turns out that there is not much loss in dealing with the special case h(X_1,...,X_m) = Σ_{i=1}^m h_i(X_i(1)) for some h_i : R → R and T_i = 1 for all i, i.e. the parameter of interest is simply the sum of means of the random variables h_i(X_i(1)). We will focus on this case for now. To simplify further, let us consider estimating Σ_{i=1}^m E X_i, where for convenience we write X_i in place of X_i(1). The profile nonparametric likelihood ratio is defined as

    R(μ) = max { Π_{i=1}^m Π_{j=1}^{n_i} n_i w_{i,j} : Σ_{i=1}^m Σ_{j=1}^{n_i} w_{i,j} X_{i,j} = μ, Σ_{j=1}^{n_i} w_{i,j} = 1, w_{i,j} ≥ 0, for all i, j },    (3)

and is defined to be 0 if the optimization problem in (3) is infeasible. Before getting into the details of the asymptotic theory, we note that usually the logarithmic profile nonparametric likelihood ratio at the true value, i.e. -2 log R(Σ_{i=1}^m E X_i(1)), asymptotically follows some chi-square distribution whose degree of freedom is equal to the number of non-probability-simplex constraints, e.g. Σ_{i,j} w_{i,j} X_{i,j} = μ in (3). As another interpretation, treating the weights w_i as parameters, the degree of freedom is the difference in dimensions of the full and constrained parameter spaces, and typically d constraints result in a loss of d dimensions of the parameter space. Now we state our theorem.

Theorem 1  Let X_i be a random variable distributed under P_i. Assume 0 < Σ_{i=1}^m Var(X_i) < ∞, and min_{i∈I} n_i ≥ c max_{i∈I} n_i always holds for some constant c > 0, where I = {i : Var(X_i) > 0}. Then -2 log R(Σ_{i=1}^m E X_i) converges in distribution to χ²_1, the chi-square distribution with degree of freedom one, as n_i → ∞ for i ∈ I.

In this theorem only the sample sizes of those inputs having positive variance are required to grow to infinity. To see the reason for this, note that data of inputs with zero variance are always equal to the true mean, so we

can always assign them the uniform weights in (3). Other than these zero-variance inputs, the condition min_{i∈I} n_i ≥ c max_{i∈I} n_i basically forces the sample sizes to grow at the same rate. Theorem 1 is a multi-sample generalization of the well-known empirical likelihood theorem for a single-sample mean, which can be found in Chapter 2 of Owen (2001). We state it as the special case where m = 1.

Theorem 2  Let Y_1, Y_2,...,Y_n be i.i.d. random variables distributed under some distribution P, with 0 < Var(Y_1) < ∞. Then -2 log R(E Y_1) converges in distribution to χ²_1, as n → ∞. The function R(·) here is the same as that in (3) with m = 1, n_1 = n, X_{1,j} = Y_j.

2.2 Empirical-likelihood-based Confidence Interval

We discuss how to construct a confidence interval for the quantity Z(P_1,...,P_m) based on the EL theory presented in the last section, and thus justify the validity of the formulation (2). We will first study the linear output case, and then discuss the general performance measure in (1). As pointed out in the last section, a linear output takes the form h(X_1,...,X_m) = Σ_{i=1}^m h_i(X_i), where X_i is distributed under P_i. For output of this particular form, Theorem 1 implies that the optimization pair (2) gives an asymptotically correct confidence interval with coverage probability 1 − α, namely

Theorem 3  Assume 0 < Σ_{i=1}^m Var(h_i(X_i)) < ∞, and min_{i∈I} n_i ≥ c max_{i∈I} n_i for some constant c > 0, where I = {i : Var(h_i(X_i)) > 0}. Then

    P( L_α ≤ Σ_{i=1}^m E h_i(X_i) ≤ U_α ) → 1 − α,  as n_i → ∞ for i ∈ I,

where

    L_α / U_α := min / max  Σ_{i=1}^m Σ_{j=1}^{n_i} w_{i,j} h_i(X_{i,j})
    subject to   -2 Σ_{i=1}^m Σ_{j=1}^{n_i} log(n_i w_{i,j}) ≤ χ²_{1,1−α}
                 Σ_{j=1}^{n_i} w_{i,j} = 1, for all i
                 w_{i,j} ≥ 0, for all i, j    (4)

and χ²_{1,1−α} is the 1 − α quantile of χ²_1.

Proof.  By applying Theorem 1 to the h_i(X_{i,j})'s, we know that the set {μ ∈ R : -2 log R(μ) ≤ χ²_{1,1−α}} contains the true sum Σ_{i=1}^m E h_i(X_i) with probability 1 − α asymptotically. Note that this set can be identified as

    U = { Σ_{i=1}^m Σ_{j=1}^{n_i} w_{i,j} h_i(X_{i,j}) : -2 Σ_{i=1}^m Σ_{j=1}^{n_i} log(n_i w_{i,j}) ≤ χ²_{1,1−α}, Σ_{j=1}^{n_i} w_{i,j} = 1, w_{i,j} ≥ 0 }.

It is obvious that L_α / U_α = min / max {μ : μ ∈ U}. So if the set U is convex, then U = [L_α, U_α], and we conclude the theorem.
To show convexity, it is enough to notice that the feasible set of (4) is convex, and the objective is linear in the w_{i,j}'s.

We now extend our result to the general performance measure (1). We show that, by decomposing Z(w_1,...,w_m) into linear and nonlinear parts via a Taylor-like expansion, applying Theorem 3 to the linear part, and controlling the magnitude of the nonlinear part, we can construct a CI that has an asymptotic guarantee similar to Theorem 3. We assume that

Assumption 1  Σ_{i=1}^m Var(G_i(X_i)) > 0, where G_i(x) = Σ_{j=1}^{T_i} E_{P_1,...,P_m}[h(X_1,...,X_m) | X_i(j) = x].

Assumption 2  Denote the index I_i = (I_i(1),...,I_i(T_i)) ∈ {1,2,...,T_i}^{T_i}, and X_{i,I_i} = (X_i(I_i(1)),...,X_i(I_i(T_i))). Assume that there exists some positive integer k such that h(X_{1,I_1},...,X_{m,I_m}) has finite 2k-th moment for all possible choices of the I_i's.

It is not difficult to see that Assumption 1 is the counterpart of the condition Σ_{i=1}^m Var(h_i(X_i)) > 0 in Theorem 3, which is needed in dealing with the linear part of Z(w_1,...,w_m), and Assumption 2 is used for controlling the nonlinear part. We state our result for the general performance measure below.

Theorem 4  If Assumptions 1 and 2 hold, and min_i n_i ≥ c max_i n_i always holds for some constant c > 0, then

    P( L_α + O_p(1/n^{(k−1)/k}) ≤ Z(P_1,...,P_m) ≤ U_α + O_p(1/n^{(k−1)/k}) ) → 1 − α,  as n_i → ∞,

where n = Σ_{i=1}^m n_i and L_α, U_α are defined as in (2).

This theorem shows that if at least the sixth moments (k ≥ 3) of the function h are finite, then the difference O_p(1/n^{(k−1)/k}) is asymptotically negligible compared with O_p(1/√n), the length of the CI. This holds trivially if, for instance, h is a bounded function.

3 MIRROR DESCENT STOCHASTIC APPROXIMATION

This section focuses on solving the optimization problem (2) using the mirror descent (MD) algorithm. The MD algorithm was first proposed by Nemirovsky et al. (1982) for deterministic convex optimization on a general normed space, motivated by the challenge that standard gradient descent in Euclidean space may not generally make sense, because the primal space, where the solution lies, can be different from its dual, where the gradient is defined. It resolves the issue by mapping the solution to the dual space and conducting gradient descent therein. For optimizations in R^n, one does not necessarily need to use MD, since the primal and the dual space are the same. However, with a given set of constraints, a judicious choice of a primal-dual map can result in an iteration subroutine that is computationally efficient. Such observations extend to the setting where the gradient can only be simulated.
In this case, the algorithm becomes a constrained SA procedure, and leads to MDSA. Here we outline the MDSA procedure for the following generic version of problem (2), with η > 0:

    min  Z(w_1,...,w_m)
    subject to   -2 Σ_{i=1}^m Σ_{j=1}^{n_i} log(n_i w_{i,j}) ≤ η
                 Σ_{j=1}^{n_i} w_{i,j} = 1, for all i
                 w_{i,j} ≥ 0, for all i, j    (5)

The feasible set here, denoted by A, is convex and satisfies A ⊆ P ⊆ R^n, where P denotes the product of the probability simplices associated with the support sets of sizes n_i, and n = Σ_{i=1}^m n_i. The norm on R^n is chosen to be the 1-norm, ||(w_1,...,w_m)||_1 = Σ_{i=1}^m Σ_{j=1}^{n_i} |w_{i,j}|. First we take the entropy distance-generating function ω on P (used in the so-called entropic descent algorithm (Beck and Teboulle 2003)),

    ω(w_1,...,w_m) = Σ_{i=1}^m Σ_{j=1}^{n_i} w_{i,j} log w_{i,j}.

Strong convexity of ω is guaranteed by the following proposition, whose proof is similar to that of the special case m = 1 in Nemirovski et al. (2009).

Proposition 1  ω(w_1,...,w_m) is strongly convex with parameter 1/m w.r.t. the 1-norm in the relative interior of P, i.e.

    ω(u_1,...,u_m) − ω(w_1,...,w_m) − Σ_{i,j} (∂ω/∂w_{i,j})(w_1,...,w_m) (u_{i,j} − w_{i,j}) ≥ (1/(2m)) ( Σ_{i=1}^m ||u_i − w_i||_1 )².

Corresponding to ω is the prox-function, responsible for projecting the solution back to the primal decision space,

    V(u_1,...,u_m; w_1,...,w_m) = ω(w_1,...,w_m) − ω(u_1,...,u_m) − Σ_{i,j} (∂ω/∂w_{i,j})(u_1,...,u_m) (w_{i,j} − u_{i,j}) = Σ_{i=1}^m Σ_{j=1}^{n_i} w_{i,j} log(w_{i,j}/u_{i,j}).

In each step of MDSA, the current solution (w^k_1,...,w^k_m) is updated via the prox-mapping

    prox(w^k_1,...,w^k_m) = argmin_{(w_1,...,w_m) ∈ A}  γ_k Σ_{i,j} ĝ_{i,j} (w_{i,j} − w^k_{i,j}) + V(w^k_1,...,w^k_m; w_1,...,w_m),    (6)

where γ_k is the step size, and ĝ_{i,j} is an estimate of each component ∂Z/∂w_{i,j} of the gradient.

3.1 Gradient Estimation

Here we discuss how to estimate the gradient of the objective Z(w_1,...,w_m), which is needed in the prox-mapping (6). It is not hard to see that Z(w_1,...,w_m) is a multivariate polynomial in the w_{i,j}'s, and hence its gradient is also a polynomial. However, a closer examination reveals that the number of terms in this polynomial grows as fast as n^T, where n is the order of the sample sizes and T the time horizon. Thus it is generally impossible to compute the gradient by naively summing up all the terms, and simulation is needed. We adopt the proposal of Ghosh and Lam (2015a) and Ghosh and Lam (2015b) of using the directional derivative

    ψ_{i,j} = (d/dε) Z(w_1,...,(1−ε) w_i + ε e_{i,j},...,w_m) |_{ε=0}

instead of the standard derivative ∂Z/∂w_{i,j}, where e_{i,j} is the j-th coordinate vector of R^{n_i}. It is shown that substituting ψ_{i,j} in place of ∂Z/∂w_{i,j} retains all properties needed in the prox-mapping (6), and appealingly ψ_{i,j} is simulable. Here is an extension of their result.

Proposition 2  For any (w_1,...,w_m) ∈ P with each w_{i,j} > 0, let

    ψ_{i,j} = (d/dε) Z(w_1,...,(1−ε) w_i + ε e_{i,j},...,w_m) |_{ε=0},

and let ∂Z/∂w_{i,j} be the standard derivative.
Then

    Σ_{i,j} ψ_{i,j} (u_{i,j} − w_{i,j}) = Σ_{i,j} (∂Z/∂w_{i,j}) (u_{i,j} − w_{i,j}),  for any (u_1,...,u_m) ∈ P,

and

    ψ_{i,j} = E_{w_1,...,w_m}[h(X_1,...,X_m) S_{i,j}(X_i)],

where

    S_{i,j}(X_i) = Σ_{t=1}^{T_i} 1{X_i(t) = X_{i,j}} / w_{i,j} − T_i.

This proposition suggests the following unbiased estimator for ψ_{i,j}:

    ψ̂_{i,j} = (1/R) Σ_{r=1}^R h(X^r_1,...,X^r_m) S_{i,j}(X^r_i),    (7)

where X^r_1,...,X^r_m, r = 1,...,R, are R independent replications of the input processes generated under the weights (w_1,...,w_m), and are used simultaneously in all the ψ̂_{i,j}'s.
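To make the estimator (7) concrete, here is a minimal Monte Carlo sketch for a single input model (m = 1); the function name and setup are our own. We check it on a case where the directional derivative is available in closed form: for h(X) = X(1) with T = 1, ψ_j = X_j − Z(w).

```python
import numpy as np

def simulate_psi(h, support, w, T, R, rng):
    """Estimate psi_j = E_w[h(X) S_j(X)] for one input model, where
    S_j(X) = (number of t with X(t) = support[j]) / w_j - T."""
    n = len(support)
    psi = np.zeros(n)
    for _ in range(R):
        idx = rng.choice(n, size=T, p=w)                 # simulate the input process
        score = np.bincount(idx, minlength=n) / w - T    # S_j evaluated at this path
        psi += h(support[idx]) * score
    return psi / R

rng = np.random.default_rng(1)
support = np.array([1.0, 2.0, 3.0])
w = np.full(3, 1 / 3)
psi_hat = simulate_psi(lambda x: x[0], support, w, T=1, R=50_000, rng=rng)
# exact values here: psi_j = support[j] - Z(w), i.e. (-1, 0, 1)
```

Note that all components ψ̂_j reuse the same R replications, exactly as prescribed after (7).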

3.2 Computing the Prox-mapping

We discuss how to solve the minimization problem in the prox-mapping (6), with ∂Z/∂w_{i,j} replaced by ψ̂_{i,j}. Let us work with the generic form

    min  Σ_{i,j} ξ_{i,j} w_{i,j} + V(u_1,...,u_m; w_1,...,w_m)
    subject to   -2 Σ_{i=1}^m Σ_{j=1}^{n_i} log(n_i w_{i,j}) ≤ η
                 Σ_{j=1}^{n_i} w_{i,j} = 1, for all i
                 w_{i,j} ≥ 0, for all i, j,    (8)

where the ξ_{i,j}'s are put in place of the gradient estimators. Seeing that the decision space of (8) has dimension n = Σ_i n_i, which can be significantly larger than the number of functional constraints m + 1, our strategy is to first solve the Lagrangian dual problem, and then use the dual optimal solution to recover the primal optimal solution. Consider the Lagrangian

    L(w_1,...,w_m, λ, β) = Σ_{i=1}^m Σ_{j=1}^{n_i} [ ξ_{i,j} w_{i,j} + w_{i,j} log(w_{i,j}/u_{i,j}) − 2β log(n_i w_{i,j}) ] + Σ_{i=1}^m λ_i ( Σ_{j=1}^{n_i} w_{i,j} − 1 ) − βη.

The dual function, defined for λ ∈ R^m and β ≥ 0, is

    g(λ, β) = min_{w_1,...,w_m ≥ 0} L(w_1,...,w_m, λ, β)
            = Σ_{i=1}^m Σ_{j=1}^{n_i} min_{w_{i,j} ≥ 0} ( (ξ_{i,j} + λ_i) w_{i,j} + w_{i,j} log(w_{i,j}/u_{i,j}) − 2β log(n_i w_{i,j}) ) − Σ_{i=1}^m λ_i − βη    (9)
            = Σ_{i=1}^m Σ_{j=1}^{n_i} ( (ξ_{i,j} + λ_i) w̃_{i,j} + w̃_{i,j} log(w̃_{i,j}/u_{i,j}) − 2β log(n_i w̃_{i,j}) ) − Σ_{i=1}^m λ_i − βη,    (10)

where w̃_{i,j} is the minimizer of the inner optimization in (9). This inner optimization can be solved efficiently using Newton's method. In other words, g(λ, β) = L(w̃_1,...,w̃_m, λ, β) can be easily evaluated. Moreover, it can be shown that w̃_{i,j} is always strictly positive, hence ∂L/∂w_{i,j} |_{w_{i,j} = w̃_{i,j}} = 0. By this observation and the chain rule, the first derivatives of the dual function are

    ∂g/∂λ_i = Σ_{j=1}^{n_i} w̃_{i,j} − 1,    ∂g/∂β = -2 Σ_{i=1}^m Σ_{j=1}^{n_i} log(n_i w̃_{i,j}) − η.    (11)

To obtain the second derivatives, note that w̃_{i,j} is determined by

    ξ_{i,j} + λ_i + log(w̃_{i,j}/u_{i,j}) + 1 − 2β/w̃_{i,j} = 0.

Implicit differentiation gives

    ∂w̃_{i,j}/∂λ_i = − w̃²_{i,j}/(w̃_{i,j} + 2β),    ∂w̃_{i,j}/∂λ_s = 0, for s ≠ i,    ∂w̃_{i,j}/∂β = 2 w̃_{i,j}/(w̃_{i,j} + 2β).

So the second derivatives are

    ∂²g/∂λ_i² = − Σ_{j=1}^{n_i} w̃²_{i,j}/(w̃_{i,j} + 2β),    ∂²g/∂λ_i∂λ_s = 0, for s ≠ i,
    ∂²g/∂β² = − Σ_{i=1}^m Σ_{j=1}^{n_i} 4/(w̃_{i,j} + 2β),    ∂²g/∂λ_i∂β = Σ_{j=1}^{n_i} 2 w̃_{i,j}/(w̃_{i,j} + 2β).    (12)

Therefore we have shown in (10), (11) and (12) that the dual function g(λ, β), its gradient and its Hessian can all be efficiently evaluated. This allows us to solve the dual problem

    max_{λ,β}  g(λ, β)
    subject to  λ = (λ_1,...,λ_m) ∈ R^m, β ≥ 0    (13)

efficiently using, e.g., the interior point method. The following proposition helps justify our proposal of solving the dual (13) instead.

Proposition 3  There is a unique optimal solution (λ*_1,...,λ*_m, β*) to the dual (13). Moreover, the primal optimal solution w*_{i,j} to (8) can be obtained by solving

    ξ_{i,j} + λ*_i + log(w*_{i,j}/u_{i,j}) + 1 − 2β*/w*_{i,j} = 0,  i = 1,...,m, j = 1,...,n_i.

The full MDSA algorithm is described in Algorithm 1, which possesses the following convergence guarantee.

Theorem 5  Suppose there exists a unique optimal solution (w*_1,...,w*_m) for (5) such that for any (w_1,...,w_m) in the feasible set

    Σ_{i,j} (∂Z/∂w_{i,j})(w*_1,...,w*_m) (w_{i,j} − w*_{i,j}) = 0  if and only if  w_i = w*_i for all i.    (14)

Then the sequence (w^k_1,...,w^k_m) generated by Algorithm 1, with step sizes γ_k such that Σ_{k=1}^∞ γ_k = ∞ and Σ_{k=1}^∞ γ²_k < ∞, converges to (w*_1,...,w*_m) almost surely.

It is easy to verify that the condition (14) must hold if the objective is convex, provided the optimal solution is unique. Besides convexity, condition (14) also applies to objectives that, along any ray starting from the optimal solution, are strictly increasing.

4 NUMERICAL EXPERIMENT

In this section we present some simulation studies on the validity of the proposed method, and compare our method with bootstrap resampling (Barton and Schruben 1993, Barton and Schruben 2001). We will show that our method produces statistically valid CIs that have similar coverage and, in some sense, are more stable compared with the bootstrap. We consider the canonical M/M/1 queue with arrival rate 0.8 and service rate 1. The system is empty when the first customer comes in.
We are interested in the probability that the 20-th customer waits for

longer than 2 units of time to get served. To put it in the form of (1), let A_t be the inter-arrival time between the t-th and (t+1)-th customers, S_t be the service time of the t-th customer, and

    h(A_1, A_2,...,A_19, S_1, S_2,...,S_19) = 1{W_20 > 2},    (15)

where the waiting time W_20 is calculated via the Lindley recursion

    W_1 = 0,  W_{t+1} = max{W_t + S_t − A_t, 0},  for t = 1,...,19.    (16)

To test the method, we pretend that both the inter-arrival time distribution and the service time distribution are unknown, but data for both inputs are accessible. Specifically, we generate two samples of sizes n_1 and n_2 from exponential distributions with rates 0.8 and 1 respectively, and plug the data into (2) to compute a 95% CI using Algorithm 1.

Algorithm 1  MDSA for solving (5)
Input: a parameter η > 0, an initial feasible solution (w^1_1,...,w^1_m), a step size sequence γ_k, and the number of replications per iteration R.
Iteration: set k = 1.
1. Estimate ψ̂^k_{i,j} = ψ̂_{i,j}(w^k_1,...,w^k_m), i = 1,...,m, j = 1,...,n_i, using
    ψ̂^k_{i,j} = (1/R) Σ_{r=1}^R h(X^r_1,...,X^r_m) S_{i,j}(X^r_i),
where X^r_1,...,X^r_m, r = 1,...,R, are R independent replications of the input processes under the distributions (w^k_1,...,w^k_m).
2. Compute the optimal solution (λ^{k+1}_1,...,λ^{k+1}_m, β^{k+1}) to
    max g(λ, β)   subject to λ = (λ_1,...,λ_m) ∈ R^m, β ≥ 0,
as discussed in Section 3.2, with ξ_{i,j} = γ_k ψ̂^k_{i,j} and u_{i,j} = w^k_{i,j}.
3. Solve the equations
    γ_k ψ̂^k_{i,j} + λ^{k+1}_i + log(w^{k+1}_{i,j}/w^k_{i,j}) + 1 − 2β^{k+1}/w^{k+1}_{i,j} = 0
for (w^{k+1}_1,...,w^{k+1}_m), set k = k + 1, then go back to step 1.

The parameters of Algorithm 1 are empirically chosen based on several test runs. In all the following experiments, the step size is set to be γ_k = 1/(2√(n_1) k), and the number of replications for gradient estimation is R = 30. The algorithm is terminated if the difference, measured in the 1-norm, between the average of the last 50 iterates and that of the last 51 to 100 iterates is less than some small ε > 0, which shall be specified in each of the experiments below.
The average of the last 50 iterates also serves as the final output of the algorithm. Apart from the stochastic nature of the algorithm, another source of uncertainty that affects the CI is the final evaluation of the objective at the confidence bounds, i.e. computing L_α, U_α. The bounds are computed by taking the average of a number of replications, called R_e, of (15). Since the length of the CI with only input uncertainty is of order 1/√n_1, to make the stochastic error negligible, R_e is chosen to be moderately larger than n_1, n_2. The value of R_e for each experiment is also to be specified below.
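For reference, the performance measure in (15) and (16) can itself be estimated by direct simulation under the true input distributions. The following is our own minimal sketch of that direct estimate, with the rates used in the experiment:

```python
import numpy as np

def sim_W20_exceeds_2(reps, arr_rate=0.8, svc_rate=1.0, rng=None):
    """Estimate P(W_20 > 2) for the M/M/1 example via the Lindley
    recursion (16): W_1 = 0, W_{t+1} = max(W_t + S_t - A_t, 0)."""
    rng = rng or np.random.default_rng()
    A = rng.exponential(1.0 / arr_rate, size=(reps, 19))  # inter-arrival times
    S = rng.exponential(1.0 / svc_rate, size=(reps, 19))  # service times
    W = np.zeros(reps)
    for t in range(19):
        W = np.maximum(W + S[:, t] - A[:, t], 0.0)        # one Lindley step
    return np.mean(W > 2.0)

p_hat = sim_W20_exceeds_2(20_000, rng=np.random.default_rng(0))
```

This gives the "true" target that the EL and bootstrap CIs below are trying to cover; the transient probability lies below the steady-state tail 0.8 e^{-0.4} ≈ 0.54 because the system starts empty.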

4.1 Accuracy of the EL confidence interval

We draw 100 samples of sizes n_1 = n_2 = 30, n_1 = n_2 = 50, and n_1 = n_2 = 100, respectively, and construct one CI based on each sample. Table 1 shows that the CIs have accurate coverage probabilities among all the considered sample sizes, and can be computed in a reasonably short time. Here R_e = 1000 is large enough because the sample size is no more than 100. To verify that the algorithm has indeed converged under these stopping criteria ε, trace plots of components of the weight vector are extracted. See Figure 1 for some of them.

Table 1: Performance of the EL method under various sample sizes. The number of replications for the final evaluation is R_e = 1000 for all; the stopping criterion parameter ε is specified in each case.

    Sample size                          30, ε =         50, ε =          100, ε =
    Run time per CI (seconds)
    Coverage probability estimate
    95% CI for coverage probability      [0.893, 0.987]  [0.867, 0.973]   [0.893, 0.987]
    # of replications of h per CI
    # of iterations per CI (min, max)

Figure 1: Trace plots, against the iteration count, of the first component of the weight vectors for the inter-arrival time ((a): w_{1,1}) and the service time ((b): w_{2,1}).

4.2 Comparison with bootstrap resampling

The bootstrap has been widely used to approximate the sampling distribution of a statistic, from which CIs can be constructed. One method of constructing CIs using the bootstrap is the percentile bootstrap used in Barton and Schruben (1993) and Barton and Schruben (2001). Given a pair of samples, A_1, A_2,...,A_{n_1} for the inter-arrival time and S_1, S_2,...,S_{n_2} for the service time, the percentile bootstrap proceeds as follows. First choose B, the number of bootstrap samples, and N, the number of simulation replications for each bootstrap sample.
Then for each b = 1,2,...,B: (1) draw a simple random sample of size n_1 with replacement from {A_1,...,A_{n_1}} and a sample of size n_2 with replacement from {S_1,...,S_{n_2}}, denoted by {A^1_b,...,A^{n_1}_b} and {S^1_b,...,S^{n_2}_b} respectively; (2) generate N replications of h with the inter-arrival distribution being uniform over A^1_b,...,A^{n_1}_b and the service time distribution being uniform over S^1_b,...,S^{n_2}_b, and take the average Z_b as the estimate of E h. Finally, output the 0.025(B+1)-th and 0.975(B+1)-th order statistics of {Z_b}_{b=1}^B, Z_(0.025(B+1)) and Z_(0.975(B+1)), as the lower and upper limits of the CI.
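The procedure above can be sketched as follows. This is our own illustration: `estimate_Z` is a hypothetical stand-in for the N-replication simulation average of h under the resampled inputs, and in the toy check below it is replaced by a plain sample mean.

```python
import numpy as np

def percentile_bootstrap_ci(A, S, estimate_Z, B, N, rng):
    """Percentile bootstrap CI for Eh, as described above: resample both
    input samples B times, simulate, then take empirical quantiles."""
    Z = np.empty(B)
    for b in range(B):
        A_b = rng.choice(A, size=len(A), replace=True)   # step (1)
        S_b = rng.choice(S, size=len(S), replace=True)
        Z[b] = estimate_Z(A_b, S_b, N, rng)              # step (2)
    Z.sort()
    k_lo = int(0.025 * (B + 1)) - 1                      # 0-based order-statistic index
    k_hi = int(0.975 * (B + 1)) - 1
    return Z[k_lo], Z[k_hi]

# toy check: the "simulation" is just the mean of the resampled inter-arrival times
rng = np.random.default_rng(2)
A = rng.exponential(1.25, size=50)
S = rng.exponential(1.00, size=50)
lo, hi = percentile_bootstrap_ci(A, S, lambda A_b, S_b, N, r: A_b.mean(),
                                 B=999, N=1, rng=rng)
```

Here B = 999 is chosen so that 0.025(B+1) and 0.975(B+1) are integers; for other B some rounding convention for the order-statistic indices is needed.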

To compare our method with bootstrap resampling, we study two cases. In the first case (Table 2), we draw 100 samples of size n_1 = n_2 = 50, and compute a CI for each of them. In the second case (Table 3), we draw only one sample of size n_1 = n_2 = 50, and repeatedly compute 50 CIs based on this single sample. To make the comparison fair, we appropriately set the parameters B, N for the bootstrap method, and ε, R_e for the EL method, so that the run times are comparable. The results show that the EL method generates comparable but slightly narrower CIs than the bootstrap, while still keeping similar coverage probabilities. In the first case, most of the uncertainty in either method comes from the input data, so the CIs they generate exhibit similar variation. In the second case, the input data are fixed, and hence the fluctuations of the CIs are solely due to the resampling and the stochastic noises. Table 3 shows that the EL method gives more stable CIs than the bootstrap, meaning that the EL is less vulnerable to these noises. This can be attributed to the fact that EL is an optimization-based method and hence does not succumb to the additional resampling noise introduced in the bootstrap.

Table 2: EL method versus bootstrap, 100 confidence intervals for 100 samples.

                                         Bootstrap           Bootstrap           EL method
                                         B = 1000, N = 500   B = 500, N = 1000   ε = , R_e = 500
    Run time per CI (seconds)
    Coverage probability estimate
    95% CI for coverage probability      [0.841, 0.959]      [0.893, 0.987]      [0.880, 0.980]
    Mean CI length
    Standard deviation of CI length
    Mean lower limit
    Standard deviation of lower limit
    Mean upper limit
    Standard deviation of upper limit
    # replications of h per CI

Table 3: EL method versus bootstrap, 50 replications of confidence intervals for a single sample.
                                         Bootstrap, B = 500, N = 1000    EL, ε = , R_e =
    Run time per CI (seconds)
    Standard deviation of CI length
    Standard deviation of lower limit
    Standard deviation of upper limit
    # replications of h per CI

5 CONCLUSION

We have proposed an optimization-based method to quantify nonparametric input uncertainty for simulation performance measures. The method computes CIs that account for input errors by positing a pair of optimizations subject to a weighted average of Burg-entropy divergence constraints on the collection of empirical input models. We have argued the statistical accuracy of these optimizations via the EL method. We have also investigated an MDSA algorithm to locally solve the optimizations, and compared its numerical performance with bootstrap resampling. In future work, we will investigate the generalization of this approach to other types of performance measures, and more efficient algorithms to solve the involved class of optimizations.

ACKNOWLEDGMENTS

We gratefully acknowledge support from the National Science Foundation under grants CMMI / and CMMI /.

REFERENCES

Barton, R. R., and L. W. Schruben. 1993. "Uniform and Bootstrap Resampling of Empirical Distributions". In Proceedings of the 1993 Winter Simulation Conference, edited by G. Evans, M. Mollaghasemi, E. Russell, and W. Biles. ACM.
Barton, R. R., and L. W. Schruben. 2001. "Resampling Methods for Input Modeling". In Proceedings of the 2001 Winter Simulation Conference, edited by B. A. Peters, J. S. Smith, D. J. Medeiros, and M. W. Rohrer, Volume 1. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
Beck, A., and M. Teboulle. 2003. "Mirror Descent and Nonlinear Projected Subgradient Methods for Convex Optimization". Operations Research Letters 31 (3).
Ben-Tal, A., D. Den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. 2013. "Robust Solutions of Optimization Problems Affected by Uncertain Probabilities". Management Science 59 (2).
Cox, D. R., and D. V. Hinkley. 1979. Theoretical Statistics. CRC Press.
Delage, E., and Y. Ye. 2010. "Distributionally Robust Optimization Under Moment Uncertainty with Application to Data-Driven Problems". Operations Research 58 (3).
Ghosh, S., and H. Lam. 2015a. "Computing Worst-Case Input Models in Stochastic Simulation". arXiv preprint.
Ghosh, S., and H. Lam. 2015b. "Mirror Descent Stochastic Approximation for Computing Worst-Case Stochastic Input Models". In Proceedings of the 2015 Winter Simulation Conference, edited by S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, Inc.
Goh, J., and M. Sim. 2010. "Distributionally Robust Optimization and its Tractable Approximations". Operations Research 58 (4-part-1).
Nemirovski, A., A. Juditsky, G. Lan, and A. Shapiro. 2009. "Robust Stochastic Approximation Approach to Stochastic Programming". SIAM Journal on Optimization 19 (4).
Nemirovsky, A.-S., D.-B. Yudin, and E.-R. Dawson. 1982. Problem Complexity and Method Efficiency in Optimization.
John Wiley & Sons, Inc., Panstwowe Wydawnictwo Naukowe (PWN). Owen, A. B Epirical Likelihood Ratio Confidence Intervals for a Single Functional. Bioetrika 75 (2): Owen, A. B Epirical Likelihood. CRC Press. Pardo, L Statistical Inference Based on Divergence Measures. CRC Press. AUTHOR BIOGRAPHIES HENRY LAM is an Assistant Professor in the Departent of Industrial and Operations Engineering at the University of Michigan, Ann Arbor. His research focuses on stochastic siulation, risk analysis, and siulation optiization. His eail address is khla@uich.edu. HUAJIE QIAN is a Ph.D. student in Applied and Interdisciplinary Matheatics at the University of Michigan, Ann Arbor. His research interest lies in applied probability and siulation optiization. His eail address is hqian@uich.edu.


More information

Convex Programming for Scheduling Unrelated Parallel Machines

Convex Programming for Scheduling Unrelated Parallel Machines Convex Prograing for Scheduling Unrelated Parallel Machines Yossi Azar Air Epstein Abstract We consider the classical proble of scheduling parallel unrelated achines. Each job is to be processed by exactly

More information

lecture 36: Linear Multistep Mehods: Zero Stability

lecture 36: Linear Multistep Mehods: Zero Stability 95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

Ufuk Demirci* and Feza Kerestecioglu**

Ufuk Demirci* and Feza Kerestecioglu** 1 INDIRECT ADAPTIVE CONTROL OF MISSILES Ufuk Deirci* and Feza Kerestecioglu** *Turkish Navy Guided Missile Test Station, Beykoz, Istanbul, TURKEY **Departent of Electrical and Electronics Engineering,

More information

The Frequent Paucity of Trivial Strings

The Frequent Paucity of Trivial Strings The Frequent Paucity of Trivial Strings Jack H. Lutz Departent of Coputer Science Iowa State University Aes, IA 50011, USA lutz@cs.iastate.edu Abstract A 1976 theore of Chaitin can be used to show that

More information

Support Vector Machines. Machine Learning Series Jerry Jeychandra Blohm Lab

Support Vector Machines. Machine Learning Series Jerry Jeychandra Blohm Lab Support Vector Machines Machine Learning Series Jerry Jeychandra Bloh Lab Outline Main goal: To understand how support vector achines (SVMs) perfor optial classification for labelled data sets, also a

More information

UNCERTAINTIES IN THE APPLICATION OF ATMOSPHERIC AND ALTITUDE CORRECTIONS AS RECOMMENDED IN IEC STANDARDS

UNCERTAINTIES IN THE APPLICATION OF ATMOSPHERIC AND ALTITUDE CORRECTIONS AS RECOMMENDED IN IEC STANDARDS Paper Published on the16th International Syposiu on High Voltage Engineering, Cape Town, South Africa, 2009 UNCERTAINTIES IN THE APPLICATION OF ATMOSPHERIC AND ALTITUDE CORRECTIONS AS RECOMMENDED IN IEC

More information