A New Approach to Sequential Stopping for Stochastic Simulation


Jing Dong, Northwestern University, Evanston, IL 60208
Peter W. Glynn, Stanford University, Stanford, CA

In this paper, we solve the sequential stopping problem for a class of simulation problems in which variance estimation is difficult. We establish the asymptotic validity of sequential stopping procedures for estimators constructed using the sectioning (replication) method with a fixed number of sections. The limiting distribution of the estimators at the stopping times, as the error size (the absolute error or the relative error) goes to 0, is characterized in closed form. This limiting distribution is different from the limiting distribution of the estimator constructed from a fixed number of samples as the sample size goes to infinity, which indicates that we need a different scaling parameter when constructing the corresponding confidence intervals under the sequential stopping procedure. In particular, the scaling parameters we derive are larger than those suggested by the corresponding Student-t distribution. We also investigate the empirical performance of our proposed sequential stopping algorithms through simulation experiments.

Key words: stochastic simulation, sequential stopping

1. Introduction

Suppose that we wish to compute some performance measure α via simulation. Simulation-based estimators fall, roughly speaking, into two categories: fixed sample size procedures and sequential procedures. In a fixed sample size procedure, one decides a priori on the number of samples that will be generated to form an estimator for α, either based on knowledge of similar problems or on a set of initial trial runs to estimate the associated variance (thereby providing an estimate for the required sample size). In a sequential procedure, on the other hand, one continues drawing observations until some pre-specified level of absolute or relative precision has been reached.
In most computational settings, sequential procedures are algorithmically more natural.

The most widely studied class of sequential algorithms assumes that the estimator α(t), based on a sample size t, satisfies the central limit theorem (CLT), so that there exists σ > 0 for which

α(t) ≈_D α + (σ/√t) N(0, 1)   (1)

when t is large, where N(0, 1) is a normal random variable (rv) having mean 0 and variance 1, and ≈_D means "has approximately the same distribution as"; see Section 2 for a more rigorous formulation. Relation (1) implies that

[α(t) − z_{δ/2} σ/√t, α(t) + z_{δ/2} σ/√t]

is an approximate 100(1 − δ)% confidence interval for α when t is large, provided that one chooses z_{δ/2} so that P(−z_{δ/2} ≤ N(0, 1) ≤ z_{δ/2}) = 1 − δ. To obtain a confidence interval of prescribed half-width ε, one then samples until z_{δ/2} σ/√t ≤ ε. Of course, in practice it will never be the case that σ is known, so it must be estimated from the observed data by some estimator S(t), say. This suggests that one continues sampling until z_{δ/2} S(t)/√t ≤ ε. This type of sequential procedure was first studied by Chow and Robbins (1965), when α(t) is a sample mean of t independent and identically distributed (iid) observations; in this setting, S²(t) is just the corresponding sample variance. Their analysis established that when the simulation is stopped in this way,

P(α ∈ [α(T(ε)) − ε, α(T(ε)) + ε]) → 1 − δ as ε → 0,

where T(ε) is the (random) sample size at which z_{δ/2} S(t)/√t first falls below ε. Such a procedure is said to achieve absolute precision ε, with an asymptotic confidence level of 1 − δ as ε → 0. Similarly, a sequential procedure achieves relative precision ε with an asymptotic confidence level of 1 − δ (based on stopping at sample size T̃(ε)) if

P(α ∈ [α(T̃(ε))(1 − ε), α(T̃(ε))(1 + ε)]) → 1 − δ as ε → 0

when α > 0. Glynn and Whitt (1992) extend the Chow-Robbins methodology to much more general simulation settings, in which α(t) either cannot be expressed as a sample mean (e.g. a sample quantile) or α(t) is a sample mean constructed from dependent observations (e.g. a steady-state simulation). In particular, they showed that if S(t) is strongly consistent for σ, in the sense that

S(t) → σ   (2)
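The Chow-Robbins rule above is easy to sketch for an iid sample mean. The following minimal Python sketch (our own helper names, not from the paper) keeps a Welford-style running variance and stops once the normal-theory half-width z_{δ/2} S(n)/√n falls below ε; the small minimum sample size n_min plays the same early-stopping guard role that the paper discusses later for its own procedure.

```python
import math
import random

def chow_robbins_mean(sampler, eps, z=1.96, n_min=10):
    """Stop when the CI half-width z * S(n) / sqrt(n) drops below eps.

    sampler() draws one iid observation.  Returns (sample mean, n).
    Welford's recursion keeps the storage requirement O(1).
    """
    n, mean, m2 = 0, 0.0, 0.0          # m2 = running sum of squared deviations
    while True:
        x = sampler()
        n += 1
        d = x - mean
        mean += d / n
        m2 += d * (x - mean)
        if n >= n_min:
            s = math.sqrt(m2 / (n - 1))        # sample standard deviation S(n)
            if z * s / math.sqrt(n) < eps:     # half-width below eps: stop
                return mean, n

random.seed(1)
est, n = chow_robbins_mean(lambda: random.gauss(0.0, 1.0), eps=0.05)
# [est - 0.05, est + 0.05] is then an approximate 95% CI for the mean
```
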

almost surely (a.s.) as t → ∞, then Chow-Robbins type absolute and relative precision procedures exhibit the correct asymptotic coverage levels as ε → 0 (under a slight enhancement of the CLT assumption, known as the functional central limit theorem (FCLT)). They further showed that such asymptotic coverage guarantees may fail to be valid when (2) is relaxed to, for example, the requirement of consistency for S(t) (in which case one requires only that S(t) converges to σ in probability as t → ∞). Related literature on Chow-Robbins type procedures can be found in Lavenberg and Sauer (1977), Law and Carson (1979), Jones et al. (2006), and Bayraksan and Pierre-Louis (2012). Unfortunately, there exist many simulation settings in which even consistent estimation of σ is either theoretically difficult or computationally inconvenient (let alone guaranteeing that the estimator is also strongly consistent). For example, in the steady-state simulation context, the quantity σ² is known as the time-average variance constant (TAVC). The TAVC is difficult to estimate consistently, because σ² reflects the complicated auto-correlation structure of the underlying stochastic process. In particular, when the underlying process is non-regenerative, the problem of consistently estimating the TAVC is an ongoing research area; see Alexopoulos et al. (2007), Wu (2009), Meterelliyoz et al. (2012). Even less is known about strong consistency of such estimators; see however Damerdji (1994). Computation of quantiles and conditional value-at-risk (CVaR) are examples of other settings in which estimating σ is challenging (in part because the formula for σ includes the density, and density estimation is known to be both practically and theoretically difficult). Our goal in this paper is to introduce an entirely new approach to sequentially estimating α to a given precision, one that applies in much greater generality than do the above Chow-Robbins type procedures.
The method applies to steady-state simulations, quantile computations, and CVaR estimation, and can be used more generally as well. Our idea stems from an alternative fixed sample size confidence interval procedure that avoids the need to consistently estimate σ. In particular, the sectioning (replication) method runs m

independent replications of the stochastic process to time t. One then computes the mean of each section (replication) and constructs the associated sample variance estimator S²_m(t) from the section means. In this case,

√m (α(t) − α)/S_m(t) ≈_D t_{m−1}   (3)

in great generality when t is large, where t_{m−1} is a Student-t rv with m − 1 degrees of freedom. Note that m is fixed in this analysis. The approximation (3) holds even when the simulation output is auto-correlated, so it applies to steady-state simulations. As a consequence, the fixed sample size confidence interval is

[α(t) − t_{δ/2,m−1} S_m(t)/√m, α(t) + t_{δ/2,m−1} S_m(t)/√m],

where t_{δ/2,m−1} is chosen so that P(−t_{δ/2,m−1} < t_{m−1} < t_{δ/2,m−1}) = 1 − δ. If one proceeds as for Chow-Robbins, one is led to sequential procedures in which one stops sampling when t_{δ/2,m−1} S_m(t)/√m ≤ ε. As we shall see later, this naive implementation fails, because

√m (α(T(ε)) − α)/S_m(T(ε))   (4)

does not converge in distribution to t_{m−1} as ε → 0. (The corresponding limit theorem does hold in the Chow-Robbins setting, and is the fundamental explanation for the theoretical coverage results obtained there when ε → 0.) Empirical evidence of the failure of such sequential procedures was observed a number of years ago; see Adam (1983). However, it turns out that (4) does indeed converge to a limit as ε → 0, albeit not a Student-t rv. If one calibrates the confidence interval in terms of the correct limit distribution (rather than the t_{δ/2,m−1} value associated with t_{m−1}), one obtains an asymptotically valid sequential procedure. The sectioning framework is naturally parallelizable, as each section can be run on a different processor. We run the sequential checking in a synchronized fashion; an unsynchronized implementation would be an interesting direction for future research. For a fairly large class of problems, the framework also has minimal storage requirements, as we only need to store and update the sample

mean for each section. These are advantages over other cancellation methods such as standardized time series (Glynn and Iglehart 1990) (e.g. the batch means method with m batches, where a single long simulation run is divided into sections). The following notation is used throughout the text. R denotes the Euclidean space and R₊ denotes the nonnegative subspace. C[0, t] denotes the space of continuous real-valued functions on [0, t] endowed with the uniform topology. C[0, ∞) denotes the space of continuous real-valued functions on [0, ∞) endowed with the uniform norm on compact time intervals. Let D(0, ∞) denote the space of right continuous real-valued functions with left limits on the interval (0, ∞), endowed with the Skorohod J₁ topology. Let I denote the identity map from R₊ to R₊, i.e. I(t) = t for t ≥ 0.

2. The sectioning method

Let X = {X(t) : t ≥ 0} be a real-valued stochastic process, which represents the output of a simulation run. We can handle discrete-time processes by setting X(t) = X(⌊t⌋). Let F denote the stationary distribution of X, assuming its existence. When the X(k)'s are i.i.d. samples, F is the distribution of X(k), k = 1, 2, .... We are interested in estimating the quantity α = ψ(F), where ψ is a real-valued functional on the space of probability distributions. For example, the expectation can be written as ψ(F) = ∫ x F(dx). Other examples of ψ(F) include smooth functions of the expectation, i.e. ψ(F) = h(∫ x F(dx)); quantiles (also known as value at risk), i.e. the p-quantile of F, defined as ψ(F) = inf{x : F(x) ≥ p}; etc. Let F̂(t, ·) denote the empirical distribution based on {X(s) : 0 ≤ s ≤ t}, i.e.

F̂(t, x) := (1/t) ∫₀ᵗ 1{X(s) ≤ x} ds.

We impose the following assumption on X and ψ.

Assumption 1. There exists a finite constant σ > 0 such that

(t/ε) (ψ(F̂(t/ε², ·)) − α) ⇒ σ B(t) in D(0, ∞) as ε → 0,

where B is a standard Brownian motion.

Remark 1. Assumption 1 implies that the estimator satisfies a CLT, i.e.

√n (ψ(F̂(n)) − α) ⇒ σ N(0, 1) as n → ∞,

as we can fix t = 1 and set n = ε⁻². In particular, Assumption 1, which is of functional central limit theorem (FCLT) type, is a slightly stronger assumption than the CLT. We will establish the FCLT for some interesting applications in Section 4. Under Assumption 1, we have

ψ(F̂(t/ε²)) → α as ε → 0,

i.e. ψ(F̂(t)) is a consistent estimator of α. The challenge in assessing the quality of this estimator arises from the unknown constant σ. For the sectioning method with m sections, we simulate m independent runs of X. We denote the ith run by X_i = {X_i(t) : t ≥ 0}, for i = 1, 2, ..., m. Let F̂_i denote the empirical distribution based on the ith run, for i = 1, 2, ..., m. At time t, we output ψ(F̂_i(t)) from the i-th section (replication). We also write F̂_0 for the empirical distribution based on the aggregated data from the m sections, i.e.

F̂_0(t, x) := (1/(mt)) Σ_{i=1}^m ∫₀ᵗ 1{X_i(s) ≤ x} ds = (1/m) Σ_{i=1}^m F̂_i(t, x).

We make the following assumption about the relationship between ψ(F̂_0(t)) and the ψ(F̂_i(t))'s for i = 1, ..., m.

Assumption 2.

(t/ε) ( ψ(F̂_0(t/ε²)) − (1/m) Σ_{i=1}^m ψ(F̂_i(t/ε²)) ) ⇒ 0 in D(0, ∞) as ε → 0.

Remark 2. If we are interested in estimating the expectation, i.e. ψ(F) = ∫ x dF(x), then

ψ(F̂_0(t)) = (1/(mt)) Σ_{i=1}^m ∫₀ᵗ X_i(s) ds = (1/m) Σ_{i=1}^m ψ(F̂_i(t)),

so Assumption 2 holds trivially. In many other applications, Assumptions 1 & 2 are satisfied simultaneously (see the examples in Sections 4.1 & 4.2).

Under Assumptions 1 & 2, we can use either (1/m) Σ_{i=1}^m ψ(F̂_i(t)) (the average of the outputs from the m sections) or ψ(F̂_0(t)) as an estimator of α. We would prefer the latter because it in general has less bias with a finite sample size. To estimate the error size, we write

Γ_ψ(t, F̂) := √( (1/(m(m − 1))) Σ_{i=1}^m (ψ(F̂_i(t)) − ψ(F̂_0(t)))² ).

Before we present the sequential stopping procedure, we first introduce some preliminary results about the sectioning method.

2.1. Basic results about the sectioning method

The sectioning method uses a cancellation idea, in which the key is to cancel out the unknown variance parameter, which appears as a common factor in the numerator and the denominator of the estimator. In particular, we write the sectioning estimator as the ratio of ψ(F̂_0(t)) − α and Γ_ψ(t, F̂). The following proposition characterizes the asymptotic distribution of the scaled ratio estimator.

Proposition 1. Under Assumptions 1 & 2, we have

(ψ(F̂_0(t/ε²)) − α) / Γ_ψ(t/ε², F̂) ⇒ B̄(t) / √( (1/(m(m − 1))) Σ_{i=1}^m (B_i(t) − B̄(t))² )

as ε → 0, where the B_i's, i = 1, 2, ..., m, are independent standard Brownian motions and B̄(t) = (1/m) Σ_{i=1}^m B_i(t). For a fixed value of t,

B̄(t) / √( (1/(m(m − 1))) Σ_{i=1}^m (B_i(t) − B̄(t))² ) ∼ t_{m−1}.

Let ν denote the (1 − δ/2)-quantile of t_{m−1}. If we generate t samples for each section (each run is of length t), then we can construct the asymptotic 100(1 − δ)% confidence interval as

[ψ(F̂_0(t)) − ν Γ_ψ(t, F̂), ψ(F̂_0(t)) + ν Γ_ψ(t, F̂)],

i.e.

lim_{t→∞} P( ψ(F̂_0(t)) − ν Γ_ψ(t, F̂) < α < ψ(F̂_0(t)) + ν Γ_ψ(t, F̂) ) = 1 − δ.

Note that here the total number of samples generated is mt.
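As a concrete illustration of this fixed sample size interval, the sketch below (our own code; m = 10 sections of iid Exp(1) samples, ψ = the mean, ν = t_{0.025,9} ≈ 2.262) computes the pooled point estimate and the half-width ν Γ_ψ:

```python
import math
import random

def sectioning_ci(sampler, m=10, t=1000, nu=2.262):
    """Fixed-budget sectioning: m independent sections of t samples each.

    nu is the (1 - delta/2)-quantile of Student-t with m - 1 df
    (2.262 for m = 10, delta = 0.05).  Returns (center, half_width).
    """
    means = [sum(sampler() for _ in range(t)) / t for _ in range(m)]
    center = sum(means) / m      # for the mean, psi(F0) = average of section means
    gamma = math.sqrt(sum((x - center) ** 2 for x in means) / (m * (m - 1)))
    return center, nu * gamma    # CI is [center - nu*Gamma, center + nu*Gamma]

random.seed(2)
c, hw = sectioning_ci(lambda: random.expovariate(1.0))   # true mean is 1
```

Note that σ is never estimated directly: the unknown scale cancels between the numerator and the section-to-section spread Γ_ψ.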

3. Sequential stopping for the sectioning method

For the sequential stopping procedure, to achieve an absolute precision level ε, we consider a sequence of stopping times, indexed by ε,

κ⁰_ψ(ε) := inf{ t > 0 : Γ_ψ(t, F̂) < ε }.

If we find the correct scaling parameter u, then κ⁰_ψ(ε/u) := inf{ t > 0 : u Γ_ψ(t, F̂) < ε } would be the time at which the simulation terminates, i.e. the time when the half-width of the corresponding confidence interval is less than ε, where ε is some user-specified precision level. However, we notice that κ⁰_ψ(ε) can terminate much too early if Γ_ψ(t, F̂) is badly behaved for small t. For example, when applying the sectioning method to estimate the steady-state expectation of a discrete time Markov chain X, if X_i(0) = 0 for i = 1, 2, ..., m, then Γ_ψ(t, F̂) = 0 for t < 1. To avoid this early termination issue, we next introduce a modification of the stopping times as in Glynn and Whitt (1992). We first notice from the proof of Proposition 1 that

√t Γ_ψ(t, F̂) ⇒ σ √( χ²_{m−1} / (m(m − 1)) ) as t → ∞.

This suggests that Γ_ψ(t, F̂) decays roughly at rate t^{−1/2}, which is consistent with the decay rate of the sample standard deviation. Thus, we let a(t) be a strictly positive function that decreases monotonically to 0 as t → ∞ and satisfies a(t) = O(t^{−γ}) for some γ > 1/2, i.e. it decays at a faster rate than t^{−1/2}. We then define

κ_ψ(ε) := inf{ t > 0 : Γ_ψ(t, F̂) + a(t) < ε }.

We notice that κ⁰(ε) := inf{ t > 0 : a(t) < ε } → ∞ as ε → 0 and κ_ψ(ε) ≥ κ⁰(ε). To find the appropriate scaling parameter, our task is to characterize the limiting distribution of

(ψ(F̂_0(κ_ψ(ε))) − α) / Γ_ψ(κ_ψ(ε), F̂)

as ε → 0. This task is divided into two steps; we characterize the limiting distribution of κ_ψ(ε) first. We define

K(σ) := inf{ t > 0 : σ √( (1/(m(m − 1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² ) < 1 },

where the B_i's, i = 1, 2, ..., m, are independent standard Brownian motions and B̄ = (1/m) Σ_{i=1}^m B_i.

Theorem 1. Under Assumptions 1 and 2, ε² κ_ψ(ε) ⇒ K(σ) as ε → 0.

Theorem 1 suggests that κ_ψ(ε) is O_p(ε⁻²). This is consistent with the convergence rate of Monte Carlo methods in general. We also notice that, by Brownian scaling (substituting t → σ²t and using B_i(σ²t) =_d σ B_i(t)),

K(σ) =_d σ² inf{ t > 0 : √( (1/(m(m − 1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² ) < 1 } = σ² K(1),

indicating that κ_ψ(ε) is stochastically larger for larger values of σ. We next take a closer look at K(1).

Lemma 1. (1) When m = 2, K(1) = 0. (2) When m = 3, K(1) = 0. (3) When m ≥ 4, mK(1) follows a Gamma distribution with shape parameter γ = (m − 3)/2 and rate parameter λ = (m − 1)/2.

Having K(1) = 0 is problematic, as it leads to an infinite scaling parameter (this will become apparent from Theorem 2). To overcome this issue, we restrict to m ≥ 4. Notice that when the sample size t is fixed, we only require m ≥ 2; when applying the sequential stopping procedure, we need to put more restrictions on the number of sections, i.e. m ≥ 4. Based on Theorem 1 and Lemma 1, the following theorem characterizes the limiting distribution of the sectioning estimator.

Theorem 2. Under Assumptions 1 and 2, for m ≥ 4,

(ψ(F̂_0(κ_ψ(ε))) − α) / Γ_ψ(κ_ψ(ε), F̂) ⇒ σ B̄(K(σ)) / K(σ) =_d Z / √(mK(1))

as ε → 0, where Z ∼ N(0, 1) is independent of K(1).

Theorem 2 provides us with the right scaling parameter for constructing asymptotically valid confidence intervals under the sequential stopping procedure. In particular, if one wants to achieve a confidence level of 100(1 − δ)%, one can select u such that P( |Z|/√(mK(1)) ≤ u ) = 1 − δ. Notice that the independence of Z and K(1) ensures that the distribution of Z/√(mK(1)) is symmetric around zero. Then the asymptotic 100(1 − δ)% confidence interval with an absolute precision level ε is constructed as

[ ψ(F̂_0(κ_ψ(ε/u))) − ε, ψ(F̂_0(κ_ψ(ε/u))) + ε ].

Specifically, we have

P( ψ(F̂_0(κ_ψ(ε/u))) − ε ≤ α ≤ ψ(F̂_0(κ_ψ(ε/u))) + ε )
= P( |ψ(F̂_0(κ_ψ(ε/u))) − α| ≤ ε )
≈ P( |ψ(F̂_0(κ_ψ(ε/u))) − α| / Γ_ψ(κ_ψ(ε/u), F̂) < u )
→ P( |Z|/√(mK(1)) < u ) = 1 − δ as ε → 0,

where the middle step uses the fact that, at the stopping time, Γ_ψ(κ_ψ(ε/u), F̂) ≈ ε/u. The actual simulation algorithm goes as follows.

Algorithm 1
Input: the number of sections m, the error bound ε, the scaling parameter u (see Table 1 for some of the commonly used scaling parameters), and the step size h. We choose a(t) = t⁻¹.
Output: a 100(1 − δ)% confidence interval with half-width ε.
(i) Sample X_i(t) for 0 ≤ t ≤ 1 and i = 1, 2, ..., m. Initialize T = 1 + h.
(ii) Sample X_i(t) between T − h and T. Calculate ψ(F̂_i(T)), for i = 1, 2, ..., m, and ψ(F̂_0(T)). Set

Γ = √( (1/(m(m − 1))) Σ_{i=1}^m (ψ(F̂_i(T)) − ψ(F̂_0(T)))² ).

(iii) If u(Γ + a(T)) ≥ ε, set T = T + h and go back to step (ii); otherwise, output

[ψ(F̂_0(T)) − ε, ψ(F̂_0(T)) + ε]

as the 100(1 − δ)% confidence interval.

Remark 3. In Algorithm 1, we pick a(t) = t⁻¹. As a(t) > 1 ≥ ε for t < 1, the procedure cannot stop before time 1, so we start by sampling X(t) for 0 ≤ t ≤ 1.

3.1. The distribution of Z/√(mK(1))

For fixed t, we have, for m ≥ 2,

(ψ(F̂_0(t/ε²)) − α) / Γ_ψ(t/ε², F̂) ⇒ Z / √( χ²_{m−1}/(m − 1) ) as ε → 0,

where Z is independent of χ²_{m−1}; while at the stopping time κ_ψ(ε), we have, for m ≥ 4,

(ψ(F̂_0(κ_ψ(ε))) − α) / Γ_ψ(κ_ψ(ε), F̂) ⇒ Z / √(mK(1)) as ε → 0.

The distribution of mK(1) is different from that of χ²_{m−1}/(m − 1) for m ≥ 4. In particular, from Lemma 1, mK(1) ∼ Gamma((m − 3)/2, (m − 1)/2). From the connection between the Gamma distribution and the chi-square distribution, χ²_{m−1}/(m − 1) ∼ Gamma((m − 1)/2, (m − 1)/2). Thus mK(1) has a lighter tail than χ²_{m−1}/(m − 1), so Z/√(mK(1)) has a heavier tail than t_{m−1}, which indicates that for the same level of confidence, the sequential stopping procedure requires a larger multiplier of the variance estimator Γ_ψ than the fixed-budget procedure. We interpret this as the price we pay to gain control over the size of the error (the width of the confidence interval). As m increases, Z/√(mK(1)) has a lighter tail. In particular,

(1/(m(m − 1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² ≈ 1/(mt) for large m,

so in the rescaled time mt the limiting stopping time is K_∞(1) := inf{t > 0 : 1/t < 1} = 1, and Z/√(K_∞(1)) ∼ N(0, 1). Table 1 lists some sample percentiles of Z/√(mK(1)) (with the corresponding 95% confidence intervals), which can be used as the multiplier when applying the sectioning method with m sections.
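Given Lemma 1, the scaling parameters tabulated in Table 1 can be approximated by direct Monte Carlo: draw mK(1) from the stated Gamma distribution and an independent Z ∼ N(0, 1), and take an empirical quantile of |Z|/√(mK(1)). A sketch (the sample size and seed are our own choices):

```python
import math
import random

def scaling_parameter(m, delta=0.05, n=200_000, seed=3):
    """Monte Carlo estimate of u with P(|Z| / sqrt(m K(1)) <= u) = 1 - delta.

    Uses Lemma 1: for m >= 4, m*K(1) ~ Gamma(shape=(m-3)/2, rate=(m-1)/2),
    with Z ~ N(0, 1) independent of K(1).
    """
    rng = random.Random(seed)
    shape, scale = (m - 3) / 2.0, 2.0 / (m - 1)   # gammavariate takes shape, scale
    draws = sorted(
        abs(rng.gauss(0.0, 1.0)) / math.sqrt(rng.gammavariate(shape, scale))
        for _ in range(n)
    )
    return draws[int((1 - delta) * n)]            # empirical (1 - delta)-quantile

u10 = scaling_parameter(10)   # close to the u = 2.680 used in Section 5.1
```

Equivalently, Z²/(mK(1)) = ((m − 1)/(m − 3)) F(1, m − 3), so for m = 10 the exact value is u = √((9/7) F_{0.95}(1, 7)) ≈ 2.68, consistent with the scaling parameter reported in the experiments.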

Table 1 Sample percentiles of Z/√(mK(1)) (10⁶ i.i.d. samples), one row per number of sections m; each entry is reported with its 95% confidence interval. (Numeric entries lost in extraction.)

3.2. Relative error

We have analyzed sequential stopping rules that achieve an absolute precision ε. There is another commonly used precision criterion, the relative error (Asmussen and Glynn 2007), where we want to achieve a precision level relative to the value of the quantity of interest. In particular, we define a new sequence of stopping times indexed by ε as

κ̃_ψ(ε) := inf{ t > 0 : Γ_ψ(t, F̂) + a(t) < ε |ψ(F̂_0(t))| }.

Similar to the absolute precision case, if we find the correct scaling parameter u, then we can terminate the simulation at κ̃_ψ(ε/u) = inf{ t > 0 : u(Γ_ψ(t, F̂) + a(t)) < ε |ψ(F̂_0(t))| }. The following theorem characterizes the scaled limiting distribution of κ̃_ψ(ε).

Theorem 3. Under Assumptions 1 and 2, ε² κ̃_ψ(ε) ⇒ K(σ/|α|) as ε → 0.

Similar to κ_ψ(ε) in the absolute precision formulation, κ̃_ψ(ε) is O_p(ε⁻²) and is stochastically larger for larger values of σ. Unlike the absolute precision case, κ̃_ψ(ε) is also influenced by the value of α. In particular, it is stochastically larger for smaller values of |α|. The following theorem establishes the limiting distribution of (ψ(F̂(κ̃_ψ(ε))) − α)/Γ_ψ(κ̃_ψ(ε), F̂) as ε → 0, which is the same as the limiting distribution of (ψ(F̂(κ_ψ(ε))) − α)/Γ_ψ(κ_ψ(ε), F̂) established in Theorem 2. This suggests we can use the same scaling parameter for the two formulations (absolute vs. relative).

Theorem 4. Under Assumptions 1 and 2, for m ≥ 4,

(ψ(F̂(κ̃_ψ(ε))) − α) / Γ_ψ(κ̃_ψ(ε), F̂) ⇒ σ B̄(K(σ/|α|)) / (|α| K(σ/|α|)) =_d Z / √(mK(1))

as ε → 0, where Z ∼ N(0, 1) is independent of K(1).

From Theorem 4, when applying the sequential stopping procedure we can again select u such that P( |Z|/√(mK(1)) ≤ u ) = 1 − δ. Then the asymptotic 100(1 − δ)% confidence interval with a relative precision level ε is constructed as

[ ψ(F̂_0(κ̃_ψ(ε/u))) − ε |ψ(F̂_0(κ̃_ψ(ε/u)))|, ψ(F̂_0(κ̃_ψ(ε/u))) + ε |ψ(F̂_0(κ̃_ψ(ε/u)))| ].

The actual simulation algorithm goes as follows.

Algorithm 2
Input: the number of sections m, the error bound ε, the scaling parameter u (see Table 1 for some of the commonly used scaling parameters), and the step size h. We choose a(t) = t⁻¹.
Output: a 100(1 − δ)% confidence interval with half-width ε |ψ(F̂_0(T))|.
(i) Sample X_i(t) for 0 ≤ t ≤ 1 and i = 1, 2, ..., m. Initialize T = 1 + h.
(ii) Sample X_i(t) between T − h and T. Calculate ψ(F̂_i(T)), for i = 1, 2, ..., m, and ψ(F̂_0(T)). Set

Γ = √( (1/(m(m − 1))) Σ_{i=1}^m (ψ(F̂_i(T)) − ψ(F̂_0(T)))² ).

(iii) If u(Γ + a(T)) ≥ ε |ψ(F̂_0(T))|, set T = T + h and go back to step (ii); otherwise, output

[ ψ(F̂_0(T)) − ε |ψ(F̂_0(T))|, ψ(F̂_0(T)) + ε |ψ(F̂_0(T))| ]

as the 100(1 − δ)% confidence interval.

4. Examples for which the sectioning framework applies

In this section, we present some examples for which Assumptions 1 & 2 are satisfied, so that we can apply the sequential stopping sectioning framework.

4.1. Example A: Smooth functions of expectations

Estimating smooth functions of expectations arises frequently in statistics and simulation output analysis (see the examples in Asmussen and Glynn (2007), Chapter III.3). Let the X(k)'s be i.i.d. random variables. We are interested in computing h(µ), where µ = E[X(k)] and h is a smooth function. In this case, ψ(F) = h(∫ x F(dx)). We next show that, under proper smoothness assumptions on h, Assumptions 1 & 2 are satisfied. Let Ȳ(t) := (1/t) ∫₀ᵗ X(s) ds.

Theorem 5. Assume that h is differentiable and h′ is Lipschitz continuous at µ. We also assume that Var(X(k)) = σ₀² < ∞. Then

(t/ε) ( h(Ȳ(t/ε²)) − h(µ) ) ⇒ σ₀ h′(µ) B(t) in D(0, ∞) as ε → 0.

The value of h′(µ) is in general unknown, and estimating it would be another smooth-function-of-expectations problem. In the sectioning framework, we generate m independent copies of X = {X(t) : t ≥ 0}, denoted X_i for i = 1, 2, ..., m. We also denote Ȳ_i(t) := (1/t) ∫₀ᵗ X_i(s) ds, for i = 1, 2, ..., m, and Ȳ_0(t) := (1/m) Σ_{i=1}^m Ȳ_i(t). The following lemma follows directly from the proof of Theorem 5.

Lemma 2. Under the assumptions of Theorem 5,

(t/ε) ( h(Ȳ_0(t/ε²)) − (1/m) Σ_{i=1}^m h(Ȳ_i(t/ε²)) ) ⇒ 0

in D(0, ∞) as ε → 0.

4.2. Example B: Quantiles

Let X be a real-valued random variable with CDF F(·). For a real-valued function G, we define the generalized inverse function as G⁻¹(y) = inf{x : G(x) ≥ y}. For any 0 < p < 1, the p-th quantile of X is then defined as ξ_p = F⁻¹(p). Let ξ̂_{t,p} := F̂⁻¹(t, p) denote the sample quantile of X. If we write X_{(n,1)} ≤ X_{(n,2)} ≤ ... ≤ X_{(n,n)} for the order statistics of the X(k)'s, then ξ̂_{n,p} = X_{(n,⌈np⌉)}. For t ∈ (n, n + 1), we define ξ̂_{t,p} = ξ̂_{n,p} + (t − n)(ξ̂_{n+1,p} − ξ̂_{n,p}) using linear interpolation.

The Bahadur representation draws the connection between ξ̂_{n,p} and F̂(n, ξ_p) (Bahadur 1966). Specifically, if F is differentiable in some neighborhood of ξ_p and the density f(ξ_p) = dF(ξ_p)/dξ_p ∈ (0, ∞), then

ξ̂_{n,p} = ξ_p − (F̂(n, ξ_p) − p)/f(ξ_p) + O( n^{−3/4} (log n)^{1/2} (log log n)^{1/4} ) a.s.

Building on the Bahadur representation, we next show that Assumptions 1 & 2 are satisfied.

Theorem 6. If F is differentiable in some neighborhood of ξ_p and the density f(ξ_p) = dF(ξ_p)/dξ_p ∈ (0, ∞), then

(t/ε) ( ξ̂_{t/ε², p} − ξ_p ) ⇒ σ B(t) in D(0, ∞) as ε → 0,

where σ = √(p(1 − p)) / f(ξ_p).

Remark 4. It is possible to extend Theorem 6 to some other stationary sequences of random variables (Sen 1972, Wu 2005). The difficulty in constructing confidence intervals for quantiles arises from the unknown value of f(ξ_p), and estimating the density function f(·) is a costly task. When implementing the sectioning framework, we generate m independent sequences of X = {X(k) : k ≥ 0}. Let ξ̂ⁱ_{n,p} denote the sample quantile of the i-th section based on {X_i(1), ..., X_i(n)}. We also denote by ξ̂⁰_{n,p} the sample quantile based on {X_i(k) : i = 1, ..., m, k = 1, ..., n}. The following lemma follows from the proof of Theorem 6.

Lemma 3. Under the assumptions of Theorem 6,

(t/ε) ( ξ̂⁰_{t/ε², p} − (1/m) Σ_{i=1}^m ξ̂ⁱ_{t/ε², p} ) ⇒ 0

in D(0, ∞) as ε → 0.

4.3. Example C: Steady-state simulation

For steady-state performance analysis problems, we are interested in estimating α = ψ(F) = ∫ h(x) F(dx), where F is the equilibrium distribution of X and h is a real-valued function defined

on the state space of the stochastic process X. In this case, ψ(F̂(t)) = (1/t) ∫₀ᵗ h(X(s)) ds. A variety of stochastic processes satisfy Assumption 1. These include φ-mixing processes and regenerative processes (Glynn and Iglehart). As explained in Remark 2, Assumption 2 is automatically satisfied when ψ(F) denotes the expectation operator. In steady-state simulation, σ², which is known as the time-average variance constant, is in general difficult to estimate consistently, because it reflects the complicated auto-correlation structure of the underlying stochastic process.

5. Simulation experiments

In this section, we demonstrate the performance of our sequential stopping procedures through some simulation experiments.

5.1. Quantile estimation

We apply Algorithm 1 to construct the 95% confidence interval (CI) for the 0.8-quantile of an exponential random variable with unit rate. We set the step size h = 1. The quantile can be calculated in closed form: ξ_{0.8} = log 5 ≈ 1.609. Fixing m = 10, we apply Algorithm 1 with different values of ε. Table 2 summarizes the results. When m = 10, the scaling parameter is u = 2.680, which is the 0.975-quantile of Z/√(mK(1)) (see Table 1), whereas the 0.975-quantile of t₉ (a Student-t distribution with 9 degrees of freedom) is 2.262. The coverage rates and the E[κ_ψ(ε)]'s are calculated based on 10³ independent applications of the algorithm. We also report the corresponding 95% CIs. The coverage rate for t_{m−1} shows the coverage rate obtained if one plugs in the wrong scaling parameter, suggested by the corresponding Student-t distribution. We notice that, because the half-widths involved are quite small relative to the quantity of interest, when ε = 0.5 both scaling parameters achieve very good performance, with coverage rates much larger than the nominal 95%. As ε gets smaller, plugging in the right scaling parameter becomes important. Specifically, we always achieve around 95% coverage with the correct scaling parameter, while if we use the scaling parameter suggested by t₉, the procedure eventually suffers from undercoverage. We also notice that the ε²E[κ_ψ(ε)]'s are of about the same value for different values of ε, as suggested by Lemma 1.
Table 3 summarizes the results for the relative accuracy formulation, where we apply Algorithm 2 to construct the CIs. The observations are similar to those in the absolute precision case.
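A compact version of Algorithm 1 for this experiment (our own code: discrete time with step size h = 1, a(t) = 1/t, and the m = 10 scaling parameter u = 2.680) looks as follows; the output interval is the returned center ± ε:

```python
import math
import random

def algorithm1_quantile(m=10, p=0.8, eps=0.2, u=2.680, seed=4):
    """Sequential sectioning (Algorithm 1 sketch) for the p-quantile of Exp(1).

    At 'time' T each of the m sections holds T iid samples.  Stops when
    u * (Gamma + 1/T) < eps and returns (pooled quantile, T).
    """
    rng = random.Random(seed)
    sections = [[rng.expovariate(1.0)] for _ in range(m)]
    T = 1
    while True:
        T += 1
        for sec in sections:                       # grow every section by one sample
            sec.append(rng.expovariate(1.0))
        q = [sorted(sec)[math.ceil(T * p) - 1] for sec in sections]
        pooled = sorted(x for sec in sections for x in sec)
        center = pooled[math.ceil(m * T * p) - 1]  # quantile of the aggregated data
        gamma = math.sqrt(sum((x - center) ** 2 for x in q) / (m * (m - 1)))
        if u * (gamma + 1.0 / T) < eps:            # stopping rule with a(t) = 1/t
            return center, T

est, T = algorithm1_quantile()   # true value: log(5) = 1.609...
```

No density estimation is needed anywhere: the section quantiles alone drive both the point estimate and the stopping rule.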

Table 2 Performance of the sequential stopping procedure to achieve an absolute precision level ε with m = 10 sections: the 0.8-quantile of Exp(1). Columns: ε, coverage rate, E[κ_ψ(ε)], ε²E[κ_ψ(ε)], and the coverage rate for t_{m−1}; entries are reported with 95% confidence intervals. (Numeric entries lost in extraction.)

Table 3 Performance of the sequential stopping procedure to achieve a relative precision level ε with m = 10 sections: the 0.8-quantile of Exp(1). Columns as in Table 2, with κ̃_ψ(ε) in place of κ_ψ(ε). (Numeric entries lost in extraction.)

5.2. Steady-state simulation of an M/M/1 queue

We consider an M/M/1 queue, which has a Poisson arrival process with rate λ and independent identically distributed exponential service times with mean 1/µ. Let X(t) denote the number of customers in the system at time t. We are interested in estimating α = lim_{T→∞} (1/T) ∫₀ᵀ X(t) dt = ρ/(1 − ρ) (the steady-state average number of people in the system), where ρ = λ/µ is called the traffic intensity. We apply Algorithm 1 to construct the 95% CI for α for different values of ε and m. The results are summarized in Tables 4 & 5. We set the scaling parameters for m = 10 and m = 20 to the corresponding quantiles of Z/√(mK(1)) (the values can be found in Table 1). The statistics in Table 4 are calculated based on 10³ independent applications of Algorithm 1. We observe that, with the right scaling parameter, we achieve the claimed coverage rate (confidence level) when ε is small enough, and plugging in the wrong scaling parameter (suggested by t_{m−1}) leads to undercoverage. As m

increases from 10 to 20, the undercoverage caused by plugging in t_{δ/2,m−1} is less severe. We also notice that, for fixed m, the ε²E[κ_ψ(ε)]'s are of about the same value. As m increases (from 10 to 20), the simulation cost per section E[κ_ψ(ε)] decreases.

Table 4 Performance of the sequential stopping procedure to achieve an absolute precision level ε with m = 10 sections: queue length (number of people in the system) process of an M/M/1 queue (λ = 0.8, µ = 1). Columns: ε, coverage rate, E[κ_ψ(ε)], ε²E[κ_ψ(ε)], and the coverage rate for t_{m−1}. (Numeric entries lost in extraction.)

Table 5 Performance of the sequential stopping procedure to achieve an absolute precision level ε with m = 20 sections: queue length (number of people in the system) process of an M/M/1 queue (λ = 0.8, µ = 1). Columns as in Table 4. (Numeric entries lost in extraction.)

We also tested Algorithm 1 for systems with different values of the traffic intensity ρ. The results are summarized in Table 6. We make two observations here. The first is that the coverage rates are around 95% for all the traffic intensity values tested. This is in contrast to ABATCH and ASPS (Steiger et al.) type procedures, whose performance deteriorates dramatically as ρ increases. This is because our algorithm does not involve any statistical tests for correlation. The second is that as ρ increases, E[κ_ψ(ε)] increases. This is expected, as E[κ_ψ(ε)] is increasing in σ.
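The queue-length process is straightforward to generate by discrete-event simulation. The sketch below (our own code; a fixed run length per section rather than the sequential rule, purely to illustrate the sectioned point estimate) averages m = 10 independent sections for ρ = 0.8, where α = ρ/(1 − ρ) = 4:

```python
import random

def mm1_time_average(lam, mu, t_end, rng):
    """Time-average number-in-system of an M/M/1 queue on [0, t_end],
    starting empty (discrete-event simulation of the birth-death process)."""
    t, x, area = 0.0, 0, 0.0
    while True:
        rate = lam + (mu if x > 0 else 0.0)   # exponential holding time at state x
        dt = rng.expovariate(rate)
        if t + dt >= t_end:                   # horizon reached before next event
            area += x * (t_end - t)
            return area / t_end
        area += x * dt
        t += dt
        x += 1 if rng.random() < lam / rate else -1   # arrival vs departure

rng = random.Random(5)
means = [mm1_time_average(0.8, 1.0, 5000.0, rng) for _ in range(10)]  # m = 10 sections
est = sum(means) / len(means)   # point estimate of alpha = rho / (1 - rho) = 4
```

The spread of the ten section means then yields Γ_ψ exactly as in Algorithm 1, with no attempt to estimate the TAVC from the auto-correlation structure.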

Table 6 Performance of the sequential stopping procedure to achieve an absolute precision level ε = 0.1 with m = 10 sections: queue length (number of people in the system) process of an M/M/1 queue (λ = ρ, µ = 1). Columns: ρ, coverage rate, E[κ_ψ(ε)], and the coverage rate for t_{0.025,m−1}. (Numeric entries lost in extraction.)

6. Comparison to resampling techniques and concluding remarks

Traditionally, for simulation problems where variance estimation is difficult, resampling techniques have been applied either to (i) construct strongly consistent variance estimators so that the original sequential stopping framework of Glynn and Whitt (1992) can be applied, or to (ii) estimate the distribution of sample statistics so that corresponding sequential stopping procedures can be developed. The bootstrap (Efron 1979), the jackknife (delete-d jackknife, Wu 1986), and subsampling (Politis et al. 1999) are among the most commonly used techniques. Take the bootstrap as an example. Let θ denote the true quantity that we are interested in, and T_n(X_1, ..., X_n) denote the estimator constructed based on n samples. We write the bootstrap variance as

σ̂²_n := Var( T_n(X*_1, ..., X*_n) | X_1, X_2, ..., X_n ),

where the X*_i are i.i.d. samples from the empirical distribution F_n, conditional on X_1, X_2, ..., X_n. Then, for the smooth functions of expectations problem and the quantile estimation problem covered in Section 4, under mild moment and smoothness conditions, σ̂²_n is a strongly consistent estimator of σ²_n = Var(T_n(X_1, ..., X_n)), i.e. σ̂²_n/σ²_n → 1 a.s. as n → ∞ (see Theorems 3.8 & 3.9 in Shao and Tu (1995)). Moreover, let

p̂_n(ε) := P( T_n(X_1, ..., X_n) − ε < T_n(X*_1, ..., X*_n) < T_n(X_1, ..., X_n) + ε | X_1, ..., X_n )

and N(ε) := inf{ n : p̂_n(ε) ≥ 1 − δ }.

Then it is shown in Swanepoel et al. (1983) that the coverage probability

P( T_{N(ε)}(X_1, ..., X_{N(ε)}) − ε < θ < T_{N(ε)}(X_1, ..., X_{N(ε)}) + ε )

agrees with 1 − δ asymptotically as δ → 0. Thus, we could use the bootstrap to assess the coverage probability of a fixed-width confidence interval of the form [T_n(X_1, ..., X_n) − ε, T_n(X_1, ..., X_n) + ε], sequentially for each n, until we find a sample size that achieves a coverage probability of 1 − δ. Similar results hold for the delete-d jackknife, where the deleted sample size d_n → ∞ as n → ∞ (Wu 1986), and for subsampling, where the subset size b_n grows to infinity with the sample size n, but at a much slower rate, i.e. lim_{n→∞} b_n/n = 0 (Politis et al. 1999). In general, subsampling works for a more general class of estimation problems, as its corresponding asymptotic results require much less smoothness of ψ(F) (Politis et al. 2001), where ψ is a real-valued functional on the space of probability distributions as introduced in Section 2. Compared to the sectioning framework we studied in this paper, the resampling techniques are much more computationally intensive (expensive). Take the subsampling method as an example: at each step n, we need to carry out the sample statistic computation (n choose b_n) times, on samples of size b_n each. For the bootstrap method, if we cannot calculate the distribution of the bootstrap statistic in closed form, then we need to conduct Monte Carlo simulation based on the empirical distribution (sampling with replacement). This requires drawing nB samples at step n, and calculating B sample statistics, each based on a sample of size n. To control the error in this Monte Carlo estimation step, we require B to be large as well. Even if we can calculate the distribution of the bootstrap statistic in closed form, the calculation can be intensive (e.g. for quantiles, it involves n!). In contrast, in the sectioning framework used here we often have a recursive way of updating the statistics, such as the sample mean and sample variance, within each section, and the cross-section calculation often involves only aggregated statistics.
Here, m is the number of sections, and it does not grow with n.
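As an illustration of the sectioning bookkeeping just described, the sketch below keeps a recursively updated running mean per section and computes the cross-section variance estimate from the m aggregated section means alone; the sample-mean estimator and all numerical choices are placeholders for illustration.

```python
import random

m = 10  # number of sections; fixed, does not grow with n

class Section:
    """Running (recursive) mean of the observations routed to one section."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def add(self, x):
        # O(1) update, O(1) storage per section
        self.count += 1
        self.mean += (x - self.mean) / self.count

rng = random.Random(1)
sections = [Section() for _ in range(m)]
for i in range(10_000):            # route observations round-robin
    sections[i % m].add(rng.gauss(0.0, 1.0))

# The cross-section calculation uses only the m aggregated section means.
alpha = [s.mean for s in sections]
alpha0 = sum(alpha) / m                                           # point estimate
var_hat = sum((a - alpha0) ** 2 for a in alpha) / (m * (m - 1))   # sectioning variance estimate
print(alpha0, var_hat)
```

Storage is m running means regardless of n, and each section can be simulated on a separate processor, which is why the procedure parallelizes naturally.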

To conclude, in this paper we analyze the sequential stopping problem for a class of simulation problems in which variance estimation is difficult. Specifically, we prove that when applying sequential stopping rules to the sectioning method, by properly adjusting the scaling parameter we are able to construct asymptotically valid confidence intervals. Our numerical results confirm that our sequential stopping algorithms perform very well (achieve the correct coverage rate) when the precision level is reasonably small. The sequential stopping procedure for the sectioning method we developed is very easy to implement (Algorithms 1 and 2). It is naturally adapted to parallel computing environments and has minimal storage requirements for many applications of interest. The scaling parameters depend only on the number of sections and the confidence level we want to achieve. We characterize the distribution leading to the scaling parameters in closed form and provide a table (Table 1) that can be used to find the corresponding scaling parameters.

Appendix A: Proofs for Sections 2 and 3

(The displayed equations in this appendix are reconstructed from a damaged transcription.)

Proof of Proposition 1. Let h(t, x_1, x_2, ..., x_m) = (1/m) Σ_{i=1}^m x_i. As

( t(α_i(t/ε²) − α)/ε : 1 ≤ i ≤ m ) ⇒ ( σ√m B_i(t) : 1 ≤ i ≤ m ) in D(0,∞)^m

and P( σB ∈ Disc(h) ) = 0, applying the continuous mapping theorem, we have

t( (1/m) Σ_{i=1}^m α_i(t/ε²) − α )/ε ⇒ σ√m B̄(t) as ε → 0.

As t( α_0(t/ε²) − (1/m) Σ_{i=1}^m α_i(t/ε²) )/ε → 0 as ε → 0, we have

t( α_0(t/ε²) − α )/ε ⇒ σ√m B̄(t) as ε → 0.

Using again the continuous mapping theorem, we have

( t(α_0(t/ε²) − α)/ε, t Γ_ψ(t/ε², F̂)/ε ) ⇒ ( σ√m B̄(t), σ sqrt( (1/(m−1)) Σ_{i=1}^m (B_i(t) − B̄(t))² ) ) as ε → 0.

Before we prove Theorem 1, we prove an auxiliary lemma (Lemma 4) that characterizes the limiting variance estimation process. This lemma will also be useful in characterizing the distribution of K(σ) (the proof of Lemma 1). Let

S(t) = sqrt( Σ_{i=1}^m (B_i(t) − B̄(t))² ) for t > 0.

Lemma 4. {S(t) : t > 0} is a Bessel process of dimension m − 1. When m > 2, S(t) solves the SDE

dS(t) = ( (m − 2)/(2S(t)) ) dt + dW(t);

when m = 2,

dS(t) = dW(t) + dL⁰(t),

where W(t) is a standard Brownian motion and L⁰(t) is the local time at zero of W(t).

Proof. We denote B = (B_1, B_2, ..., B_m) and R(t) = Σ_{i=1}^m (B_i(t) − B̄(t))². Then

dR(t) = 2 Σ_{i=1}^m (B_i(t) − B̄(t)) dB_i(t) + (m − 1) dt.

As

W(t) := Σ_{i=1}^m ∫₀ᵗ (B_i(u) − B̄(u)) / sqrt( Σ_{j=1}^m (B_j(u) − B̄(u))² ) dB_i(u)

is a continuous local martingale with [W, W]_t = t, we have that W(t) is a standard Brownian motion (Jeanblanc et al. 2009). Thus,

dR(t) = 2 sqrt(R(t)) dW(t) + (m − 1) dt.

Following Revuz and Yor (1999), when m > 2, dS(t) = ((m − 2)/(2S(t))) dt + dW(t); when m = 2, dS(t) = dW(t) + dL⁰(t).

Proof of Theorem 1. We first notice that

ε² κ_ψ(ε) = ε² inf{ t > 0 : sqrt( (1/(m(m−1))) Σ_{i=1}^m (α_i(t) − α_0(t))² ) + a(t) < ε }

= inf{ t > 0 : sqrt( (1/(m(m−1))) Σ_{i=1}^m ( (α_i(t/ε²) − α_0(t/ε²))/ε )² ) + (1/ε) a(t/ε²) < 1 }
= inf{ t > 0 : sqrt( (1/(m(m−1)t²)) Σ_{i=1}^m ( t(α_i(t/ε²) − α)/ε − t(α_0(t/ε²) − α)/ε )² ) + (1/ε) a(t/ε²) < 1 }.

Let T(Y)(b) = inf{ t > 0 : Y(t) < b } denote the mapping induced by the first passage time. As

sqrt( (1/(m(m−1)t²)) Σ_{i=1}^m ( t(α_i(t/ε²) − α)/ε − t(α_0(t/ε²) − α)/ε )² ) + (1/ε) a(t/ε²) ⇒ σ sqrt( (1/((m−1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² )

in D(0,∞) as ε → 0, and the first passage time is a continuous mapping in the M₁ topology (Whitt 2002), we have

T( sqrt( (1/(m(m−1)t²)) Σ_{i=1}^m ( t(α_i(t/ε²) − α)/ε − t(α_0(t/ε²) − α)/ε )² ) + (1/ε) a(t/ε²) : t > 0 ) ⇒ T( σ sqrt( (1/((m−1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² ) : t > 0 )

in D(0,∞) endowed with the M₁ topology as ε → 0. From Lemma 4, we have

σ sqrt( (1/((m−1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² ) = σ S(t) / ( √(m−1) t ),

where S(t) is a Bessel process, which is a Feller process. Thus, we have the convergence of the first passage time process at the point 1 (Whitt 2002), i.e.,

inf{ t > 0 : sqrt( (1/(m(m−1)t²)) Σ_{i=1}^m ( t(α_i(t/ε²) − α)/ε − t(α_0(t/ε²) − α)/ε )² ) + (1/ε) a(t/ε²) < 1 } ⇒ inf{ t > 0 : σ sqrt( (1/((m−1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² ) < 1 } as ε → 0.

Proof of Lemma 1. By the time reversibility of the Bessel process, we have

K(1) = inf{ t > 0 : S(t) < √(m−1) t }
 =d inf{ t > 0 : t S(1/t) < √(m−1) t }
 = inf{ t > 0 : S(1/t) < √(m−1) }
 =d 1 / sup{ t > 0 : S(t) < √(m−1) }.

When m = 2, S(t) hits zero at arbitrarily large times (Revuz and Yor 1999); therefore, K(1) = 0. When m = 3, lim inf_{t→∞} S(t) = 0 (Revuz and Yor 1999); therefore, K(1) = 0. When m ≥ 4, it follows from the results in Section 8 of Pitman and Yor (1981) that K(1) has density

f_{K(1)}(t) = (1/Γ(γ)) λ^γ t^{γ−1} e^{−λt},

where γ = (m − 3)/2 and λ = (m − 1)/2.

Proof of Theorem 2. We first notice that

( α_0(κ_ψ(ε)) − α ) / Γ_ψ(κ_ψ(ε), F̂) = [ ε²κ_ψ(ε) · ( α_0(ε²κ_ψ(ε)/ε²) − α )/ε ] / [ ε²κ_ψ(ε) · Γ_ψ( ε²κ_ψ(ε)/ε², F̂ )/ε ].

By the functional central limit theorem (see the proof of Proposition 1), a standard random-time-change argument, and the converging together theorem (Billingsley 1999), we have

[ ε²κ_ψ(ε) ( α_0(ε²κ_ψ(ε)/ε²) − α )/ε ] / [ ε²κ_ψ(ε) Γ_ψ( ε²κ_ψ(ε)/ε², F̂ )/ε ]
⇒ σ√m B̄(K(σ)) / sqrt( (1/(m(m−1))) Σ_{i=1}^m ( σ√m B_i(K(σ)) − σ√m B̄(K(σ)) )² )
= σ√m B̄(K(σ)) / [ K(σ) · σ sqrt( (1/((m−1)K(σ)²)) Σ_{i=1}^m ( B_i(K(σ)) − B̄(K(σ)) )² ) ]
= σ√m B̄(K(σ)) / K(σ)
= √m ( B̄(K(σ)) / √K(σ) ) / sqrt( K(σ)/σ² ),

where the third line uses that σ sqrt( (1/((m−1)K(σ)²)) Σ_{i=1}^m (B_i(K(σ)) − B̄(K(σ)))² ) = 1 at the stopping time K(σ).

We next show that B̄(K(σ))/√K(σ) is independent of K(σ) and that B̄(K(σ))/√K(σ) ~ N(0, 1/m). We first define a sequence of discretized versions of K(σ). Let δ_n = 10^{−n} and define K_n(σ) = min{ kδ_n, k = 1, 2, ... : kδ_n > K(σ) }. Then K_n(σ) → K(σ) almost surely as n → ∞ (Revuz and Yor 1999). For K_n(σ), we have, for any z ∈ R,

P( B̄(K_n(σ))/√K_n(σ) ≤ z, K_n(σ) = kδ_n )
= P( B̄(K_n(σ))/√K_n(σ) ≤ z | K_n(σ) = kδ_n ) P( K_n(σ) = kδ_n )
= P( B̄(kδ_n)/√(kδ_n) ≤ z | inf_{0<t<(k−1)δ_n} σ sqrt( (1/((m−1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² ) ≥ 1, inf_{(k−1)δ_n ≤ t < kδ_n} σ sqrt( (1/((m−1)t²)) Σ_{i=1}^m (B_i(t) − B̄(t))² ) < 1 ) P( K_n(σ) = kδ_n ).   (5)

As B̄(t) is independent of Σ_{i=1}^m (B_i(t) − B̄(t))², and B̄(kδ_n) − B̄(t) is independent of Σ_{i=1}^m (B_i(t) − B̄(t))² for t ≤ kδ_n, B̄(kδ_n)/√(kδ_n) is independent of { Σ_{i=1}^m (B_i(t) − B̄(t))² : t ≤ kδ_n }. Hence,

(5) = P( B̄(kδ_n)/√(kδ_n) ≤ z ) P( K_n(σ) = kδ_n ) = P( Z/√m ≤ z ) P( K_n(σ) = kδ_n ),

where Z ~ N(0, 1). Thus, B̄(K_n(σ))/√K_n(σ) follows N(0, 1/m) and is independent of K_n(σ). As B̄(t)/√t is continuous in t,

( K_n(σ), B̄(K_n(σ))/√K_n(σ) ) → ( K(σ), B̄(K(σ))/√K(σ) ) as n → ∞.

Then B̄(K(σ))/√K(σ) is distributed as N(0, 1/m), and for any z ∈ R and t ∈ R₊,

P( B̄(K(σ))/√K(σ) ≤ z, K(σ) ≤ t ) = lim_{n→∞} P( B̄(K_n(σ))/√K_n(σ) ≤ z, K_n(σ) ≤ t )
= lim_{n→∞} P( Z/√m ≤ z ) P( K_n(σ) ≤ t )
= P( Z/√m ≤ z ) P( K(σ) ≤ t ).

Therefore, B̄(K(σ))/√K(σ) is independent of K(σ). We also notice that K(σ)/σ² =d K(1). Thus,

( B̄(K(σ))/√K(σ) ) / sqrt( K(σ)/σ² ) =d Z / ( √m √K(1) ).

The proof of Theorem 3 follows similar lines of argument as the proof of Theorem 1. We only need to notice that under Assumption 1,

α_0(t/ε²) ⇒ α U(t) in D(0,∞) as ε → 0, where U(t) = 1 for t ∈ R.

We omit the rest of the proof to avoid repetition. Similarly, the proof of Theorem 4 is very similar to the proof of Theorem 2; we omit it here.

Appendix B: Proofs for Section 4

Proof of Theorem 5. We first notice that

t( Ȳ(t/ε²) − µ )/ε ⇒ σ_0 B(t) in D(0,∞) as ε → 0

by Donsker's theorem. We next prove the result by showing that, for any T > 0,

sup_{0≤t≤T} | t( h(Ȳ(t/ε²)) − h(µ) )/ε − h′(µ) t( Ȳ(t/ε²) − µ )/ε | → 0

in probability as ε → 0. Define a sequence t_ε such that t_ε → ∞ and ε² t_ε → 0 as ε → 0. We first notice that

sup_{0≤t≤T} | t( h(Ȳ(t/ε²)) − h(µ) )/ε − h′(µ) t( Ȳ(t/ε²) − µ )/ε |
≤ sup_{0≤s≤ε²t_ε} | s( h(Ȳ(s/ε²)) − h(µ) )/ε | + sup_{0≤s≤ε²t_ε} | h′(µ) s( Ȳ(s/ε²) − µ )/ε |
+ sup_{ε²t_ε≤s≤T} | s( h(Ȳ(s/ε²)) − h(µ) )/ε − h′(µ) s( Ȳ(s/ε²) − µ )/ε |.   (6)

We then show that each of the three parts on the right-hand side of (6) converges to zero in probability as ε → 0. For the first part,

sup_{0≤s≤ε²t_ε} | s( h(Ȳ(s/ε²)) − h(µ) )/ε | = sup_{0≤s≤t_ε} ε s | h(Ȳ(s)) − h(µ) |.

Let K_1 denote the Lipschitz constant of h at µ; then we have, for any δ > 0,

P( sup_{0≤s≤t_ε} ε s | h(Ȳ(s)) − h(µ) | > δ )
≤ P( sup_{0≤s≤t_ε} ε K_1 | Y(s) − µs | > δ )
≤ P( max_{0≤k≤t_ε} ε K_1 | Y(k) − µk | > δ )
≤ ε² K_1² t_ε E[ (X(1) − µ)² ] / δ² → 0 as ε → 0.

The last inequality holds by Doob's martingale inequality, as { Y(k) − µk : k ≥ 0 } is a martingale. For the second part, as t( Ȳ(t/ε²) − µ )/ε ⇒ σ_0 B(t) and B is tight,

sup_{0≤s≤ε²t_ε} | h′(µ) s( Ȳ(s/ε²) − µ )/ε | → 0 in probability as ε → 0.

For the third part, let K_2 denote the Lipschitz constant of h′ at µ. Then we have

sup_{ε²t_ε≤s≤T} | s( h(Ȳ(s/ε²)) − h(µ) )/ε − h′(µ) s( Ȳ(s/ε²) − µ )/ε |
= sup_{t_ε≤s≤T/ε²} ε s | h(Ȳ(s)) − h(µ) − h′(µ)( Ȳ(s) − µ ) |
≤ sup_{t_ε≤s≤T/ε²} ε K_2 s ( Ȳ(s) − µ )²
≤ sup_{t_ε≤s≤T/ε²} √T K_2 √s ( Ȳ(s) − µ )²

≤ sup_{k≥t_ε} K_2 √T ( | Y(k) − µk | / k^{3/4} )² → 0 in probability as ε → 0.

The convergence follows from the law of the iterated logarithm. Thus,

sup_{ε²t_ε≤s≤T} | s( h(Ȳ(s/ε²)) − h(µ) )/ε − h′(µ) s( Ȳ(s/ε²) − µ )/ε | → 0

in probability as ε → 0.

Proof of Theorem 6. The proof follows the proof of Theorem 6.2 in Sen (1972); we include it here for completeness. For simplicity of notation, we denote

Ξ_ε(t) = t( ξ̂_{t/ε², p} − ξ_p )/ε and Ξ̃_ε(t) = t( p − F̂_{t/ε²}(ξ_p) ) / ( ε f(ξ_p) ).

We first notice that Ξ̃_ε(t) ⇒ σ B(t) in D(0,∞) as ε → 0 by Donsker's theorem. We next prove that, for any T > 0,

sup_{0≤t≤T} | Ξ_ε(t) − Ξ̃_ε(t) | → 0 in probability as ε → 0.

For every positive integer k, with δ > 0 fixed, we define

M_k = | F^{−1}( k^{−(1+δ)} ) | + | F^{−1}( 1 − k^{−(1+δ)} ) | + 1.

We then consider a sequence of positive integers k_ε satisfying k_ε → ∞ and ε k_ε M_{k_ε} → 0 as ε → 0. Then, for any T < ∞,

sup_{0≤t≤T} | Ξ_ε(t) − Ξ̃_ε(t) | ≤ sup_{0≤t≤ε²k_ε} | Ξ_ε(t) | + sup_{0≤t≤ε²k_ε} | Ξ̃_ε(t) | + sup_{ε²k_ε≤t≤T} | Ξ_ε(t) − Ξ̃_ε(t) |.

We shall show that the three terms on the right-hand side converge to zero as ε → 0 one by one. For sup_{0≤t≤ε²k_ε} | Ξ_ε(t) |, we first notice that

sup_{0≤t≤ε²k_ε} | Ξ_ε(t) | = max_{1≤n≤k_ε} ε n | ξ̂_{n,p} − ξ_p | ≤ ε k_ε ( | X_{(k_ε, k_ε)} − ξ_p | + | X_{(k_ε, 1)} − ξ_p | ),

where X_{(n,k)} denotes the k-th order statistic of the first n samples.

We then notice that, as k( 1 − F(X_{(k,k)}) ) ⇒ Exp(1) as k → ∞,

P( ξ_p ≤ X_{(k,k)} ≤ F^{−1}( 1 − k^{−1−δ} ) ) = P( k(1 − p) ≥ k( 1 − F(X_{(k,k)}) ) > k^{−δ} ) → 1 as k → ∞.

Similarly, as k F(X_{(k,1)}) ⇒ Exp(1) as k → ∞, we have P( F^{−1}( k^{−1−δ} ) ≤ X_{(k,1)} ≤ ξ_p ) → 1 as k → ∞. Thus,

sup_{0≤t≤ε²k_ε} | Ξ_ε(t) | → 0 in probability as ε → 0.

For sup_{0≤t≤ε²k_ε} | Ξ̃_ε(t) |, as Ξ̃_ε(t) ⇒ σ B(t) and B is tight, sup_{0≤t≤ε²k_ε} | Ξ̃_ε(t) | → 0 in probability as ε → 0. Lastly, the Bahadur representation indicates that

sup_{ε²k_ε≤t≤T} | Ξ_ε(t) − Ξ̃_ε(t) | = sup_{k_ε≤n≤T/ε²} ε n | ξ̂_{n,p} − ξ_p − ( p − F̂_n(ξ_p) ) / f(ξ_p) | → 0

in probability as ε → 0.

Acknowledgments

References

Adam N (1983) Achieving a confidence interval for parameters estimated by simulation. Management Science 29(7).

Alexopoulos C, Argon N, Goldsman D, Tokol G, Wilson J (2007) Overlapping variance estimators for simulation. Operations Research 55.

Asmussen S, Glynn P (2007) Stochastic Simulation: Algorithms and Analysis (New York, USA: Springer).

Bahadur R (1966) A note on quantiles in large samples. Annals of Mathematical Statistics 37.

Bayraksan G, Pierre-Louis P (2012) Fixed-width sequential stopping rules for a class of stochastic programs. SIAM Journal on Optimization 22(4).

Billingsley P (1999) Convergence of Probability Measures (New York, USA: Wiley), 2nd edition.

Chow Y, Robbins H (1965) On the asymptotic theory of fixed-width sequential confidence intervals for the mean. Annals of Mathematical Statistics 36.

Damerdji H (1994) Strong consistency of the variance estimator in steady-state simulation output analysis. Mathematics of Operations Research 19.

Efron B (1979) Bootstrap methods: another look at the jackknife. Annals of Statistics 7:1-26.

Glynn P, Iglehart D (1990) Simulation output analysis using standardized time series. Mathematics of Operations Research 15(1):1-16.

Glynn P, Whitt W (1992) The asymptotic validity of sequential stopping rules for stochastic simulation. Annals of Applied Probability 2(1).

Jeanblanc M, Yor M, Chesney M (2009) A special family of diffusions: Bessel processes. Mathematical Methods for Financial Markets (London: Springer).

Jones G, Haran M, Caffo B, Neath R (2006) Fixed-width output analysis for Markov chain Monte Carlo. Journal of the American Statistical Association 101(476).

Lavenberg S, Sauer C (1977) Sequential stopping rules for the regenerative method of simulation. IBM J. Res. Develop. 21.

Law A, Carson J (1979) A sequential procedure for determining the length of a steady-state simulation. Operations Research 27(5).

Meterelliyoz M, Alexopoulos C, Goldsman D (2012) Folded overlapping variance estimators for simulation. European Journal of Operational Research 220(1).

Pitman J, Yor M (1981) Bessel processes and infinitely divisible laws. Stochastic Integrals, Lecture Notes in Mathematics (Berlin Heidelberg, Germany: Springer).

Politis D, Romano J, Wolf M (2001) On the asymptotic theory of subsampling. Statistica Sinica 11.

Revuz D, Yor M (1999) Continuous Martingales and Brownian Motion (Berlin Heidelberg, Germany: Springer), 3rd edition.

Sen P (1972) On the Bahadur representation of sample quantiles for sequences of φ-mixing random variables. Journal of Multivariate Analysis 2.

Shao J, Tu D (1995) The Jackknife and Bootstrap (New York, USA: Springer).

Steiger N, Lada E, Wilson J, Joines J, Alexopoulos C, Goldsman D (2005) ASAP3: a batch means procedure for steady-state simulation analysis. ACM Transactions on Modeling and Computer Simulation 15(1):39-73.

Swanepoel J, van Wyk J, Venter J (1983) Fixed width confidence intervals based on bootstrap procedures. Sequential Analysis 2.

Whitt W (2002) Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and Their Application to Queues (New York, USA: Springer).

Wu C (1986) Jackknife, bootstrap and other resampling methods in regression analysis (with discussions). Annals of Statistics 14.

Wu W (2005) On the Bahadur representation of sample quantiles for dependent sequences. The Annals of Statistics 33(4).

Wu W (2009) Recursive estimation of time-average variance constants. The Annals of Applied Probability 19(4).


lecture 36: Linear Multistep Mehods: Zero Stability 95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,

More information

. The univariate situation. It is well-known for a long tie that denoinators of Pade approxiants can be considered as orthogonal polynoials with respe

. The univariate situation. It is well-known for a long tie that denoinators of Pade approxiants can be considered as orthogonal polynoials with respe PROPERTIES OF MULTIVARIATE HOMOGENEOUS ORTHOGONAL POLYNOMIALS Brahi Benouahane y Annie Cuyt? Keywords Abstract It is well-known that the denoinators of Pade approxiants can be considered as orthogonal

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis City University of New York (CUNY) CUNY Acadeic Works International Conference on Hydroinforatics 8-1-2014 Experiental Design For Model Discriination And Precise Paraeter Estiation In WDS Analysis Giovanna

More information

Learnability of Gaussians with flexible variances

Learnability of Gaussians with flexible variances Learnability of Gaussians with flexible variances Ding-Xuan Zhou City University of Hong Kong E-ail: azhou@cityu.edu.hk Supported in part by Research Grants Council of Hong Kong Start October 20, 2007

More information

Necessity of low effective dimension

Necessity of low effective dimension Necessity of low effective diension Art B. Owen Stanford University October 2002, Orig: July 2002 Abstract Practitioners have long noticed that quasi-monte Carlo ethods work very well on functions that

More information

An Approximate Model for the Theoretical Prediction of the Velocity Increase in the Intermediate Ballistics Period

An Approximate Model for the Theoretical Prediction of the Velocity Increase in the Intermediate Ballistics Period An Approxiate Model for the Theoretical Prediction of the Velocity... 77 Central European Journal of Energetic Materials, 205, 2(), 77-88 ISSN 2353-843 An Approxiate Model for the Theoretical Prediction

More information

1 Generalization bounds based on Rademacher complexity

1 Generalization bounds based on Rademacher complexity COS 5: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #0 Scribe: Suqi Liu March 07, 08 Last tie we started proving this very general result about how quickly the epirical average converges

More information

arxiv: v1 [stat.ot] 7 Jul 2010

arxiv: v1 [stat.ot] 7 Jul 2010 Hotelling s test for highly correlated data P. Bubeliny e-ail: bubeliny@karlin.ff.cuni.cz Charles University, Faculty of Matheatics and Physics, KPMS, Sokolovska 83, Prague, Czech Republic, 8675. arxiv:007.094v

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

On Conditions for Linearity of Optimal Estimation

On Conditions for Linearity of Optimal Estimation On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Lost-Sales Problems with Stochastic Lead Times: Convexity Results for Base-Stock Policies

Lost-Sales Problems with Stochastic Lead Times: Convexity Results for Base-Stock Policies OPERATIONS RESEARCH Vol. 52, No. 5, Septeber October 2004, pp. 795 803 issn 0030-364X eissn 1526-5463 04 5205 0795 infors doi 10.1287/opre.1040.0130 2004 INFORMS TECHNICAL NOTE Lost-Sales Probles with

More information

A note on the multiplication of sparse matrices

A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

Lecture 21. Interior Point Methods Setup and Algorithm

Lecture 21. Interior Point Methods Setup and Algorithm Lecture 21 Interior Point Methods In 1984, Kararkar introduced a new weakly polynoial tie algorith for solving LPs [Kar84a], [Kar84b]. His algorith was theoretically faster than the ellipsoid ethod and

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

A Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science

A Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science A Better Algorith For an Ancient Scheduling Proble David R. Karger Steven J. Phillips Eric Torng Departent of Coputer Science Stanford University Stanford, CA 9435-4 Abstract One of the oldest and siplest

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

Bayesian inference for stochastic differential mixed effects models - initial steps

Bayesian inference for stochastic differential mixed effects models - initial steps Bayesian inference for stochastic differential ixed effects odels - initial steps Gavin Whitaker 2nd May 2012 Supervisors: RJB and AG Outline Mixed Effects Stochastic Differential Equations (SDEs) Bayesian

More information

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either

More information

A general forulation of the cross-nested logit odel Michel Bierlaire, Dpt of Matheatics, EPFL, Lausanne Phone: Fax:

A general forulation of the cross-nested logit odel Michel Bierlaire, Dpt of Matheatics, EPFL, Lausanne Phone: Fax: A general forulation of the cross-nested logit odel Michel Bierlaire, EPFL Conference paper STRC 2001 Session: Choices A general forulation of the cross-nested logit odel Michel Bierlaire, Dpt of Matheatics,

More information

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13 CSE55: Randoied Algoriths and obabilistic Analysis May 6, Lecture Lecturer: Anna Karlin Scribe: Noah Siegel, Jonathan Shi Rando walks and Markov chains This lecture discusses Markov chains, which capture

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

Simulation of Discrete Event Systems

Simulation of Discrete Event Systems Siulation of Discrete Event Systes Unit 9 Queueing Models Fall Winter 207/208 Prof. Dr.-Ing. Dipl.-Wirt.-Ing. Sven Tackenberg Benedikt Andrew Latos M.Sc.RWTH Chair and Institute of Industrial Engineering

More information

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008 LIDS Report 2779 1 Constrained Consensus and Optiization in Multi-Agent Networks arxiv:0802.3922v2 [ath.oc] 17 Dec 2008 Angelia Nedić, Asuan Ozdaglar, and Pablo A. Parrilo February 15, 2013 Abstract We

More information

Birthday Paradox Calculations and Approximation

Birthday Paradox Calculations and Approximation Birthday Paradox Calculations and Approxiation Joshua E. Hill InfoGard Laboratories -March- v. Birthday Proble In the birthday proble, we have a group of n randoly selected people. If we assue that birthdays

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence

Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence Best Ar Identification: A Unified Approach to Fixed Budget and Fixed Confidence Victor Gabillon Mohaad Ghavazadeh Alessandro Lazaric INRIA Lille - Nord Europe, Tea SequeL {victor.gabillon,ohaad.ghavazadeh,alessandro.lazaric}@inria.fr

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs International Cobinatorics Volue 2011, Article ID 872703, 9 pages doi:10.1155/2011/872703 Research Article On the Isolated Vertices and Connectivity in Rando Intersection Graphs Yilun Shang Institute for

More information

On the Use of A Priori Information for Sparse Signal Approximations

On the Use of A Priori Information for Sparse Signal Approximations ITS TECHNICAL REPORT NO. 3/4 On the Use of A Priori Inforation for Sparse Signal Approxiations Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst Signal Processing Institute ITS) Ecole Polytechnique

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

Ufuk Demirci* and Feza Kerestecioglu**

Ufuk Demirci* and Feza Kerestecioglu** 1 INDIRECT ADAPTIVE CONTROL OF MISSILES Ufuk Deirci* and Feza Kerestecioglu** *Turkish Navy Guided Missile Test Station, Beykoz, Istanbul, TURKEY **Departent of Electrical and Electronics Engineering,

More information

List Scheduling and LPT Oliver Braun (09/05/2017)

List Scheduling and LPT Oliver Braun (09/05/2017) List Scheduling and LPT Oliver Braun (09/05/207) We investigate the classical scheduling proble P ax where a set of n independent jobs has to be processed on 2 parallel and identical processors (achines)

More information