Folded standardized time series area variance estimators for simulation


Submitted to IIE Transactions.

CLAUDIA ANTONINI (Member, Institute of Industrial Engineers; cfmantonini@usb.ve)
Departamento de Matemáticas Puras y Aplicadas, Universidad Simón Bolívar, Sartenejas 1080, Venezuela

CHRISTOS ALEXOPOULOS (Member, Institute of Industrial Engineers; christos@isye.gatech.edu)
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA

DAVID GOLDSMAN (Member, Institute of Industrial Engineers, and corresponding author; sman@gatech.edu)
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA

JAMES R. WILSON (Fellow, Institute of Industrial Engineers; jwilson@ncsu.edu)
Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Campus Box 7906, Raleigh, NC, USA

iie8.tex, August 5, 2007

We estimate the variance parameter of a stationary simulation-generated process using folded versions of standardized time series area estimators. Asymptotically, different folding levels yield unbiased estimators that are independent scaled chi-squared variates, each with one degree of freedom. We exploit this result to formulate improved variance estimators based on the combination of multiple levels as well as the use of batching; the improved estimators preserve the asymptotic bias properties of their predecessors, but have substantially lower variance. A Monte Carlo example demonstrates the efficacy of the new methodology.

1. Introduction

One of the most important problems in simulation output analysis is the estimation of the mean µ of a steady-state (stationary) simulation-generated process {Y_i : i = 1, 2, ...}. For instance, we may be interested in determining the steady-state mean transit time in a job shop or the long-run expected profit per period arising from a certain inventory policy. Assuming that the simulation is indeed operating in steady state, the estimation of µ is not itself a particularly difficult problem: we simply use the sample mean of the observations, Ȳ_n ≡ (1/n) ∑_{i=1}^n Y_i, as the point estimator. But point estimation of the mean is usually not enough, since any serious statistical analysis should also include a measure of the variability of the sample mean. One of the most commonly used such measures is the variance parameter, which is defined as the sum of covariances of the process at all lags, and which can often be written in the intuitively pleasing form σ² = lim_{n→∞} n Var(Ȳ_n). With knowledge of such a measure in hand, we could provide, among other benefits, confidence intervals for µ, typically of the form Ȳ_n ± t √(σ̂²/n), where t is a quantile from the appropriate pivot distribution and σ̂² is an estimator of σ².
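The gap between the marginal variance and the variance parameter can be made concrete with a small sketch (Python, not part of the original article). For the AR(1) process used later in Section 5.4, R_i = φ^{|i|}, so σ² = ∑_i R_i has the closed form (1 + φ)/(1 − φ):

```python
# Illustration: for a stationary AR(1) process with lag-one parameter phi,
# R_i = phi^{|i|}, so the variance parameter is
#     sigma^2 = sum of R_i over all lags = (1 + phi)/(1 - phi),
# which can dwarf the i.i.d.-style quantity R_0 = 1.
phi = 0.9
R0 = 1.0                                   # marginal variance of the process
sigma2 = (1 + phi) / (1 - phi)             # sum of covariances at all lags

# the closed form agrees with a direct partial sum R_0 + 2*sum_{i>=1} R_i
partial = R0 + 2 * sum(phi**i for i in range(1, 2000))
print(sigma2, partial)                     # both are essentially 19.0
```

Any confidence-interval half-width built from R_0 alone would thus be too small here by a factor of roughly √19.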
Unfortunately, the problem of estimating the variance of the sample mean is not so straightforward. The trouble is that discrete-event simulation data (e.g., consecutive waiting times in a queueing system) are almost always serially correlated as well as non-normal. These characteristics render inappropriate the traditional statistical analysis

methods that rely on the assumption of independent and identically distributed (i.i.d.) normal observations. This article is concerned with providing underlying theory for estimating the variance parameter σ² of a stationary simulation-generated process. Over the years, a number of methodologies for estimating σ² have been proposed in the literature (see Law, 2006), e.g., the techniques referred to as nonoverlapping batch means (NBM), overlapping batch means (OBM), standardized time series (STS), spectral analysis, and regeneration. NBM, conceptually the simplest of these methodologies, divides the data {Y_i : i = 1, 2, ..., n} into nonoverlapping batches, and uses the sample variance of the sample means from the batches (i.e., the batch means) as a foundation to estimate σ². OBM (Meketon and Schmeiser, 1984), on the other hand, effectively re-uses the data by forming overlapping batches, and then invokes an appropriately scaled sample variance of the resulting sample means from the batches to estimate σ². The result is an OBM variance estimator having about the same bias as, but significantly lower variance than, the benchmark NBM estimator employing the same batch and total sample sizes. STS (Schruben, 1983) uses a functional central limit theorem to standardize a stationary time series, such as output from a steady-state discrete-event simulation, into a process that converges to a limiting Brownian bridge process as the batch or total sample sizes become large. Known properties of the Brownian bridge process are then used to obtain estimators for σ². Similar to OBM, overlapping batched versions of various STS estimators have been shown to have the same bias as, but substantially lower variance than, their nonoverlapping counterparts (Alexopoulos et al., 2007b,c).
Additional variance-reducing tricks in which STS re-uses data involve orthonormalizing (Foley and Goldsman, 1999) and linearly combining different STS estimators (Aktaran-Kalaycı et al., 2007; Goldsman et al., 2007). A recurring theme that emerges in the development of new estimators for σ² is the re-use of data. In the current article, we study the consequences of a folding operation on the original STS process (and its limiting Brownian bridge process). The folding operation produces multiple standardized time series processes, which in turn will ultimately allow us to use the original data to produce multiple estimators for σ², estimators that are often asymptotically independent as the sample size grows. These folded estimators will lead to combined estimators having smaller variance than existing estimators not based on the folding operation. The article is organized as follows. Section 2 gives some background material on STS. In Section 3, we introduce the notion of folding a Brownian bridge, and we show that each application of folding yields a new Brownian bridge process. We also derive useful expressions for these folded processes in terms of the original Brownian bridge and in terms of the original underlying Brownian motion. Section 4 is concerned with derivations of the

expected values, variances, and covariances of certain functionals related to the area under a folded Brownian bridge. In Section 5, we finally show how to apply these results to the problem of estimating the variance parameter of a steady-state simulation process. The idea is to start with a single STS, form folded versions of that original STS (which converge to corresponding folded versions of a Brownian bridge process), calculate an estimator for σ² from each folded STS, and then combine the estimators into one low-variance estimator. We illustrate the efficacy of the folded estimators via analytical and Monte Carlo examples, and we find that the new estimators indeed reduce estimator variance at little cost in bias. Section 6 presents conclusions, while the technical details of some of the proofs are relegated to the Appendix.

2. Background

This section lays out preliminaries on the STS methodology. We begin with some standard assumptions that we shall invoke whenever needed in the sequel. In plain English, these assumptions will ensure that our upcoming variance estimators work properly on a wide variety of stationary stochastic processes.

Assumptions A

1. The process {Y_i : i ≥ 1} is stationary and satisfies the following Functional Central Limit Theorem: for n = 1, 2, ... and t ∈ [0, 1], the process

X_n(t) ≡ ⌊nt⌋ (Ȳ_⌊nt⌋ − µ) / (σ √n)   (1)

satisfies X_n ⇒ W, where: µ is the steady-state mean; σ² is the variance parameter; ⌊·⌋ denotes the greatest integer function; W is a standard Brownian motion process on [0, 1]; and ⇒ denotes weak convergence (as n → ∞) in D[0, 1], the space of functions on [0, 1] that are right-continuous with left-hand limits. See also Billingsley (1968) and Glynn and Iglehart (1990).

2. ∑_{i=−∞}^{∞} R_i = σ² ∈ (0, ∞), where R_i ≡ Cov(Y_1, Y_{1+i}), i = 0, 1, 2, ....

3. ∑_{i=1}^{∞} i² |R_i| < ∞.

4. The function f(·), defined on [0, 1], is twice continuously differentiable. Further, f(t) satisfies the normalizing condition ∫₀¹ ∫₀¹ f(s) f(t) [min{s, t} − st] ds dt = 1.

Assumptions A.1–A.3 are mild conditions that hold for a variety of stochastic processes encountered in practice (see Glynn and Iglehart, 1990). Assumption A.4 gives conditions on the normalized weight function f(·) that will be used in our estimators for σ². Of fundamental importance to the rest of the paper is the standardized time series of the underlying stochastic process. It is the STS that will form the basis of all of the estimators studied herein.

Definition 1. As in Schruben (1983), the (level-0) standardized time series of the process {Y_i} is

T_{0,n}(t) ≡ ⌊nt⌋ (Ȳ_n − Ȳ_⌊nt⌋) / (σ √n)   (2)

for t ∈ [0, 1].

In the next section, we discuss how the STS (2) is related to a Brownian bridge process; and in Section 5.2, we show how to use this process to derive estimators for σ².

3. Folded Brownian bridges

Our development requires some additional nomenclature. First of all, we define a Brownian bridge, which will turn out to be the limiting process of a standardized time series, much as the standard normal distribution is the limiting distribution of a properly standardized sample mean.

Definition 2. Suppose that W(·) is a standard Brownian motion process. The associated level-0 Brownian bridge process is B_0(t) ≡ B(t) ≡ W(t) − tW(1) for t ∈ [0, 1].

In fact, a Brownian bridge {B(t) : t ∈ [0, 1]} is a Gaussian process with E[B(t)] = 0 and Cov(B(s), B(t)) = min{s, t} − st, for s, t ∈ [0, 1]. Brownian bridges are important for our purposes because under Assumptions A.1–A.3, Schruben (1983) shows that T_{0,n}(·) ⇒ B(·) and that √n (Ȳ_n − µ) and T_{0,n}(·) are asymptotically independent as n → ∞. The contribution of the current paper is the development and evaluation of folded estimators for σ². We now define precisely what we mean by the folding operation, a map that can be applied either to an STS or a Brownian bridge.
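As an illustration of Eq. (2), the level-0 STS can be computed from a batch of output in a few lines of numpy (a hypothetical helper, not the authors' code; σ is retained only for fidelity to the definition, and it cancels in the estimators of Section 5):

```python
import numpy as np

def sts_level0(Y, sigma=1.0):
    """Level-0 standardized time series T_{0,n}(j/n), j = 1, ..., n, per Eq. (2)."""
    n = len(Y)
    Ybar_n = Y.mean()
    Ybar_j = np.cumsum(Y) / np.arange(1, n + 1)   # running means Ybar_1, ..., Ybar_n
    j = np.arange(1, n + 1)
    return j * (Ybar_n - Ybar_j) / (sigma * np.sqrt(n))

T = sts_level0(np.arange(10.0))
# T_{0,n}(1) = n (Ybar_n - Ybar_n) / (sigma sqrt(n)) = 0 by construction
print(T[-1])
```

Like the Brownian bridge it converges to, the series is tied down at t = 1.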

Definition 3. The folding map Ψ : Y ∈ D[0, 1] → Ψ_Y ∈ D[0, 1] is defined by Ψ_Y(t) ≡ Y(t/2) − Y(1 − t/2) for t ∈ [0, 1]. Moreover, for each nonnegative integer k, we define Ψ^k : Y ∈ D[0, 1] → Ψ^k_Y ∈ D[0, 1], the k-fold composition of the folding map Ψ, so that for every t ∈ [0, 1],

Ψ^k_Y(t) ≡ Y(t), if k = 0;  Ψ_{Ψ^{k−1}_Y}(t), if k = 1, 2, ....

The folding operation can be performed multiple times on the Brownian bridge process, as demonstrated by the following definition.

Definition 4 (see Shorack and Wellner, 1986). For k = 1, 2, ..., the level-k folded Brownian bridge is

B_k(t) ≡ Ψ_{B_{k−1}}(t) = B_{k−1}(t/2) − B_{k−1}(1 − t/2),

so that B_k(t) = Ψ^k_{B_0}(t) for t ∈ [0, 1].

Intuitively speaking, when the folding operator Ψ is applied to a Brownian bridge process {B_0(t) : t ∈ [0, 1]}, it does the following: (i) Ψ reflects (folds) the portion of the original process defined on the subinterval [1/2, 1] (shown in the upper right-hand portion of Figure 1a) about the vertical line t = 1/2 (yielding the subprocess shown in the upper left-hand portion of Figure 1a); and (ii) Ψ takes the difference between these two subprocesses defined on [0, 1/2] and stretches that difference over the unit interval [0, 1] (yielding the new process shown in Figure 1b). Lemma 1 shows that as long as we start with a Brownian bridge, folding it will produce another Brownian bridge as well. The proof simply requires that we verify the necessary covariance structure (see Antonini, 2005).

Lemma 1. For k = 1, 2, ..., the process {B_k(t) : t ∈ [0, 1]} is a Brownian bridge.

The next lemma gives an equation relating the level-k Brownian bridge to the original (level-0) Brownian bridge and the initial Brownian motion process. These results will be useful later on when we derive properties of certain functionals of B_k(t).
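The reflect, difference, and stretch recipe is easy to mimic on a sampled path (a sketch with hypothetical helpers, not the authors' code). Evaluating Ψ_Y(t) = Y(t/2) − Y(1 − t/2) at the grid points t = j/(n/2) requires Y only at its own sample points i/n, at the cost of halving the grid with each fold:

```python
import numpy as np

rng = np.random.default_rng(1)

def fold(path):
    """Discrete analogue of the folding map Psi of Definition 3.
    `path` holds samples Y(i/n), i = 0, ..., n.  Since
    Psi_Y(j/(n/2)) = Y(j/n) - Y((n - j)/n), one fold needs only the
    existing sample points but halves the grid (n must be even)."""
    n = len(path) - 1
    assert n % 2 == 0, "need an even grid so t/2 lands on a grid point"
    j = np.arange(n // 2 + 1)
    return path[j] - path[n - j]

# a Brownian bridge sampled on a fine grid: B(t) = W(t) - t W(1)
n = 2 ** 12
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(1 / n), n))))
t = np.arange(n + 1) / n
B = W - t * W[-1]

B1 = fold(B)             # level-1 folded bridge on n/2 + 1 points
print(B1[0], B1[-1])     # a bridge is tied down: both endpoints are 0
```

Iterating `fold` yields the level-k paths of Definition 4, consistent with Lemma 1's claim that each level is again tied down at both ends.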

Fig. 1. Geometric Illustration of Folded Brownian Bridges: (a) reflecting about t = 1/2; (b) differencing and stretching.

Lemma 2. For k = 1, 2, ...,

B_k(t) = ∑_{i=1}^{2^{k−1}} [ B((i−1)/2^{k−1} + t/2^k) − B(i/2^{k−1} − t/2^k) ]   (3)
       = (1 − t) W(1) + ∑_{i=1}^{2^{k−1}} [ W((i−1)/2^{k−1} + t/2^k) − W(i/2^{k−1} − t/2^k) ].   (4)

The proof of Lemma 2 is a direct consequence of Definition 4. See Antonini (2005) for the details.

4. Some functionals of folded Brownian bridges

The purpose of this section is to highlight results on the weighted areas under successively higher levels of folded Brownian bridges. Such functionals will be used in Section 5 to construct estimators for the variance parameter σ² arising from a stationary stochastic process.

Definition 5. For k = 0, 1, ..., the weighted area under the level-k folded Brownian bridge is N_k(f) ≡ ∫₀¹ f(t) B_k(t) dt.

Under simple conditions, Theorem 4.1 shows that N_k(f) has a standard normal distribution; its proof is in the Appendix.

Theorem 4.1. For any normalized weight function f(t) and any nonnegative integer k, we have N_k(f) ~ Nor(0, 1).

Corollary 1. Under the conditions of Theorem 4.1, we have A_k(f) ≡ σ² N_k²(f) ~ σ² χ²₁.

Of course, the corollary is an immediate consequence of Theorem 4.1. Besides the distributional result, it follows that E[A_k(f)] = σ², a finding that we will revisit in Theorem 5.3 when we develop estimators for σ². Meanwhile, we proceed with several results concerning the joint distribution of the {N_k(f) : k = 0, 1, ...}. Our first such result, the proof of which is in the Appendix, gives an explicit expression for the covariance between folded area functionals from different levels. Before stating the theorem, for any weight function f(·), we define F(t) ≡ ∫₀^t f(s) ds, F ≡ F(1), F̄(t) ≡ ∫₀^t F(s) ds, and F̄ ≡ F̄(1).

Theorem 4.2. Let f₁(t) and f₂(t) be normalized weight functions. Then for ℓ = 0, 1, ... and k = 1, 2, ..., we have

Cov[N_ℓ(f₁), N_{ℓ+k}(f₂)] = ∑_{i=1}^{2^{k−1}} ∫₀¹ f₂(t) [ F̄₁(i/2^{k−1} − t/2^k) − F̄₁((i−1)/2^{k−1} + t/2^k) ] dt − F̄₁ F̄₂.   (5)

Lemmas 3–6 give results on the covariance between functionals of Brownian motion from different levels; these will be used later on to establish asymptotic covariances of estimators for σ² from different levels. In particular, Lemmas 5 and 6 give simple conditions under which these functionals are uncorrelated.

Lemma 3. For ℓ, k = 0, 1, ... and s, t ∈ [0, 1], Cov[B_ℓ(s), B_{ℓ+k}(t)] = Cov[B_0(s), B_k(t)].

Proof. Follows by induction on k.

Lemma 4. For ℓ, k = 0, 1, ...,  Cov[N_ℓ(f₁), N_{ℓ+k}(f₂)] = Cov[N_0(f₁), N_k(f₂)].

Proof. By Lemma 3,

Cov[N_ℓ(f₁), N_{ℓ+k}(f₂)] = ∫₀¹ ∫₀¹ f₁(s) f₂(t) Cov[B_ℓ(s), B_{ℓ+k}(t)] ds dt
                          = ∫₀¹ ∫₀¹ f₁(s) f₂(t) Cov[B_0(s), B_k(t)] ds dt.

Lemma 5. If the normalized weight function f(t) satisfies f(t) = f(1 − t) for all t ∈ [0, 1], then Cov(N_0(f), N_k(f)) = 0 for all k = 1, 2, ....

Proof. Applying integration by parts to Eq. (5) with f₁ = f₂ = f, we obtain

Cov[N_0(f), N_k(f)] = (1/2^k) ∑_{i=1}^{2^{k−1}} ∫₀¹ F(t) [ F(i/2^{k−1} − t/2^k) + F((i−1)/2^{k−1} + t/2^k) ] dt − F̄²
                    = (1/2^k) ∑_{i=1}^{2^{k−1}} ∫₀¹ F(t) [ F(1 − ((i−1)/2^{k−1} + t/2^k)) + F((i−1)/2^{k−1} + t/2^k) ] dt − F̄²
                    = F F̄ / 2 − F̄²,

where the second equality follows (after re-indexing the sum via i → 2^{k−1} − i + 1) since

F(1 − x) = ∫₀^{1−x} f(y) dy = ∫_x¹ f(1 − z) dz = ∫_x¹ f(z) dz,

so that F(1 − x) + F(x) = F for all x ∈ [0, 1]. The proof is completed by noting that

F̄ = ∫₀¹ f(x)(1 − x) dx
  = ∫₀^{1/2} f(x)(1 − x) dx + ∫_{1/2}¹ f(x)(1 − x) dx
  = ∫₀^{1/2} f(x)(1 − x) dx + ∫₀^{1/2} f(y) y dy
  = ∫₀^{1/2} f(x) dx = F/2.

Lemma 6. If the normalized weight function satisfies f(t) = f(1 − t) for all t ∈ [0, 1], then for ℓ = 0, 1, ... and k = 1, 2, ...,  Cov[N_ℓ(f), N_{ℓ+k}(f)] = Cov[N_0(f), N_k(f)] = 0.

Proof. Immediate from Lemmas 4 and 5.

The following lemma, proven in the Appendix, establishes the multivariate normality of the random vector N(f) ≡ [N_0(f), N_1(f), ..., N_k(f)]. It will be used in Theorem 4.3 to obtain the remarkable result that, under relatively simple conditions, the folded functionals {N_k(f) : k = 0, 1, ...} are i.i.d. Nor(0, 1).

Lemma 7. If the normalized weight function satisfies f(t) = f(1 − t) for all t ∈ [0, 1], then for each positive integer k the random vector N(f) has a nonsingular multivariate normal distribution.

Theorem 4.3. If the normalized weight function f(t) satisfies f(t) = f(1 − t) for all t ∈ [0, 1], then the random variables {N_k(f) : k = 0, 1, ...} are i.i.d. Nor(0, 1).

Proof. Lemma 6 implies that Cov(N_k(f), N_j(f)) = 0 for every k ≠ j. Now, since by Lemma 7 the random vector N(f) has a multivariate normal distribution, we can conclude

that the random variables N_0(f), N_1(f), ... are i.i.d. Nor(0, 1).

The next corollary, which is immediate from Theorem 4.3, will serve as the basis for our new variance estimators to be derived in Section 5.

Corollary 2. Under the conditions of Theorem 4.3, the random variables {A_k(f) : k = 0, 1, ...} are i.i.d. σ² χ²₁.

Example 1. The following weight functions arise in simulation output analysis applications (see Foley and Goldsman, 1999 and Section 5 of the current article):

f_0(t) ≡ √12,  f_2(t) ≡ √840 (3t² − 3t + 1/2),  and  f_cos,j(t) ≡ √8 π j cos(2πjt), j = 1, 2, ...,

all for t ∈ [0, 1]. By Theorem 4.3, {N_k(f) : k ≥ 0} are i.i.d. Nor(0, 1), and by Corollary 2, {A_k(f) : k ≥ 0} are i.i.d. σ² χ²₁ for f = f_0, f_2, or f_cos,j, j = 1, 2, ....

5. Application to variance estimation

We finally show how our work on properties of area functionals of folded Brownian bridges can be used in simulation output analysis. With this application in mind, we apply the folding transformation to Schruben's level-0 STS (Schruben, 1983) in Section 5.1, thereby obtaining several new versions of the STS. These new series are used in Section 5.2 to produce new estimators for σ². Section 5.3 gives obvious methods to improve the estimators, and Section 5.4 presents a simple Monte Carlo example showing that the estimators work as intended.

5.1. Folded standardized time series

Analogous to the level-k folded Brownian bridge from Definition 4, we define the level-k folded STS.

Definition 6. For k = 1, 2, ..., the level-k folded STS is

T_{k,n}(t) ≡ Ψ_{T_{k−1,n}}(t) = T_{k−1,n}(t/2) − T_{k−1,n}(1 − t/2),

so that T_{k,n}(t) = Ψ^k_{T_{0,n}}(t) for t ∈ [0, 1].

The next goal is to examine the convergence of the level-k folded STS to the analogous level-k folded Brownian bridge process. The following result is an immediate consequence of the almost-sure continuity of Ψ^k on D[0, 1] for k = 0, 1, ..., and the Continuous Mapping Theorem (CMT) (Billingsley, 1968).
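The normalizing condition of Assumption A.4 can be checked numerically for the three weights of Example 1 (a sketch using a midpoint quadrature rule; the constants √12, √840, and √8·π·j are taken from the definitions above, with j = 1 for the cosine weight):

```python
import numpy as np

def normalization(f, n=2000):
    """Midpoint-rule approximation of the double integral
    int_0^1 int_0^1 f(s) f(t) [min(s,t) - s t] ds dt from Assumption A.4."""
    t = (np.arange(n) + 0.5) / n                  # midpoint grid on [0, 1]
    v = f(t)
    K = np.minimum.outer(t, t) - np.outer(t, t)   # Brownian bridge covariance kernel
    return v @ K @ v / n ** 2

f0 = lambda t: np.sqrt(12.0) * np.ones_like(t)
f2 = lambda t: np.sqrt(840.0) * (3 * t ** 2 - 3 * t + 0.5)
fcos = lambda t: np.sqrt(8.0) * np.pi * np.cos(2 * np.pi * t)   # j = 1

for f in (f0, f2, fcos):
    print(normalization(f))   # each value is approximately 1.0
```

Each weight integrates the bridge-covariance kernel to (approximately) 1, as required for N_k(f) to have unit variance.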

Theorem 5.1. If Assumptions A.1–A.3 hold, then for any fixed nonnegative integer k, we have

[T_{0,n}(·), ..., T_{k,n}(·)] ⇒ [B_0(·), ..., B_k(·)].

Moreover, √n (Ȳ_n − µ) and [T_{0,n}(·), ..., T_{k,n}(·)] are asymptotically independent as n → ∞.

5.2. Folded area estimators

We introduce folded versions of the STS area estimator for σ², along with their asymptotic distributions, expected values, and variances. To begin, we define our new estimators, along with their limiting Brownian bridge functionals.

Definition 7. For each nonnegative integer k, the STS level-k folded area estimator for σ² is A_k(f; n) ≡ N_k²(f; n), where

N_k(f; n) ≡ (1/n) ∑_{j=1}^n f(j/n) σ T_{k,n}(j/n)

and f(·) is a normalized weight function (satisfying Assumption A.4). The case k = 0, f = f_0 corresponds to Schruben's original area estimator (Schruben, 1983).

Definition 8. Let A(f; n) ≡ [A_0(f; n), A_1(f; n), ..., A_k(f; n)] and A(f) ≡ [A_0(f), A_1(f), ..., A_k(f)].

The following definitions provide the necessary set-up to establish in Theorem 5.2 below the asymptotic distribution of the random vector A(f; n) as n → ∞.

Definition 9. Let Λ denote the class of strictly increasing, continuous mappings of [0, 1] onto itself, so that for every λ ∈ Λ we have λ(0) = 0 and λ(1) = 1. If X, Y ∈ D[0, 1], then the Skorohod metric ρ(X, Y) defining the distance between X and Y in D[0, 1] is the infimum of those positive ξ for which there exists a λ ∈ Λ such that sup_{t∈[0,1]} |λ(t) − t| ≤ ξ and sup_{t∈[0,1]} |X(t) − Y[λ(t)]| ≤ ξ. (See Billingsley, 1968 for further details.)

Definition 10. For each positive integer n, let Ω_n : Y ∈ D[0, 1] → Ω_n Y ∈ D[0, 1] be the approximate (discrete) STS map

Ω_n Y(t) ≡ (⌊nt⌋/n) Y(1) − Y(t)

for t ∈ [0, 1]. Moreover, let Ω : Y ∈ D[0, 1] → Ω_Y ∈ D[0, 1] denote the corresponding asymptotic STS map

Ω_Y(t) ≡ lim_{n→∞} Ω_n Y(t) = t Y(1) − Y(t)

for t ∈ [0, 1]. Note that Ω_n maps the process (1) into the corresponding standardized time series (2), so that we have Ω_n X_n(t) = T_{0,n}(t) for t ∈ [0, 1] and n = 1, 2, ...; moreover, Ω maps a standard Brownian motion process into a standard Brownian bridge process, Ω_W(t) = tW(1) − W(t) ~ B_0(t) for t ∈ [0, 1].

Definition 11. For a given normalized weight function, for every nonnegative integer k, and for every positive integer n, the approximate (discrete) folded area map Θ_n^k : Y ∈ D[0, 1] → Θ_n^k(Y) ∈ ℝ is defined by

Θ_n^k(Y) ≡ [ (1/n) ∑_{i=1}^n f(i/n) σ Ψ^k_{Ω_n Y}(i/n) ]².

Moreover, the corresponding asymptotic folded area map Θ^k : Y ∈ D[0, 1] → Θ^k(Y) ∈ ℝ is defined by

Θ^k(Y) ≡ [ σ ∫₀¹ f(t) Ψ^k_{Ω_Y}(t) dt ]².

In terms of Eq. (1), the definition of A_k(f) from Corollary 1, and Definitions 8–11, we see that Θ_n^k(X_n) = A_k(f; n) and Θ^k(W) = A_k(f) for every nonnegative integer k. We are now ready to proceed with the main convergence theorem, which shows that the folded area estimators converge jointly to their asymptotic counterparts.

Theorem 5.2. If Assumptions A hold, then

A(f; n) ⇒ A(f).   (6)

Sketch of Proof. Although the proof of Theorem 5.2 is detailed in the Appendix, it can be summarized as follows. Our goal is to apply the generalized CMT, that is, Theorem 5.5 of Billingsley (1968), to prove that the (k + 1) × 1 random vector with jth element Θ_n^j(X_n) converges in distribution to the (k + 1) × 1 random vector with jth element Θ^j(W)

for j = 0, 1, ..., k. To establish the hypotheses of the generalized CMT, we show that if {x_n} ⊂ D[0, 1] is any sequence of functions converging to a realization W of a standard Brownian motion process in the Skorohod metric on D[0, 1], then the real-valued sequence Θ_n^j(x_n) converges to Θ^j(W) almost surely. First we exploit the almost-sure continuity of W(u) at every u ∈ [0, 1] and the convergence of {x_n} to W in D[0, 1] to show that for every nonnegative integer j, with probability one we have Ψ^j_{Ω_n x_n}(t) − Ψ^j_{Ω_n W}(t) → 0 uniformly for t ∈ [0, 1] as n → ∞; and it follows that

lim_{n→∞} |Θ_n^j(x_n) − Θ_n^j(W)| = 0 with probability one.   (7)

Next we exploit the almost-sure convergence Ψ^j_{Ω_n W}(t) → Ψ^j_{Ω_W}(t) for all t ∈ [0, 1] as n → ∞, together with the almost-sure continuity and Riemann integrability of f(t) Ψ^j_{Ω_W}(t) for t ∈ [0, 1], to show that

lim_{n→∞} |Θ_n^j(W) − Θ^j(W)| = 0 with probability one.   (8)

Combining (7) and (8) and applying the triangle inequality, we see that the corresponding vector-valued sequence {[Θ_n^0(x_n), ..., Θ_n^k(x_n)] : n = 1, 2, ...} converges to [Θ^0(W), ..., Θ^k(W)] in ℝ^{k+1} with probability one; and thus the desired result follows directly from the generalized CMT.

Remark 1. Under Assumptions A, and for the weight functions in Example 1, Theorem 5.2 and Corollary 1 imply that A_0(f; n), ..., A_k(f; n) are asymptotically (as n → ∞) i.i.d. σ² χ²₁ random variables.

Under relatively modest conditions, Theorem 5.3 gives asymptotic expressions for the expected values and variances of the level-k area estimators.

Theorem 5.3. Suppose that Assumptions A hold. Further, for fixed k, suppose that the family of random variables {A_k²(f; n) : n ≥ 1} is uniformly integrable (see Billingsley, 1968 for a definition and sufficient conditions). Then we have E[A_k(f; n)] → E[A_k(f)] = σ² and Var[A_k(f; n)] → Var[A_k(f)] = 2σ⁴.

Remark 2. One can obtain finer-tuned results for E[A_0(f; n)] and E[A_1(f; n)].
In particular, under Assumptions A, Foley and Goldsman (1999) and Goldsman et al. (1990) show that

E[A_0(f; n)] = σ² + [(F − F̄)² + F̄²] γ / (2n) + o(1/n),

where γ ≡ −2 ∑_{i=1}^∞ i R_i (Song and Schmeiser, 1995). In a companion paper (Alexopoulos et al., 2007a), we find that if Assumptions A hold and n is even, then

E[A_1(f; n)] = σ² + F̄² γ / n + o(1/n).

5.3. Enhanced estimators

The individual estimators whose properties are given in Theorem 5.3 are all based on a single long run of n observations, and all involve a single level k of folding. This section discusses two obvious extensions of the estimators that have improved asymptotic properties: batching and combining levels.

Batching: In actual applications, we often organize the data by breaking the n observations into b contiguous, nonoverlapping batches, each of size m, so that n = bm; then we can compute the folded variance estimators from each batch separately. As the batch size m → ∞, the variance estimators computed from different batches are asymptotically independent under broadly applicable conditions on the original (unbatched) process {Y_i : i ≥ 1}; and thus more stable (i.e., more accurate) variance estimators can be obtained by combining the folded variance estimators computed from all available batches. In view of this motivation, suppose that the ith batch of size m consists of the observations Y_{(i−1)m+1}, Y_{(i−1)m+2}, ..., Y_{im}, for i = 1, 2, ..., b. Using the obvious minor changes to the appropriate definitions, one can construct the level-k STS from the ith batch of observations, say T_{k,m,i}(t); and from there, one can obtain the resulting level-k area estimator from the ith batch, say A_{k,i}(f; m). Finally, we define the level-k batched folded area estimator for σ² by

Ã_k(f; b, m) ≡ (1/b) ∑_{i=1}^b A_{k,i}(f; m).

Under the conditions of Theorem 5.3, we have lim_{m→∞} E[Ã_k(f; b, m)] = σ² and lim_{m→∞} Var[Ã_k(f; b, m)] = 2σ⁴/b, where the latter result follows from the fact that the A_{k,i}(f; m), i = 1, 2, ..., b, are asymptotically independent as m → ∞.
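The batched folded area estimator can be prototyped as follows (a sketch, not the authors' implementation: the unknown σ cancels once σT_{k,m}(·) is written out, and the discrete fold halves the evaluation grid, so the quadrature for N_k(f; m) only approximates Definition 7 at levels k ≥ 1):

```python
import numpy as np

def scaled_sts(Y):
    """sigma * T_{0,m}(j/m), j = 0, ..., m, for one batch; by Eq. (2) this
    equals j (Ybar_m - Ybar_j) / sqrt(m), so sigma is not needed."""
    m = len(Y)
    cum = np.concatenate(([0.0], np.cumsum(Y)))
    j = np.arange(m + 1)
    Ybar_j = cum / np.maximum(j, 1)            # the j = 0 term is zeroed below anyway
    return j * (cum[m] / m - Ybar_j) / np.sqrt(m)

def fold(path):
    """Discrete folding (Definition 6); halves the grid, so the batch size
    should be divisible by 2^k."""
    n = len(path) - 1
    j = np.arange(n // 2 + 1)
    return path[j] - path[n - j]

def folded_area(Y, k, f):
    """Level-k folded area estimator A_k(f; m) for one batch."""
    T = scaled_sts(Y)
    for _ in range(k):
        T = fold(T)
    mk = len(T) - 1
    j = np.arange(1, mk + 1)
    return (np.sum(f(j / mk) * T[1:]) / mk) ** 2

def batched_folded_area(Y, k, f, b):
    """A-tilde_k(f; b, m): average of the level-k estimators over b
    nonoverlapping batches of size m = len(Y) // b."""
    m = len(Y) // b
    return np.mean([folded_area(Y[i * m:(i + 1) * m], k, f) for i in range(b)])

# sanity check on i.i.d. Nor(0,1) data, for which sigma^2 = 1
rng = np.random.default_rng(7)
f0 = lambda t: np.sqrt(12.0) * np.ones_like(t)
est = batched_folded_area(rng.normal(size=32 * 1024), k=1, f=f0, b=32)
print(est)   # a realization scattered around 1 with variance about 2/32
```

With b = 32 the estimator's asymptotic variance is 2σ⁴/32, so a single realization should land well within one unit of σ² = 1 most of the time.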
Thus, we obtain batched estimators with approximately the same expected value σ² as a single folded estimator arising from one long run, yet with substantially smaller variance.

Combining levels of folding: Theorem 5.3 shows that, for a particular weight function f(·), all of the estimators from different levels of folding behave about the same asymptotically in terms of their expected value and variance. We can improve upon these individual

estimators by combining the different levels. To this end, denote the average of the folded area estimators from levels 0, 1, ..., k by

Ā_k(f; n) ≡ (1/(k + 1)) ∑_{j=0}^k A_j(f; n).

Under the conditions of Remark 1 and Theorem 5.3, we have lim_{n→∞} E[Ā_k(f; n)] = σ² and lim_{n→∞} Var[Ā_k(f; n)] = 2σ⁴/(k + 1). Thus, we obtain combined estimators with approximately the same expected value σ² as a single folded estimator arising from one level, yet with significantly smaller variance.

5.4. Monte Carlo Examples

We illustrate the performance of the new folded estimators with simple Monte Carlo experiments involving a stationary first-order autoregressive [AR(1)] process and a stationary M/M/1 queue-waiting-time process. In both cases, the results are based on 4,096 independent replications using b = 32 batches. We used common random numbers across all variance estimation methods, based on the combined generator given in Figure 1 of L'Ecuyer (1999). We also checked the performance of the batched estimators when we combine levels. In particular, we used the realizations from the individual levels 0 and 1 with the quadratic weight function f_2(·) to calculate realizations of

Ā_1(f_2; 32, m) ≡ (1/2) [Ã_0(f_2; 32, m) + Ã_1(f_2; 32, m)],

and we obtained the estimated performance characteristics for this combined estimator, shown in column 4 of all tables below. We also compare the folded area estimators against the NBM estimator; see column 5 of Tables 1 and 3 and column 6 of Tables 2 and 4.

AR(1) Process

An AR(1) process is constructed by setting Y_i = φY_{i−1} + ε_i, i = 1, 2, ..., where the ε_i are i.i.d. Nor(0, 1 − φ²), Y_0 is a standard normal random variable that is independent of the ε_i, and −1 < φ < 1 (to preserve stationarity). It is well known that, for the AR(1) process, R_k = φ^{|k|}, k = 0, 1, 2, ..., and σ² = (1 + φ)/(1 − φ).
In the current example,

we set the parameter φ = 0.9, corresponding to a highly positive autocorrelation structure with variance parameter σ² = 19. We calculated the estimated expected values (Table 1) and the standard errors and correlations of the estimators (Table 2), averaged over the 4,096 realizations. To demonstrate the convergence of the expected values of the estimators to the variance parameter σ² as the batch size m increases, we show the numerical results in Table 1. We also compare the folded estimators with the nonoverlapping batch means variance estimator; results are shown in column 5 of Tables 1 and 3 and column 6 of Tables 2 and 4.

Table 1. Estimated Expected Values of the Enhanced Folded Area Estimators versus the Non-Overlapping Batch Means for an AR(1) Process with φ = 0.9, σ² = 19, and b = 32.
m | Ã_0(f_2; 32, m) | Ã_1(f_2; 32, m) | Ā_1(f_2; 32, m) | N(32; m)

We summarize our conclusions from Tables 1 and 2 regarding the expected values and standard errors, respectively, of the variance estimators under consideration as follows:

The estimated expected values of all variance estimators converge to σ² (19 in this case) as m increases, in accordance with our theoretical results. For small values of m, NBM yields expected values that are superior to those of the folded area estimators, but the folded area estimators quickly catch up to the batch-means estimator as m gets larger.

Table 2. Estimated Standard Error and Correlation of the Enhanced Folded Area Estimators versus the Non-Overlapping Batch Means for an AR(1) Process with φ = 0.9, σ² = 19, and b = 32.
m | Ã_0(f_2; 32, m) | Ã_1(f_2; 32, m) | Ā_1(f_2; 32, m) | Corr(Ã_0, Ã_1; 32, m) | N(32; m)

As the batch size m becomes large, the combined estimator Ā_1(f_2; 32, m) attains comparable expected values with the bonus of substantially (about 50%) reduced variance. Theoretically, we know that the asymptotic variance of the individual levels of the batched folded area estimators is 2σ⁴/b, which translates into 2(19)²/32 ≈ 22.56 (as the batch size m becomes large) for the AR(1) process we chose. Averaged over the 4,096 replications, this corresponds to a standard error of √(22.56/4,096) ≈ 0.0742. Similarly, the theoretical asymptotic variance for NBM, 2σ⁴/(b − 1), is not significantly larger. However, the theoretical asymptotic variance for the combined estimator Ā_1(f_2; 32, m) is divided by 2 (see Section 5.3), since we are combining levels 0 and 1. As a result, this estimator has half the asymptotic variance of the other three, while preserving rates of convergence to σ² similar to those of the single-level estimators and catching up relatively quickly with NBM. This variance reduction translates into a reduction of the standard error by a factor of √2, yielding in this case a standard error of about 0.0525. We can observe this behavior in column 4 of Table 2.

As Lemma 6 establishes, the correlation between levels goes to zero as the batch size m increases. We can observe this in column 5 of Table 2.
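For reference, the NBM benchmark N(b; m) in the AR(1) experiment can be reproduced in miniature (a sketch: one replication rather than 4,096, and numpy's default generator in place of L'Ecuyer's combined generator):

```python
import numpy as np

rng = np.random.default_rng(11)
phi, b, m = 0.9, 32, 4096
n = b * m
sigma2 = (1 + phi) / (1 - phi)          # = 19 for phi = 0.9

# stationary AR(1): Y_0 ~ Nor(0,1), eps_i i.i.d. Nor(0, 1 - phi^2)
eps = rng.normal(0.0, np.sqrt(1 - phi ** 2), n)
Y = np.empty(n)
y = rng.normal()                        # stationary start
for i in range(n):
    y = phi * y + eps[i]
    Y[i] = y

# NBM estimator N(b; m): scaled sample variance of the batch means
M = Y.reshape(b, m).mean(axis=1)
nbm = m * np.sum((M - Y.mean()) ** 2) / (b - 1)
print(sigma2, nbm)   # the single-replication NBM value scatters around 19
```

A single realization has standard deviation roughly √(2σ⁴/(b − 1)) ≈ 4.8, so one run lands in a wide band around 19; averaging 4,096 such replications produces the tight estimates reported in Table 1.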

M/M/1 Process

We also consider the stationary queue-waiting-time process for an M/M/1 queue with arrival rate λ and traffic intensity ρ < 1, i.e., a queueing system with Poisson arrivals and first-in-first-out i.i.d. exponential service times at a single server. For this process, we have σ² = ρ³(2 + 5ρ − 4ρ² + ρ³)/[λ²(1 − ρ)⁴]; see, for example, Steiger and Wilson (2001). In this particular example, we set the arrival rate at 0.8 and the service rate at 1.0, so that the server utilization is ρ = 0.8, corresponding to a highly positive autocorrelation structure and variance parameter σ² = 1,976. To demonstrate the convergence of the expected values of the estimators to the variance parameter σ² as the batch size m increases, we show the numerical results in Table 3. Similarly, we show the standard errors and correlations of the estimators in Table 4. In columns 2–4 of Tables 3 and 4, we show the results for the enhanced folded area estimators with the quadratic weight function f_2. We also compare the folded estimators with the nonoverlapping batch means variance estimator; results of this experiment are shown in column 5 of Table 3 and column 6 of Table 4.

Table 3. Estimated Expected Values of the Enhanced Folded Area Estimators versus the Non-Overlapping Batch Means for an M/M/1 Waiting-Time Process with ρ = 0.8, σ² = 1,976, and b = 32, after an initial warm-up period.
m | Ã_0(f_2; 32, m) | Ã_1(f_2; 32, m) | Ā_1(f_2; 32, m) | N(32; m)
… | 1,515 | 1,135 | 1,325 | …
… | 1,827 | 1,584 | 1,705 | …
… | 1,922 | 1,831 | 1,877 | …
… | 1,975 | 1,923 | 1,949 | …
… | 1,974 | 1,972 | 1,973 | …
… | 1,979 | 1,970 | 1,975 | …
… | 1,982 | 1,980 | 1,981 | …
… | 1,977 | 1,972 | 1,975 | 1,957
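The target value σ² = 1,976 quoted in the table captions follows directly from the Steiger and Wilson (2001) formula at these parameter settings:

```python
# Variance parameter of the stationary M/M/1 queue-waiting-time process,
# sigma^2 = rho^3 (2 + 5 rho - 4 rho^2 + rho^3) / [lambda^2 (1 - rho)^4],
# evaluated at the experiment's settings lambda = rho = 0.8.
lam, rho = 0.8, 0.8
sigma2 = rho ** 3 * (2 + 5 * rho - 4 * rho ** 2 + rho ** 3) / (lam ** 2 * (1 - rho) ** 4)
print(sigma2)   # approximately 1976
```

The (1 − ρ)⁴ factor in the denominator explains the severity of the estimation problem: at high traffic intensities the variance parameter grows explosively.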

Table 4. Estimated standard error and correlation of the enhanced folded area estimators versus nonoverlapping batch means for an M/M/1 waiting-time process with ρ = 0.8, σ² = 1,976, b = 32, and an initial warm-up period. [Column headings: m; Ã₀(f₂; 32, m); Ã₁(f₂; 32, m); A₀,₁(f₂; 32, m); Corr(Ã₀, Ã₁; 32, m); N(32; m). The tabulated entries are not recoverable from this transcription.]

We summarize our conclusions from Tables 3 and 4 regarding the expected values and standard errors, respectively, of the variance estimators under consideration as follows. The estimated expected values of all variance estimators converge to σ² (1,976 in this case) as m increases, in accordance with our theoretical results. For small values of m, NBM yields expected values that are superior to those of the folded area estimators, but the folded area estimators quickly catch up to the batch means estimator as m gets larger.

As in the example with the AR(1) process, the combined estimator A₀,₁(f₂; 32, m) attains comparable expected values with the bonus of substantially (about 50%) reduced variance. For the M/M/1 process under study, the asymptotic variance (as the batch size m becomes large) of each single-level estimator is 2σ⁴/b = 244,036, which gives a standard error of approximately 7.72 when performing 4,096 replications. Again, the asymptotic variance for NBM is not significantly larger. However, the theoretical asymptotic variance of the combined estimator A₀,₁(f₂; 32, m) is divided by 2 (see Section 5.3), since we are combining levels 0 and 1. As a result, this estimator has half the asymptotic variance of the other three, while preserving rates of convergence to σ² similar to those of the single-level estimators and catching up relatively quickly with NBM. This variance reduction translates into a reduction of the standard error by a factor of √2, giving in this case a standard error of approximately 5.46. We can observe this behavior in column 4 of Table 4.
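The standard-error arithmetic quoted above can be reproduced directly; every quantity below comes from the text (σ² = 1,976, b = 32 batches, 4,096 independent replications):

```python
from math import sqrt

sigma2, b, reps = 1976.0, 32, 4096

var_single = 2 * sigma2**2 / b        # asymptotic variance of one folding level: 244,036
se_single = sqrt(var_single / reps)   # standard error over 4,096 replications: about 7.72
se_combined = se_single / sqrt(2)     # combining levels 0 and 1 halves the variance: about 5.46
```

Halving the variance reduces the standard error only by a factor of √2, which is why the combined estimator's standard-error column shrinks by roughly 29%, not 50%.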

As Lemma 6 establishes, the correlation between the levels goes to zero as the batch size m increases; we can observe this in column 5 of Table 4. The convergence to zero in this case appears to be slower than in the AR(1) example.

6. Conclusions

The main purpose of this article was to introduce folded versions of the standardized time series area estimator for the variance parameter of a stationary simulation process. We provided theoretical results showing that the folded estimators converge to appropriate functionals of Brownian motion; these convergence results allow us to produce asymptotically unbiased, low-variance estimators using multiple folding levels in conjunction with standard batching techniques. At each folding level, and for each weight function in Example 1, the proposed estimators can be computed in O(n) time; the detailed computations will be presented in a forthcoming article.

Ongoing work includes the following. As in Remark 2, we can derive precise expressions for the expected values of the folded estimators, expressions that show just how quickly any estimator bias dies off as the batch size increases. We can also produce analogous folding results for other primitive STS variance estimators, e.g., for Cramér-von Mises estimators, as described in Antonini (2005). In addition, whatever type of primitive estimator we choose, there is interest in finding the best ways to combine batching and multiple folding levels in order to produce even better estimators for σ², and subsequently good confidence intervals for the underlying steady-state mean µ. We have also planned a large-scale Monte Carlo study to examine estimator performance over a variety of benchmark processes. Future work includes the development of overlapping versions of the folded estimators, as in Alexopoulos et al. (2007b,c) and in Meketon and Schmeiser (1984).
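To illustrate the kind of linear-time computation involved, here is a one-pass implementation of a plain (level-0) standardized time series weighted area estimator with the constant weight f(t) = √12. This is a generic STS area estimator written for illustration here, not the paper's enhanced folded estimator, whose exact definition appears in Definitions 8-11:

```python
import numpy as np

def area_estimator(y):
    """Level-0 STS weighted area estimator with constant weight f(t) = sqrt(12).

    One pass over the batch: O(m) time via cumulative sums.
    """
    m = len(y)
    i = np.arange(1, m + 1)
    running_means = np.cumsum(y) / i                            # Ybar_i, i = 1, ..., m
    sts = i * (running_means[-1] - running_means) / np.sqrt(m)  # sigma * T(i/m)
    return (np.sqrt(12.0) * sts.mean()) ** 2

# For i.i.d. Normal(0, 1) data, sigma^2 = 1 and E[estimator] = 1 - 1/m^2,
# so the average over many batches should be close to 1.
rng = np.random.default_rng(3)
avg = np.mean([area_estimator(rng.normal(size=256)) for _ in range(3000)])
```

Each folding level reuses the same cumulative-sum pass, which is why the total work remains linear in the sample size n.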
Acknowledgements

Partial support for our research was provided by National Science Foundation Grant DMI.

References

Aktaran-Kalaycı, T., Goldsman, D., and Wilson, J. R. (2007) Linear combinations of overlapping variance estimators for simulation. Operations Research Letters, to appear.

Alexopoulos, C., Antonini, C., Goldsman, D., Meterelliyoz, M., and Wilson, J. R. (2007a) Properties of folded standardized time series variance estimators for simulation. Technical Report, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA.

Alexopoulos, C., Argon, N. T., Goldsman, D., Steiger, N. M., Tokol, G., and Wilson, J. R. (2007b) Efficient computation of overlapping variance estimators for simulation. INFORMS Journal on Computing, to appear.

Alexopoulos, C., Argon, N. T., Goldsman, D., Tokol, G., and Wilson, J. R. (2007c) Overlapping variance estimators for simulation. Operations Research, to appear.

Anderson, T. W. (1984) An Introduction to Multivariate Statistical Analysis, 2nd ed., Wiley, New York.

Antonini, C. (2005) Folded variance estimators for stationary time series. Ph.D. dissertation, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA.

Apostol, T. M. (1974) Mathematical Analysis, 2nd ed., Addison-Wesley, Reading, MA.

Bartle, R. G. (1976) The Elements of Real Analysis, 2nd ed., Wiley, New York.

Billingsley, P. (1968) Convergence of Probability Measures, Wiley, New York.

Foley, R. D., and Goldsman, D. (1999) Confidence intervals using orthonormally weighted standardized time series. ACM Transactions on Modeling and Computer Simulation, 9.

Glynn, P. W., and Iglehart, D. L. (1990) Simulation output analysis using standardized time series. Mathematics of Operations Research, 15.

Goldsman, D., Kang, K., Kim, S.-H., Seila, A. F., and Tokol, G. (2007) Combining standardized time series area and Cramér-von Mises variance estimators. Naval Research Logistics, to appear.

Goldsman, D., Meketon, M., and Schruben, L. (1990) Properties of standardized time series weighted area variance estimators. Management Science, 36.

Grimmett, G. R., and Stirzaker, D. R. (1992) Probability and Random Processes, Oxford Science Publications, Oxford.

Law, A. M. (2006) Simulation Modeling and Analysis, 4th ed., McGraw-Hill, New York.

Loève, M. (1978) Probability Theory II, 4th ed., Springer-Verlag, New York.

Meketon, M. S., and Schmeiser, B. W. (1984) Overlapping batch means: Something for nothing?, in Proceedings of the 1984 Winter Simulation Conference, Sheppard, S., Pooch, U., and Pegden, D. (eds.), Institute of Electrical and Electronics Engineers, Piscataway, NJ.

Schruben, L. (1983) Confidence interval estimation using standardized time series. Operations Research, 31.

Shorack, G. R., and Wellner, J. A. (1986) Empirical Processes with Applications to Statistics, Wiley, New York.

Song, W. T., and Schmeiser, B. W. (1995) Optimal mean-squared-error batch sizes. Management Science, 41.

Steiger, N. M., and Wilson, J. R. (2001) Convergence properties of the batch-means method for simulation output analysis. INFORMS Journal on Computing, 13.

Appendix

Proof of Theorem 4.1

Since N_k(f) is the integral of a continuous function over the closed interval [0, 1], its Riemann sum satisfies (see p. 229 of Bartle, 1976)

N_{k,m}(f) ≡ (1/m) Σ_{i=1}^m f(i/m) B_k(i/m) → N_k(f) almost surely as m → ∞, (A1)

where the convergence is almost sure as m → ∞. For fixed k and m, N_{k,m}(f) is normal, since it is a finite linear combination of jointly normal random variables. Furthermore, N_{k,m}(f) has expected value 0 and variance

Var[N_{k,m}(f)] = (1/m²) Σ_{i=1}^m Σ_{j=1}^m f(i/m) f(j/m) Cov[B_k(i/m), B_k(j/m)]
               = ∫₀¹ ∫₀¹ f(s) f(t) [min{s, t} − st] ds dt + O(1/m)
               = 1 + O(1/m).
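The unit-variance normalization can be checked numerically for a concrete weight. For the constant weight f(t) = √12 (a standard normalized choice used here for illustration; the paper's weights from Example 1 are not reproduced in this excerpt), the double integral equals 12(1/3 − 1/4) = 1, and the Riemann-sum approximation in the proof converges to it:

```python
import numpy as np

m = 400
t = np.arange(1, m + 1) / m

# Riemann sum of 12 * (min(s, t) - s*t) over the unit square on an m x m grid.
kernel = np.minimum.outer(t, t) - np.outer(t, t)
approx = 12.0 * kernel.mean()   # this grid gives exactly 1 - 1/m^2
```

The O(1/m) errors of the two pieces cancel for this weight, leaving only an O(1/m²) discrepancy, which is consistent with the 1 + O(1/m) bound in the proof.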

Eq. (A1) implies that the characteristic function of N_{k,m}(f) converges to the characteristic function of N_k(f) as m → ∞ (see p. 172 of Grimmett and Stirzaker, 1992). Since N_{k,m}(f) is normal with mean 0 and variance 1 + O(1/m), its characteristic function is given by

φ_m(t) = E{exp[√(−1) t N_{k,m}(f)]} = exp{−(t²/2)[1 + O(1/m)]}.

It follows immediately that lim_{m→∞} φ_m(t) = exp(−t²/2), the characteristic function of the standard normal distribution; and thus N_k(f) ~ Nor(0, 1).

Proof of Theorem 4.2

By Lemma 4 and Eq. (3),

Cov[N_l(f₁), N_{l+k}(f₂)] = Cov[N₀(f₁), N_k(f₂)]
= ∫₀¹ ∫₀¹ f₁(s) f₂(t) Cov[B₀(s), B_k(t)] ds dt
= Σ_{i=1}^{2^(k−1)} ∫₀¹ ∫₀¹ f₁(s) f₂(t) Cov[B(s), B((i−1)/2^(k−1) + t/2^k) − B(i/2^(k−1) − t/2^k)] ds dt
= Σ_{i=1}^{2^(k−1)} ∫₀¹ ∫₀¹ f₁(s) f₂(t) [min{s, (i−1)/2^(k−1) + t/2^k} − min{s, i/2^(k−1) − t/2^k} + s(1 − t)/2^(k−1)] ds dt.

Carrying out the integration over s on each of the resulting subintervals, repeatedly applying the identity ∫_a^b s f₁(s) ds = bF₁(b) − aF₁(a) − F̄₁(b) + F̄₁(a), where F₁(u) ≡ ∫₀^u f₁(s) ds and F̄₁(u) ≡ ∫₀^u F₁(s) ds, and simplifying yields the expression displayed in the statement of the theorem, from which the result follows.
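Theorem 4.2's covariance formula underlies the asymptotic independence of the folding levels, and a direct Monte Carlo check is easy for the constant weight f(t) = √12 (our illustrative choice, not necessarily the paper's weight f₂): simulate Brownian bridge paths B, fold once via B₁(t) = B(t/2) − B(1 − t/2), and compare the weighted areas of the two levels. Both should have variance near 1 and correlation near 0:

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps, w = 512, 3000, np.sqrt(12.0)

# Brownian motion on the half-grid j/(2n), j = 0, ..., 2n, then the bridge B.
dW = rng.normal(scale=np.sqrt(1.0 / (2 * n)), size=(reps, 2 * n))
W = np.concatenate([np.zeros((reps, 1)), np.cumsum(dW, axis=1)], axis=1)
t = np.arange(2 * n + 1) / (2 * n)
B = W - t * W[:, -1:]                   # Brownian bridge paths

idx = np.arange(n + 1)
B1 = B[:, idx] - B[:, 2 * n - idx]      # one fold: B1(t) = B(t/2) - B(1 - t/2)
B0 = B[:, ::2]                          # level 0 on the coarse grid i/n

# Weighted areas N_0(f) and N_1(f) with f(t) = sqrt(12), as Riemann sums.
n0 = w * B0[:, 1:].mean(axis=1)
n1 = w * B1[:, 1:].mean(axis=1)
corr = np.corrcoef(n0, n1)[0, 1]
```

Up to discretization error, n0 and n1 behave as uncorrelated Nor(0, 1) variates; squaring them gives the two single-level area estimators whose average is the combined estimator with half the variance.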

Proof of Lemma 7

First, we show that every linear combination Σ_{j=0}^k a_j N_j(f) has a normal distribution; hence N(f) has a multivariate normal distribution by the characterization of multivariate normality in terms of linear combinations (Anderson, 1984). Indeed,

Σ_{j=0}^k a_j N_j(f) = ∫₀¹ [Σ_{j=0}^k a_j f(t) B_j(t)] dt.

Further, by Definition 2 and Eq. (4), for each t ∈ [0, 1],

Z(t) ≡ Σ_{j=0}^k a_j f(t) B_j(t)
     = a₀ f(t)[W(t) − tW(1)]
       + Σ_{j=1}^k a_j Σ_{i=1}^{2^(j−1)} f(t)[W((i−1)/2^(j−1) + t/2^j) − W(i/2^(j−1) − t/2^j)]
       + (Σ_{j=1}^k a_j)(1 − t) f(t) W(1).

Now, let c₁, c₂, ..., c_m be real constants and 0 ≤ t₁ < ··· < t_m ≤ 1. Then

Σ_{l=1}^m c_l Z(t_l) = Σ_{l=1}^m c_l a₀ f(t_l)[W(t_l) − t_l W(1)]
  + Σ_{l=1}^m c_l Σ_{j=1}^k a_j Σ_{i=1}^{2^(j−1)} f(t_l)[W((i−1)/2^(j−1) + t_l/2^j) − W(i/2^(j−1) − t_l/2^j)]
  + Σ_{l=1}^m c_l (Σ_{j=1}^k a_j)(1 − t_l) f(t_l) W(1).

Let T be the set of all times of the form (i−1)/2^(j−1) + t_l/2^j or i/2^(j−1) − t_l/2^j, for some l = 1, ..., m, j = 1, ..., k, and i = 1, ..., 2^(j−1). Let {τ₁, ..., τ_L} be an increasing ordering of T ∪ {1}. Since the function f(·) is deterministic, we can clearly write Σ_{l=1}^m c_l Z(t_l) as Σ_{l=1}^L d_l W(τ_l) for some real constants d₁, ..., d_L. Since W is a Gaussian process, the latter summation is Gaussian; and thus Z is a Gaussian process. Notice also that Z has continuous paths because W has continuous paths. Finally, recall that Σ_{j=0}^k a_j N_j(f) = ∫₀¹ Z(t) dt; the same methodology used in the proof of Theorem 4.1 can be used to show that the latter integral is a normal random variable.

To prove that N(f) = [N₀(f), ..., N_k(f)] has a nonsingular multivariate normal distribution, we show that the variance-covariance matrix Σ_N(f) is positive definite. This follows

immediately from Lemma 6, since

a Σ_N(f) aᵀ = Var[Σ_{j=0}^k a_j N_j(f)] = Σ_{j=0}^k a_j² > 0

for all a = (a₀, ..., a_k) ∈ R^(k+1) \ {0}.

Proof of Theorem 5.2

In terms of the definition (1) and Definitions 8-11, we see that Θ_n^k(X_n) = A_k(f; n) for k = 0, 1, ...; and we seek to apply the generalized continuous mapping theorem (CMT), that is, Theorem 5.5 of Billingsley (1968), to prove that the (k+1) × 1 random vector with jth element Θ_n^j(X_n), j = 0, 1, ..., k, converges in distribution to the (k+1) × 1 random vector with jth element Θ^j(W). To verify the hypotheses of the generalized CMT, we establish the following result. In terms of the sets of discontinuities

D_j ≡ {x ∈ D[0, 1] : for some sequence {x_n} ⊂ D[0, 1] converging to x, the sequence {Θ_n^j(x_n)} does not converge to Θ^j(x)} for j = 0, ..., k,

we will show that

Pr{W(·) ∈ D[0, 1] − ∪_{j=0}^k D_j} = 1. (A2)

To prove (A2), we will exploit the almost-sure continuity of the sample paths of W(·):

With probability 1, the function W(t) is continuous at every t ≥ 0; (A3)

see 41.3.A of Loève (1978) or p. 64 of Billingsley (1968). Thus we may assume without loss of generality that we are restricting our attention to an event H ⊂ D[0, 1] for which (A3) holds, so that

Pr{W ∈ H} = 1. (A4)

Suppose that {x_n} ⊂ D[0, 1] converges to W ∈ H and that j ∈ {0, 1, ..., k} is a fixed integer. Next we seek to prove the key intermediate result:

For each W ∈ H and for each sequence {x_n} ⊂ D[0, 1] converging to W, we have lim_{n→∞} |Ψ^j Ω_n x_n(t) − Ψ^j Ω_n W(t)| = 0 uniformly for t ∈ [0, 1]. (A5)

We prove (A5) by induction on j, starting with j = 0. Choose ε > 0 arbitrarily. Throughout the following discussion, W ∈ H and {x_n} are fixed; thus virtually all the quantities introduced in the rest of the proof depend on W and {x_n}. The sample-path continuity property (A3) and Theorem 4.47 of Apostol (1974) imply that W(t) is uniformly continuous on [0, 1]; thus we can find ζ > 0 such that

For all t, t′ ∈ [0, 1] with |t − t′| < ζ, we have |W(t) − W(t′)| < ε/4. (A6)

Because {x_n} converges to W in D[0, 1], there is a sufficiently large integer N such that for each n ≥ N, there exists λ_n(·) ∈ Λ satisfying

sup_{t∈[0,1]} |λ_n(t) − t| < min{ζ, ε/4} (A7)

and

sup_{t∈[0,1]} |x_n(t) − W[λ_n(t)]| < min{ζ, ε/4}. (A8)

When j = 0, the map Ψ^j is the identity; and in this case for each n ≥ N we have

|Ψ^j Ω_n x_n(t) − Ψ^j Ω_n W(t)| = |Ω_n x_n(t) − Ω_n W(t)|
= |[(⌊nt⌋/n) x_n(1) − (⌊nt⌋/n) W(1)] − [x_n(t) − W(t)]|
≤ (⌊nt⌋/n)|x_n(1) − W(1)| + |x_n(t) − W(t)| (A9)
≤ |x_n(1) − W[λ_n(1)]| + |W[λ_n(1)] − W(1)| + |x_n(t) − W[λ_n(t)]| + |W[λ_n(t)] − W(t)| (A10)
≤ ε/4 + ε/4 + ε/4 + ε/4 = ε for t ∈ [0, 1], (A11)

where (A9) and (A10) follow from the triangle inequality and (A11) follows from (A6), (A7), and (A8). This establishes (A5) for j = 0.

Now suppose that (A5) holds for some j ≥ 0. Again we choose ε > 0 arbitrarily. The induction hypothesis ensures that there exists N sufficiently large such that for each n ≥ N, we have

|Ψ^j Ω_n x_n(t) − Ψ^j Ω_n W(t)| < ε/2 for all t ∈ [0, 1]. (A12)

We have

|Ψ^(j+1) Ω_n x_n(t) − Ψ^(j+1) Ω_n W(t)|
= |[Ψ^j Ω_n x_n(t/2) − Ψ^j Ω_n x_n(1 − t/2)] − [Ψ^j Ω_n W(t/2) − Ψ^j Ω_n W(1 − t/2)]| (A13)
≤ |Ψ^j Ω_n x_n(t/2) − Ψ^j Ω_n W(t/2)| + |Ψ^j Ω_n x_n(1 − t/2) − Ψ^j Ω_n W(1 − t/2)| (A14)
< ε/2 + ε/2 = ε for t ∈ [0, 1] and n ≥ N, (A15)

where (A13) follows from Definition 3 of the folding map, (A14) follows from the triangle inequality, and (A15) follows from (A12). This establishes (A5) for j = 0, 1, ....

Since f(·) is continuous on [0, 1] by Assumption A.4, we have

f* ≡ max_{t∈[0,1]} |f(t)| < ∞; (A16)

thus (A4), (A5), and (A16) imply that

lim_{n→∞} |Θ_n^j(x_n) − Θ_n^j(W)| ≤ f* lim_{n→∞} (1/n) Σ_{i=1}^n |Ψ^j Ω_n x_n(i/n) − Ψ^j Ω_n W(i/n)| = 0 with probability 1. (A17)

An argument similar to that justifying (A5) proves that

With probability 1, lim_{n→∞} |Ψ^j Ω_n W(t) − Ψ^j Ω W(t)| = 0.

In view of the almost-sure continuity of the sample paths of W(·) and the continuity of f(·), it is straightforward to show that with probability 1, the function f(t) Ψ^j Ω W(t) is continuous at every t ∈ [0, 1]; and thus it follows that f(t) Ψ^j Ω W(t) is Riemann integrable with probability 1 and that

lim_{n→∞} |Θ_n^j(W) − Θ^j(W)| = lim_{n→∞} |(1/n) Σ_{i=1}^n f(i/n) Ψ^j Ω W(i/n) − ∫₀¹ f(t) Ψ^j Ω W(t) dt| = 0 with probability 1. (A18)

Combining (A17) and (A18) and applying the triangle inequality, we see that for each j ∈ {0, 1, ..., k},

lim_{n→∞} |Θ_n^j(x_n) − Θ^j(W)| ≤ lim_{n→∞} |Θ_n^j(x_n) − Θ_n^j(W)| + lim_{n→∞} |Θ_n^j(W) − Θ^j(W)| = 0 with probability 1.

It follows that the corresponding vector-valued sequence {[Θ_n^0(x_n), ..., Θ_n^k(x_n)] : n = 1, 2, ...} converges to [Θ^0(W), ..., Θ^k(W)] in R^(k+1) with probability one; and thus the desired result (6) follows directly from the generalized CMT.


For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t, CHAPTER 2 FUNDAMENTAL CONCEPTS This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic processes, mean and covariance

More information

THE HEAVY-TRAFFIC BOTTLENECK PHENOMENON IN OPEN QUEUEING NETWORKS. S. Suresh and W. Whitt AT&T Bell Laboratories Murray Hill, New Jersey 07974

THE HEAVY-TRAFFIC BOTTLENECK PHENOMENON IN OPEN QUEUEING NETWORKS. S. Suresh and W. Whitt AT&T Bell Laboratories Murray Hill, New Jersey 07974 THE HEAVY-TRAFFIC BOTTLENECK PHENOMENON IN OPEN QUEUEING NETWORKS by S. Suresh and W. Whitt AT&T Bell Laboratories Murray Hill, New Jersey 07974 ABSTRACT This note describes a simulation experiment involving

More information

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics Chapter 6 Order Statistics and Quantiles 61 Extreme Order Statistics Suppose we have a finite sample X 1,, X n Conditional on this sample, we define the values X 1),, X n) to be a permutation of X 1,,

More information

Asymptotic Statistics-III. Changliang Zou

Asymptotic Statistics-III. Changliang Zou Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Stochastic Processes M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno 1 Outline Stochastic (random) processes. Autocorrelation. Crosscorrelation. Spectral density function.

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Bivariate Uniqueness in the Logistic Recursive Distributional Equation

Bivariate Uniqueness in the Logistic Recursive Distributional Equation Bivariate Uniqueness in the Logistic Recursive Distributional Equation Antar Bandyopadhyay Technical Report # 629 University of California Department of Statistics 367 Evans Hall # 3860 Berkeley CA 94720-3860

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

TOWARDS BETTER MULTI-CLASS PARAMETRIC-DECOMPOSITION APPROXIMATIONS FOR OPEN QUEUEING NETWORKS

TOWARDS BETTER MULTI-CLASS PARAMETRIC-DECOMPOSITION APPROXIMATIONS FOR OPEN QUEUEING NETWORKS TOWARDS BETTER MULTI-CLASS PARAMETRIC-DECOMPOSITION APPROXIMATIONS FOR OPEN QUEUEING NETWORKS by Ward Whitt AT&T Bell Laboratories Murray Hill, NJ 07974-0636 March 31, 199 Revision: November 9, 199 ABSTRACT

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

7.1 Coupling from the Past

7.1 Coupling from the Past Georgia Tech Fall 2006 Markov Chain Monte Carlo Methods Lecture 7: September 12, 2006 Coupling from the Past Eric Vigoda 7.1 Coupling from the Past 7.1.1 Introduction We saw in the last lecture how Markov

More information

Recall the Basics of Hypothesis Testing

Recall the Basics of Hypothesis Testing Recall the Basics of Hypothesis Testing The level of significance α, (size of test) is defined as the probability of X falling in w (rejecting H 0 ) when H 0 is true: P(X w H 0 ) = α. H 0 TRUE H 1 TRUE

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction

DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES GENNADY SAMORODNITSKY AND YI SHEN Abstract. The location of the unique supremum of a stationary process on an interval does not need to be

More information

Asummary and an analysis are given for an experimental performance evaluation of WASSP, an automated

Asummary and an analysis are given for an experimental performance evaluation of WASSP, an automated INFORMS Journal on Computing Vol. 19, No. 2, Spring 2007, pp. 150 160 issn 1091-9856 eissn 1526-5528 07 1902 0150 informs doi 10.1287/ijoc.1050.0161 2007 INFORMS Performance of a Wavelet-Based Spectral

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

IEOR 8100: Topics in OR: Asymptotic Methods in Queueing Theory. Fall 2009, Professor Whitt. Class Lecture Notes: Wednesday, September 9.

IEOR 8100: Topics in OR: Asymptotic Methods in Queueing Theory. Fall 2009, Professor Whitt. Class Lecture Notes: Wednesday, September 9. IEOR 8100: Topics in OR: Asymptotic Methods in Queueing Theory Fall 2009, Professor Whitt Class Lecture Notes: Wednesday, September 9. Heavy-Traffic Limits for the GI/G/1 Queue 1. The GI/G/1 Queue We will

More information

Sequences and Series of Functions

Sequences and Series of Functions Chapter 13 Sequences and Series of Functions These notes are based on the notes A Teacher s Guide to Calculus by Dr. Louis Talman. The treatment of power series that we find in most of today s elementary

More information

Overall Plan of Simulation and Modeling I. Chapters

Overall Plan of Simulation and Modeling I. Chapters Overall Plan of Simulation and Modeling I Chapters Introduction to Simulation Discrete Simulation Analytical Modeling Modeling Paradigms Input Modeling Random Number Generation Output Analysis Continuous

More information

1 Stochastic Dynamic Programming

1 Stochastic Dynamic Programming 1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future

More information

Theoretical Statistics. Lecture 1.

Theoretical Statistics. Lecture 1. 1. Organizational issues. 2. Overview. 3. Stochastic convergence. Theoretical Statistics. Lecture 1. eter Bartlett 1 Organizational Issues Lectures: Tue/Thu 11am 12:30pm, 332 Evans. eter Bartlett. bartlett@stat.

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Lecture 4: September Reminder: convergence of sequences

Lecture 4: September Reminder: convergence of sequences 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 4: September 6 In this lecture we discuss the convergence of random variables. At a high-level, our first few lectures focused

More information

Gaussian processes. Basic Properties VAG002-

Gaussian processes. Basic Properties VAG002- Gaussian processes The class of Gaussian processes is one of the most widely used families of stochastic processes for modeling dependent data observed over time, or space, or time and space. The popularity

More information

Economics 583: Econometric Theory I A Primer on Asymptotics

Economics 583: Econometric Theory I A Primer on Asymptotics Economics 583: Econometric Theory I A Primer on Asymptotics Eric Zivot January 14, 2013 The two main concepts in asymptotic theory that we will use are Consistency Asymptotic Normality Intuition consistency:

More information

Closest Moment Estimation under General Conditions

Closest Moment Estimation under General Conditions Closest Moment Estimation under General Conditions Chirok Han Victoria University of Wellington New Zealand Robert de Jong Ohio State University U.S.A October, 2003 Abstract This paper considers Closest

More information

Homework # , Spring Due 14 May Convergence of the empirical CDF, uniform samples

Homework # , Spring Due 14 May Convergence of the empirical CDF, uniform samples Homework #3 36-754, Spring 27 Due 14 May 27 1 Convergence of the empirical CDF, uniform samples In this problem and the next, X i are IID samples on the real line, with cumulative distribution function

More information

Chapter 11. Output Analysis for a Single Model Prof. Dr. Mesut Güneş Ch. 11 Output Analysis for a Single Model

Chapter 11. Output Analysis for a Single Model Prof. Dr. Mesut Güneş Ch. 11 Output Analysis for a Single Model Chapter Output Analysis for a Single Model. Contents Types of Simulation Stochastic Nature of Output Data Measures of Performance Output Analysis for Terminating Simulations Output Analysis for Steady-state

More information

Statistical Data Analysis

Statistical Data Analysis DS-GA 0 Lecture notes 8 Fall 016 1 Descriptive statistics Statistical Data Analysis In this section we consider the problem of analyzing a set of data. We describe several techniques for visualizing the

More information

Simulation. Where real stuff starts

Simulation. Where real stuff starts Simulation Where real stuff starts March 2019 1 ToC 1. What is a simulation? 2. Accuracy of output 3. Random Number Generators 4. How to sample 5. Monte Carlo 6. Bootstrap 2 1. What is a simulation? 3

More information

Example 4.1 Let X be a random variable and f(t) a given function of time. Then. Y (t) = f(t)x. Y (t) = X sin(ωt + δ)

Example 4.1 Let X be a random variable and f(t) a given function of time. Then. Y (t) = f(t)x. Y (t) = X sin(ωt + δ) Chapter 4 Stochastic Processes 4. Definition In the previous chapter we studied random variables as functions on a sample space X(ω), ω Ω, without regard to how these might depend on parameters. We now

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

A Multivariate Two-Sample Mean Test for Small Sample Size and Missing Data

A Multivariate Two-Sample Mean Test for Small Sample Size and Missing Data A Multivariate Two-Sample Mean Test for Small Sample Size and Missing Data Yujun Wu, Marc G. Genton, 1 and Leonard A. Stefanski 2 Department of Biostatistics, School of Public Health, University of Medicine

More information

Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore

Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore Stochastic Structural Dynamics Prof. Dr. C. S. Manohar Department of Civil Engineering Indian Institute of Science, Bangalore Lecture No. # 33 Probabilistic methods in earthquake engineering-2 So, we have

More information

The square root rule for adaptive importance sampling

The square root rule for adaptive importance sampling The square root rule for adaptive importance sampling Art B. Owen Stanford University Yi Zhou January 2019 Abstract In adaptive importance sampling, and other contexts, we have unbiased and uncorrelated

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random

More information

Large Sample Properties of Estimators in the Classical Linear Regression Model

Large Sample Properties of Estimators in the Classical Linear Regression Model Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

General Glivenko-Cantelli theorems

General Glivenko-Cantelli theorems The ISI s Journal for the Rapid Dissemination of Statistics Research (wileyonlinelibrary.com) DOI: 10.100X/sta.0000......................................................................................................

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

IEOR 4701: Stochastic Models in Financial Engineering. Summer 2007, Professor Whitt. SOLUTIONS to Homework Assignment 9: Brownian motion

IEOR 4701: Stochastic Models in Financial Engineering. Summer 2007, Professor Whitt. SOLUTIONS to Homework Assignment 9: Brownian motion IEOR 471: Stochastic Models in Financial Engineering Summer 27, Professor Whitt SOLUTIONS to Homework Assignment 9: Brownian motion In Ross, read Sections 1.1-1.3 and 1.6. (The total required reading there

More information

2. Variance and Covariance: We will now derive some classic properties of variance and covariance. Assume real-valued random variables X and Y.

2. Variance and Covariance: We will now derive some classic properties of variance and covariance. Assume real-valued random variables X and Y. CS450 Final Review Problems Fall 08 Solutions or worked answers provided Problems -6 are based on the midterm review Identical problems are marked recap] Please consult previous recitations and textbook

More information

Copyright 2010 Pearson Education, Inc. Publishing as Prentice Hall.

Copyright 2010 Pearson Education, Inc. Publishing as Prentice Hall. .1 Limits of Sequences. CHAPTER.1.0. a) True. If converges, then there is an M > 0 such that M. Choose by Archimedes an N N such that N > M/ε. Then n N implies /n M/n M/N < ε. b) False. = n does not converge,

More information

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017

Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Put your solution to each problem on a separate sheet of paper. Problem 1. (5106) Let X 1, X 2,, X n be a sequence of i.i.d. observations from a

More information

Max stable Processes & Random Fields: Representations, Models, and Prediction

Max stable Processes & Random Fields: Representations, Models, and Prediction Max stable Processes & Random Fields: Representations, Models, and Prediction Stilian Stoev University of Michigan, Ann Arbor March 2, 2011 Based on joint works with Yizao Wang and Murad S. Taqqu. 1 Preliminaries

More information

Lecture - 30 Stationary Processes

Lecture - 30 Stationary Processes Probability and Random Variables Prof. M. Chakraborty Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 30 Stationary Processes So,

More information

Chapter 2 Random Processes

Chapter 2 Random Processes Chapter 2 Random Processes 21 Introduction We saw in Section 111 on page 10 that many systems are best studied using the concept of random variables where the outcome of a random experiment was associated

More information