ACCOUNTING FOR INPUT-MODEL AND INPUT-PARAMETER UNCERTAINTIES IN SIMULATION. <www.ie.ncsu.edu/jwilson> May 22, 2006


Slide 1: ACCOUNTING FOR INPUT-MODEL AND INPUT-PARAMETER UNCERTAINTIES IN SIMULATION. Faker Zouaoui, Sabre Holdings; James R. Wilson, NC State University <www.ie.ncsu.edu/jwilson>. May 22, 2006.

Slide 2: From American Scientist, September-October 2004.

OVERVIEW

Slide 3:
I. Introduction
   A. Objectives
   B. Structure of the Simulation Experiment
   C. Bayesian Model Averaging (BMA)
II. BMA-Based Simulation Replication Algorithm
   A. Variance Decomposition
   B. Replication-Allocation Procedures
   C. Output Analysis
III. Application to M/G/1 Queueing Simulation
IV. Conclusions and Recommendations

I. Introduction

A. Objectives

Slide 4: Formulation of a Bayesian approach to selection of input models in stochastic simulation that accounts for 3 main sources of uncertainty:

1. Stochastic uncertainty arises from the dependence of the simulation output on the random numbers generated and used during each run.
2. Model uncertainty arises when choosing between different types of input models with different functional forms that adequately fit available sample data or subjective information.
3. Parameter uncertainty arises when the parameters of the selected input model(s) are unknown and must be estimated from sample data or subjective information.

Evaluation of the performance of the Bayesian approach versus conventional frequentist techniques.

B. Structure of the Simulation Experiment

Slide 5: The output of interest y is an unknown function of the random-number input u, the input model M, and its parameter vector \theta_M:

    y = y(u, M, \theta_M).

Let

    \eta(M, \theta_M) \equiv E_u[ y(u, M, \theta_M) \mid M, \theta_M ] = \int y(u, M, \theta_M) \, du

denote the conditional expected value of y given M and \theta_M. Let \mathcal{M} = \{ M_k : k = 1, \ldots, K \} denote the set of adequate input models that fit the data X. The kth model M_k has prior probability p(M_k) and parameter vector \theta_k with prior distribution p(\theta_k | M_k).

Slide 6: We seek point and confidence-interval estimators of the posterior mean

    E(y | X) = \sum_{k=1}^{K} p(M_k | X) \int \eta(M_k, \theta_k) \, p(\theta_k | X, M_k) \, d\theta_k,

where p(\theta_k | X, M_k) is the posterior distribution of \theta_k under model M_k and p(M_k | X) is the posterior probability of model M_k.

C. Bayesian Model Averaging (BMA)

Slide 7: The posterior model probabilities p(M_k | X) are computed as

    p(M_k | X) = \frac{ p(M_k) \, p(X | M_k) }{ \sum_{j=1}^{K} p(M_j) \, p(X | M_j) }   for k = 1, ..., K,   (1)

where the marginal data density given input model M_k is

    p(X | M_k) = \int p(X | M_k, \theta_k) \, p(\theta_k | M_k) \, d\theta_k   for k = 1, ..., K.   (2)

The posterior distributions p(\theta_k | X, M_k) are computed using Bayes' rule as

    p(\theta_k | X, M_k) = \frac{ p(\theta_k | M_k) \, p(X | M_k, \theta_k) }{ p(X | M_k) }   for k = 1, ..., K.   (3)

Computational methods: numerical integration, asymptotic approximations, Markov chain Monte Carlo (MCMC) methods.

II. BMA-Based Simulation Replication Algorithm

Slide 8:

    for k = 1, ..., K
        set the input model M <- M_k
        for r = 1, ..., R_k
            generate the rth sample \theta^r independently from p(\theta | X, M)
            set the input-parameter vector \theta <- \theta^r
            for j = 1, ..., m
                generate the jth sample u_j of i.i.d. random numbers
                set the random-number input u <- u_j
                perform the jth simulation run using u, M, and \theta
                calculate the simulation output response y_{krj} = y(u, M, \theta)
            end loop
            compute \bar{y}_{kr} = \sum_{j=1}^{m} y_{krj} / m
        end loop
        compute the grand mean for the kth input model, \bar{y}_k = \sum_{r=1}^{R_k} \bar{y}_{kr} / R_k
    end loop
    compute the weighted mean \hat{\beta} = \sum_{k=1}^{K} p(M_k | X) \bar{y}_k to estimate E(y | X)
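To make the algorithm concrete, here is a minimal Python sketch under stated assumptions: `sample_posterior_params` and `run_simulation` are hypothetical callbacks standing in for a model-specific sampler from p(\theta | X, M_k) and a single simulation run y(u, M, \theta); neither is specified on the slides.

```python
import numpy as np

def bma_replication(models, post_probs, R, m, sample_posterior_params, run_simulation, rng):
    """Sketch of the BMA-Based Simulation Replication Algorithm.

    models:      list of K input-model identifiers M_1, ..., M_K
    post_probs:  posterior model probabilities p(M_k | X), length K
    R:           replication counts R_1, ..., R_K
    m:           simulation runs per sampled parameter vector
    sample_posterior_params(model, rng): hypothetical draw from p(theta | X, M_k)
    run_simulation(model, theta, rng):   hypothetical single run returning y(u, M, theta)
    """
    outputs = []                                   # outputs[k] is an (R_k, m) array of y_{krj}
    for k, model in enumerate(models):
        y_k = np.empty((R[k], m))
        for r in range(R[k]):
            theta = sample_posterior_params(model, rng)        # theta^r ~ p(theta | X, M_k)
            for j in range(m):
                y_k[r, j] = run_simulation(model, theta, rng)  # fresh random numbers u_j
        outputs.append(y_k)
    ybar = np.array([y_k.mean() for y_k in outputs])           # grand means ybar_k
    beta_hat = float(np.dot(post_probs, ybar))                 # weighted mean estimating E(y | X)
    return beta_hat, outputs
```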

A. Variance Decomposition

Basic Assumptions

Slide 9: The response from the jth run using the random-number input u_j, the kth input model M_k, and the rth sample of the associated input-model parameters \theta_k^r is

    y_{krj} = y(u_j, M_k, \theta_k^r) = \eta(M_k, \theta_k^r) + e_j(u_j, M_k, \theta_k^r)   (4)

for k = 1, ..., K; r = 1, ..., R_k; and j = 1, ..., m, where

    E_{u_j}[ e_j(u_j, M_k, \theta_k^r) | X, M_k, \theta_k^r ] = 0,   (5)
    Var_{u_j}[ e_j(u_j, M_k, \theta_k^r) | X, M_k, \theta_k^r ] = \tau_k^2,   (6)

so that

    E_{u_j}( y_{krj} | X, M_k, \theta_k^r ) = \eta(M_k, \theta_k^r)   and
    Var_{u_j}( y_{krj} | X, M_k, \theta_k^r ) = \tau_k^2

for k = 1, ..., K; r = 1, ..., R_k; and j = 1, ..., m.

Slide 10: We take

    \eta(M_k, \theta_k^r) = \beta_k + \delta_{kr}(M_k, \theta_k^r)   (7)

for k = 1, ..., K and r = 1, ..., R_k, where

    \beta_k \equiv E_{\theta_k^r}[ \eta(M_k, \theta_k^r) | X, M_k ] = \int \eta(M_k, \theta_k) \, p(\theta_k | X, M_k) \, d\theta_k = E(y | X, M_k)   (8)

is the posterior mean response given the kth input model for k = 1, ..., K, so that we have

    E_{\theta_k^r}[ \delta_{kr}(M_k, \theta_k^r) | X, M_k ] = 0   and
    Var_{\theta_k^r}[ \delta_{kr}(M_k, \theta_k^r) | X, M_k ] = \sigma_k^2   (9)

for k = 1, ..., K.

Main Variance-Decomposition Result

Slide 11: For simplicity, let p_k \equiv p(M_k | X) denote the kth posterior model probability for k = 1, ..., K. Based on assumptions (4)-(9), the posterior variance of y has the decomposition

    Var(y | X) = \sum_{k=1}^{K} p_k ( \beta_k - \beta )^2   [from model uncertainty]
               + \sum_{k=1}^{K} p_k \sigma_k^2              [from parameter uncertainty]
               + \sum_{k=1}^{K} p_k \tau_k^2                [from stochastic uncertainty],

where \beta \equiv E(y | X) = \sum_{k=1}^{K} p_k \beta_k is the posterior mean response.

Estimating the Variance Components

Slide 12: To estimate the \{ \sigma_k^2 \}, we also assume

    Cov_{u_j, \theta_k^r}[ \eta(M_k, \theta_k^r), e_j(u_j, M_k, \theta_k^r) | X, M_k ] = 0   (10)

for k = 1, ..., K; r = 1, ..., R_k; and j = 1, ..., m. Using (10), we estimate \{ \beta_k \}, \{ \tau_k^2 \}, and \{ \sigma_k^2 \} from the output of the BMA-Based Simulation Replication Algorithm as follows:

    \hat{\beta}_k = \bar{y}_k,

    \hat{\tau}_k^2 = \frac{1}{ R_k ( m - 1 ) } \sum_{r=1}^{R_k} \sum_{j=1}^{m} ( y_{krj} - \bar{y}_{kr} )^2,

    \hat{\sigma}_k^2 = \frac{1}{ R_k - 1 } \sum_{r=1}^{R_k} ( \bar{y}_{kr} - \bar{y}_k )^2 - \frac{ \hat{\tau}_k^2 }{ m }

for k = 1, ..., K.
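A short sketch of these component estimators, written to consume the (R_k, m) output array produced for each model by the algorithm above; the function name is mine. Note that \hat{\sigma}_k^2 can come out slightly negative by chance when R_k is small, in which case truncating it at zero is a common practical fix (my suggestion, not from the slides).

```python
import numpy as np

def variance_components(y_k):
    """Estimate beta_k, tau_k^2, and sigma_k^2 from an (R_k, m) array of outputs y_{krj}."""
    R_k, m = y_k.shape
    ybar_kr = y_k.mean(axis=1)                  # within-sample means ybar_{kr}
    beta_hat = ybar_kr.mean()                   # ybar_k, the estimate of beta_k
    tau2_hat = ((y_k - ybar_kr[:, None]) ** 2).sum() / (R_k * (m - 1))
    sigma2_hat = ((ybar_kr - beta_hat) ** 2).sum() / (R_k - 1) - tau2_hat / m
    return beta_hat, tau2_hat, sigma2_hat
```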

Estimating the Posterior Mean

Slide 13: Based on the \{ \hat{\beta}_k : k = 1, ..., K \}, we estimate the posterior mean response E(y | X) as

    \hat{\beta} = \sum_{k=1}^{K} p_k \hat{\beta}_k.

Estimating the Posterior Variance

Based on the \{ \hat{\tau}_k^2, \hat{\sigma}_k^2 : k = 1, ..., K \}, we estimate the posterior response variance Var(y | X) as

    \widehat{Var}(y | X) = \sum_{k=1}^{K} p_k ( \hat{\beta}_k - \hat{\beta} )^2 + \sum_{k=1}^{K} p_k \hat{\sigma}_k^2 + \sum_{k=1}^{K} p_k \hat{\tau}_k^2 = \hat{V}_{mod} + \hat{V}_{par} + \hat{V}_{sto}.

B. Replication-Allocation Procedures

Optimal Allocation Procedure (OAP)

Slide 14: Decision variables: the sample sizes \{ R_k : k = 1, ..., K \} that are allocated to the input models \{ M_k : k = 1, ..., K \}. Objective function: the variance of the posterior mean estimator is

    Var( \hat{\beta} ) = \sum_{k=1}^{K} \frac{ p_k^2 [ \sigma_k^2 + \tau_k^2 / m ] }{ R_k }.

Optimization problem:

    \min_{ \{ R_k : 1 \le k \le K \} } \sum_{k=1}^{K} \frac{ p_k^2 [ \sigma_k^2 + \tau_k^2 / m ] }{ R_k }
    subject to: \sum_{k=1}^{K} R_k = N / m \equiv \bar{N}.   (11)

Optimal Allocation Scheme

Slide 15: If \psi_k \equiv \sigma_k^2 + \tau_k^2 / m, then solving (11) yields the optimal replication counts

    R_k^* = \bar{N} \, \frac{ p_k \sqrt{ \psi_k } }{ \sum_{i=1}^{K} p_i \sqrt{ \psi_i } }   for k = 1, ..., K.   (12)

Approximately Optimal Allocation Scheme: Make a small equal number of pilot runs using each input model M_k for k = 1, ..., K. Estimate \psi_k by \hat{\psi}_k = \hat{\sigma}_k^2 + \hat{\tau}_k^2 / m for k = 1, ..., K. Allocate the rest of the runs according to (12).

Proportional Allocation Procedure (PAP)

Slide 16: One easily implemented solution to (11) is the proportional allocation scheme

    R_k = p_k \bar{N}   for k = 1, ..., K,   (13)

yielding the weighted mean response \hat{\beta}_{pa} computed over N runs.

Simple Random Sampling (SRS) Procedure: At the start of each run, randomly sample a new input model and its input-parameter vector from their respective posterior distributions; and compute the (unweighted) average response \hat{\beta}_{srs} over N independent runs.

Variance-Reduction Result:

    Var( \hat{\beta}_{srs} | X ) = Var( \hat{\beta}_{pa} | X ) + \frac{1}{N} \sum_{k=1}^{K} p_k [ ( \beta_k - \beta )^2 + ( m - 1 ) \sigma_k^2 ].
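The two allocation rules reduce to a few lines of code. This sketch adds integer rounding with a floor of one replication per model, a practical detail of mine rather than part of the slides:

```python
import numpy as np

def optimal_allocation(N_bar, p, sigma2, tau2, m):
    """Replication counts R_k* from (12): R_k* proportional to p_k * sqrt(psi_k)."""
    psi = np.asarray(sigma2) + np.asarray(tau2) / m   # psi_k = sigma_k^2 + tau_k^2 / m
    w = np.asarray(p) * np.sqrt(psi)
    R = N_bar * w / w.sum()
    return np.maximum(1, np.rint(R)).astype(int)

def proportional_allocation(N_bar, p):
    """Replication counts R_k from (13): R_k = p_k * N_bar."""
    return np.maximum(1, np.rint(N_bar * np.asarray(p))).astype(int)
```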

C. Output Analysis

Slide 17: Variance estimator:

    \widehat{Var}( \hat{\beta} ) = \sum_{k=1}^{K} \frac{ p_k^2 ( \hat{\sigma}_k^2 + \hat{\tau}_k^2 / m ) }{ R_k } = \sum_{k=1}^{K} p_k^2 \hat{V}_k,   (14)

where

    \hat{V}_k = \frac{ \sum_{r=1}^{R_k} ( \bar{y}_{kr} - \bar{y}_k )^2 }{ R_k ( R_k - 1 ) }

for k = 1, ..., K. Satterthwaite approximation: (14) is approximately chi-squared with effective degrees of freedom

    f_{eff} = \left( \sum_{k=1}^{K} p_k^2 \hat{V}_k \right)^2 \Big/ \sum_{k=1}^{K} \frac{ p_k^4 \hat{V}_k^2 }{ R_k - 1 }.   (15)

Slide 18: Approximate confidence interval: An approximate 100(1 - \alpha)% confidence interval for \beta is

    \sum_{k=1}^{K} p_k \bar{y}_k \pm t_{ 1 - \alpha/2, \, f_{eff} } \left( \sum_{k=1}^{K} p_k^2 \hat{V}_k \right)^{1/2}.   (16)

Whether or not assumption (10) is satisfied, (16) is still an approximately valid 100(1 - \alpha)% confidence interval for \beta; and we have E[ R_k \hat{V}_k ] = \psi_k for k = 1, ..., K, as required in the optimal allocation scheme (12).
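Putting (14)-(16) together, a sketch of the interval computation; it uses scipy's Student-t quantile, which handles the fractional degrees of freedom from (15) directly. The function name and argument layout are mine:

```python
import numpy as np
from scipy import stats

def bma_confidence_interval(ybar_kr_list, p, alpha=0.10):
    """Approximate 100(1 - alpha)% CI for beta via (14)-(16).

    ybar_kr_list: per-model arrays of the sample means ybar_{kr}, r = 1, ..., R_k
    p:            posterior model probabilities p_k
    """
    p = np.asarray(p)
    R = np.array([len(y) for y in ybar_kr_list])
    ybar = np.array([y.mean() for y in ybar_kr_list])                 # grand means ybar_k
    V = np.array([((y - y.mean()) ** 2).sum() / (len(y) * (len(y) - 1))
                  for y in ybar_kr_list])                             # V_k in (14)
    var_beta = np.sum(p**2 * V)                                       # (14)
    f_eff = var_beta**2 / np.sum(p**4 * V**2 / (R - 1))               # (15)
    center = np.sum(p * ybar)
    half = stats.t.ppf(1.0 - alpha / 2.0, f_eff) * np.sqrt(var_beta)  # (16)
    return center - half, center + half
```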

III. Application to M/G/1 Queueing Simulation

Slide 19: Available system information:
- Poisson arrivals with unknown arrival rate \lambda; and
- Service times randomly sampled from an unknown distribution, with gamma and lognormal input models equally plausible.

True system configuration:
- Arrival rate \lambda_0 = 5.7 customers per time unit;
- Service times randomly sampled from the Pareto c.d.f.

    G_X(x) = 0 for x < \xi;   G_X(x) = 1 - ( \xi / x )^{\omega} for x \ge \xi,

  where \xi = 0.1 and \omega = 3.5, so that the server utilization is 80%; and
- The Pollaczek-Khintchine formula implies that the mean queue waiting time is

    \beta_0 = \frac{ \lambda_0 \xi^2 \omega ( \omega - 1 ) }{ 2 ( \omega - 2 )( \omega - 1 - \lambda_0 \xi \omega ) } = 0.333 time units.

Slide 20: Input model for interarrival times:

    M_{1,1}: p( x_{1,i} | \lambda_{1,1} ) = \lambda_{1,1} \exp( -\lambda_{1,1} x_{1,i} )   for x_{1,i} \ge 0,

where \lambda_{1,1} was unknown; and we observed a sample data set of size n = 1,000 from the true interarrival-time distribution.

Input models for service times:

Gamma distribution:

    M_{2,1}: p( x_{2,i} | M_{2,1}, \alpha_{2,1}, \mu_{2,1} ) = \frac{ \mu_{2,1}^{ \alpha_{2,1} } }{ \Gamma( \alpha_{2,1} ) } x_{2,i}^{ \alpha_{2,1} - 1 } \exp( -\mu_{2,1} x_{2,i} )   for x_{2,i} > 0.

Lognormal distribution:

    M_{2,2}: p( x_{2,i} | M_{2,2}, \mu_{2,2}, \sigma_{2,2}^2 ) = \frac{ 1 }{ x_{2,i} \sqrt{ 2\pi } \, \sigma_{2,2} } \exp\left\{ - \frac{ [ \ln( x_{2,i} ) - \mu_{2,2} ]^2 }{ 2 \sigma_{2,2}^2 } \right\}   for x_{2,i} > 0.

We take prior model probabilities p( M_{1,1} ) = 1 and p( M_{2,k} ) = 1/2 for k = 1, 2.
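As a sanity check on the true-system figures above, the utilization and the Pollaczek-Khintchine mean wait follow directly from the Pareto moments. In this sketch \lambda_0 is backed out of the 80% utilization target, giving 40/7 = 5.714..., which the slide reports rounded to 5.7:

```python
xi, omega, rho = 0.1, 3.5, 0.80
mean_service = xi * omega / (omega - 1.0)    # Pareto mean E[X] = 0.14
lam0 = rho / mean_service                    # arrival rate giving 80% utilization
EX2 = xi**2 * omega / (omega - 2.0)          # Pareto second moment E[X^2]
beta0 = lam0 * EX2 / (2.0 * (1.0 - rho))     # Pollaczek-Khintchine mean queue wait
print(round(lam0, 3), round(beta0, 4))       # 5.714 0.3333 time units
```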

Setup for Using the BMA-Based Simulation Replication Algorithm

Slide 21: For the qth input process (where q = 1, 2), we used a training sample z_q = { z_{q,1}, ..., z_{q,T} } of size T = 100 to obtain proper prior distributions for the corresponding parameter vectors that are used to calculate the marginal data densities (2). For the qth input process (where q = 1, 2), we used a regular sample x_q = { x_{q,1}, ..., x_{q,n} } of size n = 1,000 to obtain the posterior distributions (3) for the corresponding parameter vectors.

Complete input models for the M/G/1 queueing simulation

Slide 22: Input model M_1 = ( M_{1,1}, M_{2,1} ) has prior probability p( M_1 ) = p( M_{1,1} ) p( M_{2,1} ) = 1/2 and input-parameter vector \theta_1 = ( \lambda_{1,1}, \alpha_{2,1}, \mu_{2,1} ); and given M_1 and \theta_1, the data X = ( x_1, x_2 ) have the joint conditional p.d.f.

    p( X | M_1, \theta_1 ) = \prod_{i=1}^{n} \lambda_{1,1} \exp( -\lambda_{1,1} x_{1,i} ) \, \frac{ \mu_{2,1}^{ \alpha_{2,1} } }{ \Gamma( \alpha_{2,1} ) } x_{2,i}^{ \alpha_{2,1} - 1 } \exp( -\mu_{2,1} x_{2,i} ).

Slide 23: Input model M_2 = ( M_{1,1}, M_{2,2} ) has prior probability p( M_2 ) = p( M_{1,1} ) p( M_{2,2} ) = 1/2 and input-parameter vector \theta_2 = ( \lambda_{1,1}, \mu_{2,2}, \sigma_{2,2}^2 ); and given M_2 and \theta_2, the data X = ( x_1, x_2 ) have the joint conditional p.d.f.

    p( X | M_2, \theta_2 ) = \prod_{i=1}^{n} \lambda_{1,1} \exp( -\lambda_{1,1} x_{1,i} ) \, \frac{ 1 }{ x_{2,i} \sqrt{ 2\pi } \, \sigma_{2,2} } \exp\left\{ - \frac{ [ \ln( x_{2,i} ) - \mu_{2,2} ]^2 }{ 2 \sigma_{2,2}^2 } \right\}.

Calculation of Data Densities

Slide 24: Marginal density of the interarrival-time data:

    p( x_1 | M_{1,1} ) = \frac{ \Gamma( n + T ) \left( \sum_{t=1}^{T} z_{1,t} \right)^{T} }{ \Gamma( T ) \left( \sum_{t=1}^{T} z_{1,t} + \sum_{i=1}^{n} x_{1,i} \right)^{ n + T } }.

Marginal density of the service-time data given the gamma input model:

    p( x_2 | M_{2,1} ) = \frac{ \Gamma( n \nu_{2,1} + T \nu_{2,1} ) }{ \Gamma( T \nu_{2,1} ) \, \Gamma^{n}( \nu_{2,1} ) } \cdot \frac{ \left( \prod_{i=1}^{n} x_{2,i} \right)^{ \nu_{2,1} - 1 } \left( \sum_{t=1}^{T} z_{2,t} \right)^{ T \nu_{2,1} } }{ \left( \sum_{t=1}^{T} z_{2,t} + \sum_{i=1}^{n} x_{2,i} \right)^{ n \nu_{2,1} + T \nu_{2,1} } },

where \nu_{2,1} = \bar{z}_2^2 / S_{z_2}^2 and we let \bar{z}_2 and S_{z_2}^2 respectively denote the mean and variance of the service-time training sample z_2.
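These marginal densities are best evaluated in log space to avoid overflow in the gamma functions and large powers. A sketch using scipy's gammaln, following the two formulas above as reconstructed here (the lognormal case on the next slide is analogous); posterior model probabilities then follow from (1) by exponentiating and normalizing the log marginals plus log priors:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_interarrival(x1, z1):
    """log p(x_1 | M_{1,1}) from Slide 24, computed in log space for stability."""
    n, T = len(x1), len(z1)
    return (gammaln(n + T) - gammaln(T)
            + T * np.log(np.sum(z1))
            - (n + T) * np.log(np.sum(z1) + np.sum(x1)))

def log_marginal_gamma_service(x2, z2):
    """log p(x_2 | M_{2,1}) from Slide 24, with nu = zbar^2 / S_z^2 from training data."""
    n, T = len(x2), len(z2)
    nu = np.mean(z2)**2 / np.var(z2, ddof=1)
    return (gammaln(n * nu + T * nu) - gammaln(T * nu) - n * gammaln(nu)
            + (nu - 1.0) * np.sum(np.log(x2))
            + T * nu * np.log(np.sum(z2))
            - (n * nu + T * nu) * np.log(np.sum(z2) + np.sum(x2)))
```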

Slide 25: Marginal density of the service-time data given the lognormal input model:

    p( x_2 | M_{2,2} ) = \frac{ \pi^{ -n/2 } \, ( T - 1 )^{ (T-1)/2 } \, \Gamma[ ( n + T - 1 )/2 ] \, S_v^{ T - 1 } \, T^{ 1/2 } }{ \Gamma[ ( T - 1 )/2 ] \, ( n + T - 1 )^{ (n+T-1)/2 } \, S^{ n + T - 1 } \, ( T + n )^{ 1/2 } \, \prod_{i=1}^{n} x_{2,i} },

where

    S^2 = \frac{ ( T - 1 ) S_v^2 }{ n + T - 1 } + \frac{ ( n - 1 ) S_w^2 }{ n + T - 1 } + \frac{ n T ( \bar{w} - \bar{v} )^2 }{ ( n + T - 1 )( n + T ) },

and where \bar{w} and S_w^2 respectively denote the sample mean and variance of the logged regular sample { w_{2,i} \equiv \ln( x_{2,i} ) : i = 1, ..., n }; and \bar{v} and S_v^2 respectively denote the sample mean and variance of the logged training sample { v_{2,t} \equiv \ln( z_{2,t} ) : t = 1, ..., T }.

Calculation of Posterior Densities

Slide 26: Posterior model probabilities are calculated from the marginal data densities via (1).

Input model M_{1,1} for interarrival times: p( \lambda_{1,1} | x_1, M_{1,1} ) is a gamma p.d.f. with shape parameter n and scale parameter 1 / \sum_{i=1}^{n} x_{1,i}.

Input model M_{2,1} for service times: We took \hat{\alpha}_{2,1} = \bar{x}_2^2 / S_{x_2}^2 as if it were the true value of \alpha_{2,1}, where \bar{x}_2 and S_{x_2}^2 respectively denote the mean and variance of x_2. Then p( \mu_{2,1} | x_2, M_{2,1} ) is a gamma p.d.f. with shape parameter n \hat{\alpha}_{2,1} and scale parameter 1 / \sum_{i=1}^{n} x_{2,i}.
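To make the posterior-sampling step of the replication algorithm concrete for model M_1 = ( M_{1,1}, M_{2,1} ), here is a hedged sketch of one posterior draw using the gamma posteriors just described; the function name is mine, and numpy's gamma sampler takes the shape and scale parameters directly:

```python
import numpy as np

def sample_theta_M1(x1, x2, rng):
    """One posterior draw of theta_1 = (lambda_{1,1}, alpha_{2,1}, mu_{2,1}) under M_1."""
    n = len(x1)
    lam = rng.gamma(shape=n, scale=1.0 / np.sum(x1))             # p(lambda_{1,1} | x_1, M_{1,1})
    alpha_hat = np.mean(x2)**2 / np.var(x2, ddof=1)              # plug-in shape, as on Slide 26
    mu = rng.gamma(shape=n * alpha_hat, scale=1.0 / np.sum(x2))  # p(mu_{2,1} | x_2, M_{2,1})
    return lam, alpha_hat, mu

# usage sketch: rng = np.random.default_rng(1); theta1 = sample_theta_M1(x1, x2, rng)
```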

Input model M, for service times: Slide 7 p(σ, x,m, ) is an inverse-gamma p.d.f. with shape parameter (n 1)/ and scale parameter n ( ) / i=1 w,i w. The posterior p.d.f. of μ, is a generalized Student s t-distribution, p(μ, x,m, ) = Ɣ(n/) n Ɣ[(n 1)/]S w (n 1)π [ 1 + n ( ) ] μ, w n/. n 1 S w Layout of the Simulation Experiments Each run consisted of C = 10,000 customer waiting times, with the first C 0 = 10,000 waiting times deleted to eliminate start-up effects. Slide 8 The interarrival times {A i } and service times {X i } are sampled by inversion; and the associated waiting times {W i } are computed by taking W 1 = 0 and W i = max{w i 1 + X i 1 A i, 0} for i =, 3,.... On the jth simulation run using the kth input model M k and the rth random sample θ r k from p(θ k X,M k ), the simulation response was y krj = y(u j,m k, θ r k ) = 1 C C 0 C i=c 0 +1 W i (17) for k = 1,...,K; r = 1,...,R k ; and j = 1,...,m, where we took K =, R 1 = R = 100, and m = 10.

Slide 29: We performed a metaexperiment consisting of 200 independent replications of a basic simulation experiment with the following steps:

a) We generated the training samples z_1, z_2 and data samples x_1, x_2 independently to obtain the required posterior input-model and input-parameter distributions;
b) Using the BMA-Based Simulation Replication Algorithm, we executed K R_1 m = 2,000 runs of the simulation, each with C = 210,000 waiting times and with the truncated mean waiting time (17) computed on each run; and
c) From the results of step b), we constructed a nominal 90% confidence interval for the steady-state mean waiting time \beta_0 based on (16).

Slide 30: Posterior probability, mean, and variance estimates for each candidate input model for service times:

    Service-time model M_{2,k}    Post. prob. p(M_{2,k} | x_2)    Mean \hat{\beta}_k    Stochastic var. \hat{\tau}_k^2    Parameter var. \hat{\sigma}_k^2
    k = 1: Gamma                  0.45                            0.35                  6.57E-05                          6.58E-03
    k = 2: Lognormal              0.55                            0.30                  4.30E-05                          3.31E-03

Slide 31: Mean absolute percentage error (MAPE), mean squared error (MSE), and standard error (SE) of the MSE for each approach's estimator of the average waiting time in the queue, where MAPE = 100 E[ | \hat{\beta} - \beta_0 | / \beta_0 ]% and MSE = E[ ( \hat{\beta} - \beta_0 )^2 ]:

    Approach               Service-time model    Mean \hat{\beta}    MAPE    MSE      SE(MSE)
    Classical frequentist  Gamma                 0.34                15%     0.005    0.0008
    Classical frequentist  Lognormal             0.29                16%     0.004    0.0003
    Partial Bayes          Gamma                 0.35                16%     0.006    0.001
    Partial Bayes          Lognormal             0.30                16%     0.004    0.0003
    BMA                    Mixture               0.32                4%      0.001    0.0006
    BMA + PAP              Mixture               0.32                5%      0.001    0.0005
    BMA + OAP              Mixture               0.33                3%      0.002    0.0007

Slide 32: Performance of nominal 90% confidence intervals for the average waiting time in terms of CIL, the sample average confidence-interval length, and CV(CIL), the sample coefficient of variation of the CI length:

    Approach               Model      CIL     CV(CIL)    Coverage
    Classical frequentist  Gamma      0.04    0.39       27%
    Classical frequentist  Lognormal  0.03    0.40       14%
    Partial Bayes          Gamma      0.23    0.43       85%
    Partial Bayes          Lognormal  0.17    0.36       79%
    BMA                    Mixture    0.19    0.39       88%
    BMA + PAP              Mixture    0.19    0.38       88%
    BMA + OAP              Mixture    0.20    0.40       89%

Summary of Key Results for the M/G/1 Queueing Simulation

Slide 33:
- In terms of posterior probabilities, the lognormal service-time model M_{2,2} was slightly better than the gamma model M_{2,1}.
- The point-estimator accuracy for the BMA approach was much better than that for the classical frequentist and partial Bayes approaches.
- From the variance estimates \hat{V}_{mod}, \hat{V}_{par}, and \hat{V}_{sto}, we obtained the following decomposition of the posterior response variance: 2% is due to stochastic uncertainty; 80% is due to parameter uncertainty; and 18% is due to model uncertainty.
- In comparison with the classical frequentist approach, the BMA approach delivered confidence intervals with much higher coverage probabilities.

Slide 34:
- The BMA-based proportional-allocation (PAP) and optimal-allocation (OAP) procedures delivered confidence intervals with slightly better coverage probabilities than those resulting from the equal-allocation BMA approach.
- In other applications with highly disparate posterior model probabilities, we observed large improvements in the accuracy and reliability of confidence intervals based on BMA + PAP or BMA + OAP.

IV. Conclusions and Recommendations

Main Results

Slide 35:
- We developed a BMA-based framework for simulation input modeling that is designed to handle input-model and input-parameter uncertainties as well as the conventional stochastic uncertainty.
- We developed a BMA-Based Simulation Replication Algorithm for estimating the posterior mean response and for assessing the sources of variability in the simulation output.
- We formulated an approximate confidence interval on the posterior mean response.
- Failure to account for input-model and input-parameter uncertainties can result in misleading output performance measures.

Recommendations for Future Research

Slide 36:
- Extension of the response-surface model (4)-(9) to eliminate the restrictive assumption (6), since in general the residual variance about the response surface, \tau_k^2, does depend on the parameter vector \theta_k.
- Development of correlated sampling schemes for improving the efficiency of the BMA-Based Simulation Replication Algorithm.
- Implementation of a comprehensive experimental performance evaluation of Bayesian techniques for simulation input modeling.
- Implementation of a user-friendly software tool that enables (nearly) routine use of Bayesian techniques in simulation input modeling and output analysis.