Simulation and Random Number Generation


Summary

- Discrete time vs. discrete event simulation
- Random number generation
  - Generating a random sequence
  - Generating random variates from a Uniform distribution
  - Testing the quality of the random number generator
- Some probability results
- Evaluating integrals using Monte Carlo simulation
- Generating random numbers from various distributions
  - Generating discrete random variates from a given pmf
  - Generating continuous random variates from a given distribution

Discrete Time vs. Discrete Event Simulation

Solve the finite difference equation

    x_{k+1} = min{ 10, x_k + 0.1·x_{k-1} + log(x_k) + u_k }

Given the initial conditions x_0, x_{-1} and the input function u_k, k = 0, 1, ..., we can simulate the output! This is a time-driven simulation (a minimal simulation loop for this recursion is sketched below). For the systems we are interested in, this method has two main problems.

[Figure: a sample path of the state x(k) plotted against the step index k.]

Issues with Time-Driven Simulation

System input:

    u_1(t) = 1 if a parcel arrives at t, 0 otherwise
    u_2(t) = 1 if a truck arrives at t, 0 otherwise

System dynamics:

    x(t+) = x(t) + 1   if u_1(t) = 1, u_2(t) = 0
            x(t) − 1   if u_1(t) = 0, u_2(t) = 1
            x(t)       otherwise

[Figure: sample paths of the inputs u_1(t), u_2(t) and of the state x(t).]

- Inefficient: at most time steps nothing happens.
- Accuracy: the order of events within each interval Δt is lost.
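As referenced above, a minimal time-driven loop for the reconstructed recursion, sketched in Python. The U(0,1) input u_k, the starting values, and the small offset guarding the logarithm are illustrative assumptions, not part of the original slide.

    import math
    import random

    def simulate_time_driven(x0, xm1, n_steps):
        """Iterate x_{k+1} = min(10, x_k + 0.1*x_{k-1} + log(x_k) + u_k)."""
        x_prev, x = xm1, x0
        path = [x]
        for _ in range(n_steps):
            u = random.random()    # assumed input u_k ~ U(0,1)
            x_next = min(10.0, x + 0.1 * x_prev + math.log(abs(x) + 1e-12) + u)
            x_prev, x = x, x_next  # shift the state window
            path.append(x)
        return path

    print(simulate_time_driven(x0=1.0, xm1=1.0, n_steps=5))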

Time-Driven vs. Event-Driven Simulation Models

Time-driven dynamics:

    x(t+) = x(t) + 1   if u_1(t) = 1, u_2(t) = 0
            x(t) − 1   if u_1(t) = 0, u_2(t) = 1
            x(t)       otherwise

In this case, time is divided into intervals of length Δt, and the state is updated at every step.

Event-driven dynamics:

    f(x, e) = x + 1   if e = a (arrival)
              x − 1   if e = d (departure)

The state is updated only at the occurrence of a discrete event.

Pseudo-Random Number Generation

It is difficult to construct a truly random number generator. Random number generators produce a sequence of deterministically generated numbers that look random. A deterministic sequence can never be truly random. This property, though seemingly problematic, is actually desirable because it makes it possible to reproduce the results of an experiment!

Multiplicative Congruential Method

    x_{k+1} = a·x_k mod m

The constants a and m should satisfy the following criteria:
- For any initial seed x_0, the resulting sequence should look random.
- For any initial seed, the period (the number of variates that can be generated before repetition) should be large.
- The values can be computed efficiently on a digital computer.

Pseudo-Random Number Generation (continued)

To fit the above requirements, m should be chosen to be a large prime number. Good choices for the parameters are:
- m = 2^31 − 1 and a = 7^5 for 32-bit word machines
- m = 2^35 − 31 and a = 5^5 for 36-bit word machines

Mixed Congruential Method

    x_{k+1} = (a·x_k + c) mod m

A more general form:

    x_k = ( Σ_{j=1}^{n} a_j·x_{k-j} ) mod m

where n is a positive constant. The ratio x_k/m maps the random variates into the interval [0, 1).

Quality of the Random Number Generator (RNG)

The random variate u_k = x_k/m should be uniformly distributed in the interval [0, 1):
- Divide the interval [0, 1) into k subintervals and count the number of variates that fall within each subinterval.
- Run the chi-square test (described later in this section).

The sequence of random variates should be uncorrelated:
- Define the auto-correlation coefficient with lag k > 0,

    C_k = (1/(n − k)) Σ_{i=1}^{n−k} (u_i − 1/2)(u_{i+k} − 1/2)

  and verify that E[C_k] approaches 0 for all k = 1, 2, ..., where u_i is the i-th random variate in the interval [0, 1).
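A minimal Python sketch of this pipeline, assuming the 32-bit parameters above: lcg is the multiplicative congruential generator, interval_counts gives the per-subinterval counts used by the chi-square test, and autocorrelation computes C_k. The seed and sample size are arbitrary choices.

    def lcg(seed, n, a=7**5, m=2**31 - 1):
        """Multiplicative congruential generator x_{k+1} = a*x_k mod m,
        returning u_k = x_k/m in [0, 1)."""
        us, x = [], seed
        for _ in range(n):
            x = (a * x) % m
            us.append(x / m)
        return us

    def interval_counts(us, k=10):
        """Count how many variates fall in each of k equal subintervals of [0, 1)."""
        counts = [0] * k
        for u in us:
            counts[min(int(u * k), k - 1)] += 1
        return counts

    def autocorrelation(us, k):
        """Lag-k autocorrelation coefficient C_k about the mean 1/2."""
        n = len(us)
        return sum((us[i] - 0.5) * (us[i + k] - 0.5) for i in range(n - k)) / (n - k)

    us = lcg(seed=12345, n=100000)
    print(interval_counts(us), autocorrelation(us, 1))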

Some Probability Review/Results

Probability mass functions (discrete X):

    Pr{X = 1} = p_1, Pr{X = 2} = p_2, Pr{X = 3} = p_3, with p_1 + p_2 + p_3 = 1

Cumulative mass functions:

    F_X(x) = Pr{X ≤ x}, e.g., Pr{X ≤ 2} = p_1 + p_2

Probability density functions (continuous X):

    Pr{X = x_0} = 0 for any single point x_0; probabilities are obtained by integration,

    Pr{x_0 − ε ≤ X ≤ x_0 + ε} = ∫ from x_0−ε to x_0+ε of f_X(x) dx

Cumulative distribution functions:

    F_X(x) = Pr{X ≤ x} = ∫ from −∞ to x of f_X(y) dy

[Figure: a density f_X(x) and the corresponding distribution function F_X(x).]

Independence of Random Variables

Joint cumulative probability distribution:

    F_XY(x, y) = Pr{X ≤ x, Y ≤ y}

Independent random variables:

    Pr{X ∈ C, Y ∈ D} = Pr{X ∈ C} Pr{Y ∈ D}

For discrete variables:

    Pr{X = x, Y = y} = Pr{X = x} Pr{Y = y}

For continuous variables:

    f_XY(x, y) = f_X(x) f_Y(y),   F_XY(x, y) = F_X(x) F_Y(y)

Conditional Probability

Conditional probability for two events:

    Pr{A | B} = Pr{AB} / Pr{B}

Total probability:

    Pr{X = x} = Σ_k Pr{X = x | Y = y_k} Pr{Y = y_k}

Bayes' rule:

    Pr{B | A} = Pr{AB} / Pr{A} = Pr{A | B} Pr{B} / Pr{A}

Expectation and Variance

Expected value of a random variable:

    E[X] = Σ_i x_i Pr{X = x_i}        (discrete)
    E[X] = ∫ x f_X(x) dx              (continuous)

Expected value of a function of a random variable:

    E[g(X)] = Σ_i g(x_i) Pr{X = x_i}
    E[g(X)] = ∫ g(x) f_X(x) dx

Variance of a random variable:

    var[X] = E[(X − E[X])^2]

Covariance of Two Random Variables

The covariance between two random variables is defined as

    cov[X, Y] = E[(X − E[X])(Y − E[Y])]
              = E[XY − X·E[Y] − Y·E[X] + E[X]·E[Y]]
              = E[XY] − E[X]·E[Y]

The covariance between two independent random variables is 0 (why?):

    E[XY] = Σ_i Σ_j x_i y_j Pr{X = x_i, Y = y_j}
          = ( Σ_i x_i Pr{X = x_i} )( Σ_j y_j Pr{Y = y_j} ) = E[X]·E[Y]

Markov Inequality

If the random variable X takes only non-negative values, then for any value a > 0:

    Pr{X ≥ a} ≤ E[X] / a

Note that for any value 0 < a < E[X], the above inequality says nothing!

Chebyshev's Inequality

If X is a random variable having mean µ and variance σ^2, then for any value k > 0:

    Pr{|X − µ| ≥ kσ} ≤ 1/k^2

[Figure: a density f_X(x) with the tails beyond µ ± kσ shaded.]

This may be a very conservative bound! Note that for X ~ N(µ, σ^2) and k = 2, Chebyshev's inequality gives Pr{|X − µ| ≥ 2σ} ≤ 0.25, when in fact the actual value is Pr{|X − µ| ≥ 2σ} ≈ 0.05.

The Law of Large Numbers

Weak: Let X_1, X_2, ... be a sequence of independent and identically distributed random variables having mean µ. Then, for any ε > 0,

    Pr{ |(X_1 + ... + X_n)/n − µ| > ε } → 0 as n → ∞

Strong: with probability 1,

    lim_{n→∞} (X_1 + ... + X_n)/n = µ

The Central Limit Theorem

Let X_1, X_2, ..., X_n be a sequence of independent random variables with means µ_i and variances σ_i^2, and define the random variable X = X_1 + ... + X_n. Let

    µ = E[X] = E[X_1 + ... + X_n] = Σ_{i=1}^{n} µ_i
    σ^2 = E[(X − µ)^2] = Σ_{i=1}^{n} σ_i^2

Then the distribution of X approaches the normal distribution as n increases, and if the X_i are continuous,

    f_X(x) ≈ (1/√(2πσ^2)) e^{−(x − µ)^2 / (2σ^2)}

Chi-Square Test

Let k be the number of subintervals, so that p_i = 1/k, i = 1, ..., k. Let N_i be the number of samples in each subinterval. Note that E[N_i] = N·p_i, where N is the total number of samples.

Null hypothesis H_0: the observed random variates are indeed uniformly distributed in (0, 1).

Let T be

    T = Σ_{i=1}^{k} (N_i − N·p_i)^2 / (N·p_i) = (k/N) Σ_{i=1}^{k} (N_i − N/k)^2

Define the p-value = P_{H0}{T > t}, the probability of observing a value larger than t assuming H_0 is correct. For large N, T is approximated by a chi-square distribution with k − 1 degrees of freedom, so we can use this approximation to evaluate the p-value. H_0 is accepted if the p-value is greater than 0.05 or 0.1.

Monte Carlo Approach to Evaluating Integrals

Suppose that you want to estimate θ = ∫ from 0 to 1 of g(x) dx, but it is rather difficult to evaluate the integral analytically. Suppose also that you don't want to use numerical integration. Let U be a uniformly distributed random variable in the interval [0, 1) and u_i be random variates of the same distribution. Consider the following estimator:

    θ̂ = (1/n) Σ_{i=1}^{n} g(u_i)

Also note that

    θ = ∫ from 0 to 1 of g(u) du = E[g(U)]

and, by the strong law of large numbers, θ̂ → E[g(U)] = θ as n → ∞.
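A sketch of the estimator θ̂ in Python, assuming g(x) = e^x as the example integrand (this integrand is not from the slides); its exact integral over [0, 1] is e − 1 ≈ 1.71828, which the estimate should approach.

    import math
    import random

    def mc_integral(g, n=100000):
        """Monte Carlo estimate of the integral of g over [0, 1):
        theta_hat = (1/n) * sum of g(u_i) with u_i ~ U(0,1)."""
        return sum(g(random.random()) for _ in range(n)) / n

    print(mc_integral(math.exp))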

Monte Carlo Approach to Evaluating Integrals (continued)

Use Monte Carlo simulation to estimate the integral θ = ∫ from a to b of g(x) dx. Let y = (x − a)/(b − a); then the integral becomes

    θ = ∫ from 0 to 1 of (b − a) g((b − a)y + a) dy = ∫ from 0 to 1 of h(y) dy

What if θ = ∫ from 0 to ∞ of g(x) dx? Use the substitution y = 1/(1 + x).

What if θ = ∫...∫ g(x_1, ..., x_n) dx_1 ... dx_n? Multidimensional integrals are handled the same way, with one uniform variate per dimension.

Example: estimate the value of π.

    Area of circle / Area of square = π/4

[Figure: the unit circle inscribed in the square with corners (−1,−1), (1,−1), (1,1), (−1,1).]

SOLUTION. Let X, Y be independent random variables uniformly distributed in the interval [−1, 1]. The probability that a point (X, Y) falls in the circle is given by

    Pr{X^2 + Y^2 ≤ 1} = π/4

Generate N pairs of uniformly distributed random variates (u_1, u_2) in the interval [0, 1). Transform them to become uniform over the interval [−1, 1) using (2u_1 − 1, 2u_2 − 1). Form the ratio of the number of points that fall in the circle over N; this ratio estimates π/4.
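A direct Python transcription of this recipe; the sample size is an arbitrary choice.

    import random

    def estimate_pi(n=1000000):
        """Estimate pi from the fraction of random points in [-1, 1)^2
        that fall inside the unit circle (the fraction estimates pi/4)."""
        inside = 0
        for _ in range(n):
            x = 2.0 * random.random() - 1.0   # uniform over [-1, 1)
            y = 2.0 * random.random() - 1.0
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / n

    print(estimate_pi())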

Discrete Random Variates

Suppose we would like to generate a sequence of discrete random variates according to a probability mass function

    Pr{X = x_j} = p_j, j = 0, 1, ..., N, with Σ_{j=0}^{N} p_j = 1

Inverse Transform Method

    X = x_0  if u < p_0
        x_1  if p_0 ≤ u < p_0 + p_1
        ...
        x_j  if Σ_{i=0}^{j−1} p_i ≤ u < Σ_{i=0}^{j} p_i
        ...

Discrete Random Variate Algorithm (D-RNG-1)

Algorithm D-RNG-1:

    Generate u = U(0,1)
    If u < p_0, set X = x_0, return;
    If u < p_0 + p_1, set X = x_1, return;
    ...
    Set X = x_N, return;

Recall the requirement for efficient implementation: the above search should be made as efficient as possible!

Example: suppose that X ∈ {0, 1, ..., n} and p_0 = p_1 = ... = p_n = 1/(n + 1); then X = floor((n + 1)·u).
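A sketch of D-RNG-1 in Python; the linear search mirrors the pseudocode above, and the example pmf is the one used on the next slide.

    import random

    def d_rng_1(xs, ps):
        """Inverse transform for a discrete pmf: return the first x_j whose
        cumulative probability exceeds u."""
        u = random.random()
        cum = 0.0
        for x, p in zip(xs, ps):
            cum += p
            if u < cum:
                return x
        return xs[-1]      # guard against floating-point round-off

    print(d_rng_1([0, 1, 2, 3], [0.1, 0.2, 0.4, 0.3]))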

Discrete Random Variate Algorithm (continued)

Assume p_0 = 0.1, p_1 = 0.2, p_2 = 0.4, p_3 = 0.3. What is an efficient RNG?

D-RNG-1, version 1:

    Generate u = U(0,1)
    If u < 0.1, set X = x_0, return;
    If u < 0.3, set X = x_1, return;
    If u < 0.7, set X = x_2, return;
    Set X = x_3, return;

D-RNG-1, version 2 (more efficient):

    Generate u = U(0,1)
    If u < 0.4, set X = x_2, return;
    If u < 0.7, set X = x_3, return;
    If u < 0.9, set X = x_1, return;
    Set X = x_0, return;

Version 2 checks the most probable values first, so on average it needs fewer comparisons.

Geometric Random Variates

Let p be the probability of success and q = 1 − p the probability of failure; then X is the time of the first success, with pmf

    Pr{X = i} = p·q^{i−1}, i ≥ 1

Using the previous discrete random variate algorithm, X = j if

    Σ_{i=1}^{j−1} Pr{X = i} ≤ U < Σ_{i=1}^{j} Pr{X = i}

Since Σ_{i=1}^{j} Pr{X = i} = 1 − Pr{X > j} = 1 − q^j, this becomes

    1 − q^{j−1} ≤ U < 1 − q^j,   i.e.,   q^j < 1 − U ≤ q^{j−1}

Therefore

    X = min{ j : q^j < 1 − U }
      = min{ j : j·log(q) < log(1 − U) }
      = min{ j : j > log(1 − U)/log(q) }

(the inequality flips because log(q) < 0). As a result,

    X = floor( log(1 − U)/log(q) ) + 1
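The closed form above translates directly into code; a minimal sketch with an arbitrary success probability:

    import math
    import random

    def geometric_variate(p):
        """Closed-form inverse transform for the geometric distribution:
        X = floor(log(1 - U)/log(q)) + 1, with q = 1 - p."""
        q = 1.0 - p
        u = random.random()
        return math.floor(math.log(1.0 - u) / math.log(q)) + 1

    print(geometric_variate(0.3))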

Poisson and Binomial Distributions

Poisson distribution with rate λ:

    Pr{X = i} = e^{−λ} λ^i / i!,  i = 0, 1, ...

Note the recursion: p_{i+1} = (λ/(i + 1))·p_i

The binomial distribution B(n, p) gives the number of successes in n trials given that the probability of success is p:

    Pr{X = i} = (n! / (i!(n − i)!)) p^i (1 − p)^{n−i},  i = 0, 1, ..., n

Note the recursion: p_{i+1} = ((n − i)/(i + 1))·(p/(1 − p))·p_i

These recursions let the inverse transform method compute successive pmf and cmf values cheaply, without evaluating factorials (a sketch using the Poisson recursion appears after the accept/reject algorithm below).

Accept/Reject Method (D-RNG-AR)

Suppose that we would like to generate random variates from a pmf {p_j, j ≥ 0} and we have an efficient way to generate variates from a pmf {q_j, j ≥ 0}. Let c be a constant such that

    p_j / q_j ≤ c for all j such that p_j > 0

In this case, use the following algorithm D-RNG-AR:

    1. Generate a random variate Y from the pmf {q_j, j ≥ 0}.
    2. Generate u = U(0,1).
    3. If u < p_Y / (c·q_Y), set X = Y and return;
    4. Else repeat from step 1.
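As referenced above, a sketch of the inverse transform method driven by the Poisson recursion p_{i+1} = (λ/(i+1))·p_i; the rate value is arbitrary.

    import math
    import random

    def poisson_variate(lam):
        """Inverse transform for Poisson(lam) using the pmf recursion,
        so no factorials are evaluated."""
        u = random.random()
        i = 0
        p = math.exp(-lam)       # p_0 = e^{-lam}
        cdf = p
        while u >= cdf:          # walk up the cmf until it exceeds u
            p *= lam / (i + 1)   # p_{i+1} from p_i
            cdf += p
            i += 1
        return i

    print(poisson_variate(4.0))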

Accept/Reject Method (D-RNG-AR): Correctness

Show that the D-RNG-AR algorithm generates a random variate with the required pmf {p_j, j ≥ 0}.

For a single iteration,

    Pr{X = j and stop} = Pr{Y = j and Y is accepted}
                       = Pr{Y = j}·Pr{Y accepted | Y = j}
                       = q_j · p_j/(c·q_j) = p_j/c

and therefore

    Pr{not stop in one iteration} = 1 − Σ_j p_j/c = 1 − 1/c

Summing over the iteration k at which the algorithm stops,

    Pr{X = j} = Σ_{k≥1} Pr{not stop for iterations 1, ..., k−1}·Pr{X = j and stop at iteration k}
              = Σ_{k≥1} (1 − 1/c)^{k−1} · p_j/c
              = (p_j/c) / (1 − (1 − 1/c)) = p_j

D-RNG-AR Example

Determine an algorithm for generating random variates for a random variable that takes values 1, 2, ..., 10 with probabilities 0.11, 0.12, 0.09, 0.08, 0.12, 0.10, 0.09, 0.09, 0.10, 0.10, respectively. Take {q_j} uniform, q_j = 1/10, so that

    c = max_j (p_j / q_j) = 0.12/0.10 = 1.2

D-RNG-1 (inverse transform, for comparison):

    Generate u = U(0,1)
    k = 1;
    while (u > cdf(k)) k = k + 1;
    x(i) = k;

D-RNG-AR:

    u1 = U(0,1); u2 = U(0,1);
    Y = floor(10*u1 + 1);
    while (u2 > p(Y)/(c*q(Y)))      % acceptance test; here q(Y) = 1/10
        u1 = U(0,1); u2 = U(0,1);
        Y = floor(10*u1 + 1);
    end
    x(i) = Y;

The Composition Approach

Suppose that we have an efficient way to generate variates from two pmfs {q_j, j ≥ 0} and {r_j, j ≥ 0}, and that we would like to generate random variates for a random variable having pmf

    Pr{X = j} = p_j = a·q_j + (1 − a)·r_j,  j ≥ 0,  a ∈ (0, 1)

Let X_1 have pmf {q_j, j ≥ 0} and X_2 have pmf {r_j, j ≥ 0}, and define

    X = X_1 with probability a
        X_2 with probability 1 − a

Algorithm D-RNG-C (a short sketch appears at the end of this slide):

    Generate u = U(0,1)
    If u ≤ a, generate X_1
    Else generate X_2

Continuous Random Variates: Inverse Transform Method

Suppose we would like to generate a sequence of continuous random variates having distribution function F_X(x).

Algorithm C-RNG-1: Let U be a random variable uniformly distributed in the interval (0, 1). For any continuous distribution function, the random variate X is given by

    X = F_X^{-1}(U)

[Figure: the cdf F_X(x); a uniform value u on the vertical axis maps to the variate x = F_X^{-1}(u).]
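As referenced above, a minimal sketch of D-RNG-C; the two component generators are hypothetical stand-ins for efficient generators of {q_j} and {r_j}.

    import random

    def composition_variate(gen1, gen2, a):
        """Composition: draw from gen1 with probability a and from gen2
        with probability 1 - a, so the result has pmf a*q + (1-a)*r."""
        return gen1() if random.random() <= a else gen2()

    # Hypothetical components: uniform on {0,...,3} and uniform on {4,...,9}
    print(composition_variate(lambda: random.randint(0, 3),
                              lambda: random.randint(4, 9), 0.5))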

Example: Exponentially Distributed Random Variates

Suppose we would like to generate a sequence of random variates having density function

    f_X(x) = λ e^{−λx},  x ≥ 0

Solution: find the cumulative distribution,

    F_X(x) = ∫ from 0 to x of λ e^{−λy} dy = 1 − e^{−λx}

Let u be a uniformly distributed random variate:

    u = F_X(x) = 1 − e^{−λx}
    ln(1 − u) = −λx
    x = −(1/λ) ln(1 − u)

Equivalently, since 1 − u is also uniformly distributed in (0, 1),

    x = −(1/λ) ln(u)

Convolution Techniques and the Erlang Distribution

Suppose the random variable X is the sum of n independent, identically distributed random variables:

    X = Σ_{i=1}^{n} Y_i

Algorithm C-RNG-Cv:

    Generate Y_1, ..., Y_n from the given distribution
    X = Y_1 + Y_2 + ... + Y_n

An example of such a random variable is the Erlang of order n, which is the sum of n iid exponential random variables with rate λ:

    f_X(x) = λ e^{−λx} (λx)^{n−1} / (n − 1)!
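Both results translate directly into code; a minimal sketch with arbitrary parameter values. Using 1 − u rather than u only serves to avoid taking log(0).

    import math
    import random

    def exponential_variate(lam):
        """Inverse transform for Exp(lam): x = -ln(u)/lam."""
        u = 1.0 - random.random()      # in (0, 1], so log(0) cannot occur
        return -math.log(u) / lam

    def erlang_variate(n, lam):
        """Convolution method (C-RNG-Cv): sum of n iid exponentials."""
        return sum(exponential_variate(lam) for _ in range(n))

    print(exponential_variate(2.0), erlang_variate(3, 2.0))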

Accept/Reject Method (C-RNG-AR)

Suppose that we would like to generate random variates from a pdf f_X(x) and we have an efficient way to generate variates from a pdf g_X(x). Let c be a constant such that

    f_X(x) / g_X(x) ≤ c for all x

In this case, use the following algorithm C-RNG-AR:

    1. Generate a random variate Y from the density g_X(x).
    2. Generate u = U(0,1).
    3. If u < f_X(Y) / (c·g_X(Y)), set X = Y and return;
    4. Else repeat from step 1.

The C-RNG-AR is similar to the D-RNG-AR algorithm except for the comparison step, where rather than comparing the two probabilities we compare the values of the density functions.

Theorem: The random variates generated by the Accept/Reject method have density f_X(x). The number of iterations of the algorithm that are needed is a geometric random variable with mean c.

Note: The constant c is important since it determines the expected number of iterations needed before a number is accepted; therefore it should be selected at its minimum value, c = max_x f_X(x)/g_X(x).

C-RNG-AR Example

Use the C-RNG-AR method to generate random variates that are normally distributed with mean 0 and variance 1, i.e., N(0,1).

First consider the pdf of the absolute value |X|:

    f(x) = (2/√(2π)) e^{−x²/2},  x ≥ 0

We know how to generate exponentially distributed random variates Y with rate λ = 1:

    g(x) = e^{−x},  x ≥ 0

Determine c as the maximum of the ratio:

    f(x)/g(x) = √(2/π) e^{x − x²/2}

which is maximized at x = 1, giving c = √(2e/π). The acceptance test u < f(Y)/(c·g(Y)) then reduces to u < e^{−(Y−1)²/2}.

C-RNG-AR for N(0,1):

    u1 = U(0,1); u2 = U(0,1);
    Y = -log(u1);
    while (u2 > exp(-0.5*(Y-1)*(Y-1)))
        u1 = U(0,1); u2 = U(0,1);
        Y = -log(u1);
    end
    u3 = U(0,1);
    if (u3 < 0.5) X = Y; else X = -Y; end

This gives |X|; the random sign from u3 turns it into N(0,1). Suppose we would like Z ~ N(µ, σ²); then Z = σX + µ.

Generating a Homogeneous Poisson Process

A homogeneous Poisson process is a sequence of points (events) where the inter-event times are exponentially distributed with rate λ. (The Poisson process will be studied in detail during later classes.)

Let t_i denote the i-th point of a Poisson process. The algorithm for generating the first N points of the sequence {t_i, i = 1, 2, ..., N} is given by:

Algorithm Poisson-λ:

    k = 0; t(k) = 0;
    while k < N
        k = k + 1;
        Generate u = U(0,1);
        t(k) = t(k-1) - log(u)/lambda;
    end
    Return t.

Generating a Non-Homogeneous Poisson Process (Thinning)

Suppose that the process is non-homogeneous, i.e., the rate varies with time: λ(t) ≤ λ for all t < T. Let t_i denote the i-th point of the process and τ the actual time. The algorithm for generating the first N points of the sequence {t_i, i = 1, 2, ..., N} is given by:

Algorithm Thinning Poisson-λ:

    k = 0; t(k) = 0; tau = 0;
    while k < N
        Generate u1 = U(0,1);
        tau = tau - log(u1)/lambda;
        Generate u2 = U(0,1);
        if (u2 <= lambda(tau)/lambda)    % accept the candidate with prob lambda(tau)/lambda
            k = k + 1; t(k) = tau;
        end
    end
    Return t.
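A Python sketch of the thinning algorithm; the rate function shown is the λ(t) = 1/(t + a) example of the next slide, with a = 2 assumed so that λ = 1/a = 0.5 bounds it on t ≥ 0.

    import math
    import random

    def nhpp_thinning(lam_fn, lam_max, n_points):
        """Thinning: candidate points come from a homogeneous Poisson process
        of rate lam_max; each is accepted with probability lam_fn(tau)/lam_max."""
        points, tau = [], 0.0
        while len(points) < n_points:
            tau -= math.log(1.0 - random.random()) / lam_max   # next candidate
            if random.random() <= lam_fn(tau) / lam_max:
                points.append(tau)
        return points

    print(nhpp_thinning(lambda t: 1.0 / (t + 2.0), 0.5, 5))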

Generating a Non-Homogeneous Poisson Process (Direct Method)

Again, suppose that the process is non-homogeneous, i.e., the rate varies with time, λ(t) ≤ λ for all t < T, but now we would like to generate all points t_i directly, without thinning. Assuming that we are at point t_i, the question we need to answer is: what is the cdf of S_i, where S_i is the time until the next event?

    F_{S_i}(s) = Pr{S_i < s | t_i} = 1 − exp( −∫ from 0 to s of λ(t_i + y) dy )

Thus, to simulate this process, we start from t_0 = 0 and generate S_1 from F_{S_1} to go to t_1 = t_0 + S_1. Then, from t_1, we generate S_2 from F_{S_2} to go to t_2 = t_1 + S_2, and so on.

Example of a Non-Homogeneous Poisson Process

Suppose that λ(t) = 1/(t + a), t ≥ 0, for some positive constant a. Generate variates from this non-homogeneous Poisson process.

First, let us determine the integral of the rate inside the cdf:

    ∫ from 0 to s of λ(t_i + y) dy = ∫ from 0 to s of dy/(t_i + a + y) = log( (s + t_i + a)/(t_i + a) )

    F_{S_i}(s) = 1 − exp( −log( (s + t_i + a)/(t_i + a) ) ) = 1 − (t_i + a)/(s + t_i + a) = s/(s + t_i + a)

Inverting this yields

    F_{S_i}^{-1}(u) = (t_i + a)·u / (1 − u)
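The inversion above yields a very short direct generator; a sketch, assuming a = 2.

    import random

    def nhpp_direct(a, n_points):
        """Direct (inverse transform) generation for lam(t) = 1/(t + a):
        S_i = (t_{i-1} + a)*u/(1 - u), then t_i = t_{i-1} + S_i."""
        t, points = 0.0, []
        for _ in range(n_points):
            u = random.random()
            t += (t + a) * u / (1.0 - u)
            points.append(t)
        return points

    print(nhpp_direct(2.0, 5))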

Example of a Non-Homogeneous Poisson Process (continued)

Inverse transform:

    S_i = F_{S_i}^{-1}(u_i) = (t_{i-1} + a)·u_i / (1 − u_i)

Thus we start from t_0 = 0:

    t_1 = t_0 + (t_0 + a)·u_1/(1 − u_1) = a·u_1/(1 − u_1)
    t_2 = t_1 + (t_1 + a)·u_2/(1 − u_2) = (t_1 + a·u_2)/(1 − u_2)
    t_3 = t_2 + (t_2 + a)·u_3/(1 − u_3) = (t_2 + a·u_3)/(1 − u_3)
    ...

The Composition Approach (Continuous)

Suppose that we have an efficient way to generate variates from the cdfs G_1(x), ..., G_n(x), and we would like to generate random variates for a random variable having cdf

    F_X(x) = Σ_{i=1}^{n} r_i·G_i(x),  Σ_{i=1}^{n} r_i = 1,  r_i > 0, i = 1, ..., n

Algorithm C-RNG-C:

    Generate u = U(0,1)
    If u < r_1, get X from G_1(x), return;
    If u < r_1 + r_2, get X from G_2(x), return;
    ...

Polar Method for Generating Normal Random Variates

Let X and Y be independent normally distributed random variables with zero mean and variance 1. Then the joint density function is given by

    f_XY(x, y) = (1/√(2π)) e^{−x²/2} · (1/√(2π)) e^{−y²/2} = (1/(2π)) e^{−(x² + y²)/2}

Then make a change of variables:

    r = x² + y²,  θ = arctan(y/x)

The new joint density function is

    f_{RΘ}(r, θ) = (1/(2π)) · (1/2) e^{−r/2}

i.e., θ is uniform in the interval [0, 2π] and r is exponential with rate 1/2.

Algorithm C-RNG-N:

    Generate u1 = U(0,1), u2 = U(0,1);
    R = -2*log(u1);          % exponential with rate 1/2
    W = 2*pi*u2;             % uniform in [0, 2*pi]
    X = sqrt(R)*cos(W);
    Y = sqrt(R)*sin(W);

But sine and cosine evaluations are inefficient!

Algorithm C-RNG-N2:

    1. Generate u1 = U(0,1), u2 = U(0,1);
    2. Set V1 = 2*u1 - 1, V2 = 2*u2 - 1;   % (V1, V2) uniform in the square [-1,1]^2
    3. S = V1*V1 + V2*V2;
    4. If S > 1, go to 1;                  % keep only points inside the unit circle
    5. R = sqrt(-2*log(S)/S);
    6. X = R*V1;
    7. Y = R*V2;

This generates 2 independent N(0,1) random variables.

[Figure: the unit circle inscribed in the square with corners (−1,−1), (1,−1), (1,1), (−1,1).]
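A Python sketch of C-RNG-N2 (the Marsaglia polar method); the rejection of points outside the unit circle plays the role of step 4.

    import math
    import random

    def polar_normal_pair():
        """Polar method: two independent N(0,1) variates
        without evaluating sin or cos."""
        while True:
            v1 = 2.0 * random.random() - 1.0   # uniform over the square [-1, 1)^2
            v2 = 2.0 * random.random() - 1.0
            s = v1 * v1 + v2 * v2
            if 0.0 < s <= 1.0:                 # keep only points inside the unit circle
                r = math.sqrt(-2.0 * math.log(s) / s)
                return r * v1, r * v2

    print(polar_normal_pair())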

Simulation of Discrete Event Systems

[Figure: structure of a discrete event simulator. INITIALIZE; the EVENT CALENDAR holds the pending events (e_1, t_1), (e_2, t_2), ...; at each step the earliest event e_1 triggers a state update x' = f(x, e_1) and a time update t' = t_1; infeasible events are deleted, new feasible events are added, and their lifetimes are drawn from the CLOCK STRUCTURE via the RNG.]

Verification of a Simulation Program

- Use standard debugging techniques.
- Debug modules or subroutines: create simple special cases where you know what to expect as an output from each of the modules.
- Often, by choosing the system parameters carefully, the simulation model can be evaluated analytically.
- Create a trace which keeps track of the state variables, the event list and other variables.