Question: My computer only knows how to generate a uniform random variable. How do I generate others?


Simulation

1 Continuous Random Variables

Recall that a random variable X is continuous if it has a probability density function f_X so that

P{a ≤ X ≤ b} = ∫_a^b f_X(x) dx.

The distribution function F_X for X is defined as

F_X(x) = P{X ≤ x} = ∫_{−∞}^x f_X(s) ds.

Notice that F′_X(x) = f_X(x). A uniform[0, 1] random variable U has density function

f_U(u) = 1 if 0 ≤ u ≤ 1, and 0 otherwise.

Its distribution function is then given by

F_U(u) = ∫_{−∞}^u f_U(s) ds = 0 if u < 0; u if 0 ≤ u ≤ 1; 1 if u > 1.

We will assume that the computer has some function to simulate a uniform[0, 1] random variable. For example, in C, the standard library rand can be used: the line

U = rand() / (float) RAND_MAX;

will simulate a uniform r.v. Fix a random variable X with distribution function F_X that we would like to simulate from. Consider the following instructions:

1. Generate a uniform random variable U.

2. Output F_X^{−1}(U).

This will output a random variable with distribution function F_X. To see this, observe:

P{F_X^{−1}(U) ≤ x} = P{U ≤ F_X(x)} = F_X(x).

As an example, consider the exponential distribution: the density of an exponential r.v. X is given by

f_X(x) = e^{−x} if x ≥ 0, and 0 if x < 0.

Thus, its distribution function is given by F_X(x) = 1 − e^{−x}. Let us determine the inverse to F_X. We need to solve for x in 1 − e^{−x} = u. The solution is x = −log(1 − u), that is, F_X^{−1}(u) = −log(1 − u). Hence, to simulate an exponential random variable, do the following:

1. Generate a uniform r.v. U.
2. Output −log(1 − U).

The next method is the rejection method. We want to simulate X with p.d.f. f(x). Suppose we know how to simulate Y, with p.d.f. g(y), and assume that there is some constant c so that

f(y)/g(y) ≤ c for all y.

The rejection algorithm is as follows:

1. Generate a uniform r.v. U and the r.v. Y.
2. If

U ≤ f(Y)/(c g(Y)),   (1)

then set W = Y and output W. Otherwise go to step 1.

We now show that the rejection method works. We need to show that W has the correct distribution. We only output Y when condition (1) is met. Thus

P{W ≤ w} = P{Y ≤ w | U ≤ f(Y)/(cg(Y))} = P{Y ≤ w and U ≤ f(Y)/(cg(Y))} / K,

where K := P{U ≤ f(Y)/(cg(Y))}.

Now U and Y are independent, so the joint density function for (U, Y) is the product of the density of U and the density of Y:

f_{U,Y}(u, y) = g(y) if 0 ≤ u ≤ 1, and 0 otherwise.

Thus

P{Y ≤ w and U ≤ f(Y)/(cg(Y))} = ∫∫_{y ≤ w, u ≤ f(y)/(cg(y))} f_{U,Y}(u, y) du dy
  = ∫_{−∞}^w g(y) ( ∫_0^{f(y)/(cg(y))} du ) dy
  = ∫_{−∞}^w g(y) · f(y)/(cg(y)) dy
  = (1/c) ∫_{−∞}^w f(y) dy
  = (1/c) P{X ≤ w}.

Thus

P{W ≤ w} = P{X ≤ w} / (cK).

Letting w → ∞ shows that cK = 1, and thus W has the same distribution function as X. Thus W has the correct distribution.

Let us see how to use this to generate a normal random variable. In this case, we will let Y be an exponential random variable. Instead of simulating a normal r.v., at first we will simulate the absolute value of a normal r.v. Such a r.v. has density

f(x) = (2/√(2π)) e^{−x²/2}, x ≥ 0.

Notice that Y has p.d.f. g(y) = e^{−y}, and

f(y)/g(y) = (2/√(2π)) e^{−(y² − 2y)/2} = √(2e/π) e^{−(y−1)²/2} ≤ √(2e/π).

Thus, we can set c = √(2e/π), and then

f(y)/(cg(y)) = exp(−(y − 1)²/2).

Thus we have the following algorithm for generating the absolute value of a normal random variable:

1. Generate two uniform random variables U, V.
2. Set Y = −log(V).
3. If U < exp(−(Y − 1)²/2), output Y. Otherwise go to step 1.

In particular cases, there can be clever ways to simulate random variables.

Example 1.1 (Two independent normals). Let (X, Y) be two independent standard normal variables. Thus, (X, Y) has a joint density

f_{X,Y}(x, y) = (1/(2π)) e^{−(x² + y²)/2}.

We will make use of the polar representation of (X, Y). In particular, if R² = X² + Y² and Θ = arctan(Y/X), then (X, Y) = (R cos Θ, R sin Θ). What is the joint distribution of (R², Θ)? We use the change of variable formula. The density is given by

f_{R²,Θ}(ρ, θ) = f_{X,Y}(√ρ cos θ, √ρ sin θ) |J_T(ρ, θ)|,

where J_T is the Jacobian of the transformation T(ρ, θ) = (√ρ cos θ, √ρ sin θ). Then

J_T(ρ, θ) = det [ ∂(√ρ cos θ)/∂ρ   ∂(√ρ cos θ)/∂θ ;
                  ∂(√ρ sin θ)/∂ρ   ∂(√ρ sin θ)/∂θ ]
          = det [ cos θ/(2√ρ)   −√ρ sin θ ;
                  sin θ/(2√ρ)    √ρ cos θ ]
          = (1/2) cos²θ + (1/2) sin²θ = 1/2.

Thus

f_{R²,Θ}(ρ, θ) = (1/(2π)) e^{−((√ρ cos θ)² + (√ρ sin θ)²)/2} · (1/2) = [1/(2π)] · [(1/2) e^{−ρ/2}],

where the first factor is f_Θ(θ) and the second is f_{R²}(ρ). We conclude that the joint density for (R², Θ) factors into the product of a density involving only θ and a density involving only ρ. Thus R² and Θ are independent random variables. Furthermore, from the densities we know their distributions: Θ is uniform on [0, 2π], and R² is exponential(1/2). From this we obtain the following algorithm for simulating two independent normal random variables:

1. Generate Θ, uniform on [0, 2π].
2. Generate R², an exponential(1/2) r.v.
3. Let (X, Y) = (R cos Θ, R sin Θ), and output (X, Y).

2 Simulating Discrete Random Variables

A discrete random variable X takes values in a countable set Ω = {ω_1, ω_2, ...}. It has an associated probability mass function

p_X(ω_k) = P{X = ω_k}.

Here is a general algorithm for simulating a discrete random variable. Let F_k = Σ_{j=1}^k p_X(ω_j).

1. Generate U, a uniform random variable.
2. Initialize k = 0.
3. Replace k by k + 1.
4. If F_{k−1} < U < F_k, output ω_k and stop. Otherwise go to step 3.

This works because

P{F_{k−1} < U < F_k} = F_k − F_{k−1} = p_X(ω_k).

Let us see one example using this general procedure.

Example 2.1 (Geometric). A geometric(p) random variable takes values in {1, 2, ...} and has mass function p(k) = p(1 − p)^{k−1}. It will be convenient to work with Q_k := 1 − P_k:

Q_k = Σ_{j=k+1}^∞ p(1 − p)^{j−1} = p(1 − p)^k Σ_{l=0}^∞ (1 − p)^l = (1 − p)^k.

Then, using the algorithm above, we output the smallest k so that P_{k−1} < U < P_k. Equivalently, this is the first k so that

Q_{k−1} > 1 − U > Q_k.

Notice that 1 − U is also uniform, so it is equivalent to generate a uniform, and output the smallest k so that U > (1 − p)^k.

Taking logarithms, we output the smallest k so that

log U > k log(1 − p),  i.e.,  k > log U / log(1 − p).

Another way to say this is that we output

[ log U / log(1 − p) ] + 1,

where [x] is the largest integer smaller than x.

There are many clever tricks to simulate specific random variables that are faster than the general algorithm above. We discuss here one example.

Example 2.2 (Binomial). A r.v. X has the Binomial(n, p) distribution if its probability mass function is

p_X(k) = (n choose k) p^k (1 − p)^{n−k},  k = 0, ..., n.

X can be represented as X = Σ_{k=1}^n I_k, where I_1, I_2, ..., I_n are independent Bernoulli(p) random variables. That is,

P{I_k = 1} = p,  P{I_k = 0} = 1 − p.

Hence a naive method of simulating X is:

1. Simulate U_1, ..., U_n uniform random variables.
2. For each k = 1, ..., n, set I_k = 1 if U_k < p and I_k = 0 if U_k ≥ p.
3. Output Σ_{k=1}^n I_k.

This method requires that we generate n uniform random variables. We now show another method, which requires only a single uniform random variable. The basic observation is the following: if U is uniform on [0, 1], then

(i) given U < p, the distribution of U is uniform on [0, p];

(ii) given U > p, the distribution of U is uniform on [p, 1].

We now show that (i) holds. Suppose that 0 < u < p. Then

P{U ≤ u | U < p} = P{U ≤ u and U < p} / P{U < p} = P{U ≤ u} / p = u/p.

(ii) follows by a similar argument. This leads us to consider the following algorithm:

1. Generate a uniform U.
2. Initialize k = 0.
3. Let k = k + 1.
4. If U < p, do:
(a) Set I_k = 1.
(b) Replace U by U/p.
5. Otherwise (U ≥ p), do:
(a) Set I_k = 0.
(b) Replace U by (U − p)/(1 − p).
6. If k = n, output Σ_{j=1}^n I_j and stop.
7. Go to step 3.

3 Markov Chains

Definition 3.1. A matrix P is stochastic if

P(i, j) ≥ 0 for all i, j, and Σ_j P(i, j) = 1 for all i.

Definition 3.2. A collection of random variables {X_0, X_1, ...} is a Markov chain if there exists a stochastic matrix P so that

P{X_{n+1} = k | X_0 = j_0, X_1 = j_1, ..., X_n = j} = P(j, k).

If µ(k) = P{X_0 = k}, we can write down the probability of any event:

P{X_0 = i_0, X_1 = i_1, ..., X_n = i_n} = µ(i_0) P(i_0, i_1) ··· P(i_{n−1}, i_n).

Notice that the future position (X_{n+1}) depends on the past (X_0, X_1, ..., X_n) only through the current position (X_n).

Example 3.3 (Random Walk). Let {D_k} be an i.i.d. sequence of {−1, +1}-valued random variables, with

P{D_k = +1} = p,  P{D_k = −1} = 1 − p.

Then let S_n = Σ_{k=1}^n D_k. S_n takes values in {..., −2, −1, 0, 1, 2, ...} and at each unit of time either increases or decreases by 1. It increases with probability p. The reader should carefully verify that {S_n} is a Markov chain with transition matrix

P(j, k) = p if k = j + 1, and 1 − p if k = j − 1.

Example 3.4 (Ehrenfest Model of Diffusion). Suppose N molecules are contained in two containers. At each unit of time, a molecule is selected at random and moved from its container to the other. Let X_n be the number of molecules in the first container at time n. Then

P(k, k + 1) = 1 − k/N,  P(k, k − 1) = k/N.

In other words,

P = [  0      1       0        0     ...   0  ]
    [ 1/N     0    (N−1)/N     0     ...   0  ]
    [  0     2/N      0     (N−2)/N  ...   0  ]
    [ ...                                 ... ]
    [  0     ...      0        1           0  ]

Example 3.5 (Bernoulli–Laplace Model of Diffusion). There are two urns, each always holding N particles. There are a total of N white particles and a total of N black particles. At each unit of time, a particle is chosen from each urn, and the two are interchanged. Let X_n be the number of white particles in the first urn. Then

P(k, k + 1) = (1 − k/N)²,
P(k, k − 1) = (k/N)²,
P(k, k) = 2 (k/N)(1 − k/N).

3.1 n-step transition probabilities

Our first goal is to show that

P{X_{n+m} = k | X_m = j} = P^n(j, k),   (2)

where P^n(j, k) is the (j, k)th entry of the nth matrix power of P. Define M_{m,n} to be the matrix with (j, k) entry equal to the left-hand side of (2). Then

P{X_{m+n} = k | X_m = j} = Σ_l P{X_{m+n} = k | X_m = j, X_{m+n−1} = l} P{X_{m+n−1} = l | X_m = j}
  = Σ_l P(l, k) M_{m,n−1}(j, l).

Thus M_{m,n} = M_{m,n−1} P. Notice that M_{m,0} is the identity matrix I. Thus, we can continue to get

M_{m,n} = M_{m,0} P ··· P (n times) = P^n.

Let µ be the row vector (µ(1), µ(2), ...). Suppose that X_0 has the distribution P{X_0 = k} = µ(k).

Then the distribution at time n is given by

P{X_n = k} = Σ_l P{X_n = k | X_0 = l} P{X_0 = l} = Σ_l µ(l) P^n(l, k).

In other words, the distribution at time n is given by the row vector µP^n.