Simulation


Question: My computer only knows how to generate a uniform random variable. How do I generate others?

1 Continuous Random Variables

Recall that a random variable X is continuous if it has a probability density function f_X so that

P{a ≤ X ≤ b} = ∫_a^b f_X(x) dx.

The distribution function F_X for X is defined as

F_X(x) = P{X ≤ x} = ∫_{-∞}^x f_X(s) ds.

Notice that F_X'(x) = f_X(x).

A uniform[0,1] random variable U has density function

f_U(u) = 1 if 0 ≤ u ≤ 1, and 0 otherwise.

Its distribution function is then given by

F_U(u) = ∫_{-∞}^u f_U(s) ds = { 0 if u < 0; u if 0 ≤ u ≤ 1; 1 if u > 1 }.

We will assume that the computer has some function to simulate a uniform[0,1] random variable. For example, in C, the standard library function rand can be used: the line U = rand()/(float) RAND_MAX will simulate a uniform r.v.

Fix a random variable X with distribution function F_X that we would like to simulate from. Consider the following instructions:

1. Generate a uniform random variable U.
2. Output F_X^{-1}(U).

This will output a random variable with distribution function F_X. To see this, observe:

P{F_X^{-1}(U) ≤ x} = P{U ≤ F_X(x)} = F_X(x).

As an example, consider the exponential distribution: the density of an exponential r.v. X is given by

f_X(x) = e^{-x} if x ≥ 0, and 0 if x < 0.

Thus, its distribution function is given by F_X(x) = 1 - e^{-x} for x ≥ 0. Let us determine the inverse of F_X. We need to solve for x in 1 - e^{-x} = u. The solution is x = -log(1-u); that is, F_X^{-1}(u) = -log(1-u). Hence, to simulate an exponential random variable, do the following:

1. Generate a uniform r.v. U.
2. Output -log(1-U).
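In C, a minimal sketch of this recipe for the exponential distribution might look as follows (the function name sim_exponential is our choice, not from the notes; in production code one should also guard against u = 1, where log(1-u) blows up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Inverse transform method: F^{-1}(u) = -log(1-u) for the exponential law. */
    double sim_exponential(void) {
        double u = rand() / (double) RAND_MAX;  /* uniform[0,1] */
        return -log(1.0 - u);
    }

    int main(void) {
        srand(42);  /* fix the seed so runs are reproducible */
        for (int i = 0; i < 5; i++)
            printf("%f\n", sim_exponential());
        return 0;
    }

The same template works for any distribution whose inverse F_X^{-1} has a closed form.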

The next method is the rejection method. We want to simulate X with p.d.f. f(x). Suppose we know how to simulate Y, with p.d.f. g(y), and assume that there is some constant c so that

f(y)/g(y) ≤ c for all y.

The rejection algorithm is as follows:

1. Generate a uniform r.v. U and the r.v. Y.
2. If

   U ≤ f(Y)/(c g(Y)),    (1)

   then set W = Y and output W. Otherwise go to step 1.
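Here is a sketch of the rejection algorithm in C, written generically with function pointers; the names rejection_sample and sample_g and the way the bound c is passed in are our own conventions, not from the notes:

    #include <stdlib.h>

    /* One uniform[0,1] draw. */
    static double uniform01(void) {
        return rand() / (double) RAND_MAX;
    }

    /* Rejection method: sample from density f using proposal density g,
     * where sample_g() draws from g and f(y)/g(y) <= c for all y. */
    double rejection_sample(double (*f)(double), double (*g)(double),
                            double (*sample_g)(void), double c) {
        for (;;) {
            double u = uniform01();
            double y = sample_g();
            if (u <= f(y) / (c * g(y)))  /* condition (1): accept */
                return y;
        }
    }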

We now show that this method works. We need to show that W has the correct distribution. We only output Y when the condition (1) is met. Thus

P{W ≤ w} = P{Y ≤ w | U ≤ f(Y)/(c g(Y))} = P{Y ≤ w and U ≤ f(Y)/(c g(Y))} / K,

where

K := P{U ≤ f(Y)/(c g(Y))}.

Now U and Y are independent, so the joint density function for (U, Y) is the product of the density of U and the density of Y:

f_{U,Y}(u, y) = g(y) if 0 ≤ u ≤ 1, and 0 otherwise.

Thus

P{Y ≤ w and U ≤ f(Y)/(c g(Y))}
  = ∫∫_{y ≤ w, u ≤ f(y)/(c g(y))} f_{U,Y}(u, y) du dy
  = ∫_{-∞}^w ∫_0^{f(y)/(c g(y))} g(y) du dy
  = ∫_{-∞}^w g(y) f(y)/(c g(y)) dy
  = (1/c) ∫_{-∞}^w f(y) dy
  = (1/c) P{X ≤ w}.

Thus

P{W ≤ w} = P{X ≤ w} / (cK).

Letting w → ∞ shows that cK = 1, and thus W has the same distribution function as X. Thus W has the correct distribution.

Let us see how to use this to generate a normal random variable. In this case, we will let Y be an exponential random variable. Instead of simulating a normal r.v., we will at first simulate the absolute value of a normal r.v. Such a r.v. has density

f(x) = (2/√(2π)) e^{-x²/2}, x ≥ 0.

Notice that Y has p.d.f. g(y) = e^{-y} for y ≥ 0, and

f(y)/g(y) = (2/√(2π)) e^{-(y² - 2y)/2} = √(2e/π) e^{-(y-1)²/2} ≤ √(2e/π).

Thus, we can set c = √(2e/π), and then

f(y)/(c g(y)) = exp(-(y-1)²/2).

Thus we have the following algorithm for generating the absolute value of a normal random variable:

1. Generate two uniform random variables U, V.
2. Set Y = -log(V).
3. If U < exp(-(Y-1)²/2), output Y. Otherwise go to step 1.
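In C this rejection scheme might be sketched as follows; attaching a random sign at the end, which turns |Z| into a standard normal Z, is the usual symmetrization step and is our addition:

    #include <stdlib.h>
    #include <math.h>

    static double uniform01(void) {
        return rand() / (double) RAND_MAX;
    }

    /* Rejection method for |Z|, Z standard normal, with an exponential proposal. */
    double sim_abs_normal(void) {
        for (;;) {
            double u = uniform01();
            double y = -log(uniform01());            /* Y = -log(V), exponential */
            if (u < exp(-0.5 * (y - 1.0) * (y - 1.0)))
                return y;                            /* accept */
        }
    }

    /* A random sign recovers a standard normal from its absolute value. */
    double sim_normal(void) {
        return (uniform01() < 0.5) ? sim_abs_normal() : -sim_abs_normal();
    }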

In particular cases, there can be clever ways to simulate random variables.

Example 1.1 (Two independent normals). Let (X, Y) be two independent standard normal variables. Thus, (X, Y) has a joint density

f_{X,Y}(x, y) = (1/(2π)) e^{-(x² + y²)/2}.

We will make use of the polar representation of (X, Y). In particular, if R² = X² + Y² and Θ = arctan(Y/X), then (X, Y) = (R cos Θ, R sin Θ). What is the joint distribution of (R², Θ)? We use the change of variable formula. The density is given by

f_{R²,Θ}(ρ, θ) = f_{X,Y}(√ρ cos θ, √ρ sin θ) |J_T(ρ, θ)|,

where J_T is the Jacobian of the transformation T(ρ, θ) = (√ρ cos θ, √ρ sin θ). Then

J_T(ρ, θ) = det( ∂_ρ(√ρ cos θ)   ∂_θ(√ρ cos θ) ; ∂_ρ(√ρ sin θ)   ∂_θ(√ρ sin θ) )
          = det( (1/(2√ρ)) cos θ   -√ρ sin θ ; (1/(2√ρ)) sin θ   √ρ cos θ )
          = (1/2) cos²θ + (1/2) sin²θ = 1/2.

Thus

f_{R²,Θ}(ρ, θ) = (1/(2π)) · ((1/2) e^{-ρ/2}),

where the first factor is f_Θ(θ) and the second is f_{R²}(ρ). We conclude that the joint density for (R², Θ) factors into the product of a density involving only θ and a density involving only ρ. Thus R² and Θ are independent random variables. Furthermore, from the densities we know their distributions: Θ is uniform on [0, 2π], and R² is exponential(1/2). From this we obtain the following algorithm for simulating two independent normal random variables:

1. Generate Θ, uniform on [0, 2π].
2. Generate R², an exponential(1/2) r.v.
3. Let (X, Y) = (R cos Θ, R sin Θ), and output (X, Y).
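This is the classical Box-Muller method. A C sketch follows; here -2 log V simulates an exponential(1/2) r.v. by the inverse transform method above:

    #include <stdlib.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    static double uniform01(void) {
        return rand() / (double) RAND_MAX;
    }

    /* Box-Muller: produce two independent standard normals. */
    void sim_two_normals(double *x, double *y) {
        double theta = 2.0 * M_PI * uniform01();  /* Theta, uniform on [0, 2*pi] */
        double rsq   = -2.0 * log(uniform01());   /* R^2, exponential(1/2) */
        double r     = sqrt(rsq);
        *x = r * cos(theta);
        *y = r * sin(theta);
    }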

2 Simulating Discrete Random Variables

A discrete random variable X takes values in a countable set Ω = {ω_1, ω_2, ...}. It has an associated probability mass function

p_X(ω_k) = P{X = ω_k}.

Here is a general algorithm for simulating a discrete random variable. Let

F_k = Σ_{j=1}^k p_X(ω_j), with F_0 = 0.

1. Generate U, a uniform random variable.
2. Initialize k = 0.
3. Replace k by k + 1.
4. If F_{k-1} < U < F_k, output ω_k and stop. Otherwise go to step 3.

This works because

P{F_{k-1} < U < F_k} = F_k - F_{k-1} = p_X(ω_k).
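A C sketch of this search, assuming the mass function is stored in an array p[0], ..., p[n-1] (a finite truncation of the countable Ω, which is our simplification):

    #include <stdlib.h>

    static double uniform01(void) {
        return rand() / (double) RAND_MAX;
    }

    /* Return index k in {0,...,n-1} with probability p[k] (the p[k] sum to 1):
     * scan until the partial sum F_k first exceeds U. */
    int sim_discrete(const double *p, int n) {
        double u = uniform01();
        double F = 0.0;             /* running partial sum F_k */
        for (int k = 0; k < n; k++) {
            F += p[k];
            if (u < F)
                return k;
        }
        return n - 1;               /* guard against floating-point rounding */
    }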

Let us see one example using this general procedure.

Example 2.1 (Geometric). A geometric(p) random variable takes values in {1, 2, ...} and has mass function p(k) = p(1-p)^{k-1}. It will be convenient to work with Q_k = 1 - F_k:

Q_k = Σ_{j=k+1}^∞ p(1-p)^{j-1} = p(1-p)^k Σ_{l=0}^∞ (1-p)^l = (1-p)^k.

Then, using the algorithm above, we output the smallest k so that F_{k-1} < U < F_k. Equivalently, this is the first k so that

Q_{k-1} > 1 - U > Q_k.

Notice that 1 - U is also uniform, so it is equivalent to generate a uniform U and output the smallest k so that U > (1-p)^k. Taking logarithms, we output the smallest k so that

log U > k log(1-p), that is, k > log U / log(1-p)

(the inequality flips because log(1-p) < 0). Another way to say this is that we output

⌊ log U / log(1-p) ⌋ + 1,

where ⌊x⌋ denotes the largest integer not exceeding x.
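This yields a one-line geometric(p) sampler; a C sketch (the guard against u = 0, where log blows up, is omitted for brevity):

    #include <stdlib.h>
    #include <math.h>

    /* Geometric(p) on {1, 2, ...}: floor(log U / log(1-p)) + 1. */
    int sim_geometric(double p) {
        double u = rand() / (double) RAND_MAX;
        return (int) floor(log(u) / log(1.0 - p)) + 1;
    }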

There are many clever tricks to simulate specific random variables that are faster than the general algorithm above. We discuss here one example.

Example 2.2 (Binomial). A r.v. X has the Binomial(n, p) distribution if its probability mass function is

p_X(k) = (n choose k) p^k (1-p)^{n-k}, k = 0, ..., n.

X can be represented as X = Σ_{k=1}^n I_k, where I_1, I_2, ..., I_n are independent Bernoulli(p) random variables. That is,

P{I_k = 1} = p, P{I_k = 0} = 1 - p.

Hence a naive method of simulating X is:

1. Simulate U_1, ..., U_n, uniform random variables.
2. For each k = 1, ..., n, set I_k = 1 if U_k < p and I_k = 0 if U_k ≥ p.
3. Output Σ_{k=1}^n I_k.

This method requires that we generate n uniform random variables. We now show another method which requires only a single uniform random variable. The basic observation is the following: if U is uniform on [0, 1], then

(i) given U < p, the distribution of U is uniform on [0, p];
(ii) given U ≥ p, the distribution of U is uniform on [p, 1].

We now show that (i) holds. Suppose that 0 < u < p. Then

P{U ≤ u | U < p} = P{U ≤ u and U < p} / P{U < p} = P{U ≤ u} / p = u/p,

which is the distribution function of a uniform on [0, p]. (ii) follows by a similar argument. This leads us to consider the following algorithm (a C sketch appears after the steps):

1. Generate a uniform U.
2. Initialize k = 0.
3. Let k = k + 1.
4. If U < p, then:
   (a) Set I_k = 1.
   (b) Replace U by U/p.
5. If U ≥ p, then:
   (a) Set I_k = 0.
   (b) Replace U by (U - p)/(1 - p).
6. If k = n, output Σ_{j=1}^n I_j and stop.
7. Go to step 3.
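A C sketch of this single-uniform method; note that in floating-point arithmetic the repeated rescaling loses precision, so for large n this is illustrative rather than production-quality:

    #include <stdlib.h>

    /* Binomial(n, p) from a single uniform: repeatedly condition on {U < p}
     * or {U >= p} and rescale U back to a uniform on [0, 1]. */
    int sim_binomial(int n, double p) {
        double u = rand() / (double) RAND_MAX;
        int x = 0;
        for (int k = 1; k <= n; k++) {
            if (u < p) {
                x += 1;                    /* I_k = 1 */
                u = u / p;                 /* U given U < p, rescaled */
            } else {
                u = (u - p) / (1.0 - p);   /* I_k = 0; U given U >= p, rescaled */
            }
        }
        return x;
    }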

3 Markov Chains

Definition 3.1. A matrix P is stochastic if

P(i, j) ≥ 0 and Σ_j P(i, j) = 1.

Definition 3.2. A collection of random variables {X_0, X_1, ...} is a Markov chain if there exists a stochastic matrix P so that

P{X_{n+1} = k | X_0 = j_0, X_1 = j_1, ..., X_n = j} = P(j, k).

If µ(k) = P{X_0 = k}, we can write down the probability of any event:

P{X_0 = i_0, X_1 = i_1, ..., X_n = i_n} = µ(i_0) P(i_0, i_1) ··· P(i_{n-1}, i_n).

Notice that the future position (X_{n+1}) depends on the past (X_0, X_1, ..., X_n) only through the current position (X_n).

Example 3.3 (Random Walk). Let {D_k} be an i.i.d. sequence of {-1, +1}-valued random variables, with

P{D_k = +1} = p, P{D_k = -1} = 1 - p.

Then let S_n = Σ_{k=1}^n D_k. S_n takes values in {..., -2, -1, 0, 1, 2, ...} and at each unit of time either increases or decreases by 1. It increases with probability p. The reader should carefully verify that {S_n} is a Markov chain with transition matrix

P(j, k) = p if k = j + 1, 1 - p if k = j - 1, and 0 otherwise.
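Tying this back to simulation: for a finite state space, one step of the chain is just a discrete sample from the current row of P. A sketch in C, with P stored row-major (that storage convention, and the function names, are ours):

    #include <stdlib.h>

    static double uniform01(void) {
        return rand() / (double) RAND_MAX;
    }

    /* One step of a finite-state Markov chain: sample the next state
     * from row P[state] of an n-state stochastic matrix. */
    int chain_step(const double *P, int n, int state) {
        double u = uniform01();
        double F = 0.0;
        for (int k = 0; k < n; k++) {
            F += P[state * n + k];   /* partial sums along the current row */
            if (u < F)
                return k;
        }
        return n - 1;
    }

    /* Run the chain for `steps` units of time from initial state x0. */
    int chain_run(const double *P, int n, int x0, int steps) {
        int x = x0;
        for (int t = 0; t < steps; t++)
            x = chain_step(P, n, x);
        return x;
    }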

Example 3.4 (Ehrenfest Model of Diffusion). Suppose N molecules are contained in two containers. At each unit of time, a molecule is selected at random and moved from its container to the other. Let X_n be the number of molecules in the first container at time n. Then

P(k, k+1) = 1 - k/N, P(k, k-1) = k/N.

In other words, on the states 0, 1, ..., N, P is the tridiagonal matrix

P = (  0      1                              )
    ( 1/N     0     (N-1)/N                  )
    (        2/N       0      (N-2)/N        )
    (               ⋱         ⋱        ⋱    )
    (                          1        0    )

(blank entries are 0).

Example 3.5 (Bernoulli-Laplace Model of Diffusion). There are two urns, each always holding N particles. There are a total of N white particles and a total of N black particles. At each unit of time, a particle is chosen from each urn and the two are interchanged. Let X_n be the number of white particles in the first urn. Then

P(k, k+1) = (1 - k/N)²,
P(k, k-1) = (k/N)²,
P(k, k) = 2 (k/N)(1 - k/N).

3.1 n-step transition probabilities

Our first goal is to show that

P{X_{n+m} = k | X_m = j} = P^n(j, k),    (2)

where P^n(j, k) is the (j, k)th entry of the nth matrix power of P. Define M_{m,n} to be the matrix with (j, k) entry equal to the left-hand side of (2). Then

P{X_{m+n} = k | X_m = j} = Σ_l P{X_{m+n} = k | X_m = j, X_{m+n-1} = l} P{X_{m+n-1} = l | X_m = j}
                         = Σ_l P(l, k) M_{m,n-1}(j, l).

Thus M_{m,n} = M_{m,n-1} P. Notice that M_{m,0} is the identity matrix I. Thus, we can continue to get

M_{m,n} = M_{m,0} P ··· P (n factors) = P^n.
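For a finite state space, the identity M_{m,n} = P^n can be checked numerically by forming matrix powers; a small C sketch (the fixed size NSTATES and all names here are ours):

    #define NSTATES 4   /* example size; adjust as needed */

    /* C = A * B for NSTATES x NSTATES matrices. */
    static void mat_mul(double A[][NSTATES], double B[][NSTATES],
                        double C[][NSTATES]) {
        for (int i = 0; i < NSTATES; i++)
            for (int j = 0; j < NSTATES; j++) {
                C[i][j] = 0.0;
                for (int l = 0; l < NSTATES; l++)
                    C[i][j] += A[i][l] * B[l][j];
            }
    }

    /* Pn = P^n by repeated multiplication, starting from M_{m,0} = I. */
    void mat_power(double P[][NSTATES], int n, double Pn[][NSTATES]) {
        double tmp[NSTATES][NSTATES];
        for (int i = 0; i < NSTATES; i++)
            for (int j = 0; j < NSTATES; j++)
                Pn[i][j] = (i == j) ? 1.0 : 0.0;   /* identity */
        for (int t = 0; t < n; t++) {
            mat_mul(Pn, P, tmp);
            for (int i = 0; i < NSTATES; i++)
                for (int j = 0; j < NSTATES; j++)
                    Pn[i][j] = tmp[i][j];
        }
    }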

Let µ be the row vector (µ(1), µ(2), ...). Suppose that X_0 has the distribution P{X_0 = k} = µ(k). Then the distribution at time n is given by

P{X_n = k} = Σ_l P{X_n = k | X_0 = l} P{X_0 = l} = Σ_l µ(l) P^n(l, k).

In other words, the distribution at time n is given by the row vector µP^n.
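Rather than forming P^n explicitly, one can propagate the row vector µ one step at a time; n applications of the sketch below compute µP^n with O(n · NSTATES²) work (same conventions as the previous sketch):

    #define NSTATES 4   /* must match the transition matrix */

    /* Overwrite mu with mu * P: one time step of the distribution. */
    void step_distribution(double mu[NSTATES], double P[][NSTATES]) {
        double next[NSTATES] = {0.0};
        for (int l = 0; l < NSTATES; l++)
            for (int k = 0; k < NSTATES; k++)
                next[k] += mu[l] * P[l][k];   /* sum over l of mu(l) P(l,k) */
        for (int k = 0; k < NSTATES; k++)
            mu[k] = next[k];
    }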
