Nonuniform Random Variate Generation


Suppose we have a generator of i.i.d. U(0, 1) random variables. We want to generate random variables from other distributions, such as normal, Weibull, Poisson, etc., or other random objects: stochastic processes, points on a sphere, random matrices, random trees, etc.

Main references:
1. L. Devroye, Non-Uniform Random Variate Generation, Springer-Verlag, 1986.
2. W. Hörmann, J. Leydold, G. Derflinger, Automatic Nonuniform Random Variate Generation, Springer-Verlag, 2004.

Desired properties

- Correct method (or a very good approximation).
- Simple: easy to understand and to implement.
- Fast: initialization ("setup") time, if needed, and marginal time per call. Sometimes a compromise between the two.
- Memory required: one can often increase the speed a lot by precomputing and storing large tables.
- Robust: the algorithm must be accurate and efficient for all parameter values of interest for the distribution.
- Compatible with variance reduction methods. For example, we prefer inversion because it facilitates synchronization when comparing systems, or when we want to use control variates or quasi-Monte Carlo. We may want to sacrifice some speed to preserve inversion; variance reduction can often gain back much more than what was sacrificed.

Inversion

To generate X with distribution function F, let U ∼ U(0, 1) and

  X = F^{-1}(U) = min{x : F(x) ≥ U}.

Then P[X ≤ x] = P[F^{-1}(U) ≤ x] = P[U ≤ F(x)] = F(x), that is, X has the right distribution function.

Advantage of inversion: a monotone transformation, a single U for each X.
Disadvantage: for certain distributions, F is very difficult to invert. But we can still approximate F^{-1} numerically.
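
As a small illustration (not in the original notes), here is a Python sketch of inversion for a case where F^{-1} has a closed form, the exponential distribution with rate λ; the function name and the sample-mean check are ours.

```python
import math
import random

def exponential_by_inversion(lam, rng=random.random):
    """Generate one Exponential(rate=lam) variate by inversion.
    F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -ln(1 - u)/lam."""
    u = rng()                      # U ~ U(0, 1)
    return -math.log1p(-u) / lam   # log1p(-u) = ln(1 - u), numerically safer near u = 0

# Quick check: the sample mean should be close to 1/lam = 0.5.
sample = [exponential_by_inversion(2.0) for _ in range(100_000)]
print(sum(sample) / len(sample))
```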

Example: the normal distribution

If Z ∼ N(0, 1), then X = σZ + µ ∼ N(µ, σ²). Thus, we only need to generate from the N(0, 1), whose density is f(x) = (2π)^{-1/2} e^{-x²/2}. There is no closed formula for F(x) or F^{-1}(x). But we know that for large x, F(x) resembles F̃(x) = 1 - e^{-x²/2}, whose inverse is F̃^{-1}(u) = √(-2 ln(1 - u)).

Idea: for x > 0, write x = F^{-1}(u) as y = F̃^{-1}(u) plus a correction term, and then approximate the correction term by a Chebyshev rational approximation. Blair, Edwards, and Johnson (1976) obtain 52 bits of accuracy with this method. If U < 1/2 (x < 0), use symmetry: compute X for 1 - U and return -X.

For chi-square, gamma, beta, etc., things get more complicated because the shape of F^{-1} depends on the parameters.
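
For reference (not in the original notes), an accurate numerical inverse of the standard normal cdf is available in Python's standard library; a minimal sketch, assuming statistics.NormalDist is acceptable as the F^{-1} approximation:

```python
import random
from statistics import NormalDist

_STD_NORMAL = NormalDist()          # standard normal, with an accurate numerical inv_cdf

def normal_by_inversion(mu=0.0, sigma=1.0, rng=random.random):
    """Generate X ~ N(mu, sigma^2) by inversion: X = mu + sigma * F^{-1}(U)."""
    u = rng()
    while u == 0.0:                 # inv_cdf requires 0 < u < 1
        u = rng()
    return mu + sigma * _STD_NORMAL.inv_cdf(u)

print(normal_by_inversion(5.0, 2.0))
```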

Inversion by root finding for continuous distributions

Sometimes we know how to compute F, but not F^{-1}. For a given U, we seek X such that F(X) - U = 0. Bounded interval [x_min, x_max]: F(x_min) = 0 and F(x_max) = 1.

Binary search for a continuous distribution:
  generate U ∼ U(0, 1);
  let x1 = x_min and x2 = x_max;
  while (x2 - x1 > ɛ_x) and (F(x2) - F(x1) > ɛ_u) do
    x = (x1 + x2)/2;
    if F(x) < U then x1 = x else x2 = x;
    /* Here, X always belongs to [x1, x2]. */
  return x = (x1 + x2)/2.

At each iteration, we gain 1 bit of accuracy on the root; that is, the error is divided by 2.
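
A direct Python transcription of this bisection search (not in the original notes); the exponential cdf used for the check is an arbitrary example.

```python
import math
import random

def invert_by_bisection(cdf, u, x1, x2, eps_x=1e-12, eps_u=1e-14):
    """Approximate F^{-1}(u) by bisection on [x1, x2], assuming cdf is nondecreasing
    with cdf(x1) <= u <= cdf(x2)."""
    while (x2 - x1 > eps_x) and (cdf(x2) - cdf(x1) > eps_u):
        x = 0.5 * (x1 + x2)
        if cdf(x) < u:
            x1 = x
        else:
            x2 = x
    return 0.5 * (x1 + x2)

# Check against the exact inverse of the standard exponential, -ln(1 - u).
cdf = lambda x: 1.0 - math.exp(-x)
u = random.random()
print(invert_by_bisection(cdf, u, 0.0, 50.0), -math.log1p(-u))
```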

What if the support of X is unbounded?

Choose x_min < x_max such that: x_min < 0 if F(0) > 0, x_max > 0 if F(0) < 1, F(x_min) is near 0, and F(x_max) is near 1.

Binary search for a continuous distribution (unbounded support):
  generate U ∼ U(0, 1);
  let x1 = x_min and x2 = x_max;
  while F(x2) < U do x1 = x2 and x2 = 2 x2;   /* valid if x2 > 0 */
  while F(x1) > U do x2 = x1 and x1 = 2 x1;   /* valid if x1 < 0 */
  while (x2 - x1 > ɛ_x) and (F(x2) - F(x1) > ɛ_u) do
    x = (x1 + x2)/2;
    if F(x) < U then x1 = x else x2 = x;
    /* Invariant: at this stage, X always belongs to [x1, x2]. */
  return x = (x1 + x2)/2.

Other root-finding methods

Newton-Raphson:
  generate U ∼ U(0, 1);
  let x = x_m (a starting point);
  while |F(x) - U| > ɛ_u do
    x = x - (F(x) - U)/f(x);
  return x.

If f is bounded and monotone, plus other conditions: quadratic convergence; the error is (approximately) squared at each iteration, when we are near the solution. But beware: in general the method may diverge.

Also: regula falsi, secant, Brent-Dekker algorithm.

To speed up the search, we can precompute and store in a table the values x_s = F^{-1}(s/c) for s = 0, ..., c. If c = 2^e, then the first e bits of U tell us directly which table entry to use, i.e., in which small interval [x_s, x_{s+1}] we can restrict the search. We can also just interpolate F^{-1} in each of those intervals (less accurate but faster).
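
A Newton-Raphson inversion sketch in Python (not in the original notes); the logistic distribution is used only because its exact inverse is available for comparison, and max_iter is our own safeguard against divergence.

```python
import math
import random

def invert_by_newton(cdf, pdf, u, x0, eps_u=1e-12, max_iter=100):
    """Approximate F^{-1}(u) by Newton-Raphson: x <- x - (F(x) - u)/f(x).
    Needs a reasonable starting point x0; the iteration cap guards against divergence."""
    x = x0
    for _ in range(max_iter):
        err = cdf(x) - u
        if abs(err) <= eps_u:
            break
        x -= err / pdf(x)
    return x

# Logistic distribution: F(x) = 1/(1 + e^{-x}), f = F(1 - F), F^{-1}(u) = ln(u/(1-u)).
cdf = lambda x: 1.0 / (1.0 + math.exp(-x))
pdf = lambda x: cdf(x) * (1.0 - cdf(x))
u = random.uniform(1e-6, 1.0 - 1e-6)   # avoid the extremes for the closed-form comparison
print(invert_by_newton(cdf, pdf, u, 0.0), math.log(u / (1.0 - u)))
```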

Inversion for discrete distributions

Let p(x_i) = P[X = x_i] for i = 0, 1, ..., k-1, and F(x) = Σ_{x_i ≤ x} p(x_i). Generate U, find I = min{i : F(x_i) ≥ U}, and return x_I.

Sequential search:
  generate U ∼ U(0, 1);
  let i = m (a starting index);
  while F(x_i) < U do i = i + 1;
  while F(x_{i-1}) ≥ U do i = i - 1;
  return x_i.

[Figure: the points x_0, x_1, x_2, ..., x_{k-1} with their cumulative probabilities F(x_0), F(x_1), ..., F(x_{k-1}).]

Requires O(k) iterations in the worst case. Example: X ∼ Poisson(2500) (discuss).
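
A plain-Python version of sequential search (not in the original notes), starting from i = 0 rather than from a central index m; the small three-point distribution is only for illustration.

```python
import random

def discrete_by_sequential_search(values, probs, rng=random.random):
    """Inversion for a discrete distribution by sequential search from i = 0.
    values[i] has probability probs[i]; probs must sum to 1."""
    u = rng()
    cumulative = 0.0
    for x, p in zip(values, probs):
        cumulative += p
        if cumulative >= u:            # return x_I with I = min{i : F(x_i) >= U}
            return x
    return values[-1]                  # guard against floating-point round-off

print(discrete_by_sequential_search([10, 20, 30], [0.2, 0.5, 0.3]))
```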

Binary search (requires ⌊log₂ k⌋ or ⌈log₂ k⌉ iterations):
  generate U ∼ U(0, 1);
  let i = 0 and j = k;
  while i < j - 1 do
    m = ⌊(i + j)/2⌋;
    if F(x_{m-1}) < U then i = m else j = m;
    /* Invariant: at this stage, I is in {i, ..., j - 1}. */
  return x_i.

Requires about ⌈log₂ k⌉ iterations in the worst case and also on average, but a bit more work per iteration than the sequential method. Comparison?

If k = ∞, we start with a finite interval and enlarge it when needed. Example: Poisson distribution.
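
The same search can lean on Python's bisect module once the cdf table is stored (not in the original notes; in practice the table would be precomputed once in a setup step):

```python
import bisect
import random
from itertools import accumulate

def discrete_by_binary_search(values, probs, rng=random.random):
    """Inversion for a discrete distribution via binary search on the stored cdf table."""
    cdf = list(accumulate(probs))          # F(x_0), F(x_1), ..., F(x_{k-1})
    u = rng()
    i = bisect.bisect_left(cdf, u)         # smallest i with F(x_i) >= u
    return values[min(i, len(values) - 1)]

print(discrete_by_binary_search([10, 20, 30], [0.2, 0.5, 0.3]))
```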

Index search

We partition (0, 1) into c intervals of length 1/c. Let i_s = inf{i : F(x_i) ≥ s/c} for s = 0, ..., c. If s = ⌊cU⌋, then U ∈ [s/c, (s+1)/c) and we have I ∈ {i_s, ..., i_{s+1}}. Then it suffices to search in this interval, using sequential or binary search.

Index search (combined with sequential search):
  generate U ∼ U(0, 1);
  let s = ⌊cU⌋ and i = i_s;
  while F(x_i) < U do i = i + 1;
  return x_i.

The expected number of iterations in the while loop is approximately k/c. If we choose c ≈ k or c ≈ 2k, for example, then we obtain a very fast algorithm. There is a price to pay in terms of memory usage.
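
A sketch of the index (guide-table) search in Python, not from the original notes; the class name and the default choice c = 2k are ours.

```python
import random
from itertools import accumulate

class IndexedDiscreteGenerator:
    """Index ("guide table") search for discrete inversion.
    The setup stores i_s = min{i : F(x_i) >= s/c} for s = 0, ..., c."""

    def __init__(self, values, probs, c=None):
        self.values = values
        self.cdf = list(accumulate(probs))
        self.cdf[-1] = 1.0                      # guard against floating-point round-off
        self.c = c if c is not None else 2 * len(values)
        self.guide, i = [], 0
        for s in range(self.c + 1):
            while self.cdf[i] < s / self.c:
                i += 1
            self.guide.append(i)

    def sample(self, rng=random.random):
        u = rng()
        i = self.guide[int(self.c * u)]         # start at i_s with s = floor(c*U)
        while self.cdf[i] < u:                  # short sequential search
            i += 1
        return self.values[i]

gen = IndexedDiscreteGenerator([10, 20, 30], [0.2, 0.5, 0.3])
print(gen.sample())
```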

The rejection method

Most important technique after inversion. May provide an efficient method when inversion is too difficult or too costly. We want to generate X from density f. The region under f is

  S(f) = {(x, y) ∈ R² : 0 ≤ y ≤ f(x)}.

Proposition. If (X, Y) is uniform over S(f), then X has density f.
Proof. If (X, Y) is uniform over S(f), then P[X ≤ x] equals the area of {(z, y) ∈ S(f) : z ≤ x}, which is ∫_{-∞}^{x} f(z) dz.

We will generate (X, Y) uniformly over S(f). But how, if S(f) is complicated? Idea: choose a simple region B that contains S(f), then generate (X, Y) uniformly in B. If (X, Y) ∈ S(f), fine; otherwise start again. We will show that the retained point (X, Y) has a uniform distribution over S(f).

Example. We want to generate X ∼ Beta(3, 2), with density f(x) = 12x²(1 - x) on (0, 1). The density is maximal at x = 2/3, where f(2/3) = 16/9. Then we can take B = {(x, y) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 16/9} (a rectangle).

[Figure: the density f(x) on (0, 1) under the constant hat h(x) = a = 16/9, with a candidate point (X, aV).]

To generate a point in B, we generate two independent uniforms U and V, and we put (X, Y) = (U, aV) where a = 16/9. The probability that the point falls in S(f) is 1/a = 9/16. The expected number of points (X, Y) that need to be generated per accepted X is a = 16/9.
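
A minimal Python sketch of this box-rejection scheme for Beta(3, 2) (not in the original notes); the function name and the sample-mean check against E[X] = 3/5 are ours.

```python
import random

def beta32_by_rejection(rng=random.random):
    """Generate X ~ Beta(3, 2) by rejection from the box [0, 1] x [0, a], a = 16/9.
    f(x) = 12 x^2 (1 - x) on (0, 1), maximal at x = 2/3 where f(2/3) = 16/9."""
    a = 16.0 / 9.0
    while True:
        x = rng()                          # candidate X ~ U(0, 1)
        y = a * rng()                      # Y = aV, so (X, Y) is uniform in the box
        if y <= 12.0 * x * x * (1.0 - x):
            return x                       # accepted with probability 1/a = 9/16

sample = [beta32_by_rejection() for _ in range(100_000)]
print(sum(sample) / len(sample))           # should be close to E[X] = 3/5 = 0.6
```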

General rejection method

We want to generate a point uniformly in a set A ⊆ R^d. We choose a simpler set B such that A ⊆ B. We generate points independently in B, and we retain the first one that falls in A.

Proposition. The retained point has the uniform distribution over A.
Proof. For each D ⊆ A, we have P[X ∈ D] = vol(D)/vol(B) and therefore

  P[X ∈ D | X ∈ A] = P[X ∈ D ∩ A] / P[X ∈ A] = (vol(D)/vol(B)) / (vol(A)/vol(B)) = vol(D)/vol(A).

Thus, the law of X conditional on X ∈ A is uniform over A.

Rejection with a hat function

To generate X from density f, choose another density g and a constant a ≥ 1 such that f(x) ≤ h(x) := a g(x) for all x, and for which it is easy to generate X from the density g. The function h is the hat function. We apply the rejection method with A = S(f) and B = S(h) = {(x, y) ∈ R² : 0 ≤ y ≤ h(x)}, the region under h.

Rejection algorithm:
  repeat
    generate X from the density g and V ∼ U(0, 1), independent;
  until V h(X) ≤ f(X);
  return X.

Proposition. The returned random variable X has density f.

At each iteration of the loop, the probability of accepting X is 1/a. The number R of iterations until acceptance is thus a geometric random variable with parameter p = 1/a, and the expected number of iterations is 1/p = a. So we want a ≥ 1 to be as small as possible: a compromise between reducing a and keeping g simple.
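
A generic Python sketch of rejection with a hat function (not in the original notes); the half-normal target with an exponential proposal and a = √(2e/π) is an illustrative choice of f, g, and a, not an example from the notes.

```python
import math
import random

def rejection_with_hat(f, sample_g, g, a, rng=random.random):
    """Rejection sampling: f is the target density, g a proposal density with sampler
    sample_g, and a >= 1 a constant such that f(x) <= a * g(x) for all x."""
    while True:
        x = sample_g()
        v = rng()
        if v * a * g(x) <= f(x):           # accept with probability f(x) / (a g(x))
            return x

# Half-normal target, exponential(1) proposal; f(x)/g(x) is maximized at x = 1,
# so a = sqrt(2e/pi) ~ 1.32 gives f(x) <= a g(x) for all x >= 0.
f = lambda x: math.sqrt(2.0 / math.pi) * math.exp(-0.5 * x * x)
g = lambda x: math.exp(-x)
sample_g = lambda: -math.log(1.0 - random.random())
a = math.sqrt(2.0 * math.e / math.pi)
print(rejection_with_hat(f, sample_g, g, a))
```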

Example: X ∼ Beta(3, 2) (continued). To reduce a, we can take the piecewise-constant hat function

  h(x) = f(x1) for x < x1;  16/9 for x1 ≤ x ≤ x2;  f(x2) for x > x2,

where x1 and x2 satisfy 0 < x1 < 2/3 < x2 < 1.

[Figure: the density f(x) under the three-piece constant hat h(x), with rejection regions R1 and R2.]

The area under h is minimized by taking x1 = … and x2 = …; it is then reduced from … to …. The inverse distribution function of g is piecewise linear (easy to generate from). Why not take h piecewise linear instead of piecewise constant? Computing G^{-1} would then require square roots: much slower.

Lévy process

Y = {Y(t), t ≥ 0} is a Lévy process if its increments are stationary and independent. That is, for disjoint intervals (t_{2j-1}, t_{2j}], j = 1, 2, ..., the r.v.'s X_j = Y(t_{2j}) - Y(t_{2j-1}) are independent and the law of X_j depends only on t_{2j} - t_{2j-1}.

Examples: Poisson process, Brownian motion, gamma process, normal inverse Gaussian process, etc.

Every Lévy process can be written as the sum of a Brownian motion and a jump process, with random jump times and sizes. If the jump rate λ (expected number of jumps per unit of time) is finite, then we can write

  Y(t) = µt + σB(t) + Σ_{j=1}^{N(t)} D_j   for t ≥ 0,

where B is a standard Brownian motion, N is a Poisson process with rate λ, and the D_j are i.i.d. r.v.'s, independent of B and N. This is easy to simulate if we know how to generate the D_j.
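
A simulation sketch for this finite-jump-rate case (not in the original notes); the parameter values and the normal jump-size distribution are arbitrary illustrative choices.

```python
import math
import random

def poisson_variate(mean, rng=random):
    """Poisson variate by sequential inversion (adequate for moderate means)."""
    u, i, p = rng.random(), 0, math.exp(-mean)
    cdf = p
    while cdf < u:
        i += 1
        p *= mean / i
        cdf += p
    return i

def levy_increment(dt, mu, sigma, lam, sample_D, rng=random):
    """One increment of Y(t) = mu*t + sigma*B(t) + sum_{j<=N(t)} D_j over a step of length dt."""
    incr = mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    n_jumps = poisson_variate(lam * dt, rng)
    return incr + sum(sample_D() for _ in range(n_jumps))

# Example path over [0, 1] in 10 steps, with jumps D_j ~ N(0, 0.1^2).
path, y = [0.0], 0.0
for _ in range(10):
    y += levy_increment(0.1, mu=0.05, sigma=0.2, lam=3.0,
                        sample_D=lambda: random.gauss(0.0, 0.1))
    path.append(y)
print(path)
```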

For the cases where λ = ∞, see Asmussen and Glynn (2007).

Suppose we know how to generate the increment Y(t) for any t. Then we can generate the process at the observation times 0 = t_0 < t_1 < ··· < t_c via the random walk method: just generate the increments Y(t_j) - Y(t_{j-1}), for j = 1, ..., c, successively.

In certain cases, for t_1 < s < t_2, we know how to generate Y(s) from its distribution conditional on (Y(t_1), Y(t_2)). Then we can simulate the trajectory of Y over [0, t] by successive refinements ("Lévy bridge sampling"): first generate Y(t); then Y(t/2) conditionally on (Y(0), Y(t)); then Y(t/4) conditionally on (Y(0), Y(t/2)); then Y(3t/4) conditionally on (Y(t/2), Y(t)); then Y(t/8) conditionally on (Y(0), Y(t/4)); etc. We know how to do that for Poisson, Brownian, and gamma processes, for example. Discuss advantages.
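
A Brownian-bridge refinement sketch in Python (not in the original notes), using the standard conditional normal law for B(s) given B(t1) and B(t2); the function name and the example grid are ours.

```python
import math
import random

def brownian_bridge_point(t1, y1, t2, y2, s, sigma=1.0, rng=random):
    """Sample B(s) given B(t1) = y1 and B(t2) = y2, for t1 < s < t2.
    The conditional law is normal with the bridge mean and variance below."""
    mean = y1 + (s - t1) / (t2 - t1) * (y2 - y1)
    var = sigma ** 2 * (s - t1) * (t2 - s) / (t2 - t1)
    return rng.gauss(mean, math.sqrt(var))

# Successive refinement on [0, 1]: endpoint first, then midpoints.
rng = random.Random(42)
b0, b1 = 0.0, rng.gauss(0.0, 1.0)                       # B(0), B(1)
b_half = brownian_bridge_point(0.0, b0, 1.0, b1, 0.5, rng=rng)
b_quarter = brownian_bridge_point(0.0, b0, 0.5, b_half, 0.25, rng=rng)
print(b1, b_half, b_quarter)
```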

Random time change

We have a process X = {X(t), t ≥ 0}, and another non-decreasing process T = {T(t), t ≥ 0}, with T(0) = 0, called a subordinator; it can be a gamma process, for example. We replace X(t) by Y(t) = X(T(t)) for each t, to obtain a new process Y = {Y(t), t ≥ 0}. This corresponds to a random nonlinear change of the time scale. If X and T are Lévy processes, then Y is also a Lévy process.

If X is a Brownian motion, then this is equivalent to replacing the volatility parameter σ by a stochastic volatility process {σ(t), t ≥ 0}.
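
A sketch of Brownian motion subordinated by a gamma process (not in the original notes); the variance parameter nu and the other constants are illustrative, and the gamma increments are drawn with Python's random.gammavariate.

```python
import math
import random

def subordinated_bm_path(n_steps, dt, nu=0.2, sigma=1.0, mu=0.0, rng=random):
    """Path of Y(t) = X(T(t)), where X(t) = mu*t + sigma*B(t) and T is a gamma process
    whose increments have mean dt and variance nu*dt, i.e. Gamma(shape=dt/nu, scale=nu)."""
    path, y = [0.0], 0.0
    for _ in range(n_steps):
        dT = rng.gammavariate(dt / nu, nu)                           # subordinator increment
        y += mu * dT + sigma * math.sqrt(dT) * rng.gauss(0.0, 1.0)   # X increment over time dT
        path.append(y)
    return path

print(subordinated_bm_path(10, 0.1))
```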

Poisson process

A stationary Poisson process with rate λ is a Lévy process whose increment over an interval of length t has a Poisson distribution with mean λt. Equivalently, it is a counting process {N(t), t ≥ 0} with jumps of size 1 at times 0 < T_1 ≤ T_2 ≤ ···, where the interarrival times A_j = T_j - T_{j-1} are independent exponential r.v.'s with mean 1/λ.

The easiest way to generate the process is to generate these A_j. Intuition: we have a Poisson process when the events (jump times) occur at random, independently of each other.

For a nonstationary Poisson process: use a nonlinear change of the time scale.
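
A Python sketch of this interarrival-time construction (not in the original notes); the rate and horizon in the example are arbitrary.

```python
import math
import random

def poisson_jump_times(lam, horizon, rng=random.random):
    """Jump times of a stationary Poisson process with rate lam on [0, horizon],
    built from i.i.d. exponential interarrival times A_j with mean 1/lam."""
    times, t = [], 0.0
    while True:
        t += -math.log(1.0 - rng()) / lam    # A_j by inversion of the exponential cdf
        if t > horizon:
            return times
        times.append(t)

print(poisson_jump_times(2.0, 5.0))
```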

Thinning a Poisson process

This is a rejection method. Suppose we have a non-stationary Poisson process with rate function λ(t), t ≥ 0. Let λ̄ be a constant such that λ(t) ≤ λ̄ for all t.

Thinning algorithm: generate pseudo-jumps from a Poisson process with rate λ̄. If there is a pseudo-jump at time t, accept it with probability λ(t)/λ̄ (it becomes a real jump); otherwise just discard it.
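
A thinning sketch in Python (not in the original notes); the sinusoidal rate function and the bound λ̄ = 3 are illustrative.

```python
import math
import random

def thinned_poisson_times(rate_fn, lam_bar, horizon, rng=random.random):
    """Jump times of a non-stationary Poisson process with rate function rate_fn on
    [0, horizon], by thinning a rate-lam_bar process; requires rate_fn(t) <= lam_bar."""
    times, t = [], 0.0
    while True:
        t += -math.log(1.0 - rng()) / lam_bar      # next pseudo-jump
        if t > horizon:
            return times
        if rng() <= rate_fn(t) / lam_bar:          # keep it with probability rate_fn(t)/lam_bar
            times.append(t)

print(thinned_poisson_times(lambda t: 2.0 + math.sin(t), 3.0, 10.0))
```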
