Efficient Monte Carlo for Gaussian Fields and Processes


1 Efficient Monte Carlo for Gaussian Fields and Processes Jose Blanchet (with R. Adler, J. C. Liu, and C. Li), Columbia University, Nov 2010. Jose Blanchet (Columbia), Monte Carlo for Gaussian Fields, Nov 2010 (29 slides)

2 Outline 1 Introduction 2 Importance Sampling and Efficiency 3 Example 1: Maximum of Gaussian Process 4 Example 2: Gaussian Random Fields

3 A Motivating Application Contamination level in a geographic area... (yellow area = Mexico City)

7 A Motivating Application When ground-level ozone reaches 270 parts per billion, Mexico City bans the use of a significant portion of cars, orders industry to reduce emissions by 50%, and closes gas stations... What is the chance that high levels are reached in a given area? What happens to other areas given that a high level has been reached somewhere? X(t, s) = logarithm of the ozone concentration at position s (in the yellow area) at time t... Adler, Blanchet and Liu (2010): an algorithm for conditional sampling with ε relative precision in time Poly{ε^(-1) log[1/P(high excursion)]}

9 A Motivating Application Another motivating application: queues with Gaussian input, Mandjes (2007): u(b) := P(max_{k≥0} X(k) > b), where X(·) is a suitably defined Gaussian process. Blanchet and Li (2010): an algorithm for conditional sampling with ε relative precision in time Poly{ε^(-1) log[1/u(b)]}


14 Importance Sampling Goal: estimate u := P(A) > 0. Choose P̃ and simulate ω from it. The I.S. estimator per trial is Z = L(ω) I(ω ∈ A), where L(ω) is the likelihood ratio (i.e., L(ω) = P(ω)/P̃(ω)). P̃ is called a change of measure.
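To make the definitions concrete, here is a minimal warm-up sketch (ours, not from the slides; numpy and scipy assumed) estimating the Gaussian tail u = P(N(0,1) > b) with the mean-shifted change of measure P̃ = N(b, 1):

```python
import numpy as np
from scipy.stats import norm

def is_tail_estimate(b, n, seed=0):
    """Importance sampling for u = P(N(0,1) > b): simulate omega from
    P~ = N(b, 1) and average Z = L(omega) * I(omega > b)."""
    rng = np.random.default_rng(seed)
    x = b + rng.standard_normal(n)          # draws from P~
    # likelihood ratio L(x) = phi(x) / phi(x - b) = exp(b^2/2 - b*x)
    z = np.exp(0.5 * b**2 - b * x) * (x > b)
    return z.mean()

b = 4.0
u_hat = is_tail_estimate(b, 200_000)
print(u_hat, norm.sf(b))                    # both close to 3.17e-5
```

Crude Monte Carlo would need on the order of 1/u ≈ 30,000 draws just to see one event at b = 4; the shifted sampler hits the event on roughly half its draws.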

16 Importance Sampling Must apply I.S. with care: it can increase the variance. Glasserman and Kou '95; Asmussen, Binswanger and Hojgaard '00.

21 Importance Sampling Goal: choose P̃ simulatable and efficient as P(A) ↘ 0. Strong efficiency: ẼZ²/P(A)² = O(1). Weak efficiency: ẼZ²/P(A)² = O(1/P(A)^ε) for every ε > 0. Polynomial efficiency: ẼZ²/P(A)² = O(log(1/P(A))^q) for some q > 0. Strong implies Polynomial implies Weak.
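A back-of-the-envelope sketch of what these rates buy (ours, not from the slides; the strong-efficiency constant c = 10 is an arbitrary assumption): the number of replications needed for a 10% relative standard error grows like 1/P(A) for crude Monte Carlo but stays bounded under strong efficiency.

```python
import math

def replications_needed(rel_second_moment, target_rel_err=0.1):
    """n such that sqrt((E Z^2 / P(A)^2 - 1) / n) <= target_rel_err."""
    return math.ceil((rel_second_moment - 1.0) / target_rel_err**2)

for p in (1e-3, 1e-6, 1e-9):
    crude = replications_needed(1.0 / p)   # crude MC: E Z^2 / P(A)^2 = 1/P(A)
    strong = replications_needed(10.0)     # strongly efficient, c = 10 (assumed)
    print(f"P(A)={p:.0e}: crude n={crude:.1e}, strongly efficient n={strong}")
```

For P(A) = 10^-9 the crude sample size is astronomical, while the strongly efficient sampler still needs only 900 replications.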

24 Importance Sampling Suppose we choose P̃(·) = P(· | A). Then L(ω) = P(ω) I(ω ∈ A) / [P(ω) I(ω ∈ A)/P(A)] = P(A). The estimator has zero variance, but requires knowledge of P(A). Lesson: try choosing P̃(·) close to P(· | A)!

25 Outline 1 Introduction 2 Importance Sampling and Efficiency 3 Example 1: Maximum of Gaussian Process (Target Bridge Sampling, Brownian Motion, Efficiency Analysis, Numerical Example) 4 Example 2: Gaussian Random Fields

26 Example 1: Maximum of Gaussian Process Let X = (X_k : k ≥ 0) be a Gaussian process with Var(X_k) = σ_k² and EX_k = µ_k < 0. We want to efficiently estimate (as b ↗ ∞) u(b) = P[max_{k≥1} X_k > b]. Assume σ_k ~ c_σ k^{H_σ} and |µ_k| ~ c_µ k^{H_µ} with 0 < H_σ < H_µ < ∞, so that u(b) → 0 as b → ∞. No assumption on the correlation structure. This asymptotic regime is known as "large buffer scaling".

27 Related Asymptotics Rich literature on asymptotics for u(b): Pickands (1969), Berman (1990), Duffield and O'Connell (1995), Piterbarg (1996), Hüsler and Piterbarg (1999), Dieker (2006), Hüsler (2006), Likhanov and Mazumdar (1999), Debicki and Mandjes (2003)...

28 Related Simulation Algorithms A common basic idea is to sample the Gaussian process by mean-tracking the most likely path given the overflow, time slot by time slot. [Figure: sample path X_t crossing level b at time T(b).]

33 Target Bridge Sampling 1 Identify the target set T; 2 Targeting; 3 Bridging; 4 T(b) = inf{k : X_k > b}; 5 Deleting. [Figure: sampled path X_t with X_{T(b)} > b at the crossing time T(b).]

34 Likelihood of a Path
P̃(X_1, ..., X_{T(b)}) = Σ_{k=T(b)}^∞ P̃(τ = k) ∫_b^∞ P(X_1, ..., X_{T(b)} | X_k) P̃(dX_k).
Select P̃(τ = k) ∝ P(X_k > b), mimicking P(τ = k | T(b) < ∞).
Given τ = k, sample P̃(X_k ∈ ·) = P(X_k ∈ · | X_k > b).
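For the setup of Example 1 (σ_k = k^{H_σ}, µ_k = −k^{H_µ}, with c_σ = c_µ = 1 for illustration), the targeting weights P̃(τ = k) ∝ P(X_k > b) can be computed directly; a sketch (function name ours):

```python
import numpy as np
from scipy.stats import norm

def targeting_weights(b, H_sigma, H_mu, kmax):
    """P~(tau = k) proportional to P(X_k > b),
    where X_k ~ N(-k^H_mu, k^(2 H_sigma))."""
    k = np.arange(1, kmax + 1)
    tails = norm.sf((b + k**H_mu) / k**H_sigma)   # P(X_k > b)
    return tails / tails.sum()

w = targeting_weights(b=100.0, H_sigma=0.5, H_mu=1.0, kmax=1000)
print(np.argmax(w) + 1)   # most likely crossing time, here k* = b = 100
```

The weights concentrate around the most likely crossing time: minimizing (b + k^{H_µ})/k^{H_σ} over k gives k* = (b H_σ/(H_µ − H_σ))^{1/H_µ}, which equals b when H_σ = 1/2 and H_µ = 1.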

35 Likelihood Ratio
P̃(X_1, ..., X_{T(b)}) = Σ_{k=T(b)}^∞ P̃(τ = k) ∫_b^∞ P(X_1, ..., X_{T(b)} | X_k) P̃(dX_k)
= Σ_{k=T(b)}^∞ [P(X_k > b) / Σ_{j=1}^∞ P(X_j > b)] ∫_b^∞ P(X_1, ..., X_{T(b)} | X_k) P(dX_k)/P(X_k > b)
= Σ_{k=T(b)}^∞ P(X_1, ..., X_{T(b)}, X_k > b) / Σ_{j=1}^∞ P(X_j > b)
= P(X_1, ..., X_{T(b)}) Σ_{k=T(b)}^∞ P(X_k > b | X_1, ..., X_{T(b)}) / Σ_{j=1}^∞ P(X_j > b).

36 Consequently, the importance sampling estimator for u(b) generated by P̃ is simply
L = (dP/dP̃)(X_1, ..., X_{T(b)}) = Σ_{j=1}^∞ P(X_j > b) / Σ_{j=T(b)}^∞ P(X_j > b | X_1, ..., X_{T(b)}).

40 Special Case: Brownian Motion Consider X_t = B(t) − t, where B is standard Brownian motion, so the drift is µ_t = −t. The likelihood ratio
L = ∫_0^∞ P(B_t − t > b) dt / ∫_{T(b)}^∞ P(B_t − t > b | B_{T(b)} = b) dt = ∫_0^∞ P(B_t − t > b) dt / ∫_0^∞ P(B_t − t > 0) dt
is a constant! Zero-variance importance sampler: L = P(max_{t≥0} (B_t − t) > b) = exp(−2b). In this case TBS outputs a Brownian motion with drift +1.
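A quick numerical sanity check of this constant (our sketch, not from the slides): the numerator and denominator of L are expected occupation times of B_t − t above b and above 0, so their ratio should match exp(−2b).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def occupation_above(b):
    """E[time that B_t - t spends above b] = integral of P(B_t - t > b) dt,
    with P(B_t - t > b) = norm.sf((b + t) / sqrt(t))."""
    def integrand(t):
        return 0.0 if t <= 0 else norm.sf((b + t) / np.sqrt(t))
    val, _ = quad(integrand, 0, np.inf)
    return val

b = 1.5
L = occupation_above(b) / occupation_above(0.0)  # the constant likelihood ratio
print(L, np.exp(-2 * b))                         # both close to 0.0498
```

The denominator evaluates to 1/2 (the expected time a unit-drift-down Brownian motion spends above its starting level), so the numerator is exp(−2b)/2.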

41 Efficiency Analysis The efficiency analysis involves
Ẽ(L²) / P(max_{k≥1} X_k > b)² ≤ [Σ_{k=1}^∞ P(X_k > b)]² / P(max_{k≥1} X_k > b)², together with P(max_{k≥1} X_k > b) ≥ max_{k≥1} P(X_k > b).
It does not involve the correlation structure...

42 Efficiency Analysis Theorem. If (σ_k : k ≥ 1) and (µ_k : k ≥ 1) have power-law type tails with power indices 0 < H_σ < H_µ < ∞ respectively, then:
1 L is an unbiased estimator of u(b).
2 Let h(b) = λ(H_µ, H_σ) b^{2(H_µ − H_σ)/H_µ}; then u(b) = O(h(b)^{1/(H_µ − H_σ)} exp(−h(b))).
3 ẼL²/u(b)² = O(b^{2/H_µ}): polynomially efficient, and strongly efficient under many-sources scaling.

43 Summary of Performance Analysis (Updated)

Method            Many Sources  Large Buffer  Cost per replication
Single twist      x             x             O(b^3)
Cut-and-twist     weakly        x             O(b^4)
Random twist      weakly       x             O(b^3)
Sequential twist  weakly        x             O(n b^3)
Mean shift        x             x             O(b^3)
BMC               x             x             O(b^3)
TBS               strongly      polynomial    O(b^3)

44 Numerical Example: Fractional Brownian Noise We test the performance of our Target Bridge Sampler against other existing methods in the many-sources setting. Suppose that {X_k} is driven by fractional Brownian noise, that is, Cov(X_k, X_j) = (k^{2H_σ} + j^{2H_σ} − |k − j|^{2H_σ})/2 and µ_k = −k. The numerical results are compared against those reported in Dieker and Mandjes (2006).
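The covariance in this experiment is easy to generate from the formula above; a minimal sketch (function name ours):

```python
import numpy as np

def fbm_cov(n, H):
    """Covariance matrix of fractional Brownian motion at times 1..n:
    Cov(X_k, X_j) = (k^{2H} + j^{2H} - |k-j|^{2H}) / 2."""
    k = np.arange(1, n + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    return 0.5 * (k**(2.0*H) + j**(2.0*H) - np.abs(k - j)**(2.0*H))

S = fbm_cov(50, 0.8)
print(np.linalg.eigvalsh(S).min())   # smallest eigenvalue, nonnegative
```

Checking the smallest eigenvalue confirms the matrix is positive semidefinite, so it can be factored (e.g. by Cholesky) to drive the samplers above.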

45 Numerical Example (b = 300)

Estimator         Cost per replication  Simulation runs  Time
Naive             O(b^3)                …                … s
Single twist      O(b^3)                …                ~60 s
Cut-and-twist     O(b^4)                …                ~80 s
Random twist      O(b^3)                …                ~50 s
Sequential twist  O(n b^3)              …                ~100 s
TBS               O(b^3)                …                … s
Benchmark

Table: simulation result of Example 2 with n = 300, b = 3, H_σ = 0.8.

46 Outline 1 Introduction 2 Importance Sampling and Efficiency 3 Example 1: Maximum of Gaussian Process 4 Example 2: Gaussian Random Fields (Discrete Version)

51 Discrete Version: Monte Carlo Strategy Discrete version: (X_1, ..., X_d) multivariate Gaussian. Consider P(max_{i=1..d} X_i > b) and P((X_1, ..., X_d) ∈ · | max_{i=1..d} X_i > b). Monte Carlo strategy: 1 Pick the i-th coordinate with probability proportional to P(X_i > b); 2 Sample X_i given X_i > b; 3 Sample the remaining values given X_i. This gives
P̃((X_1, ..., X_d) ∈ ·) = Σ_{i=1}^d P(X_i > b) P((X_1, ..., X_d) ∈ · | X_i > b) / Σ_{j=1}^d P(X_j > b).

52 Discrete Version: Importance Sampling Distribution Probability measure
P̃((X_1, ..., X_d) ∈ ·) = Σ_{i=1}^d P(X_i > b) P((X_1, ..., X_d) ∈ · | X_i > b) / Σ_{j=1}^d P(X_j > b)
= Σ_{i=1}^d P((X_1, ..., X_d) ∈ · ; X_i > b) / Σ_{j=1}^d P(X_j > b).

53 Discrete Version: Likelihood Ratio
I(max_{i=1..d} X_i > b) (dP/dP̃)(X_1, ..., X_d) = I(max_{i=1..d} X_i > b) Σ_{j=1}^d P(X_j > b) / Σ_{j=1}^d I(X_j > b).
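The three steps and the likelihood ratio above can be sketched end to end (a minimal illustration, not the authors' code; numpy/scipy assumed, with the truncated-normal draw done by inverse-cdf sampling):

```python
import numpy as np
from scipy.stats import norm

def is_max_gaussian(mu, Sigma, b, n, seed=0):
    """Estimate P(max_i X_i > b), X ~ N(mu, Sigma): pick i w.p. proportional
    to P(X_i > b), sample X_i | X_i > b, sample the rest given X_i,
    and weight by L = sum_j P(X_j > b) / #{j : X_j > b}."""
    rng = np.random.default_rng(seed)
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    d = len(mu)
    sd = np.sqrt(np.diag(Sigma))
    tail = norm.sf((b - mu) / sd)                # P(X_j > b)
    S = tail.sum()
    est = np.empty(n)
    for r in range(n):
        i = rng.choice(d, p=tail / S)            # step 1
        u = 1.0 - rng.uniform()                  # u in (0, 1]
        xi = mu[i] + sd[i] * norm.ppf(1.0 - u * tail[i])   # step 2
        idx = np.array([j for j in range(d) if j != i])
        cov_oi = Sigma[idx, i]
        cond_mu = mu[idx] + cov_oi * (xi - mu[i]) / Sigma[i, i]        # step 3
        cond_S = Sigma[np.ix_(idx, idx)] - np.outer(cov_oi, cov_oi) / Sigma[i, i]
        x = np.empty(d)
        x[i] = xi
        x[idx] = rng.multivariate_normal(cond_mu, cond_S)
        est[r] = S / np.count_nonzero(x > b)     # likelihood ratio
    return est.mean()

p_hat = is_max_gaussian(np.zeros(2), np.eye(2), b=3.0, n=5000)
print(p_hat, 1 - (1 - norm.sf(3.0))**2)          # exact value ~ 2.70e-3
```

In the independent case P(max_i X_i > b) = 1 − (1 − P(X_1 > b))^d, which gives a closed form to check the estimate against.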

54 Total Variation Approximation Theorem (Adler, Blanchet and Liu (2008)). If Corr(X_i, X_j) < 1 for all i ≠ j, then as b ↗ ∞,
P(max_{i=1..d} X_i > b) = Σ_{j=1}^d P(X_j > b) (1 + o(1)),
and therefore
sup_A |P((X_1, ..., X_d) ∈ A | max_{i=1..d} X_i > b) − P̃((X_1, ..., X_d) ∈ A)| → 0
as b ↗ ∞.

55 Efficient Monte Carlo for High Excursions of Gaussian Random Fields Jose Blanchet, Columbia University. Joint work with Robert Adler (Technion, Israel Institute of Technology) and Jingchen Liu (Columbia University)

56 Continuous Gaussian Random Fields on Compacts A Gaussian random field f = (f(t) : t ∈ T), where T ⊂ R^d is a compact set.

57 High Excursion Probability The interesting probability is P(sup_{t∈T} f(t) > b) as b → ∞. More generally, P(Γ(f) > b), where Γ(·) is a suitable functional.

58 Asymptotic Results Under very mild conditions. Sharp asymptotics for mean-zero, constant-variance random fields (under conditions): k depends on the dimension of T and the continuity of f(t). Pickands (1969), Piterbarg (1995), Sun (1993), Adler (1981), Azaïs and Wschebor (2005), Taylor, Takemura and Adler (2005).

59 The Change of Measure mes(·) is the Lebesgue measure; A_γ = {t ∈ T : f(t) > γ} is the excursion set. A change of measure on C(T).

60 Simulation from this Change of Measure

61 The Estimator The estimator and its variance. The second moment.

62 The Estimator and Its Variance

63 The Choice of γ γ = b results in infinite variance; take γ = b − a/b.

64 The area of the excursion set, mes(A_{b − 1/b}), given a high excursion

65 Efficiency Results: the General Case

66 Efficiency Results: the Homogeneous Case

67 Implementation: Discretize the Continuous Field Discretize the space T into {t_1, ..., t_n}. It is sufficient to choose n = (b/ε)^α. Adler, Blanchet and Liu (2010).

68 Summary Non-exponential change of measure for Gaussian processes and fields (efficiency properties & conditional sampling). Polynomially efficient for general Hölder continuous fields. Strong efficiency for homogeneous fields.


EE/CpE 345. Modeling and Simulation. Fall Class 10 November 18, 2002 EE/CpE 345 Modeling and Simulation Class 0 November 8, 2002 Input Modeling Inputs(t) Actual System Outputs(t) Parameters? Simulated System Outputs(t) The input data is the driving force for the simulation

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

Reduction of Variance. Importance Sampling

Reduction of Variance. Importance Sampling Reduction of Variance As we discussed earlier, the statistical error goes as: error = sqrt(variance/computer time). DEFINE: Efficiency = = 1/vT v = error of mean and T = total CPU time How can you make

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

Modern Methods of Data Analysis - WS 07/08

Modern Methods of Data Analysis - WS 07/08 Modern Methods of Data Analysis Lecture VIc (19.11.07) Contents: Maximum Likelihood Fit Maximum Likelihood (I) Assume N measurements of a random variable Assume them to be independent and distributed according

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Sample path large deviations of a Gaussian process with stationary increments and regularily varying variance

Sample path large deviations of a Gaussian process with stationary increments and regularily varying variance Sample path large deviations of a Gaussian process with stationary increments and regularily varying variance Tommi Sottinen Department of Mathematics P. O. Box 4 FIN-0004 University of Helsinki Finland

More information

ACM 116: Lectures 3 4

ACM 116: Lectures 3 4 1 ACM 116: Lectures 3 4 Joint distributions The multivariate normal distribution Conditional distributions Independent random variables Conditional distributions and Monte Carlo: Rejection sampling Variance

More information

ABSTRACT 1 INTRODUCTION. Jingchen Liu. Xiang Zhou. Brown University 182, George Street Providence, RI 02912, USA

ABSTRACT 1 INTRODUCTION. Jingchen Liu. Xiang Zhou. Brown University 182, George Street Providence, RI 02912, USA Proceedings of the 2011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. FAILURE OF RANDOM MATERIALS: A LARGE DEVIATION AND COMPUTATIONAL STUDY Jingchen

More information

Sparse Nonparametric Density Estimation in High Dimensions Using the Rodeo

Sparse Nonparametric Density Estimation in High Dimensions Using the Rodeo Outline in High Dimensions Using the Rodeo Han Liu 1,2 John Lafferty 2,3 Larry Wasserman 1,2 1 Statistics Department, 2 Machine Learning Department, 3 Computer Science Department, Carnegie Mellon University

More information

The Moment Method; Convex Duality; and Large/Medium/Small Deviations

The Moment Method; Convex Duality; and Large/Medium/Small Deviations Stat 928: Statistical Learning Theory Lecture: 5 The Moment Method; Convex Duality; and Large/Medium/Small Deviations Instructor: Sham Kakade The Exponential Inequality and Convex Duality The exponential

More information

Free Entropy for Free Gibbs Laws Given by Convex Potentials

Free Entropy for Free Gibbs Laws Given by Convex Potentials Free Entropy for Free Gibbs Laws Given by Convex Potentials David A. Jekel University of California, Los Angeles Young Mathematicians in C -algebras, August 2018 David A. Jekel (UCLA) Free Entropy YMC

More information

Estimating Tail Probabilities of Heavy Tailed Distributions with Asymptotically Zero Relative Error

Estimating Tail Probabilities of Heavy Tailed Distributions with Asymptotically Zero Relative Error Estimating Tail Probabilities of Heavy Tailed Distributions with Asymptotically Zero Relative Error S. Juneja Tata Institute of Fundamental Research, Mumbai juneja@tifr.res.in February 28, 2007 Abstract

More information

Extremes and ruin of Gaussian processes

Extremes and ruin of Gaussian processes International Conference on Mathematical and Statistical Modeling in Honor of Enrique Castillo. June 28-30, 2006 Extremes and ruin of Gaussian processes Jürg Hüsler Department of Math. Statistics, University

More information

04. Random Variables: Concepts

04. Random Variables: Concepts University of Rhode Island DigitalCommons@URI Nonequilibrium Statistical Physics Physics Course Materials 215 4. Random Variables: Concepts Gerhard Müller University of Rhode Island, gmuller@uri.edu Creative

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012 Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature

More information

STATE-DEPENDENT IMPORTANCE SAMPLING FOR REGULARLY VARYING RANDOM WALKS

STATE-DEPENDENT IMPORTANCE SAMPLING FOR REGULARLY VARYING RANDOM WALKS Adv. Appl. Prob. 40, 1104 1128 2008 Printed in Northern Ireland Applied Probability Trust 2008 STATE-DEPENDENT IMPORTANCE SAMPLING FOR REGULARLY VARYING RANDOM WALKS JOSE H. BLANCHET and JINGCHEN LIU,

More information

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 8: Importance Sampling

Winter 2019 Math 106 Topics in Applied Mathematics. Lecture 8: Importance Sampling Winter 2019 Math 106 Topics in Applied Mathematics Data-driven Uncertainty Quantification Yoonsang Lee (yoonsang.lee@dartmouth.edu) Lecture 8: Importance Sampling 8.1 Importance Sampling Importance sampling

More information

Hybrid Dirichlet processes for functional data

Hybrid Dirichlet processes for functional data Hybrid Dirichlet processes for functional data Sonia Petrone Università Bocconi, Milano Joint work with Michele Guindani - U.T. MD Anderson Cancer Center, Houston and Alan Gelfand - Duke University, USA

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation

More information

NUMERICAL ANALYSIS PROBLEMS

NUMERICAL ANALYSIS PROBLEMS NUMERICAL ANALYSIS PROBLEMS JAMES KEESLING The problems that follow illustrate the methods covered in class. They are typical of the types of problems that will be on the tests.. Solving Equations Problem.

More information

Statistical Methods in Particle Physics

Statistical Methods in Particle Physics Statistical Methods in Particle Physics Lecture 3 October 29, 2012 Silvia Masciocchi, GSI Darmstadt s.masciocchi@gsi.de Winter Semester 2012 / 13 Outline Reminder: Probability density function Cumulative

More information

Markov Chain Monte Carlo (MCMC)

Markov Chain Monte Carlo (MCMC) Markov Chain Monte Carlo (MCMC Dependent Sampling Suppose we wish to sample from a density π, and we can evaluate π as a function but have no means to directly generate a sample. Rejection sampling can

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

Least Squares Estimation Namrata Vaswani,

Least Squares Estimation Namrata Vaswani, Least Squares Estimation Namrata Vaswani, namrata@iastate.edu Least Squares Estimation 1 Recall: Geometric Intuition for Least Squares Minimize J(x) = y Hx 2 Solution satisfies: H T H ˆx = H T y, i.e.

More information

Patterns of Scalable Bayesian Inference Background (Session 1)

Patterns of Scalable Bayesian Inference Background (Session 1) Patterns of Scalable Bayesian Inference Background (Session 1) Jerónimo Arenas-García Universidad Carlos III de Madrid jeronimo.arenas@gmail.com June 14, 2017 1 / 15 Motivation. Bayesian Learning principles

More information

Basics on Probability. Jingrui He 09/11/2007

Basics on Probability. Jingrui He 09/11/2007 Basics on Probability Jingrui He 09/11/2007 Coin Flips You flip a coin Head with probability 0.5 You flip 100 coins How many heads would you expect Coin Flips cont. You flip a coin Head with probability

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

ABSTRACT 1 INTRODUCTION. Jose Blanchet. Henry Lam. Boston University 111 Cummington Street Boston, MA 02215, USA

ABSTRACT 1 INTRODUCTION. Jose Blanchet. Henry Lam. Boston University 111 Cummington Street Boston, MA 02215, USA Proceedings of the 2011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. RARE EVENT SIMULATION TECHNIQUES Jose Blanchet Columbia University 500 W 120th

More information

MA 8101 Stokastiske metoder i systemteori

MA 8101 Stokastiske metoder i systemteori MA 811 Stokastiske metoder i systemteori AUTUMN TRM 3 Suggested solution with some extra comments The exam had a list of useful formulae attached. This list has been added here as well. 1 Problem In this

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning MCMC and Non-Parametric Bayes Mark Schmidt University of British Columbia Winter 2016 Admin I went through project proposals: Some of you got a message on Piazza. No news is

More information

Introduction to Self-normalized Limit Theory

Introduction to Self-normalized Limit Theory Introduction to Self-normalized Limit Theory Qi-Man Shao The Chinese University of Hong Kong E-mail: qmshao@cuhk.edu.hk Outline What is the self-normalization? Why? Classical limit theorems Self-normalized

More information

International Conference on Applied Probability and Computational Methods in Applied Sciences

International Conference on Applied Probability and Computational Methods in Applied Sciences International Conference on Applied Probability and Computational Methods in Applied Sciences Organizing Committee Zhiliang Ying, Columbia University and Shanghai Center for Mathematical Sciences Jingchen

More information

Continuous Random Variables

Continuous Random Variables MATH 38 Continuous Random Variables Dr. Neal, WKU Throughout, let Ω be a sample space with a defined probability measure P. Definition. A continuous random variable is a real-valued function X defined

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

On the inefficiency of state-independent importance sampling in the presence of heavy tails

On the inefficiency of state-independent importance sampling in the presence of heavy tails Operations Research Letters 35 (2007) 251 260 Operations Research Letters www.elsevier.com/locate/orl On the inefficiency of state-independent importance sampling in the presence of heavy tails Achal Bassamboo

More information

Problem Set 2. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 30

Problem Set 2. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 30 Problem Set MAS 6J/1.16J: Pattern Recognition and Analysis Due: 5:00 p.m. on September 30 [Note: All instructions to plot data or write a program should be carried out using Matlab. In order to maintain

More information

Advanced Machine Learning

Advanced Machine Learning Advanced Machine Learning Bandit Problems MEHRYAR MOHRI MOHRI@ COURANT INSTITUTE & GOOGLE RESEARCH. Multi-Armed Bandit Problem Problem: which arm of a K-slot machine should a gambler pull to maximize his

More information

Theory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk

Theory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk Instructor: Victor F. Araman December 4, 2003 Theory and Applications of Stochastic Systems Lecture 0 B60.432.0 Exponential Martingale for Random Walk Let (S n : n 0) be a random walk with i.i.d. increments

More information

The Central Limit Theorem: More of the Story

The Central Limit Theorem: More of the Story The Central Limit Theorem: More of the Story Steven Janke November 2015 Steven Janke (Seminar) The Central Limit Theorem:More of the Story November 2015 1 / 33 Central Limit Theorem Theorem (Central Limit

More information

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis

More information

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A ) 6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined

More information

Kernel adaptive Sequential Monte Carlo

Kernel adaptive Sequential Monte Carlo Kernel adaptive Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) December 7, 2015 1 / 36 Section 1 Outline

More information

Monte Carlo Integration I [RC] Chapter 3

Monte Carlo Integration I [RC] Chapter 3 Aula 3. Monte Carlo Integration I 0 Monte Carlo Integration I [RC] Chapter 3 Anatoli Iambartsev IME-USP Aula 3. Monte Carlo Integration I 1 There is no exact definition of the Monte Carlo methods. In the

More information

Introductory Econometrics

Introductory Econometrics Introductory Econometrics Violation of basic assumptions Heteroskedasticity Barbara Pertold-Gebicka CERGE-EI 16 November 010 OLS assumptions 1. Disturbances are random variables drawn from a normal distribution.

More information

State-dependent Importance Sampling for Rare-event Simulation: An Overview and Recent Advances

State-dependent Importance Sampling for Rare-event Simulation: An Overview and Recent Advances State-dependent Importance Sampling for Rare-event Simulation: An Overview and Recent Advances By Jose Blanchet and Henry Lam Columbia University and Boston University February 7, 2011 Abstract This paper

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Monte Carlo Integration

Monte Carlo Integration Monte Carlo Integration SCX5005 Simulação de Sistemas Complexos II Marcelo S. Lauretto www.each.usp.br/lauretto Reference: Robert CP, Casella G. Introducing Monte Carlo Methods with R. Springer, 2010.

More information

Random Processes. DS GA 1002 Probability and Statistics for Data Science.

Random Processes. DS GA 1002 Probability and Statistics for Data Science. Random Processes DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Aim Modeling quantities that evolve in time (or space)

More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

Sequential Monte Carlo Methods for Bayesian Computation

Sequential Monte Carlo Methods for Bayesian Computation Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information