A framework for adaptive Monte-Carlo procedures


1 A framework for adaptive Monte-Carlo procedures
Jérôme Lelong (with B. Lapeyre)
Journées MAS, Bordeaux, Friday 3 September 2010

2 Outline
- Randomly truncated algorithm: Chen's technique; Averaging
- The Basic idea; Numerical implementation

3 Monte-Carlo framework
Compute E(Z) using a Monte-Carlo method.
Suppose we know that E(Z) = E[H(θ,X)] for all θ ∈ R^d.
Aim: use this representation to reduce the variance, i.e. find θ* such that v(θ) ≥ v(θ*) for all θ, where
v(θ) = Var(H(θ,X)) = E[H(θ,X)²] − E[Z]².

4 The algorithms I
Algorithm (Non-adaptive importance sampling, Fu and Su (2002), Arouna (2003))
1. Draw (X_1,...,X_n) to compute θ_n, an estimator of θ*.
2. Draw (X'_1,...,X'_n) independent of (X_1,...,X_n) and compute
   ξ_n = (1/n) Σ_{i=1}^n H(θ_n, X'_i).
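As a concrete illustration, here is a minimal sketch of this two-stage procedure in the one-dimensional Gaussian setting of slide 6, assuming a hypothetical payoff f; stage 1 estimates θ* by a crude grid minimisation of the empirical second moment, which stands in for the stochastic approximation schemes discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # hypothetical payoff: a call-like function of a Gaussian
    return np.maximum(np.exp(x) - 1.0, 0.0)

def H(theta, g):
    # importance-sampling representation: E[f(G)] = E[f(G + theta) exp(-theta*G - theta^2/2)]
    return f(g + theta) * np.exp(-theta * g - 0.5 * theta**2)

n = 100_000
# Stage 1: estimate theta* on a first sample by minimising the empirical second moment of H
g1 = rng.standard_normal(n)
grid = np.linspace(-2.0, 2.0, 81)
second_moment = [np.mean(H(t, g1) ** 2) for t in grid]
theta_n = grid[int(np.argmin(second_moment))]

# Stage 2: independent sample, plain Monte Carlo average of H(theta_n, .)
g2 = rng.standard_normal(n)
xi_n = np.mean(H(theta_n, g2))
print(theta_n, xi_n, np.mean(f(g2)))  # variance-reduced vs. crude estimate
```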

5 The algorithms II
Algorithm (Adaptive Importance Sampling, Arouna (2004))
Draw (X_1,...,X_n) and, for i = 1,...,n, do:
1. Compute an estimator θ_i of θ* using (X_1,...,X_i).
2. Update ξ_i: ξ_{i+1} = (i/(i+1)) ξ_i + (1/(i+1)) H(θ_i, X_{i+1}), with ξ_0 = 0.
Remarks:
- No need to store the whole sequence (X_1,...,X_n) for computing ξ_n.
- ξ_n = (1/n) Σ_{i=1}^n H(θ_{i-1}, X_i).
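The recursive update in step 2 is what matters computationally: only the running average and the current θ have to be kept in memory. A sketch, assuming H and a placeholder rule update_theta that refines θ from each new draw:

```python
import numpy as np

def adaptive_average(H, update_theta, theta0, xs):
    """Compute xi_n = (1/n) sum_i H(theta_{i-1}, X_i) through the recursion
    xi_{i+1} = i/(i+1) * xi_i + 1/(i+1) * H(theta_i, X_{i+1}), xi_0 = 0."""
    theta, xi = theta0, 0.0
    for i, x in enumerate(xs):
        xi = (i / (i + 1)) * xi + (1.0 / (i + 1)) * H(theta, x)  # theta built from X_1..X_i only
        theta = update_theta(theta, x, i + 1)                    # refine theta with the new draw
    return xi, theta
```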

6 Common frameworks I
Importance sampling in a Gaussian setting. Let G ~ N(0, I_d). For all θ ∈ R^d,
E[f(G)] = E[ e^{−θ·G − |θ|²/2} f(G + θ) ],
v(θ) = E[ e^{−θ·G + |θ|²/2} f(G)² ] − E[f(G)]².
v is strongly convex if there exists ε > 0 s.t. E[|f(G)|^{2+ε}] < ∞.
Arouna (2004 and 2005), Lemaire and Pagès (2008), Jourdain and L. (2009).
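A quick numerical sanity check of these two identities, assuming a simple one-dimensional payoff f: both sides of the expectation identity should agree up to Monte Carlo error, and v(θ) can be estimated from the second formula.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.maximum(x - 0.5, 0.0)   # hypothetical payoff, d = 1
theta, n = 1.0, 1_000_000
g = rng.standard_normal(n)

lhs = np.mean(f(g))                                               # E[f(G)]
rhs = np.mean(np.exp(-theta * g - theta**2 / 2) * f(g + theta))   # shifted representation
v_theta = np.mean(np.exp(-theta * g + theta**2 / 2) * f(g) ** 2) - lhs**2
print(lhs, rhs, v_theta)
```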

7 Common frameworks II
Esscher transform. Let X be a r.v. in R^d with density p and θ ∈ R^d. Define
p_θ(x) = p(x) e^{θ·x − ψ(θ)} with ψ(θ) = log E[e^{θ·X}].
Let X^{(θ)} have density p_θ; then
E[f(X)] = E[ f(X^{(θ)}) p(X^{(θ)}) / p_θ(X^{(θ)}) ],
v(θ) = E[ f(X)² p(X) / p_θ(X) ] − E[f(X)]².
v is strongly convex if there exists ε > 0 s.t. E[|f(X)|^{2+ε}] < ∞ and lim_{|θ|→∞} p_θ(x) = 0 for all x.
Kawai (2007 and 2008), Lemaire and Pagès (2008).
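A small sketch of the Esscher-transform estimator in a case where everything is explicit, assuming X ~ Exp(λ): then ψ(θ) = log(λ/(λ−θ)) for θ < λ, the tilted density p_θ is that of Exp(λ−θ), and the likelihood ratio p/p_θ is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, theta, n = 1.0, 0.5, 500_000      # tilting parameter must satisfy theta < lam
f = lambda x: (x > 4.0).astype(float)  # hypothetical rare-event payoff, estimates P(X > 4)

# Tilted sampling: p_theta(x) = p(x) exp(theta*x - psi(theta)) is the Exp(lam - theta) density
x_tilted = rng.exponential(scale=1.0 / (lam - theta), size=n)
likelihood_ratio = (lam / (lam - theta)) * np.exp(-theta * x_tilted)  # p(x) / p_theta(x)
est_is = np.mean(f(x_tilted) * likelihood_ratio)

x_crude = rng.exponential(scale=1.0 / lam, size=n)
est_crude = np.mean(f(x_crude))
print(est_is, est_crude, np.exp(-4.0))  # exact value is exp(-lam * 4)
```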

8 Outline
- Randomly truncated algorithm: Chen's technique; Averaging
- The Basic idea; Numerical implementation

9 An adaptive strong law
Assume E[Z] = E[H(θ,X)] for all θ and that v(θ) = Var(H(θ,X)) is strongly convex. Let (X_n, n ≥ 0) be i.i.d. with the same law as X, and F_n = σ(X_1,...,X_n).
Theorem 1. Let (θ_n)_{n≥0} be an (F_n)-adapted sequence with values in R^d such that, for all n ≥ 0, |θ_n| < ∞ a.s. Assume
(H1) for any compact subset K ⊂ R^d, sup_{θ∈K} E[|H(θ,X)|²] < ∞;
(H2) inf_{θ∈R^d} v(θ) > 0 and (1/n) Σ_{i=0}^n v(θ_i) < ∞.
Then ξ_n = (1/n) Σ_{i=1}^n H(θ_{i-1}, X_i) → E(Z) a.s. as n → ∞.

10 Remarks on the assumptions
(H1): no need for E[sup_{θ∈K} |H(θ,X)|²] < ∞, thanks to the use of locally square integrable martingales. If θ ↦ E[|H(θ,X)|²] is continuous, (H1) is equivalent to E[|H(θ,X)|²] < ∞ for all θ ∈ R^d.
(H2): clearly satisfied when θ_n converges to a deterministic constant θ* and v is continuous at θ*. No assumption has to be checked along the path (θ_n)_n.

11 A Central Limit Theorem
Theorem 2. Let the sequence (θ_n, n ≥ 0) be adapted to F_n. Assume (H1), (H2) and
- θ_n → θ* a.s.,
- there exists η > 0 s.t. θ ↦ E[|H(θ,X)|^{2+η}] is continuous at θ* and finite for all θ ∈ R^d,
- v is continuous at θ* and v(θ*) > 0.
Then √n (ξ_n − E[Z]) → N(0, v(θ*)) in distribution.
Moreover, assume there exists η > 0 s.t. θ ↦ E[|H(θ,X)|^{4+η}] is continuous at θ* and finite for all θ ∈ R^d. Then
(√n / σ_n) (ξ_n − E[Z]) → N(0,1) in distribution, with σ_n² = (1/n) Σ_{i=1}^n H(θ_{i-1}, X_i)² − ξ_n².
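In practice the self-normalised statement is the useful one, since σ_n² is computable from the same draws. A sketch of the resulting asymptotic confidence interval, assuming the values H(θ_{i-1}, X_i) have been collected along the run:

```python
import numpy as np

def adaptive_ci(h_values, quantile=1.96):
    """Asymptotic 95% confidence interval for E[Z] from the values h_i = H(theta_{i-1}, X_i),
    using the self-normalised CLT: sqrt(n)/sigma_n * (xi_n - E[Z]) -> N(0,1)."""
    h = np.asarray(h_values, dtype=float)
    n = h.size
    xi_n = h.mean()
    sigma2_n = np.mean(h**2) - xi_n**2            # sigma_n^2 = (1/n) sum h_i^2 - xi_n^2
    half_width = quantile * np.sqrt(sigma2_n / n)
    return xi_n, (xi_n - half_width, xi_n + half_width)
```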

12 Outline
- Randomly truncated algorithm: Chen's technique; Averaging
- The Basic idea; Numerical implementation

13 Randomly truncated algorithm: Chen's technique
Variance minimisation. v is strongly convex; v is infinitely differentiable and
∇v(θ) = ∇_θ E[H(θ,X)²] = E[U(θ,X)].
Minimising v is equivalent to finding θ* s.t. E[U(θ*,X)] = 0.

14 Randomly truncated algorithm: Chen's technique
Randomly truncated procedure. Let (γ_n)_{n≥0} be s.t. Σ_n γ_n = +∞ and Σ_n γ_n² < +∞. For θ_0 ∈ K_0 and α_0 = 0 (where (K_j)_j is an increasing sequence of compact sets), we define
θ_{n+1/2} = θ_n − γ_{n+1} U(θ_n, X_{n+1}),
- if θ_{n+1/2} ∈ K_{α_n}: θ_{n+1} = θ_{n+1/2} and α_{n+1} = α_n,
- if θ_{n+1/2} ∉ K_{α_n}: θ_{n+1} = θ_0 and α_{n+1} = α_n + 1.
α_n is the number of truncations up to time n. In compact form, θ_{n+1} = T_{K_{α_n}}( θ_n − γ_{n+1} U(θ_n, X_{n+1}) ).
θ_n is F_n-measurable and X_{n+1} is independent of F_n. Introduced by Chen and Zhu (1986).
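A sketch of one run of this truncated recursion, assuming (hypothetically) that the compact sets are the balls K_j = {|θ| ≤ r_0 + j} and that U(θ, x) is supplied by the caller, for instance U_1 or U_2 from the later slides:

```python
import numpy as np

def chen_truncated_sa(U, x_samples, theta0, gamma=1.0, gamma_exp=0.75, r0=1.0):
    """Robbins-Monro recursion with Chen's random truncations on growing balls K_j = {|theta| <= r0 + j}."""
    theta = np.asarray(theta0, dtype=float)
    alpha = 0                                        # number of truncations so far
    for n, x in enumerate(x_samples, start=1):
        gamma_n = gamma / n**gamma_exp               # steps: sum gamma_n = inf, sum gamma_n^2 < inf
        candidate = theta - gamma_n * U(theta, x)
        if np.linalg.norm(candidate) <= r0 + alpha:
            theta = candidate                        # stay inside the current compact set K_alpha
        else:
            theta = np.asarray(theta0, dtype=float)  # reset to theta_0 and enlarge the compact set
            alpha += 1
    return theta, alpha
```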

15 Randomly truncated algorithm: Chen's technique
a.s. convergence. Let u(θ) = E[U(θ,X)].
Theorem 3 (L., 2008). Assume
(H3) u is continuous, there exists a unique θ* ∈ R^d with u(θ*) = 0, and for all θ ≠ θ*, (θ − θ*)·u(θ) > 0;
for all q > 0, sup_{|θ|≤q} E[|U(θ,X)|²] < ∞.
Then the sequence (θ_n)_n converges a.s. to θ* and the sequence (α_n)_n is a.s. finite.
Previous results by Chen and Zhu (1986), and Chen, Gao and Guo (1988). (H3) is satisfied if U is the gradient of a strictly convex function.

16 Averaging
Moving window average. Assume γ_n = γ/(n+1)^α with 1/2 < α < 1. For all τ > 0, we set
θ̂_n(τ) = (γ_p / τ) Σ_{i=p}^{p+⌊τ/γ_p⌋} θ_i with p = sup{k ≥ 1 : k + τ/γ_k ≤ n} ∧ n.
- Averaging smooths the convergence.
- Averaging from a strictly positive rank reduces the impact of the initial condition.
- If (θ_n)_n converges, so does (θ̂_n(τ))_n for all τ > 0.
True Cesàro averaging: Polyak and Juditsky (1992), Pelletier (2000), Andrieu and Moulines (2006).
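A sketch of this moving-window average computed from a stored trajectory (θ_0,...,θ_n), assuming γ_k = γ/(k+1)^α as above; the window starts at the index p defined on the slide.

```python
import numpy as np

def moving_window_average(thetas, tau, gamma=1.0, alpha=0.75):
    """theta_hat_n(tau) = (gamma_p / tau) * sum_{i=p}^{p + floor(tau/gamma_p)} theta_i,
    with p = sup{k >= 1 : k + tau/gamma_k <= n} (capped at n) and gamma_k = gamma/(k+1)**alpha."""
    thetas = np.asarray(thetas, dtype=float)
    n = len(thetas) - 1
    step = lambda k: gamma / (k + 1) ** alpha
    candidates = [k for k in range(1, n + 1) if k + tau / step(k) <= n]
    p = max(candidates) if candidates else n
    last = min(p + int(tau / step(p)), n)          # window [p, p + tau/gamma_p], clipped at n
    return step(p) / tau * thetas[p:last + 1].sum(axis=0)
```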

17 Outline
- Randomly truncated algorithm: Chen's technique; Averaging
- The Basic idea; Numerical implementation

18 The Basic idea
General problem I. Generalised multidimensional Black-Scholes model:
dS_t = S_t ( μ(t,S_t) dt + σ(t,S_t) dW_t ), S_0 = x.
Payoff ψ̂(S_t, t ≤ T), price V_0 = E[e^{−rT} ψ̂(S_t, t ≤ T)], which can be approximated by
V̂_0 = E[e^{−rT} ψ̂(S_{T_1},...,S_{T_d})].
With G ~ N(0, I_d),
V̂_0 = E[ψ(G)] = E[ ψ(G + Aθ) e^{−Aθ·G − |Aθ|²/2} ], with θ ∈ R^p and A ∈ R^{d×p}, p << d.

19 The Basic idea
General problem II. Minimise
v(θ) = E[ ψ(G + Aθ)² e^{−2Aθ·G − |Aθ|²} ] = E[ ψ(G)² e^{−Aθ·G + |Aθ|²/2} ].
The gradient admits the two representations
∇v(θ) = E[ A^T (Aθ − G) ψ(G)² e^{−Aθ·G + |Aθ|²/2} ],
∇v(θ) = E[ −A^T G ψ(G + Aθ)² e^{−2Aθ·G − |Aθ|²} ].
Setting
U_1(θ,G) = A^T (Aθ − G) ψ(G)² e^{−Aθ·G + |Aθ|²/2},
U_2(θ,G) = −A^T G ψ(G + Aθ)² e^{−2Aθ·G − |Aθ|²},
we can write ∇v(θ) = E[U_1(θ,G)] = E[U_2(θ,G)] and construct two estimators of θ*: (θ_n^1)_n and (θ_n^2)_n.
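A sketch of the two gradient estimators as reconstructed above, assuming a user-supplied scalar payoff psi and matrix A; both estimate the same gradient, so their Monte Carlo averages should agree.

```python
import numpy as np

def U1(theta, g, psi, A):
    """Gradient estimator built on psi(G):
    A^T (A theta - G) psi(G)^2 exp(-A theta . G + |A theta|^2 / 2)."""
    a_theta = A @ theta
    weight = psi(g) ** 2 * np.exp(-a_theta @ g + 0.5 * a_theta @ a_theta)
    return A.T @ (a_theta - g) * weight

def U2(theta, g, psi, A):
    """Gradient estimator built on psi(G + A theta):
    -A^T G psi(G + A theta)^2 exp(-2 A theta . G - |A theta|^2)."""
    a_theta = A @ theta
    weight = psi(g + a_theta) ** 2 * np.exp(-2.0 * a_theta @ g - a_theta @ a_theta)
    return -(A.T @ g) * weight
```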

20 The Basic idea
Bespoke estimators. We define
θ_{n+1}^1 = T_{K_{α_n}}( θ_n^1 − γ_{n+1} U_1(θ_n^1, G_{n+1}) ), which involves ψ(G),
θ_{n+1}^2 = T_{K_{α_n}}( θ_n^2 − γ_{n+1} U_2(θ_n^2, G_{n+1}) ), which involves ψ(G + Aθ_n^2),
and their averaged versions (θ̄_n^1)_n and (θ̄_n^2)_n.
For the different estimators of θ*, we can define as many approximations of E[ψ(G)]:
ξ_n^1 = (1/n) Σ_{i=1}^n H(θ_{i-1}^1, G_i),   ξ_n^2 = (1/n) Σ_{i=1}^n H(θ_{i-1}^2, G_i),
ξ̄_n^1 = (1/n) Σ_{i=1}^n H(θ̄_{i-1}^1, G_i),   ξ̄_n^2 = (1/n) Σ_{i=1}^n H(θ̄_{i-1}^2, G_i),
with H(θ,G) = ψ(G + Aθ) e^{−Aθ·G − |Aθ|²/2}, and where the sequence (G_n)_n has already been used to build the (θ_n)_n estimators.
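Putting the pieces together, here is a self-contained sketch of the adaptive estimator based on U_2, so each draw serves both the θ-update and the running average, assuming ball-shaped truncation sets and a hypothetical arithmetic-average payoff for the usage example.

```python
import numpy as np

def adaptive_is(psi, A, n, rng, gamma=0.1, gamma_exp=0.75, r0=5.0):
    d, p = A.shape
    theta, theta0 = np.zeros(p), np.zeros(p)
    alpha = 0          # number of truncations so far
    xi = 0.0           # running average of H(theta_{i-1}, G_i)
    for i in range(1, n + 1):
        g = rng.standard_normal(d)
        a_theta = A @ theta
        # H(theta, G) = psi(G + A theta) exp(-A theta . G - |A theta|^2 / 2)
        h = psi(g + a_theta) * np.exp(-a_theta @ g - 0.5 * a_theta @ a_theta)
        xi += (h - xi) / i                     # xi_i = (1/i) sum_{j<=i} H(theta_{j-1}, G_j)
        # U_2(theta, G) = -A^T G psi(G + A theta)^2 exp(-2 A theta . G - |A theta|^2)
        u2 = -(A.T @ g) * psi(g + a_theta) ** 2 * np.exp(-2.0 * a_theta @ g - a_theta @ a_theta)
        candidate = theta - gamma / i**gamma_exp * u2
        if np.linalg.norm(candidate) <= r0 + alpha:   # Chen's truncation on growing balls
            theta = candidate
        else:
            theta, alpha = theta0.copy(), alpha + 1
    return xi, theta

# Usage sketch (hypothetical payoff): arithmetic average of exp(G_i) minus a strike, d = 10, p = 1.
rng = np.random.default_rng(3)
d = 10
A = np.ones((d, 1)) / np.sqrt(d)
psi = lambda x: np.maximum(np.mean(np.exp(x)) - 1.1, 0.0)
print(adaptive_is(psi, A, 50_000, rng))
```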

21 The Basic idea
Complexity of the different estimators. We identify complexity with the number of evaluations of ψ.
- Non-adaptive algorithms: we need 2n samples to achieve a convergence rate v(θ*)/n. Complexity 2n, efficient only when v(θ*) ≤ v(0)/2.
- Adaptive algorithms: we need n samples to achieve a convergence rate v(θ*)/n.
Estimators   ξ^1   ξ^2   ξ̄^1   ξ̄^2
Complexity   2n    n     2n    2n
FIG.: Complexities of the different estimators.

22 Numerical implementation
Basket option: payoff ( Σ_{i=1}^d ω^i S_T^i − K )_+ where (ω^1,...,ω^d) ∈ R^d.
TAB.: Basket option in dimension d = 40 with r = 0.05, T = 1, S_0^i = 50, σ^i = 0.2, ω^i = 1 for all i = 1,...,d and n = …; columns ρ, K, γ, Price, Var MC, Var ξ^2, Var ξ̄^2 (table values lost in transcription).
TAB.: CPU times for the option of Table 1, for the estimators MC, ξ^2, ξ̄^2 (values lost in transcription).

23 Numerical implementation
Barrier Basket option: payoff ( Σ_{i=1}^D ω^i S_T^i − K )_+ 1{ ∀i ≤ D, ∀j ≤ N, S_{t_j}^i ≥ L^i }.
The vector (B_{t_1}, B_{t_2}, ..., B_{t_N}) is obtained from G by the block lower triangular matrix whose block in position (j,k), for k ≤ j, is √(t_k − t_{k-1}) I_D (with t_0 = 0), where I_D is the identity matrix in dimension D.
A is the matrix obtained by stacking the blocks √(t_1) I_D, √(t_2 − t_1) I_D, ..., √(t_N − t_{N-1}) I_D, so that G + Aθ corresponds to (B_{t_1} + θ t_1, B_{t_2} + θ t_2, ..., B_{t_N} + θ t_N).
θ ∈ R^5 whereas G ∈ R^120.
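A sketch of this construction, assuming D = 5 assets and N = 24 monitoring dates on a hypothetical uniform grid: the block lower triangular map turns the i.i.d. normals G into the Brownian path at the dates, and adding Aθ to G shifts B_{t_k} by t_k θ.

```python
import numpy as np

D, N = 5, 24                        # 5 assets, 24 monitoring dates, so G lives in R^120
t = np.linspace(0.0, 2.0, N + 1)    # hypothetical grid t_0 = 0 < t_1 < ... < t_N = T
dt = np.diff(t)

# Block lower triangular map: B_{t_j} = sum_{k<=j} sqrt(t_k - t_{k-1}) G_k (blockwise)
M = np.kron(np.tril(np.ones((N, N))) * np.sqrt(dt), np.eye(D))
# Drift matrix: stacked blocks sqrt(t_k - t_{k-1}) I_D, so that M @ (A theta) = (t_1 theta, ..., t_N theta)
A = np.kron(np.sqrt(dt)[:, None], np.eye(D))

rng = np.random.default_rng(4)
theta = rng.standard_normal(D)
G = rng.standard_normal(N * D)
B_shifted = M @ (G + A @ theta)
B_plus_drift = M @ G + np.kron(t[1:], theta)   # (B_{t_1} + t_1 theta, ..., B_{t_N} + t_N theta)
print(np.allclose(B_shifted, B_plus_drift))    # True
```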

24 Numerical implementation
Barrier Basket option.
TAB.: Down and Out Call option in dimension I = 5 with σ = 0.2, S_0 = (50,40,60,30,20), L = (40,30,45,20,10), ρ = 0.3, r = 0.05, T = 2, ω = (0.2,0.2,0.2,0.2,0.2) and n = …; columns K, γ, Price, Var MC, Var ξ^2, Var ξ̄^2, Var θ̄^2+MC and the corresponding reduced variances (table values lost in transcription).
TAB.: CPU times for the option of Table 3, for the estimators MC, ξ^2, ξ̄^2, θ̄^2 + MC and their reduced versions (values lost in transcription).

25 Numerical implementation
FIG.: approximation of θ* with averaging.
FIG.: approximation of θ* without averaging.

26 Numerical implementation
Conclusion
- It always reduces the variance.
- The extra computational cost can be negligible.
- No regularity assumptions on the payoff.
- Averaging improves the robustness of the algorithm w.r.t. the step sequence but adds an extra computational cost.
- To circumvent the fine tuning of the algorithm, one can use sample average approximation (Jourdain and L., 2008), but it cannot be implemented in an adaptive manner, which increases its computational cost.
