Stochastic Process (ENPC) Monday, 22nd of January 2018 (2h30)
Vocabulary (english/français): distribution = distribution, loi; positive = strictement positif; $[0,1)$ = $[0,1[$. We write $\mathbb{N}^* = \mathbb{Z} \cap (0,+\infty)$ and $\mathbb{N} = \mathbb{N}^* \cup \{0\}$. We use the convention $\inf \emptyset = +\infty$.

Exercice 1 (Distribution of the maximum of a random walk). Let $X$ be a $\mathbb{Z}$-valued random variable and $(X_n, n \in \mathbb{N}^*)$ be independent random variables distributed as $X$. We assume that $P(X \geq 2) = 0$, $p = P(X = 1) > 0$, $E[|X|] < +\infty$ and $m = E[X] < 0$. We consider the random walk $S = (S_n, n \in \mathbb{N})$ defined by $S_0 = 0$ and $S_{n+1} = S_n + X_{n+1}$ for $n \in \mathbb{N}$, and its natural filtration $F = (F_n, n \in \mathbb{N})$. Our aim is to study the distribution of the global maximum of $S$:
$$ M = \sup_{n \in \mathbb{N}} S_n. $$

I Preliminaries

1. Prove that $P(M < +\infty) = 1$.

2. Check that $S$ is a Markov chain. Is it irreducible, transient, recurrent?

3. Check that the first hitting time of the level $k \in \mathbb{N}^*$ for the random walk $S$:
$$ \tau_k = \inf\{n \geq 0;\ S_n = k\} $$
is a stopping time with respect to the filtration $F$.

4. Is the random time $\eta = \inf\{n \geq 0;\ S_n = M\}$ a stopping time?

For $\lambda \geq 0$, we set:
$$ \varphi(\lambda) = E\big[e^{\lambda X}\big]. $$

5. Prove that $\varphi$ is of class $C^\infty$ over $(0,+\infty)$, and check it is strictly convex. Prove there exists a unique $\lambda_0 > 0$ such that $\varphi(\lambda_0) = 1$. Check that $\varphi'(\lambda_0) > 0$.

II An auxiliary Markov chain

We set $q_k = e^{\lambda_0 k} P(X = k)$ for $k \in \mathbb{Z}$, so that according to the previous question $q = (q_k, k \in \mathbb{Z})$ is a probability distribution. Let $(Y_n, n \in \mathbb{N}^*)$ be independent $\mathbb{Z}$-valued random variables with probability distribution $q$. We introduce the random walk $V = (V_n, n \in \mathbb{N})$ defined by $V_0 = 0$ and $V_{n+1} = V_n + Y_{n+1}$ for $n \in \mathbb{N}$.

1. Prove that for all measurable bounded or non-negative function $f$, we have:
$$ E\big[f(Y_1,\dots,Y_n)\, e^{-\lambda_0 V_n}\big] = E\big[f(X_1,\dots,X_n)\big]. $$

2. Check that $Y_1$ is integrable and that $E[Y_1] > 0$.

Let $G = (G_n, n \in \mathbb{N})$ be the natural filtration of $V$ and $\rho_k = \inf\{n \geq 0;\ V_n = k\}$ be the first hitting time of the level $k \in \mathbb{N}^*$ for the random walk $V$.

3. Let $k \in \mathbb{N}^*$. Check that $V^{(k)} = (V_{n \wedge \rho_k}, n \in \mathbb{N})$ is a sub-martingale (with respect to the filtration $G$) bounded from above by $k$.

4. Prove that $V^{(k)}$ a.s. converges. Deduce that $\rho_k$ is a.s. finite and compute $V_{\rho_k}$.
5. Prove that $(e^{-\lambda_0 V_n}, n \in \mathbb{N})$ is a martingale (with respect to the filtration $G$). Compute:
$$ E\big[\mathbf{1}_{\{\rho_k \leq n\}}\, e^{-\lambda_0 V_n} \,\big|\, G_{\rho_k \wedge n}\big]. $$

6. Deduce that $P(\tau_k \leq n) = e^{-\lambda_0 k}\, P(\rho_k \leq n)$, and then compute $P(\tau_k < \infty)$.

7. Prove that $M$ has a shifted geometric distribution, that is $P(M = k) = \alpha_0 (1-\alpha_0)^k$ for $k \in \mathbb{N}$. Write $\alpha_0$ using $\lambda_0$.

III The simple random walk

We consider the simple random walk with negative drift: $P(X = 1) = p = 1 - P(X = -1)$ and $p \in (0, 1/2)$.

1. Prove that $P(S_n < 0;\ \forall n \geq 1) = (1-p)\, P(M = 0)$.

2. Compute $\alpha_0$ and deduce that the probability, $P(S_n < 0;\ \forall n \geq 1)$, for the simple random walk $S$ with negative drift to never come back to $\{0\}$ is equal to $1 - 2p$.

Exercice 2 (Some Brownian martingale). Let $B = (B_t, t \in \mathbb{R}_+)$ be a standard Brownian motion and $F = (F_t, t \in \mathbb{R}_+)$ its natural filtration. For $t \in [0,1)$, we set:
$$ X_t = \frac{1}{\sqrt{1-t}}\, e^{-B_t^2/2(1-t)}. $$
We recall that the density of the standard Gaussian distribution $\mathcal{N}(0,1)$ is given by:
$$ f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, \quad x \in \mathbb{R}, $$
and that if $G$ is a standard Gaussian random variable then, for all $\lambda \in \mathbb{C}$, we have $E[e^{\lambda G}] = e^{\lambda^2/2}$.

1. Let $0 \leq t \leq s+t < 1$. Compute $E[X_{t+s} \mid F_t]$.

2. Deduce that $(X_t, t \in [0,1))$ is a continuous martingale. Deduce also $E[X_t]$ for all $t \in [0,1)$.

3. Prove that a.s. $\lim_{t \to 1} X_t = 0$. Deduce that $E\big[\sup_{t \in [0,1)} X_t\big] = +\infty$.
Correction

Exercice 1.

I Preliminaries

1. By the law of large numbers we have that a.s. $\lim_{n \to +\infty} S_n/n = m < 0$. This implies that a.s. $\lim_{n \to +\infty} S_n = -\infty$ and thus that a.s. $M < +\infty$.

2. The process $S$ is a stochastic dynamical system and thus a Markov chain. Since $P(X = 1) > 0$ and since there exists $k \in \mathbb{N}^*$ such that $P(X = -k) > 0$ (as $m < 0$), we deduce that $S$ is irreducible. Since $\lim_{n \to +\infty} S_n = -\infty$, we get that $S$ is transient.

3. We have $\{\tau_k > n\} = \{S_j < k,\ 0 \leq j \leq n\} \in F_n$ for all $n \in \mathbb{N}$, and thus $\tau_k$ is a stopping time with respect to $F$.

4. The random time $\eta$ is not a stopping time because $\{\eta = 0\} = \bigcap_{n \in \mathbb{N}^*} \{S_n \leq 0\}$ does not belong to $F_0$.

5. Notice that $|X|^n e^{\lambda X} \leq \max\big(e^{\lambda}, \sup_{x \leq 0} |x|^n e^{\lambda x}\big)$, so the functions $\varphi_n(\lambda) = E[X^n e^{\lambda X}]$, for $\lambda \in (0,+\infty)$, are well defined for $n \in \mathbb{N}$ and locally bounded. Using Fubini, we get that $\int_a^b \varphi_{n+1}(r)\, dr = \varphi_n(b) - \varphi_n(a)$ for all $0 < a \leq b < +\infty$. This implies that $\varphi$ is of class $C^\infty$ over $(0,+\infty)$ with $n$-th derivative $\varphi_n$. Since $X^2 e^{\lambda X}$ is non-negative and positive with positive probability, we deduce that $\varphi_2 > 0$ and thus $\varphi$ is strictly convex on $(0,+\infty)$. By dominated convergence, we get that $\varphi$ and $\varphi_1$ are continuous over $[0,+\infty)$, and we have $\varphi(0) = 1$, $\varphi(+\infty) = +\infty$ (since $\varphi(\lambda) \geq p\, e^{\lambda}$) and $\varphi'(0) = m < 0$. The strict convexity of $\varphi$ implies the existence of a unique root of $\varphi(\lambda) = 1$ on $(0,+\infty)$, say $\lambda_0$. Furthermore, we have that $\varphi'(\lambda_0) > 0$.

II An auxiliary Markov chain

1. Writing $p_k = P(X = k)$, we have that:
$$ E\big[\mathbf{1}_{\{Y_1 = k_1, \dots, Y_n = k_n\}}\, e^{-\lambda_0 V_n}\big] = \prod_{j=1}^n q_{k_j}\, e^{-\lambda_0 \sum_{j=1}^n k_j} = \prod_{j=1}^n p_{k_j} = E\big[\mathbf{1}_{\{X_1 = k_1, \dots, X_n = k_n\}}\big]. $$
This proves the result as the random variables $X_l$ and $Y_l$ are discrete.

2. We deduce from the previous question that $E[|Y_1|] = E[|X|\, e^{\lambda_0 X}] \leq e^{\lambda_0}\, E[|X|] < +\infty$. Thus $Y_1$ is integrable and we have $E[Y_1] = E[X e^{\lambda_0 X}] = \varphi'(\lambda_0) > 0$.

3. Clearly $V_n$ is integrable and $G_n$-measurable, and since $Y_{n+1}$ is independent of $G_n$, we get $E[Y_{n+1} \mid G_n] = E[Y_1] > 0$. This implies that $V$ is a sub-martingale. Similarly to Question I.3, we get that $\rho_k$ is a stopping time with respect to $G$. This gives that $V^{(k)}$ is the sub-martingale $V$ stopped at the stopping time $\rho_k$. Therefore, the process $V^{(k)}$ is a sub-martingale. By construction, we have $V_n < k$ on $\{n < \rho_k\}$.
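As a numerical illustration of Question I.5, $\lambda_0$ can be located by bisection. The sketch below (Python, using the simple walk of Part III with the arbitrary choice $p = 0.3$, a value not fixed by the exam) exploits that $\varphi < 1$ on $(0, \lambda_0)$ and $\varphi \geq 1$ beyond; for this walk the root is known in closed form, $\lambda_0 = \log\big((1-p)/p\big)$, which the bisection should reproduce.

```python
import math

def phi(lam, p=0.3):
    # phi(lambda) = E[exp(lambda * X)] for the simple walk:
    # P(X = 1) = p, P(X = -1) = 1 - p.
    return p * math.exp(lam) + (1 - p) * math.exp(-lam)

def find_lambda0(p=0.3, tol=1e-12):
    # phi(0) = 1, phi'(0) = 2p - 1 < 0 and phi(+inf) = +inf, so the second
    # root of phi(lambda) = 1 lies in (0, hi) once phi(hi) >= 1.
    lo, hi = 1e-9, 1.0
    while phi(hi, p) < 1.0:      # bracket the root
        hi *= 2.0
    while hi - lo > tol:         # bisection: phi(lo) < 1 <= phi(hi)
        mid = 0.5 * (lo + hi)
        if phi(mid, p) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam0 = find_lambda0()
print(lam0, math.log(0.7 / 0.3))  # both are approximately 0.8473
```

The initial bracket-doubling step makes the sketch work for any $p \in (0, 1/2)$, not just the illustrative value.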
Since $Y_n > 0$ implies a.s. that $Y_n = 1$, we deduce that $V_{\rho_k} = k$ on $\{\rho_k < \infty\}$. In conclusion, $V^{(k)}$ is a sub-martingale bounded from above by $k$.

4. Since $(V_{n \wedge \rho_k}, n \in \mathbb{N})$ is a sub-martingale bounded from above by $k$, we deduce that it converges a.s. to a limit (which is integrable). Since the a.s. convergence of $V^{(k)}$ holds in $\mathbb{Z}$, we get that either $\rho_k < \infty$ or $V^{(k)}$ is constant for large time. The latter condition implies that the sequence $(Y_n, n \in \mathbb{N}^*)$ is constant for large $n$. This being impossible, we deduce that a.s. $\rho_k < +\infty$. Since $\rho_k$ is a.s. finite, we get that a.s. $V_{\rho_k} = k$.

5. Set $N_n = e^{-\lambda_0 V_n}$ for $n \in \mathbb{N}$. Notice that the process $N = (N_n, n \in \mathbb{N})$ is $G$-adapted and non-negative. We also have:
$$ E[N_{n+1} \mid G_n] = N_n\, E\big[e^{-\lambda_0 Y_{n+1}} \,\big|\, G_n\big] = N_n\, E\big[e^{-\lambda_0 Y_{n+1}}\big] = N_n, $$
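For the simple walk of Part III the tilted law $q$ can be written down explicitly, which turns the martingale property of $N$ into a one-line arithmetic check. A minimal sketch in Python (the value $p = 0.3$ is an arbitrary illustration, and we anticipate the computation of Part III.2, where $e^{-\lambda_0} = p/(1-p)$): tilting simply swaps the two step probabilities, so $V$ is the simple walk with positive drift $1-2p$, consistent with $E[Y_1] = \varphi'(\lambda_0) > 0$ and with $\rho_k < \infty$ a.s.

```python
import math

p = 0.3                                # assumed example: simple walk, drift 2p - 1 < 0
lam0 = math.log((1 - p) / p)           # unique positive root of phi(lambda) = 1

# Tilted distribution q_k = exp(lam0 * k) * P(X = k), for k in {-1, +1}.
q_plus = math.exp(lam0) * p            # equals 1 - p: the tilt swaps p and 1 - p
q_minus = math.exp(-lam0) * (1 - p)    # equals p

# q is a probability distribution with positive mean phi'(lam0) = 1 - 2p ...
total = q_plus + q_minus
mean_Y = q_plus - q_minus
# ... and E[exp(-lam0 * Y)] = 1, which is exactly why N_n = exp(-lam0 * V_n)
# is a martingale under the tilted law.
mart = q_plus * math.exp(-lam0) + q_minus * math.exp(lam0)
print(total, mean_Y, mart)             # approximately 1.0, 0.4, 1.0
```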
where we used that $N_n$ is $G_n$-measurable for the first equality, that $Y_{n+1}$ is independent of $G_n$ for the second and, for the third, that by definition of the distribution of $Y_{n+1}$ we have $E[e^{-\lambda_0 Y_{n+1}}] = E[e^{-\lambda_0 X_{n+1} + \lambda_0 X_{n+1}}] = 1$. We deduce that $E[N_{n+1}] = E[N_0] = 1$ and thus that $N_{n+1}$ is integrable. We have proven that $N$ is a non-negative martingale.

Notice that $\{\rho_k \leq n\}$ belongs to $G_{\rho_k \wedge n}$. By the optional stopping theorem, since $\rho_k \wedge n$ is a bounded stopping time, we get:
$$ E\big[\mathbf{1}_{\{\rho_k \leq n\}}\, e^{-\lambda_0 V_n} \,\big|\, G_{\rho_k \wedge n}\big] = \mathbf{1}_{\{\rho_k \leq n\}}\, E\big[e^{-\lambda_0 V_n} \,\big|\, G_{\rho_k \wedge n}\big] = \mathbf{1}_{\{\rho_k \leq n\}}\, e^{-\lambda_0 V_{\rho_k \wedge n}} = \mathbf{1}_{\{\rho_k \leq n\}}\, e^{-\lambda_0 k}, $$
where, for the last equality, we used that $V_{\rho_k \wedge n} = V_{\rho_k} = k$ on $\{\rho_k \leq n\}$.

6. We have, thanks to the previous question and Question II.1, that:
$$ P(\tau_k \leq n) = E\big[\mathbf{1}_{\{\rho_k \leq n\}}\, e^{-\lambda_0 V_n}\big] = E\big[\mathbf{1}_{\{\rho_k \leq n\}}\, e^{-\lambda_0 k}\big] = e^{-\lambda_0 k}\, P(\rho_k \leq n). $$
Letting $n$ go to infinity, and as $\rho_k$ is a.s. finite, we get that $P(\tau_k < \infty) = e^{-\lambda_0 k}$.

7. Let $k \in \mathbb{N}$. As $\{M \geq k\} = \{\tau_k < \infty\}$, we deduce that $P(M \geq k) = (1-\alpha_0)^k$ with $\alpha_0 = 1 - e^{-\lambda_0}$. We deduce that $P(M = k) = P(M \geq k) - P(M \geq k+1) = \alpha_0 (1-\alpha_0)^k$.

III The simple random walk

1. Let $M' = \sup_{n \in \mathbb{N}^*} (S_n - X_1)$, the supremum of the shifted walk $(S_n - S_1, n \geq 1)$. Notice that $M'$ is distributed as $M$ and is independent of $X_1$. Since the steps are $\pm 1$, we have $\{S_n < 0;\ \forall n \geq 1\} = \{X_1 = -1\} \cap \{M' = 0\}$, and we deduce that:
$$ P(S_n < 0;\ \forall n \geq 1) = P(X_1 = -1)\, P(M' = 0) = (1-p)\, P(M = 0). $$

2. We have $\varphi(\lambda) = p\, e^{\lambda} + (1-p)\, e^{-\lambda}$. We get that $e^{-\lambda_0} \in (0,1)$ is a root of $(1-p)x^2 - x + p = 0$. This gives $e^{-\lambda_0} = p/(1-p)$. We deduce that $\alpha_0 = 1 - e^{-\lambda_0} = (1-2p)/(1-p)$. This gives:
$$ (1-p)\, P(M = 0) = (1-p)\, \alpha_0 = 1 - 2p. $$

Exercice 2.

1. The process $X = (X_t, t \in [0,1))$ is $F$-adapted. Since $X_{t+s}$ is non-negative, we can compute its conditional expectation. Set $\eta^2 = 1 - t - s$. Using that $B_{t+s} - B_t$ is independent of $F_t$ and distributed as $\sqrt{s}\, G$, with $G$ a standard Gaussian random variable, we get that:
$$ E[X_{t+s} \mid F_t] = \frac{1}{\eta}\, E\big[e^{-(B_{t+s} - B_t + B_t)^2/2\eta^2} \,\big|\, F_t\big] = \frac{1}{\eta}\, H(B_t), $$
with
$$ H(x) = E\big[e^{-(B_{t+s} - B_t + x)^2/2\eta^2}\big] = E\big[e^{-(\sqrt{s}\, G + x)^2/2\eta^2}\big]. $$
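The shifted geometric law of $M$ and the value $\alpha_0 = (1-2p)/(1-p)$ lend themselves to a Monte Carlo sanity check. The sketch below (Python, with the arbitrary choice $p = 0.3$ and a fixed seed) truncates the time horizon at 400 steps; since the walk drifts to $-\infty$ at rate $1-2p$, the truncated running maximum coincides with $M$ except on an event of negligible probability.

```python
import random

def max_of_walk(p, n_steps, rng):
    """Running maximum of a +/-1 walk with P(step = +1) = p over n_steps steps.

    With p < 1/2 the walk drifts to -infinity, so for a long horizon the
    truncated maximum equals the global maximum M with overwhelming probability.
    """
    s, m = 0, 0
    for _ in range(n_steps):
        s += 1 if rng.random() < p else -1
        if s > m:
            m = s
    return m

rng = random.Random(2018)
p = 0.3
alpha0 = (1 - 2 * p) / (1 - p)              # predicted P(M = 0), here 4/7
n_sim = 20000
maxima = [max_of_walk(p, 400, rng) for _ in range(n_sim)]
est0 = maxima.count(0) / n_sim              # should be close to alpha0
est1 = maxima.count(1) / n_sim              # should be close to alpha0 * (1 - alpha0)
print(est0, alpha0)
print(est1, alpha0 * (1 - alpha0))
```

With 20000 runs the standard error of each estimate is below 0.004, so agreement to about two decimal places is expected.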
We get, with $z = y\sqrt{1-t}/\eta$ and using $s + \eta^2 = 1 - t$:
$$ H(x) = \int \frac{1}{\sqrt{2\pi}}\, e^{-(\sqrt{s}\,y + x)^2/2\eta^2}\, e^{-y^2/2}\, dy = e^{-x^2/2\eta^2} \int \frac{1}{\sqrt{2\pi}}\, e^{-y^2(s+\eta^2)/2\eta^2 - \sqrt{s}\,x y/\eta^2}\, dy $$
$$ = \frac{\eta}{\sqrt{1-t}}\, e^{-x^2/2\eta^2} \int \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2 - \sqrt{s}\,x z/\eta\sqrt{1-t}}\, dz = \frac{\eta}{\sqrt{1-t}}\, e^{-x^2/2\eta^2}\, E\big[e^{-\sqrt{s}\,x G/\eta\sqrt{1-t}}\big] = \frac{\eta}{\sqrt{1-t}}\, e^{-x^2/2\eta^2}\, e^{s x^2/2\eta^2(1-t)}. $$
We get:
$$ \frac{1}{\eta}\, H(x) = \frac{1}{\sqrt{1-t}}\, e^{-x^2/2(1-t)}. $$

2. We deduce from the previous question that $E[X_{t+s} \mid F_t] = X_t$. This gives, with $t = 0$, that $E[X_s] = 1$ for all $s \in [0,1)$. Thus, as $X_t$ is non-negative, we have that $X_t \in L^1$ for all $t \in [0,1)$. Since the process $X$ is $F$-adapted, we deduce that $X$ is a martingale. Since $B$ is continuous, we get that $X$ is continuous.

3. As $B_1 \neq 0$ a.s. and $B$ is continuous, we deduce there exists $\varepsilon \in (0,1)$ (random) such that $B_t \neq 0$ for all $t \in (1-\varepsilon, 1]$. Then use that $\lim_{x \to +\infty} x\, e^{-x^2/2} = 0$ to deduce that a.s. $\lim_{t \to 1} X_t = 0$.

Consider $M = (M_n = X_{1 - 1/(n+1)}, n \in \mathbb{N})$, which is a non-negative martingale with $M_0 = 1$ and a.s. $\lim_{n \to \infty} M_n = 0$. If $E[\sup_{n \in \mathbb{N}} M_n] < +\infty$, then the martingale would also converge in $L^1$ by dominated convergence. Since this is not the case ($E[M_n] = 1$ for all $n$ while the a.s. limit is $0$), we get that $E[\sup_{n \in \mathbb{N}} M_n] = +\infty$. Then use that $\sup_{t \in [0,1)} X_t \geq \sup_{n \in \mathbb{N}} M_n$ to conclude.
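The Gaussian computation of $H$ can be checked numerically. The sketch below (Python) compares a midpoint-rule quadrature of $\frac{1}{\eta} E\big[e^{-(\sqrt{s}G+x)^2/2\eta^2}\big]$ against the closed form $(1-t)^{-1/2} e^{-x^2/2(1-t)}$ obtained above, for an arbitrary choice of $t$, $s$ and $x$.

```python
import math

def lhs(t, s, x, n=20000, L=10.0):
    # (1/eta) * H(x), with H(x) = E[exp(-(sqrt(s)*G + x)^2 / (2*eta^2))],
    # computed by a midpoint Riemann sum against the standard Gaussian density.
    eta = math.sqrt(1 - t - s)
    h = 2 * L / n
    total = 0.0
    for i in range(n):
        y = -L + (i + 0.5) * h
        total += (math.exp(-(math.sqrt(s) * y + x) ** 2 / (2 * eta ** 2))
                  * math.exp(-y ** 2 / 2) / math.sqrt(2 * math.pi) * h)
    return total / eta

def rhs(t, s, x):
    # Closed form from the correction: (1-t)^(-1/2) * exp(-x^2 / (2*(1-t))).
    return math.exp(-x ** 2 / (2 * (1 - t))) / math.sqrt(1 - t)

t, s, x = 0.3, 0.4, 0.8
a, b = lhs(t, s, x), rhs(t, s, x)
print(a, b)   # the two values agree to high accuracy
```

Truncating the integral at $|y| = 10$ is harmless here, since the Gaussian density is of order $e^{-50}$ at the cut-off.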
More informationNon-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings. Krzysztof Burdzy University of Washington
Non-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings Krzysztof Burdzy University of Washington 1 Review See B and Kendall (2000) for more details. See also the unpublished
More informationSMSTC (2007/08) Probability.
SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................
More informationA regeneration proof of the central limit theorem for uniformly ergodic Markov chains
A regeneration proof of the central limit theorem for uniformly ergodic Markov chains By AJAY JASRA Department of Mathematics, Imperial College London, SW7 2AZ, London, UK and CHAO YANG Department of Mathematics,
More informationA Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1
Chapter 3 A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Abstract We establish a change of variable
More informationA relative entropy characterization of the growth rate of reward in risk-sensitive control
1 / 47 A relative entropy characterization of the growth rate of reward in risk-sensitive control Venkat Anantharam EECS Department, University of California, Berkeley (joint work with Vivek Borkar, IIT
More informationA D VA N C E D P R O B A B I L - I T Y
A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2
More informationRandom Walk in Periodic Environment
Tdk paper Random Walk in Periodic Environment István Rédl third year BSc student Supervisor: Bálint Vető Department of Stochastics Institute of Mathematics Budapest University of Technology and Economics
More informationDetermine whether the formula determines y as a function of x. If not, explain. Is there a way to look at a graph and determine if it's a function?
1.2 Functions and Their Properties Name: Objectives: Students will be able to represent functions numerically, algebraically, and graphically, determine the domain and range for functions, and analyze
More information