Properties of an infinite dimensional SDE system: Muller's ratchet

1 Properties of an infinite dimensional SDE system: Muller's ratchet. LATP, June 5, 2011

2 A ratchet (source: Wikipedia)

3 Plan 1 Introduction: the model of Haigh 2 3

4 Hypotheses (biological): the population has a fixed size N, is haploid and asexual. Only deleterious mutations occur.

5 Purpose: to show that without recombination, deleterious mutations accumulate.

7 What is Muller's ratchet? We call the best class the group of individuals with the best fitness (say, 0 mutations here). Note that deleterious mutations cannot be lost: if at any given time the best class is empty, it remains empty forever. That is to say, the minimal number of deleterious mutations in the population has increased, and it cannot go back (like a ratchet).

10 Haigh's model: this model is in discrete time; 0 < α < 1 and λ > 0 are parameters. We write Y_k for the number of deleterious mutations of individual k. At each generation: each individual chooses a parent from the previous generation, choosing individual k with probability (1 − α)^{Y_k} / Σ_l (1 − α)^{Y_l}; each individual then carries a number of mutations equal to that of its parent plus an independent Poisson(λ) number of new mutations.
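
Haigh's update rule is easy to simulate directly. The sketch below (the population size and parameter values are illustrative choices, not taken from the talk) performs selection and mutation and counts the clicks of the ratchet:

```python
import numpy as np

def haigh_step(y, alpha, lam, rng):
    """One generation of Haigh's model: selection, then mutation."""
    n = len(y)
    w = (1.0 - alpha) ** y            # fitness weight (1 - alpha)^{Y_k} of individual k
    parents = rng.choice(n, size=n, p=w / w.sum())
    return y[parents] + rng.poisson(lam, size=n)   # parent's count + Poisson(lambda) new mutations

rng = np.random.default_rng(0)
n_pop, alpha, lam = 100, 0.02, 0.5    # illustrative parameters
y = np.zeros(n_pop, dtype=np.int64)
clicks, best = 0, 0
for _ in range(2000):
    y = haigh_step(y, alpha, lam, rng)
    if y.min() > best:                # best class went extinct: the ratchet clicked
        clicks += y.min() - best
        best = y.min()
print("clicks after 2000 generations:", clicks)
```

Since mutations are only ever added, min_k Y_k is non-decreasing: every increase is a click that can never be undone.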

13 In this model, at each generation the ratchet clicks with probability at least (λe^{−λ})^N (the probability that every individual gains exactly one mutation). So the ratchet will click infinitely many times a.s. This means that the fitness of the population deteriorates without bound.
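
The same strictly positive bound holds at every generation, which is what forces infinitely many clicks. A quick evaluation (parameter values are illustrative):

```python
import math

def click_lower_bound(lam, n):
    """(lambda * e^{-lambda})^N: probability that each of the N individuals
    gains exactly one mutation, which makes the ratchet click for sure."""
    return (lam * math.exp(-lam)) ** n

for lam, n in [(0.5, 10), (0.5, 100), (1.0, 100)]:
    print(f"lambda={lam:.1f}, N={n:3d}: lower bound {click_lower_bound(lam, n):.3e}")
```

The bound is astronomically small for large N, which is why clicks are rare but still occur infinitely often.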

16 Now the model studied here is the following one, with X_k(t) the proportion of individuals with k deleterious mutations at time t: for k ≥ 0,
dX_k = [ α( Σ_{l≥0} l X_l − k ) X_k + λ( X_{k−1} − X_k ) ] dt + Σ_{l≠k} √( X_k X_l / N ) dB_{k,l}, X_k(0) = X_k^0,
where the B_{k,l} are independent Brownian motions, except that B_{k,l} = −B_{l,k}, and Σ_k X_k(0) = 1, X_k(0) ≥ 0 for all k ≥ 0.
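
A crude Euler scheme gives a feel for this system. The sketch below truncates to a finite number of classes and realizes the antisymmetric drivers B_{k,l} = −B_{l,k} with one Gaussian per unordered pair; the clipping and renormalization at the end of each step are ad hoc fixes for the truncation and the non-Lipschitz diffusion coefficient, not part of the model, and all parameter values are illustrative:

```python
import numpy as np

def em_step(x, alpha, lam, n_pop, dt, rng):
    """One Euler-Maruyama step for the system truncated to len(x) classes."""
    k = np.arange(len(x))
    m1 = (k * x).sum()                                   # current mean number of mutations
    x_prev = np.concatenate(([0.0], x[:-1]))             # X_{k-1}, with X_{-1} := 0
    new = x + (alpha * (m1 - k) * x + lam * (x_prev - x)) * dt
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            db = rng.normal(0.0, np.sqrt(dt))            # driver of the pair (i, j); B_{j,i} = -B_{i,j}
            noise = np.sqrt(max(x[i] * x[j], 0.0) / n_pop) * db
            new[i] += noise
            new[j] -= noise
    new = np.clip(new, 0.0, None)                        # ad hoc: keep proportions nonnegative
    return new / new.sum()                               # ad hoc: restore mass lost to truncation

rng = np.random.default_rng(1)
x = np.zeros(20)
x[0] = 1.0                                               # start with everyone mutation-free
for _ in range(2000):
    x = em_step(x, alpha=1.0, lam=1.0, n_pop=50, dt=1e-3, rng=rng)
print("proportion in the best class after t = 2:", round(x[0], 3))
```

Applying each pair's Gaussian with opposite signs to the two classes conserves total mass exactly, mirroring the antisymmetry B_{k,l} = −B_{l,k}.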

19 Note that these equations are equivalent to
dX_k = [ α( M_1 − k ) X_k + λ( X_{k−1} − X_k ) ] dt + √( X_k(1 − X_k)/N ) dB_k, X_k(0) = X_k^0,
with M_1(t) = Σ_k k X_k(t) the mean number of mutations in the population at time t, and the B_k standard Brownian motions such that, for k ≠ l, d⟨X_k, X_l⟩(t) = −( X_k X_l / N ) dt.

20 Why this model? Because it is a kind of limit of Haigh's model, describing an infinite population which nevertheless keeps a finite size parameter N. If you look at the equation
dX_k = [ α( M_1 − k ) X_k + λ( X_{k−1} − X_k ) ] dt + √( X_k(1 − X_k)/N ) dB_k,
the term α(M_1 − k)X_k accounts for selection, the term λ(X_{k−1} − X_k) accounts for mutations, and the term √( X_k(1 − X_k)/N ) dB_k accounts for resampling.

23 Difficulties of this model: there is no known explicit solution. Each equation involves all the X_k, both through M_1 and through its stochastic term, so the equations cannot be decoupled. The diffusion coefficient is not Lipschitz around 0 and 1.

26 We can compute the equation of M_1:
dM_1(t) = ( λ − αM_2(t) ) dt + √( M_2(t)/N ) dB_t,
where M_2 is the variance. But if we compute the equation of M_2, then M_3 and M_4 appear, and so on: the equation of M_k involves moments up to M_{2k} (M_k denotes the k-th centered moment). This system cannot be closed. Moreover, we have d⟨X_k, M_1⟩(t) = X_k(k − M_1)/N dt: the X_k and M_1 are not independent.

29 Now the good news: F. Yu, A. Etheridge and C. Cuthbertson have shown that our problem is well posed. From this we deduce that our solution has the Markov property. And... that's all.

33 What we want to show: Theorem. Let T_0 = inf{ t ≥ 0, X_0(t) = 0 }. Then for any initial condition (X_k(0))_{k∈N}, P(T_0 < +∞) = 1. In other words, the ratchet will click a.s.

34 What has already been proved? A. Etheridge, A. Wakolbinger and P. Pfaffelhuber have shown that in the deterministic case (N = +∞) the system can be explicitly solved using cumulants of the M_k. They obtain that the system converges at exponential rate to the Poisson distribution with parameter λ/α. But then the ratchet never clicks: it only happens due to the randomness. Moreover, the cumulant system in the stochastic case has no known solution.

38 The proof will use quite a few intermediate results. We first present a comparison theorem that we will use a lot. Our processes are defined on a probability space (Ω, F, P), equipped with a filtration (F_t, t ≥ 0) which is such that for each k, l ≥ 0, {B_{k,l}(t), t ≥ 0} is an F_t-Brownian motion. We denote by P the corresponding σ-algebra of predictable subsets of R_+ × Ω.

39 Lemma. Let B_t be a standard F_t-Brownian motion, T a stopping time, σ a 1/2-Hölder function, b_1 : R → R a Lipschitz function and b_2 : Ω × R_+ × R → R a P ⊗ B(R)-measurable function. Consider the two SDEs
dY_1(t) = b_1(Y_1(t)) dt + σ(Y_1(t)) dB_t, Y_1(0) = y_1; (3.1)
dY_2(t) = b_2(t, Y_2(t)) dt + σ(Y_2(t)) dB_t, Y_2(0) = y_2. (3.2)

40 Lemma. Let Y_1 (resp. Y_2) be a solution of (3.1) (resp. (3.2)). If y_1 ≤ y_2 (resp. y_2 ≤ y_1) and, outside a measurable subset of Ω of probability zero, ∀t ∈ [0, T], ∀x ∈ R, b_1(x) ≤ b_2(t, x) (resp. b_1(x) ≥ b_2(t, x)), then a.s. ∀t ∈ [0, T], Y_1(t) ≤ Y_2(t) (resp. Y_1(t) ≥ Y_2(t)).

41 And, last but not least, a remark: Remark. Let A, B ⊂ Ω. Then P(A ∩ B) ≥ P(A) + P(B) − 1 (since P(A ∩ B) = P(A) + P(B) − P(A ∪ B) and P(A ∪ B) ≤ 1).

43 Now we will compare the equation of M_1 to a more friendly one:
dM_1(t) = ( λ − αM_2(t) ) dt + √( M_2(t)/N ) dB_t
= ( λ − ½αM_2(t) ) dt + √( M_2(t)/N ) dB_t − ½αM_2(t) dt
≤ ( λ − ½αX_0(t)M_1(t)² ) dt + √( M_2(t)/N ) dB_t − ½αM_2(t) dt.

44 We will show that M_1 cannot grow too fast. Lemma. ∀c > 0, ∀t > 0,
P( sup_{0≤r≤t} M_1(r + t) − M_1(t) ≤ λt + c ) ≥ 1 − exp(−2αNc) > 0.
Proof: we use
Z_{s,t} = ∫_t^{t+s} √( M_2(r)/N ) dB_r − α ∫_t^{t+s} M_2(r) dr.
We note that, at fixed t, exp(2αN Z_{u,t}) is both a local martingale and a supermartingale. We also have
sup_{0≤s≤t} M_1(s + t) − M_1(t) ≤ sup_{0≤s≤t} Z_{s,t} + λt,
using the comparison theorem and the previous remark.

45 And ∀c > 0,
P( sup_{0≤u≤t} Z_{u,t} ≥ c ) ≤ P( sup_{0≤u≤t} exp(2αN Z_{u,t}) ≥ exp(2αNc) ) ≤ exp(−2αNc) < 1,
where we have taken advantage of the fact that exp(2αN Z_{u,t}) is a local martingale, together with Doob's inequality; hence P( sup_{u≥0} Z_{u,t} ≥ c ) ≤ exp(−2αNc) < 1. Then
P( sup_{0≤r≤t} M_1(r + t) − M_1(t) ≤ λt + c ) ≥ 1 − exp(−2αNc) > 0.
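
The fact that exp(2αN Z_{·,t}) is a local martingale is a one-line Itô computation: the constant 2αN is exactly the one that makes the drift cancel.

```latex
dZ_s = \sqrt{\tfrac{M_2(s)}{N}}\, dB_s - \alpha M_2(s)\, ds, \qquad
d\, e^{2\alpha N Z_s}
= e^{2\alpha N Z_s}\Big( 2\alpha N\, dZ_s + \tfrac12 (2\alpha N)^2 \tfrac{M_2(s)}{N}\, ds \Big)
= e^{2\alpha N Z_s}\big( -2\alpha^2 N M_2(s) + 2\alpha^2 N M_2(s) \big)\, ds
+ 2\alpha N\, e^{2\alpha N Z_s} \sqrt{\tfrac{M_2(s)}{N}}\, dB_s .
```

Being a nonnegative local martingale, it is also a supermartingale, which is what Doob's inequality is applied to.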

48 What is Ω_1? For this we use a simpler model. Let X be the solution of the following system:
X(0) = δ, dX(t) = dt + 2√( X(t) ) dB_0.

49 If X_0(0) ≤ δ and M_1(0) ≤ M, then as long as X_0M_1 ≤ 2ε we can compare X_0 and X. Lemma. Let T_min = inf{ t > 0, X_0(t)M_1(t) ≥ 2ε or X_0(t) ≥ δ + µ }. Then ∀t ∈ [0, T_min], we have X_0(t) ≤ X(A(t)), where
A(t) = (1/(4N)) ∫_0^t ( 1 − X_0(s) ) ds and σ(t) = inf{ u > 0, A(u) ≥ t }.
Note that A(t) ≥ t/(5N) because (1 − X_0)/(4N) ≥ 1/(5N) (thanks to the choices of µ and δ), and T was chosen such that A(T) ≥ ε/(12λ).

50 Proof: we write X̃_0(t) = X_0(σ(t)) (resp. M̃_1(t) = M_1(σ(t))). Then
dX̃_0(t) = ( αM̃_1(t) − λ ) X̃_0(t) · 4N/( 1 − X̃_0(t) ) dt + 2√( X̃_0(t) ) dW_t.
Since M̃_1(t)X̃_0(t) ≤ 2ε and 1 − X̃_0(t) ≥ 1/2, we get ( αM̃_1(t) − λ ) X̃_0(t) · 4N/( 1 − X̃_0(t) ) ≤ 1. Then, using the comparison theorem, we obtain the conclusion.

51 Lemma. Let X(t) be the solution of the previous system and T_0 = inf{ t > 0, X(t) = 0 }. Then ∀T > 0, ∀p_2 < 1, ∃δ > 0 such that P(T_0 ≤ T) > p_2.

52 Proof: let
Y(t) = δ exp( −t + 2B_0(t) ), D(t) = ∫_0^t Y(s) ds, σ(t) = inf{ s > 0, D(s) > t }.
We have X_t = Y_{σ(t)}, and T_0 = ∫_0^∞ Y(t) dt. Then ∀T > 0,
P( ∫_0^∞ Y(t) dt ≤ T ) = P( ∫_0^∞ exp( −t + 2B_0(t) ) dt ≤ T/δ ).
But ∫_0^∞ exp( −t + 2B_0(t) ) dt < ∞ a.s., and T/δ → +∞ as δ → 0, so this probability tends to 1.
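
The process dX = dt + 2√X dB is a squared Bessel process of dimension 1 (X = B² for a one-dimensional Brownian motion B), so it does hit 0 a.s. A quick Euler check (step size, horizon and the number of paths are illustrative choices):

```python
import numpy as np

def hit_zero_time(delta, dt, t_max, rng):
    """First time an Euler path of dX = dt + 2*sqrt(X) dB, X(0) = delta,
    touches 0; returns None if it has not happened by t_max."""
    x, t = delta, 0.0
    while t < t_max:
        x += dt + 2.0 * np.sqrt(max(x, 0.0)) * rng.normal(0.0, np.sqrt(dt))
        t += dt
        if x <= 0.0:
            return t
    return None

rng = np.random.default_rng(2)
for delta in (0.5, 0.05):
    hits = [hit_zero_time(delta, dt=1e-2, t_max=20.0, rng=rng) for _ in range(50)]
    frac = sum(t is not None for t in hits) / len(hits)
    print(f"delta = {delta}: fraction of paths at 0 before t = 20: {frac:.2f}")
```

Smaller δ makes the hitting time stochastically smaller, which is the content of the lemma: P(T_0 ≤ T) → 1 as δ → 0.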

53 Corollary. Let X(t) be the solution of (3.1), T_0 = inf{ t > 0, X(t) = 0 }, T_µ = inf{ t > 0, X(t) = δ + µ }. Then ∀T > 0, ∀p_2 < 1, ∀µ > 0, ∃δ > 0 such that P( T_0 ≤ T ∧ T_µ ) > p_2. Proof: we use the same proof as before, noticing that
P( T_0 ≤ T ∧ T_µ ) ≥ P( { ∫_0^∞ exp( −t + 2B_0(t) ) dt ≤ T/δ } ∩ { sup_{t≥0} exp( −t + 2B_0(t) ) ≤ (δ + µ)/δ } ).

54 Now we choose a value M > 0 for M_1(0), and we set, where A is the time change defined above,
T = εN/(3λ), M_max = M + λA(T) + ε, p_2 = exp( −αNε/3 ), µ = ε/( 6M_max ).
Now δ is given in terms of T, M_max, p_2 and µ by the previous corollary, and we let δ = δ ∧ ε/(10M).

55 Now we need to control T_min: we need to prove that X_0(t)M_1(t) ≤ 2ε for 0 ≤ t ≤ A(T) ∧ A(T_µ) with probability p_3 > 1 − p_2. Lemma. ∃p_3 > 1 − p_2 such that
P( sup_{0≤t≤A(T)∧A(T_µ)} X_0(t)M_1(t) ≤ 2ε ) ≥ p_3.

56 Proof:
P( sup_{0≤t≤A(T)∧A(T_µ)} M_1(t) ≤ M_1(0) + λA(T) + ε/6 ) ≥ P( sup_{0≤t≤A(T)} M_1(t) ≤ M_1(0) + λA(T) + ε/6 ) ≥ 1 − exp( −αNε/6 ).

57 Then
sup_{0≤t≤A(T)∧A(T_µ)} X_0(t)M_1(t) ≤ ( δ + µ )( M_1(0) + λA(T) + ε/6 ) ≤ δM_1(0) + µM_1(0) + λA(T) + ε/6 ≤ ε + ε/6 + ε/12 + ε/6 ≤ 2ε.

58 So, to sum up: with probability p_fin = p_2 + p_3 − 1, we have T_min ≥ A(T) ∧ A(T_µ) and T_0 ≤ T ∧ T_µ; hence on the interval [0, A(T) ∧ A(T_µ)] = [0, A(T)] we have both X_0(t) ≤ X(A(t)) and X(A(t)) reaches 0. Hence the result.

60 Now for the set Ω_2 = { (X, M_1), X ≤ δ and XM_1 ≤ δM (≤ ε) }. We will show that (X_0′(0), M_1′(0)) has a better probability to reach 0 before time T. Let C = M_1′/M ≥ 1 (because of the definition of X, M). Then we have X_0′ ≤ ε/C. The probability for X(t) to reach zero is decreasing in its starting point, so we increase this probability by starting from X(0) = X_0′(0) ≤ δ/C. Moreover, we start with X_0′(0)M_1′(0) ≤ ε. The only thing which is worse than in the original case is M_1′(0), which is greater than M_1(0), hence a greater M_max.

61 But this only appears in one place: when we defined µ. Note that if we define M_1,max′ = M_1′(0) + λT + ε/6, the maximum reached by M_1′, we have M_1,max′ ≤ C·M_max and
P( M_1,max′ ≥ sup_{0≤t≤T} M_1′(t) ) ≥ 1 − exp( −αNε/6 ).
By the definition of µ, if we define µ′ with M_1,max′ we have µ′ ≥ µ/C.

62 But if we look at the proof: since T/δ′ = CT/δ ≥ T/δ and (δ′ + µ′)/δ′ = 1 + µ′/δ′ ≥ 1 + µ/δ = (δ + µ)/δ, we have
P( T_0′ ≤ T ∧ T_µ′ ) ≥ P( { ∫_0^∞ exp( −t + 2B_1(t) ) dt ≤ T/δ′ } ∩ { sup_{t≥0} exp( −t + 2B_1(t) ) ≤ (δ′ + µ′)/δ′ } )
≥ P( { ∫_0^∞ exp( −t + 2B_1(t) ) dt ≤ T/δ } ∩ { sup_{t≥0} exp( −t + 2B_1(t) ) ≤ (δ + µ)/δ } ).

63 We sum up the result in the following proposition, where ε = XM (it is a different ε, smaller than the previous one, but we keep the notation). Proposition. Let (X_k(t))_{k∈N} be the solution of the initial model, and M_1 its mean as defined in the beginning. Then ∃p ≥ p_fin > 0 such that if ∃t > 0 with X_0(t) ≤ δ and X_0(t)M_1(t) ≤ ε, then P( T_0 < t + T ) ≥ p > 0.

65 First, an inequality. Lemma. Let µ be a probability on N, x_k = µ(k) for k ≥ 0, M_1 = Σ_{k∈N} k x_k and M_2 = Σ_{k∈N} (k − M_1)² x_k. Then x_0 M_1² ≤ (1 − x_0) M_2 ≤ M_2.

66 Proof: by Jensen we have
( Σ_{k≥1} ( X_k/(1 − X_0) ) k )² ≤ Σ_{k≥1} ( X_k/(1 − X_0) ) k²,
with equality if and only if there exists only one k ≥ 1 such that X_k > 0. Then
M_1² ≤ (1 − X_0) Σ_{k≥1} X_k k², hence
X_0 M_1² ≤ (1 − X_0) Σ_{k≥1} X_k k² − (1 − X_0) M_1² = (1 − X_0)( Σ_{k≥1} X_k k² − M_1² ) = (1 − X_0) M_2.
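
The inequality can be sanity-checked numerically on random distributions with finite support (the support size and the Dirichlet sampling are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
k = np.arange(8)
for _ in range(1000):
    x = rng.dirichlet(np.ones(8))            # a random probability vector on {0, ..., 7}
    m1 = (k * x).sum()                       # mean
    m2 = ((k - m1) ** 2 * x).sum()           # variance
    assert x[0] * m1 ** 2 <= (1.0 - x[0]) * m2 + 1e-12
print("x_0 * M_1^2 <= (1 - x_0) * M_2 holds on 1000 random distributions")
```

The equality case of Jensen is visible too: for x = (1/2, 1/2, 0, ...), both sides equal 1/8.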

71 Now we can obtain some kind of recurrence for M_1. Lemma. Let S_λ^T := inf{ t ≥ T, X_0(t)M_1(t)² ≤ 2(λ + 1)/α }. Then for any T > 0, we have S_λ^T < +∞ a.s.

72 Proof: on the interval [0, S_λ] we have (α/2)M_2 ≥ (α/2)X_0M_1² ≥ λ + 1, and then the process M_1 is bounded from above by the process Y, solution of the SDE
dY_t = −dt − (α/2)M_2(t) dt + √( M_2(t)/N ) dB_t, Y_0 = M_1(0).

75 Since M_1 cannot become negative, it now suffices to show that
Z_t := ∫_0^t √( M_2(r)/N ) dB_r − (α/2) ∫_0^t M_2(r) dr
is bounded from above a.s. If we define C(t) = (1/N) ∫_0^t M_2(s) ds, we have Z_t = W(C(t)) − (Nα/2)C(t), where W is a standard Brownian motion. Now, if C(∞) = ∞ then lim_{t→∞} Z_t = −∞, hence Z_t is bounded from above. Or else C(∞) < ∞, and we have sup_{t>0} Z_t = sup_{0<s<C(∞)} [ W(s) − (Nα/2)s ] < ∞ a.s.

77 As a consequence of the previous results: Lemma. Let (X_k(t))_{k∈N} be the solution of the Muller's ratchet model. Then
{ lim_{t→+∞} X_0(t) = 0 } ⊂ { T_0 < +∞ }.

78 Proof: if lim_{t→+∞} X_0(t) = 0, then ∃T_0′ > 0 such that ∀t > T_0′ we have X_0(t) ≤ δ ∧ ε²α/(4(λ + 1)). Then there is T_1 > T_0′ such that X_0(T_1)M_1(T_1)² ≤ 2(λ + 1)/α. Let us suppose that X_0M_1 > ε at that time. Then M_1 = X_0M_1/X_0 > ε/X_0 ≥ 4(λ + 1)/(αε), so
X_0M_1² = (X_0M_1) · M_1 > ε · 4(λ + 1)/(αε) = 4(λ + 1)/α,
which is absurd. Then we have X_0M_1 ≤ ε.

79 Then we have P( T_0 < T_1 + T ) > p. As long as the ratchet does not click, this situation presents itself infinitely many times; since the system (X_k)_{k∈N} has the Markov property, we obtain the conclusion.

80 Now the recurrence on M_1. Let S_t^β = inf{ t′ > t, M_1(t′) ≤ β }. Then we will prove the following lemma: Lemma. ∀t > 0, we have P( T_0 ∧ S_t^β < ∞ ) = 1.

82 Proof: first, we write δ_inf = δ ∧ ε²α/(4(λ + 1)). Now we introduce the process Y_t^{t′}, defined for t′ ≥ 0 and t ≥ t′, which is the solution of the following system:
dY_t^{t′} = √( Y_t^{t′}( 1 − Y_t^{t′} )/N ) dB_0, Y_{t′}^{t′} = δ_inf. (3.3)
We note that Y^{t′} is an instance of the processes studied in the addendum, because here we have −2 < f(t) = 0 < 2. Then we set R_u^{t′} = inf{ t ≥ t′, Y_t^{t′} = u }.

83 We have E( R_0^t ∧ R_1^t ) < +∞ and P( R_1^t < R_0^t ) > 0. From this we deduce that ∃K > 0, p > 0 such that P( R_1^t ≤ K ∧ R_0^t ) ≥ p > 0; in particular P( R_1^t ≤ K ) ≥ p > 0. We use L = K ∨ T (T from step 3). Now we construct the following sequence: U_0 = the first time where X_0M_1² ≤ 2(λ + 1)/α, and for n ≥ 1, U_n = the first time after U_{n−1} + L where X_0M_1² ≤ 2(λ + 1)/α. This time exists a.s. and is < +∞ thanks to Prop 1.2.

84 Now, at U_0: either X_0(U_0) ≤ δ_inf; then, using the same reasoning as in 4.1, we have P( T_0 ≤ U_0 + T ) ≥ p_fin > 0. Or X_0(U_0) > δ_inf, and in that case there are two sub-cases. Either inf_{U_0≤t≤U_0+K} M_1(t) ≥ β: then we have (αM_1 − λ)X_0 ≥ 0, so we can use the comparison theorem and get X_0(t) ≥ Y_t^{U_0}; then P( T_1 ≤ U_0 + K ) ≥ p > 0, where T_1 is the hitting time of 1 by X_0. But if X_0(t) = 1, then M_1(t) = 0. Hence P( S^β ≤ U_0 + K ) ≥ p > 0. In the other sub-case, inf_{U_0≤t≤U_0+K} M_1(t) < β, hence S^β ≤ U_0 + K.

85 To conclude: with q = p ∧ p_fin, we have P( T_0 ∧ S_t^β = +∞ ) ≤ P( T_0 ∧ S_t^β > U_0 + L ) ≤ 1 − q. Now, since the processes (X_0, M_1) are Markovian, by iterating we obtain P( T_0 ∧ S_t^β = +∞ ) = 0.

88 Now, starting from ((X_k)_{k∈N}, M_1) with M_1 ≤ β, we want to study a path that will reach zero. To simplify notation we reset the time. One of the main problems here is that the quadratic variation of X_0 is X_0(1 − X_0)/N, which is a difficulty around 1 and 0. We need to study three separate cases:
X_0 ∈ [δ_1, 1];
X_0 ≤ δ_1, but X_0 > δ or X_0M_1 > ε;
X_0 ≤ δ and X_0M_1 ≤ ε.

89 The following lemma shows that if X_0 starts too close to 1, it will quickly go below δ_1. Lemma. Let t_1 = 8/λ² and δ_1 = max{ 9/10, (3λ + 5α)/(5(λ + α)), 1 − 2/λ }. Then, ∀t > 0, if X_0(t) > δ_1, then
P( inf{ s > t, X_0(s) ≤ δ_1 } < t + t_1 ) ≥ 1 − exp(−N) > 0.

90 Proof: during the interval [t, T_{δ_1}(t)] we have X_1 ≤ 1 − X_0 ≤ 1 − δ_1 ≤ 2λ/(5(λ + α)), so the drift of X_1 satisfies
α(M_1 − 1)X_1 + λ(X_0 − X_1) ≥ λX_0 − (λ + α)X_1 ≥ λX_0 − 2λ/5 ≥ λ · 9/10 − 2λ/5 = λ/2,
since X_0(s) > δ_1 ≥ 9/10.

91 Hence X_1 ≥ Y_1 for s ∈ [t, T_{δ_1}(t)], where Y_1 is the solution of the following system:
Y_1(t) = X_1(t), dY_1(s) = (λ/2) ds + √( Y_1(1 − Y_1)/N ) dB_1. (3.4)

92 Noticing that Y_1(1 − Y_1) ≤ 1/4, we have, with C = 2/λ,
P( ∫_t^{t+t_1} √( Y_1(1 − Y_1)/N ) dB_1 < −C ) = P( −∫_t^{t+t_1} √( Y_1(1 − Y_1)/N ) dB_1 > C )

93 = P( exp( −p ∫_t^{t+t_1} √( Y_1(1 − Y_1)/N ) dB_1 − (p²/2) ∫_t^{t+t_1} ( Y_1(1 − Y_1)/N ) ds ) > exp( pC − (p²/2) ∫_t^{t+t_1} ( Y_1(1 − Y_1)/N ) ds ) )
≤ P( exp( −p ∫_t^{t+t_1} √( Y_1(1 − Y_1)/N ) dB_1 − (p²/2) ∫_t^{t+t_1} ( Y_1(1 − Y_1)/N ) ds ) > exp( pC − (p²/(8N)) t_1 ) )
≤ exp( −pC + (p²/(8N)) t_1 ).

94 where p = 4CN/t_1 minimizes the bound, giving
P( ∫_t^{t+t_1} √( Y_1(1 − Y_1)/N ) dB_1 ≥ −C ) ≥ 1 − exp(−N) > 0.
Now, note that ∫_t^{t+t_1} (λ/2) ds = 4/λ = 2C.

95 We have
{ ∫_t^{t+t_1} √( Y_1(1 − Y_1)/N ) dB_1 ≥ −C } ⊂ { T_{δ_1} < t + t_1 },
which implies that P( T_{δ_1} < t + t_1 ) ≥ 1 − exp(−N).

96 Now we add a control on M_1. Lemma. Let δ_1 = max{ 9/10, (3λ + 5α)/(5(λ + α)), 1 − 2/λ }, t_1 = 8/λ², ε_0 = (1/(αN)) ln( 2/(1 − exp(−N)) ) and β′ = β + λt_1 + ε_0. Then, ∀t > 0, if X_0(t) > δ_1 and M_1(t) < β, then
P( { T_{δ_1} ≤ t + t_1 } ∩ { M_1(T_{δ_1}) ≤ β′ } ) ≥ p_init > 0.

97 Proof: here P( T_{δ_1} ≤ t + t_1 ) ≥ 1 − exp(−N) and P( M_1(T_{δ_1}) ≤ β′ ) ≥ 1 − exp( −αNε_0 ). Then
P( { T_{δ_1} ≤ t + t_1 } ∩ { M_1(T_{δ_1}) ≤ β′ } ) ≥ 1 − exp(−N) − exp( −αNε_0 ) = (1 − exp(−N))/2 =: p_init > 0.

99 Now the second part: X_0 ≤ δ_1, but either X_0 > δ or X_0M_1 > ε. First, some inequalities. Lemma. Let {V_t, t ≥ 0} be a standard Brownian motion and c > 0 a constant. Then for any t > 0, δ > 0,
P( inf_{0≤s≤t} { cs + V_s } ≥ −δ ) ≤ √(2/π) ( δ/√t + c√t ).

100 Proof: we have, with Z denoting an N(0,1) random variable,
P( inf_{0≤s≤t} { cs + V_s } ≥ −δ ) ≤ P( inf_{0≤s≤t} V_s ≥ −δ − ct ) = P( sup_{0≤s≤t} V_s ≤ δ + ct ) = 1 − 2P( V_t > δ + ct ) = P( |Z| ≤ δ/√t + c√t ),
from which the result clearly follows.

101 From this we deduce: Lemma. Let {V_t, t ≥ 0} be a standard Brownian motion and c > 0 a constant. Then for any t > 0, δ > 0, µ > 0,
P( inf_{0≤s≤t} { cs + V_s } ≤ −δ, sup_{0≤s≤t} { cs + V_s } ≤ µ ) ≥ 1 − √(2/π)( δ/√t + c√t ) − 2 exp( −(1/2)( µ/√t − c√t )² ).

102 Proof:
P( sup_{0≤s≤t} ( cs + V_s ) ≥ µ ) ≤ P( sup_{0≤s≤t} V_s ≥ µ − ct ) = 2P( V_t ≥ µ − ct ).
Now, Z denoting an N(0,1) random variable, for all p > 0,
P( V_t ≥ µ − ct ) = P( Z ≥ µ/√t − c√t ) = P( exp( pZ − p²/2 ) ≥ exp( p[ µ/√t − c√t ] − p²/2 ) ) ≤ exp( −p[ µ/√t − c√t ] + p²/2 ).

103 Choosing p = µ/√t − c√t, we conclude from the above computations that
P( sup_{0≤s≤t} ( cs + V_s ) ≤ µ ) ≥ 1 − 2 exp( −(1/2)( µ/√t − c√t )² ).
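
A Monte Carlo sanity check of the last bound, on discretized Brownian paths (the parameters and the discretization are illustrative choices):

```python
import numpy as np

def sup_tail_mc(c, mu, t, n_steps=500, n_paths=5000, seed=4):
    """Estimate P( sup_{0 <= s <= t} (c*s + V_s) >= mu ) for a standard BM V."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    inc = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(inc, axis=1) + c * dt * np.arange(1, n_steps + 1)
    return (paths.max(axis=1) >= mu).mean()

c, mu, t = 0.5, 2.0, 1.0
est = sup_tail_mc(c, mu, t)
bound = 2.0 * np.exp(-0.5 * (mu / np.sqrt(t) - c * np.sqrt(t)) ** 2)
print(f"Monte Carlo estimate {est:.3f} vs bound {bound:.3f}")
```

The discretized supremum slightly underestimates the continuous one, so the comparison with the (conservative) exponential bound is safe.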

104 We set ε̄ = log(4)/(αN), so that e^{−Nαε̄} = 1/4, and µ = (1 − δ_1)/2. We will use another time change, and we use A and σ again for the time change (but they are not the same as before). We aim at proving that X_0 will go down to δ in a finite number of steps, while staying below X + µ (so that 1 − X_0(t) ≥ a := 1 − (X + µ)), and while M_1 does not go too far to the right, all of that with positive probability.

105 We have
dX_0(t) = ( αM_1(t) − λ )X_0(t) dt + √( X_0(t)[1 − X_0(t)]/N ) dB_0.
Let
A_t := ∫_0^t ( X_0(s)[1 − X_0(s)]/N ) ds, σ_t := inf{ s > 0, A_s > t }, X̄_0(t) := X_0(σ_t), M̄_1(t) := M_1(σ_t).

106 We deduce
σ_t = ∫_0^t ( N/( X̄_0(s)(1 − X̄_0(s)) ) ) ds, X̄_0(t) = X + ∫_0^t ( N( αM̄_1(s) − λ )/( 1 − X̄_0(s) ) ) ds + B_t,
where B_t is a new standard Brownian motion. At the k-th step of our iterative procedure, we let X̄_0 start from X − Σ_{j=1}^{k−1} δ_j, and we stop the process X̄_0 at the first time that it reaches the level X − Σ_{j=1}^k δ_j.

107 We will choose δ_k and t_k such that for each 1 ≤ k ≤ K (K to be defined below),
P( inf_{0≤s≤t_k} { A_k s + B_s } ≤ −δ_k, sup_{0≤s≤t_k} { A_k s + B_s } ≤ µ ) > 2/3,
where, with T_k := σ_{t_k},
A_k = ( Nα/a )( β′ + kε̄ + λ Σ_{j=1}^k T_j ),
so that we have, from Lemma 4 and our choice of ε̄,
P( sup_{0≤s≤T_k} M_1(s) > β′ + kε̄ + λ Σ_{j=1}^k T_j for some k ) ≤ Σ_{k≥1} 4^{−k} = 1/3.

108 We show that we can choose the two sequences δ_k and t_k for k ≥ 1 in such a way that not only the two previous inequalities hold, but also that there exists K < ∞ such that X − Σ_{k=1}^K δ_k ≤ δ. Since during the k-th step we are considering the event that 1 − X̄_0(t) ≥ a, and also X̄_0(t) ≥ X − Σ_{j=1}^k δ_j, we have that
T_k ≤ ( N/( a( X − Σ_{j=1}^k δ_j ) ) ) t_k.

109 Then we choose
A_k := ( Nα/a )[ β′ + kε̄ + ( Nλ/a ) Σ_{j=1}^k ( t_j/( X − Σ_{i=1}^j δ_i ) ) ].
First we want to ensure δ_k/√t_k + A_k√t_k ≤ 0.4, which we achieve by requesting both that
δ_k = 0.2√t_k (3.5)
and
A_k√t_k ≤ 0.2. (3.6)

110 On the other hand, we shall also request that for each k ≥ 1,
δ_k ≤ (1/2)( X − Σ_{j=1}^{k−1} δ_j ).
It then follows from the bound on T_k that A_k ≤ C_N + D_N k, with
C_N = Nαβ′/a and D_N = ( Nα/a )( ε̄ + ( 25Nλ/a ) sup_{1≤j≤k} δ_j ).

111 Since we have δ_j = 0.2√t_j for all j, condition (3.6) amounts to δ_k ≤ 1/(25A_k). Finally this leads to choosing, with κ ≤ 1 to be chosen below,
δ_k = inf( κ/( 25( C_N + D_N k ) ), (1/2)( X − Σ_{j=1}^{k−1} δ_j ) ), t_k = 25δ_k².

112 Then, from a certain rank on, we have: Lemma. ∃K > 0 such that ∀k > K,
δ_k = (1/2)( X − Σ_{j=1}^{k−1} δ_j ).

113 Proof: since Σ_k κ/(25( C_N + D_N k )) = +∞, ∃K > 0 such that Σ_{k=2}^K κ/(25( C_N + D_N k )) > 1/2. Then ∃K such that
inf( κ/(25( C_N + D_N K )), (1/2)( X − Σ_{j=1}^{K−1} δ_j ) ) = (1/2)( X − Σ_{j=1}^{K−1} δ_j ).
Then by induction: if we have the previous equality at rank k, then for rank k + 1 we have
κ/(25( C_N + D_N (k+1) )) ≥ (1/2) · κ/(25( C_N + D_N k )) ≥ (1/2) δ_k, since k ≥ 2.

114 That is to say,
κ/(25( C_N + D_N (k+1) )) ≥ (1/2) δ_k = (1/4)( X − Σ_{j=1}^{k−1} δ_j ) = (1/2)( X − Σ_{j=1}^k δ_j ),
hence
δ_{k+1} = inf( κ/(25( C_N + D_N (k+1) )), (1/2)( X − Σ_{j=1}^k δ_j ) ) = (1/2)( X − Σ_{j=1}^k δ_j ).

115 Then for each k > K, X̄_0 progresses by a step equal to half the remaining distance to zero. Consequently, ∃c > 0 such that X̄_k = X − Σ_{j=1}^k δ_j ≤ c2^{−k}. We are looking for the smallest integer k̄ such that c2^{−k̄} ≤ δ′, which implies
k̄ ≤ [ ( log(c) − log(δ′) )/log(2) ] + 1.
Since moreover A_k ≤ (25δ_k)^{−1}, there exists a constant c′ such that A_{k̄} ≤ (c′/δ′) log(1/δ′). Hence there exists a δ′ ≤ δ (which depends only upon the constants C_N, D_N, c) such that at the end of the k̄-th step, both X_0 ≤ δ and X_0M_1 ≤ ε.

116 We just need to check that the probability of the previous paths is ≥ p_trans > 0. Given the choice that we have made for ε̄, it suffices to make sure that
√(2/π)( δ_k/√t_k + A_k√t_k ) < 1/3, ∀k ≥ 1,
which is a consequence of (1/3)√(π/2) > 0.4, and
2 exp( −(1/2)( µ/√t_k − A_k√t_k )² ) < 1/3, ∀k ≥ 1.

117 This is equivalent to
( µ/√t_k − A_k√t_k )² > 2 log 6,
which follows from κ ≤ 10µC_N/( √(2 log 6) + 4C_N µ ). We therefore choose
κ = 1 ∧ ( 10µC_N/( √(2 log 6) + 4C_N µ ) ).

118 Then each step succeeds with probability at least 1/12; hence, using the Markov property, we have p_trans ≥ (1/12)^{k̄}. But k̄ < ∞ for any initial condition X, and k̄ is increasing in X, so the worst k̄, denoted k_max, is attained for X = δ_1, and then
p_trans ≥ (1/12)^{k_max} > 0.
Note that the time (in the initial time scale) in which we reach δ is bounded by t_2 = 100Nk_max. We have finally reached the third situation, which leads to the conclusion.

119 Third case: X_0 ≤ δ and X_0M_1 ≤ ε. Here we can immediately apply step 3 of this document, and we have a probability p_fin of reaching 0 before a further time T has elapsed. So, to sum up, using again the Markov property of the system: Lemma. ∀t > 0, if M_1(t) ≤ β, then
P( T_0 < t + t_1 + t_2 + T ) ≥ p_fin · p_trans · p_init > 0.

121 Here we will prove the following stronger theorem. Theorem. For any choice of initial condition, let (X_k(t))_{k∈Z_+} be the solution of (8). Then E(T_0) < ∞. We first note that the reasoning of Section 5 can be carried out with any initial value ρ for M_1, instead of β. That is to say, with S_ρ^t = inf{ s > t, M_1(s) ≤ ρ } (and S_ρ = S_ρ^0): Lemma. ∃t_1^ρ, t_2^ρ, t_3^ρ < ∞ and p_init^ρ, p_trans^ρ, p_fin^ρ > 0 such that
P( T_0 < S_ρ^t + t_1^ρ + t_2^ρ + t_3^ρ ) ≥ p_init^ρ p_trans^ρ p_fin^ρ.

122 Now let us choose ρ = (ε/δ) ∨ (2λ/α). We have: Lemma. Let K = L + t_3 (L to be defined below). Then ∃p > 0 such that for any initial condition, P( T_0 ∧ S_ρ ≤ K ) ≥ p. Proof: we are going to argue like in the recurrence for M_1. We introduce the process Y_s, defined for s ≥ 0, which is the solution of the following system:
dY_s = ( αε/2 ) ds + √( Y_s(1 − Y_s)/N ) dB_0(s), Y_0 = 0. (3.7)

123 We define, for any 0 ≤ u ≤ 1, R_u = inf{ s ≥ 0, Y_s = u }. Since αε/2 > 0, we deduce that ∃L > 0, p > 0 such that P( R_1 ≤ L ) ≥ p > 0. We use K = L + t_3 (t_3 from Proposition 3.2). Now there are several possibilities. Either inf_{0≤s≤L} M_1(s) ≤ ρ; then S_ρ ≤ L < K. Or else inf_{0≤s≤L} M_1(s) > ρ. Then either inf_{0≤s≤L} X_0(s)M_1(s) ≤ ε, i.e. ∃t < L such that X_0(t)M_1(t) ≤ ε (which implies X_0(t) ≤ δ, because M_1(t) ≥ ρ ≥ ε/δ). In that case we can use Proposition 3.2, and we have P( T_0 ≤ K ) ≥ p_fin > 0, which implies P( T_0 ∧ S_ρ ≤ K ) ≥ p_fin > 0.

124 Or else we have both inf_{0≤s≤L} M_1(s) > ρ and inf_{0≤s≤L} X_0(s)M_1(s) > ε. In that last sub-case we have (since X_0 ≥ ε/M_1 and αM_1 − λ ≥ λ > 0)
inf_{0≤s≤L} ( αM_1(s) − λ )X_0(s) ≥ inf_{0≤s≤L} ε( α − λ/M_1(s) ) ≥ αε/2,
and consequently we can use the comparison theorem (Lemma 2), which implies that ∀s ∈ [0, L], X_0(s) ≥ Y_s. Then P( T_1 ≤ L ) ≥ p > 0. But when X_0 hits 1, M_1 hits 0. Hence P( S_ρ ≤ L ) ≥ p > 0.

125 We can now conclude. We deduce from the previous results and the strong Markov property that ∃K, p > 0 such that for all n ≥ 0, P( T_0 > nK ) ≤ (1 − p)^n. Consequently
E(T_0) = ∫_0^∞ P( T_0 > t ) dt = Σ_{n=0}^∞ ∫_{nK}^{(n+1)K} P( T_0 > t ) dt ≤ Σ_{n=0}^∞ K P( T_0 > nK ) ≤ K/p.

126 THANK YOU FOR YOUR ATTENTION!


More information

Large Deviations Principles for McKean-Vlasov SDEs, Skeletons, Supports and the law of iterated logarithm

Large Deviations Principles for McKean-Vlasov SDEs, Skeletons, Supports and the law of iterated logarithm Large Deviations Principles for McKean-Vlasov SDEs, Skeletons, Supports and the law of iterated logarithm Gonçalo dos Reis University of Edinburgh (UK) & CMA/FCT/UNL (PT) jointly with: W. Salkeld, U. of

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

(A n + B n + 1) A n + B n

(A n + B n + 1) A n + B n 344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals

More information

ERRATA: Probabilistic Techniques in Analysis

ERRATA: Probabilistic Techniques in Analysis ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI Contents Preface 1 1. Introduction 1 2. Preliminaries 4 3. Local martingales 1 4. The stochastic integral 16 5. Stochastic calculus 36 6. Applications

More information

Skorokhod Embeddings In Bounded Time

Skorokhod Embeddings In Bounded Time Skorokhod Embeddings In Bounded Time Stefan Ankirchner Institute for Applied Mathematics University of Bonn Endenicher Allee 6 D-535 Bonn ankirchner@hcm.uni-bonn.de Philipp Strack Bonn Graduate School

More information

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have 362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Some Aspects of Universal Portfolio

Some Aspects of Universal Portfolio 1 Some Aspects of Universal Portfolio Tomoyuki Ichiba (UC Santa Barbara) joint work with Marcel Brod (ETH Zurich) Conference on Stochastic Asymptotics & Applications Sixth Western Conference on Mathematical

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012 Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 202 The exam lasts from 9:00am until 2:00pm, with a walking break every hour. Your goal on this exam should be to demonstrate mastery of

More information

Lecture 12: Diffusion Processes and Stochastic Differential Equations

Lecture 12: Diffusion Processes and Stochastic Differential Equations Lecture 12: Diffusion Processes and Stochastic Differential Equations 1. Diffusion Processes 1.1 Definition of a diffusion process 1.2 Examples 2. Stochastic Differential Equations SDE) 2.1 Stochastic

More information

Chapter 1. Poisson processes. 1.1 Definitions

Chapter 1. Poisson processes. 1.1 Definitions Chapter 1 Poisson processes 1.1 Definitions Let (, F, P) be a probability space. A filtration is a collection of -fields F t contained in F such that F s F t whenever s

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches

Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches Dynamical systems with Gaussian and Levy noise: analytical and stochastic approaches Noise is often considered as some disturbing component of the system. In particular physical situations, noise becomes

More information

MATH 6605: SUMMARY LECTURE NOTES

MATH 6605: SUMMARY LECTURE NOTES MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic

More information

Lecture 12. F o s, (1.1) F t := s>t

Lecture 12. F o s, (1.1) F t := s>t Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let

More information

Information and Credit Risk

Information and Credit Risk Information and Credit Risk M. L. Bedini Université de Bretagne Occidentale, Brest - Friedrich Schiller Universität, Jena Jena, March 2011 M. L. Bedini (Université de Bretagne Occidentale, Brest Information

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Brownian Motion. Chapter Definition of Brownian motion

Brownian Motion. Chapter Definition of Brownian motion Chapter 5 Brownian Motion Brownian motion originated as a model proposed by Robert Brown in 1828 for the phenomenon of continual swarming motion of pollen grains suspended in water. In 1900, Bachelier

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Approximating diffusions by piecewise constant parameters

Approximating diffusions by piecewise constant parameters Approximating diffusions by piecewise constant parameters Lothar Breuer Institute of Mathematics Statistics, University of Kent, Canterbury CT2 7NF, UK Abstract We approximate the resolvent of a one-dimensional

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

An almost sure invariance principle for additive functionals of Markov chains

An almost sure invariance principle for additive functionals of Markov chains Statistics and Probability Letters 78 2008 854 860 www.elsevier.com/locate/stapro An almost sure invariance principle for additive functionals of Markov chains F. Rassoul-Agha a, T. Seppäläinen b, a Department

More information

Do stochastic processes exist?

Do stochastic processes exist? Project 1 Do stochastic processes exist? If you wish to take this course for credit, you should keep a notebook that contains detailed proofs of the results sketched in my handouts. You may consult any

More information

Martingale Problems. Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi

Martingale Problems. Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi s Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi Lectures on Probability and Stochastic Processes III Indian Statistical Institute, Kolkata 20 24 November

More information

B. LAQUERRIÈRE AND N. PRIVAULT

B. LAQUERRIÈRE AND N. PRIVAULT DEVIAION INEQUALIIES FOR EXPONENIAL JUMP-DIFFUSION PROCESSES B. LAQUERRIÈRE AND N. PRIVAUL Abstract. In this note we obtain deviation inequalities for the law of exponential jump-diffusion processes at

More information

Weak convergence and large deviation theory

Weak convergence and large deviation theory First Prev Next Go To Go Back Full Screen Close Quit 1 Weak convergence and large deviation theory Large deviation principle Convergence in distribution The Bryc-Varadhan theorem Tightness and Prohorov

More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

Point Process Control

Point Process Control Point Process Control The following note is based on Chapters I, II and VII in Brémaud s book Point Processes and Queues (1981). 1 Basic Definitions Consider some probability space (Ω, F, P). A real-valued

More information

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem

Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Nonlinear representation, backward SDEs, and application to the Principal-Agent problem Ecole Polytechnique, France April 4, 218 Outline The Principal-Agent problem Formulation 1 The Principal-Agent problem

More information

JUSTIN HARTMANN. F n Σ.

JUSTIN HARTMANN. F n Σ. BROWNIAN MOTION JUSTIN HARTMANN Abstract. This paper begins to explore a rigorous introduction to probability theory using ideas from algebra, measure theory, and other areas. We start with a basic explanation

More information

A PECULIAR COIN-TOSSING MODEL

A PECULIAR COIN-TOSSING MODEL A PECULIAR COIN-TOSSING MODEL EDWARD J. GREEN 1. Coin tossing according to de Finetti A coin is drawn at random from a finite set of coins. Each coin generates an i.i.d. sequence of outcomes (heads or

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

Expectation. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Expectation. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Expectation DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean, variance,

More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

On continuous time contract theory

On continuous time contract theory Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem

More information

Some Properties of NSFDEs

Some Properties of NSFDEs Chenggui Yuan (Swansea University) Some Properties of NSFDEs 1 / 41 Some Properties of NSFDEs Chenggui Yuan Swansea University Chenggui Yuan (Swansea University) Some Properties of NSFDEs 2 / 41 Outline

More information

Branching Processes II: Convergence of critical branching to Feller s CSB

Branching Processes II: Convergence of critical branching to Feller s CSB Chapter 4 Branching Processes II: Convergence of critical branching to Feller s CSB Figure 4.1: Feller 4.1 Birth and Death Processes 4.1.1 Linear birth and death processes Branching processes can be studied

More information

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem Koichiro TAKAOKA Dept of Applied Physics, Tokyo Institute of Technology Abstract M Yor constructed a family

More information

Non-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings. Krzysztof Burdzy University of Washington

Non-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings. Krzysztof Burdzy University of Washington Non-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings Krzysztof Burdzy University of Washington 1 Review See B and Kendall (2000) for more details. See also the unpublished

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Optimal exit strategies for investment projects. 7th AMaMeF and Swissquote Conference

Optimal exit strategies for investment projects. 7th AMaMeF and Swissquote Conference Optimal exit strategies for investment projects Simone Scotti Université Paris Diderot Laboratoire de Probabilité et Modèles Aléatories Joint work with : Etienne Chevalier, Université d Evry Vathana Ly

More information

Non-Zero-Sum Stochastic Differential Games of Controls and St

Non-Zero-Sum Stochastic Differential Games of Controls and St Non-Zero-Sum Stochastic Differential Games of Controls and Stoppings October 1, 2009 Based on two preprints: to a Non-Zero-Sum Stochastic Differential Game of Controls and Stoppings I. Karatzas, Q. Li,

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on

More information

13 The martingale problem

13 The martingale problem 19-3-2012 Notations Ω complete metric space of all continuous functions from [0, + ) to R d endowed with the distance d(ω 1, ω 2 ) = k=1 ω 1 ω 2 C([0,k];H) 2 k (1 + ω 1 ω 2 C([0,k];H) ), ω 1, ω 2 Ω. F

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6

MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 MATH 56A: STOCHASTIC PROCESSES CHAPTER 6 6. Renewal Mathematically, renewal refers to a continuous time stochastic process with states,, 2,. N t {,, 2, 3, } so that you only have jumps from x to x + and

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 7 9/25/2013

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 7 9/25/2013 MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/15.070J Fall 013 Lecture 7 9/5/013 The Reflection Principle. The Distribution of the Maximum. Brownian motion with drift Content. 1. Quick intro to stopping times.

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

An Introduction to Malliavin calculus and its applications

An Introduction to Malliavin calculus and its applications An Introduction to Malliavin calculus and its applications Lecture 3: Clark-Ocone formula David Nualart Department of Mathematics Kansas University University of Wyoming Summer School 214 David Nualart

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

An essay on the general theory of stochastic processes

An essay on the general theory of stochastic processes Probability Surveys Vol. 3 (26) 345 412 ISSN: 1549-5787 DOI: 1.1214/1549578614 An essay on the general theory of stochastic processes Ashkan Nikeghbali ETHZ Departement Mathematik, Rämistrasse 11, HG G16

More information

A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Adaptive Multilevel Splitting Algorithm

A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Adaptive Multilevel Splitting Algorithm A Central Limit Theorem for Fleming-Viot Particle Systems Application to the Algorithm F. Cérou 1,2 B. Delyon 2 A. Guyader 3 M. Rousset 1,2 1 Inria Rennes Bretagne Atlantique 2 IRMAR, Université de Rennes

More information

An Introduction to Stochastic Processes in Continuous Time

An Introduction to Stochastic Processes in Continuous Time An Introduction to Stochastic Processes in Continuous Time Flora Spieksma adaptation of the text by Harry van Zanten to be used at your own expense May 22, 212 Contents 1 Stochastic Processes 1 1.1 Introduction......................................

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

Homework #6 : final examination Due on March 22nd : individual work

Homework #6 : final examination Due on March 22nd : individual work Université de ennes Année 28-29 Master 2ème Mathématiques Modèles stochastiques continus ou à sauts Homework #6 : final examination Due on March 22nd : individual work Exercise Warm-up : behaviour of characteristic

More information

Introduction Optimality and Asset Pricing

Introduction Optimality and Asset Pricing Introduction Optimality and Asset Pricing Andrea Buraschi Imperial College Business School October 2010 The Euler Equation Take an economy where price is given with respect to the numéraire, which is our

More information

Particle models for Wasserstein type diffusion

Particle models for Wasserstein type diffusion Particle models for Wasserstein type diffusion Vitalii Konarovskyi University of Leipzig Bonn, 216 Joint work with Max von Renesse konarovskyi@gmail.com Wasserstein diffusion and Varadhan s formula (von

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

Some recent results on controllability of coupled parabolic systems: Towards a Kalman condition

Some recent results on controllability of coupled parabolic systems: Towards a Kalman condition Some recent results on controllability of coupled parabolic systems: Towards a Kalman condition F. Ammar Khodja Clermont-Ferrand, June 2011 GOAL: 1 Show the important differences between scalar and non

More information

arxiv:math/ v4 [math.pr] 12 Apr 2007

arxiv:math/ v4 [math.pr] 12 Apr 2007 arxiv:math/612224v4 [math.pr] 12 Apr 27 LARGE CLOSED QUEUEING NETWORKS IN SEMI-MARKOV ENVIRONMENT AND ITS APPLICATION VYACHESLAV M. ABRAMOV Abstract. The paper studies closed queueing networks containing

More information

Basics of Stochastic Modeling: Part II

Basics of Stochastic Modeling: Part II Basics of Stochastic Modeling: Part II Continuous Random Variables 1 Sandip Chakraborty Department of Computer Science and Engineering, INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR August 10, 2016 1 Reference

More information

Doléans measures. Appendix C. C.1 Introduction

Doléans measures. Appendix C. C.1 Introduction Appendix C Doléans measures C.1 Introduction Once again all random processes will live on a fixed probability space (Ω, F, P equipped with a filtration {F t : 0 t 1}. We should probably assume the filtration

More information

Notes 1 : Measure-theoretic foundations I

Notes 1 : Measure-theoretic foundations I Notes 1 : Measure-theoretic foundations I Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Wil91, Section 1.0-1.8, 2.1-2.3, 3.1-3.11], [Fel68, Sections 7.2, 8.1, 9.6], [Dur10,

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

Convergence of Markov Processes. Amanda Turner University of Cambridge

Convergence of Markov Processes. Amanda Turner University of Cambridge Convergence of Markov Processes Amanda Turner University of Cambridge 1 Contents 1 Introduction 2 2 The Space D E [, 3 2.1 The Skorohod Topology................................ 3 3 Convergence of Probability

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

1 Stochastic Dynamic Programming

1 Stochastic Dynamic Programming 1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future

More information

Series of Error Terms for Rational Approximations of Irrational Numbers

Series of Error Terms for Rational Approximations of Irrational Numbers 2 3 47 6 23 Journal of Integer Sequences, Vol. 4 20, Article..4 Series of Error Terms for Rational Approximations of Irrational Numbers Carsten Elsner Fachhochschule für die Wirtschaft Hannover Freundallee

More information

2905 Queueing Theory and Simulation PART IV: SIMULATION

2905 Queueing Theory and Simulation PART IV: SIMULATION 2905 Queueing Theory and Simulation PART IV: SIMULATION 22 Random Numbers A fundamental step in a simulation study is the generation of random numbers, where a random number represents the value of a random

More information