Albert N. Shiryaev (Steklov Mathematical Institute). On sharp maximal inequalities for stochastic processes


Albert N. Shiryaev (Steklov Mathematical Institute)
On sharp maximal inequalities for stochastic processes
Joint work with Yaroslav Lyulko (Higher School of Economics)

TOPIC I: Sharp maximal inequalities for continuous time processes
TOPIC II: Sharp maximal inequalities for discrete time processes

TOPIC I: Sharp maximal inequalities for continuous time processes
1. Introduction. The main method for obtaining sharp maximal inequalities
2. Maximal inequalities for standard Brownian motion and its modulus. Martingale and «Stefan problem» approaches
3. Maximal inequalities for skew Brownian motion. Solution to the corresponding Stefan problem
4. Maximal inequalities for Bessel processes. Solution to the corresponding Stefan problem
5. Doob maximal inequalities

TOPIC II: Sharp maximal inequalities for discrete time processes
1. Maximal inequalities for the modulus of a simple symmetric random walk
2. Maximal inequalities for a simple symmetric random walk

TOPIC I: Sharp maximal inequalities for continuous time processes

1. Introduction. The main method for obtaining sharp maximal inequalities

Let $X = (X_t)_{t\ge 0}$ be a process on $(\Omega, \mathcal F, \mathsf P)$ with natural filtration $(\mathcal F_t)_{t\ge 0}$, $\mathcal F_t = \sigma(X_s,\ s\le t)$. For any Markov time $\tau$ w.r.t. $(\mathcal F_t)_{t\ge 0}$ the inequalities
$$\mathsf E \sup_{0\le t\le \tau} X_t \le C_X\, f\big(\mathsf E g(\tau)\big) \qquad (*)$$
are called maximal inequalities for $X$. Here $C_X$ is a constant and $f$, $g$ are some functions. The Markov times $\tau = \tau(\omega)$ usually belong to the set
$$\mathfrak M = \{\tau:\ \tau \text{ is a Markov time w.r.t. } (\mathcal F_t)_{t\ge 0},\ \mathsf E\tau < \infty\}.$$

The inequality $(*)$ is called a sharp maximal inequality if there exists a non-trivial Markov time $\tau^*\in\mathfrak M$ such that
$$\mathsf E \sup_{0\le t\le \tau^*} X_t = C_X\, f\big(\mathsf E g(\tau^*)\big).$$
Examples of maximal inequalities for some well-known processes (Graversen, Peskir, Shiryaev) include:

for geometric Brownian motion $X_t = \exp\big(\sigma B_t + (\mu - \sigma^2/2)t\big)$ with $\mu<0$, $\sigma>0$:
$$\mathsf E\max_{0\le t\le\tau} X_t \le 1 - \frac{\sigma^2}{2\mu} + \frac{\sigma^2}{2\mu}\exp\Big(-\frac{(\sigma^2-2\mu)^2}{2\sigma^2}\,\mathsf E\tau\Big);$$

for the Ornstein-Uhlenbeck process $(X_t)_{t\ge0}$ with $dX_t = -\beta X_t\,dt + dB_t$, $\beta>0$:
$$C_1\,\mathsf E\sqrt{\frac{\ln(1+\beta\tau)}{\beta}} \;\le\; \mathsf E\max_{0\le t\le\tau} X_t \;\le\; C_2\,\mathsf E\sqrt{\frac{\ln(1+\beta\tau)}{\beta}},$$
where $C_1, C_2>0$ are some universal constants;

for the bang-bang process $(X_t)_{t\ge0}$ with $dX_t = -\mu\,\mathrm{sgn}(X_t)\,dt + dB_t$, $\mu>0$:
$$\mathsf E\max_{0\le t\le\tau} X_t \le G_\mu(\mathsf E\tau), \qquad\text{where } G_\mu(x) = \inf_{c>0}\Big[c\,x + \frac{1}{2\mu}\ln\Big(1+\frac{\mu}{c}\Big)\Big].$$

Assume that $X = (X_t)_{t\ge0}$ is a Markov process. For given measurable functions $L = L(x)$ and $K = K(x)$ we define
$$I_t = \int_0^t L(X_s)\,ds, \qquad S_t = \max_{0\le s\le t} K(X_s), \qquad t\ge 0.$$
[Figure: a sample path of $X_t$ with its running maximum $S_t$, started from $X_0 = x$, $S_0 = s$.]
Consider the following optimal stopping problem:
$$V(c) = \sup_\tau \mathsf E\big[F(I_\tau, X_\tau, S_\tau) - c\, G(I_\tau, X_\tau, S_\tau)\big], \qquad (1)$$
where $F$, $G$ are given measurable functions, $\tau\in\mathfrak M$ and $c>0$ is a parameter.

Suppose we have solved problem (1) and found the function $V(c)$. Then for any $\tau$ and $c$ we have
$$\mathsf E F(I_\tau, X_\tau, S_\tau) \le V(c) + c\,\mathsf E G(I_\tau, X_\tau, S_\tau).$$
Taking the infimum over $c>0$ on both sides we obtain the inequality
$$\mathsf E F(I_\tau, X_\tau, S_\tau) \le H\big(\mathsf E G(I_\tau, X_\tau, S_\tau)\big) := \inf_{c>0}\big[V(c) + c\,\mathsf E G(I_\tau, X_\tau, S_\tau)\big], \qquad (2)$$
which is true for any Markov time $\tau\in\mathfrak M$. If the infimum is a minimum attained at some $c^*>0$, then inequality (2) is sharp: the solution $\tau^*_{c^*}$ of problem (1) with $c = c^*$ is a stopping time at which (2) becomes an equality.
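To make the infimum step in (2) concrete, here is a minimal numerical sketch in Python (my own illustration, assuming the Brownian-motion value $V(c) = 1/(4c)$ that is derived later in the talk): for several values of $\mathsf E G = \mathsf E\tau$ it computes $H = \inf_{c>0}[V(c) + c\,\mathsf E\tau]$ and compares it with $\sqrt{\mathsf E\tau}$.

```python
# Numerical illustration of the infimum step (2), assuming V(c) = 1/(4c)
# (the Brownian-motion case treated later in the talk).
import numpy as np
from scipy.optimize import minimize_scalar

def V(c):
    return 1.0 / (4.0 * c)              # assumed value function of problem (1)

def H(g):
    # H(g) = inf_{c > 0} [ V(c) + c * g ]
    res = minimize_scalar(lambda c: V(c) + c * g, bounds=(1e-6, 1e6), method="bounded")
    return res.fun

for g in [0.25, 1.0, 4.0, 9.0]:
    print(f"E G = {g:5.2f}:  H(E G) = {H(g):.4f},  sqrt(E G) = {np.sqrt(g):.4f}")
```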

Consider the particular case $F(x,y,z) = z$, $G(x,y,z) = x$, $L(x) = c(x)$, $K(x) = x$. The function $c = c(x)$ is assumed to be positive and continuous; it is called the cost of observations. We obtain the following optimal stopping problem:
$$V(x,s) = \sup_\tau \mathsf E_{x,s}\Big[S_\tau - \int_0^\tau c(X_t)\,dt\Big], \qquad (3)$$
where $\mathsf E_{x,s}$, $s\ge x$, is the expectation under the measure $\mathsf P_{x,s} = \mathrm{Law}\big(X, S \mid \mathsf P,\ X_0 = x,\ S_0 = s\big)$, and the supremum is taken over stopping times $\tau$ such that $\mathsf E_{x,s}\int_0^\tau c(X_t)\,dt < \infty$.

In addition we assume that $X = (X_t)_{t\ge0}$ is a diffusion process, the solution of the stochastic differential equation
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t, \qquad X_0 = 0,$$
where $B = (B_t)_{t\ge0}$ is a Brownian motion on $(\Omega,\mathcal F,\mathsf P)$. The diffusion coefficient $\sigma = \sigma(x)>0$ and the drift coefficient $b = b(x)$ are continuous. We need the scale function $R = R(x)$ and the speed measure $m = m(dx)$ in order to obtain a solution of problem (3). It is well known that for the diffusion process $X$
$$R(x) = \int_0^x \exp\Big(-\int_0^y \frac{2b(u)}{\sigma^2(u)}\,du\Big)\,dy, \quad x\in\mathbb R, \qquad m(dx) = \frac{2\,dx}{R'(x)\,\sigma^2(x)}.$$
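For a concrete diffusion the scale function and the speed density can be computed numerically. A small Python sketch (the choice $b(x) = -x$, $\sigma(x) = 1$ is my own illustrative example):

```python
# Scale function R(x) and speed density m'(x) = 2/(R'(x) sigma^2(x)) for a
# diffusion dX = b(X) dt + sigma(X) dB; here b(x) = -x, sigma(x) = 1 (illustrative).
import numpy as np
from scipy.integrate import quad

b = lambda x: -x
sigma = lambda x: 1.0

def R_prime(y):
    # R'(y) = exp( - int_0^y 2 b(u)/sigma^2(u) du )
    val, _ = quad(lambda u: 2.0 * b(u) / sigma(u) ** 2, 0.0, y)
    return np.exp(-val)

def R(x):
    val, _ = quad(R_prime, 0.0, x)
    return val

def speed_density(x):
    return 2.0 / (R_prime(x) * sigma(x) ** 2)

for x in [-1.0, 0.0, 0.5, 1.0]:
    print(f"x = {x:+.1f}:  R(x) = {R(x):+.5f},  m'(x) = {speed_density(x):.5f}")
```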

From the general optimal stopping theory we may decompose the state space $E = \{(x,s)\in\mathbb R^2:\ x\le s,\ s\ge0\}$ of the process $(X,S)$, $S_t = \big(\max_{u\le t}X_u\big)\vee s$, as $E = C^*\cup D^*$, where
$$C^* = \{(x,s)\in E:\ V(x,s) > s\}$$
is the continuation set (if $(x,s)\in C^*$ we need to continue our observations), and
$$D^* = \{(x,s)\in E:\ V(x,s) = s\}$$
is the stopping set (if $(x,s)\in D^*$ we need to stop our observations). Therefore, if we start in $C^*$ we stop at the first time the process $(X,S)$ reaches $D^*$, in other words at
$$\tau^* = \inf\{t\ge0:\ (X_t, S_t)\in D^*\}.$$
Proposition 1. The diagonal $\{(x,s)\in E:\ x = s\}$ does not belong to the stopping set $D^*$ (it is contained in the continuation set $C^*$).

Let us reduce the problem $V(x,s) = \sup_\tau \mathsf E_{x,s}\big[S_\tau - \int_0^\tau c(X_t)\,dt\big]$ to an optimal stopping problem in standard formulation. Consider the process
$$A_t = a + \int_0^t c(X_u)\,du, \qquad a\ge0,$$
and observe that $Z_t = (A_t, X_t, S_t)$, $t>0$, $Z_0 = (a,x,s)$, is a Markov process. Define the function $G(a,x,s) = s - a$ and observe that the initial problem takes the form $\tilde V(a,x,s) = \sup_\tau \mathsf E_{a,x,s}\,G(Z_\tau)$, where the times $\tau$ are such that $\mathsf E A_\tau < \infty$. However, since $\tilde V(a,x,s) = V(x,s) - a$, it is sufficient to find the function $V(x,s)$, i.e. to solve a two-dimensional optimal stopping problem for $(X,S)$. The infinitesimal operator of the process $Z = (Z_t)_{t\ge0}$ equals
$$\mathbb L_Z = c(x)\frac{\partial}{\partial a} + \mathbb L_X = c(x)\frac{\partial}{\partial a} + b(x)\frac{\partial}{\partial x} + \frac{\sigma^2(x)}{2}\frac{\partial^2}{\partial x^2} \qquad\text{if } x<s.$$

Since the cost of observations $c(x)$ is positive, we should not allow the process $X$ to decrease too far while $s$ stays fixed. It means that for a given $s$ there exists a point $g(s)$ such that we should stop the observations when $(X,S)$ reaches the point $(g(s), s)$. In other words,
$$\tau^* = \inf\{t>0:\ X_t \le g(S_t)\}.$$
The unknown function $g = g(s)$ is called the boundary of the stopping set $D^*$. The function $V(x,s)$, $g(s) < x \le s$, is a solution of the system
$$\mathbb L_X V(x,s) = c(x), \qquad g(s) < x < s, \qquad (4)$$
$$\frac{\partial V}{\partial s}(x,s)\Big|_{x=s-} = 0 \qquad\text{(normal reflection)}, \qquad (5)$$
$$V(x,s)\big|_{x=g(s)+} = s \qquad\text{(instantaneous stopping)}, \qquad (6)$$
$$\frac{\partial V}{\partial x}(x,s)\Big|_{x=g(s)+} = 0 \qquad\text{(smooth fit)}, \qquad (7)$$
which is called a Stefan problem with moving boundary $g = g(s)$.

Let us explain the meaning of each equation (4)-(7).
According to the general optimal stopping theory, $\mathbb L_Z \tilde V(a,x,s) = 0$ when $(x,s)\in C^*$; since $\tilde V(a,x,s) = V(x,s) - a$ and $\partial\tilde V/\partial a = -1$, this gives equation (4).
The instantaneous stopping condition (6) follows from the fact that $V(x,s) = s$ when $(x,s)\in D^*$.
The smooth fit condition (7) means that the derivative $\partial V/\partial x$ is continuous across the boundary between $C^*$ and $D^*$.
Let us clarify the normal reflection condition (5). Applying the Itô formula for semimartingales
$$df(X_t, S_t) = f'_x(X_t,S_t)\,dX_t + f'_s(X_t,S_t)\,dS_t + \tfrac12 f''_{xx}(X_t,S_t)\,d\langle X,X\rangle_t$$
to the process $(f(X_t,S_t))_{t\ge0}$, taking the expectation $\mathsf E_{s,s}$ on both sides and multiplying by $t^{-1}$, we get

$$\frac{\mathsf E_{s,s} f(X_t, S_t) - f(s,s)}{t} = \mathsf E_{s,s}\frac1t\int_0^t \mathbb L_X f(X_u,S_u)\,du + \mathsf E_{s,s}\frac1t\int_0^t f'_s(X_u,S_u)\,dS_u$$
$$\longrightarrow\ \mathbb L_X f(s,s) + f'_s(s,s)\,\lim_{t\downarrow0}\frac{\mathsf E_{s,s}(S_t - s)}{t} \qquad\text{as } t\downarrow0.$$
Since the diffusion coefficient $\sigma>0$, as $t\downarrow0$ we have
$$\frac1t\,\mathsf E_{s,s}(S_t - s)\ \longrightarrow\ \infty.$$
Therefore the condition $f'_s(s,s) = 0$ assures that the limit
$$\lim_{t\downarrow0}\Big[\mathbb L_X f(s,s) + f'_s(s,s)\,\frac{\mathsf E_{s,s}(S_t - s)}{t}\Big]$$
is finite.

Let us find the functions $V(x,s)$ and $g(s)$, the solutions of system (4)-(7). Denote
$$\tau_g = \inf\{t>0:\ X_t\le g(S_t)\}, \qquad \tau_{g(s),s} = \inf\{t>0:\ X_t\notin(g(s),s)\},$$
and consider the function
$$V_g(x,s) = \mathsf E_{x,s}\Big[S_{\tau_g} - \int_0^{\tau_g} c(X_t)\,dt\Big].$$
Using the strong Markov property of $X$ at the time $\tau_{g(s),s}$, for $x\in(g(s),s)$ we have
$$V_g(x,s) = s\,\mathsf P_{x,s}\big(X_{\tau_{g(s),s}} = g(s)\big) + V_g(s,s)\,\mathsf P_{x,s}\big(X_{\tau_{g(s),s}} = s\big) - \mathsf E_{x,s}\int_0^{\tau_{g(s),s}} c(X_t)\,dt =$$

$$= s\,\frac{R(s)-R(x)}{R(s)-R(g(s))} + V_g(s,s)\,\frac{R(x)-R(g(s))}{R(s)-R(g(s))} - \int_{g(s)}^{s} G_{g(s),s}(x,y)\,c(y)\,m(dy),$$
where $G_{a,b}(x,y)$ is the Green function of $X$ on the segment $[a,b]$:
$$G_{a,b}(x,y) = \begin{cases} \dfrac{(R(b)-R(x))(R(y)-R(a))}{R(b)-R(a)}, & a\le y\le x,\\[2mm] \dfrac{(R(b)-R(y))(R(x)-R(a))}{R(b)-R(a)}, & x\le y\le b. \end{cases}$$
Rewrite the expression for $V_g(x,s)$ in the following form:
$$V_g(s,s) - s = \frac{R(s)-R(g(s))}{R(x)-R(g(s))}\Big[V_g(x,s) - s + \int_{g(s)}^{s} G_{g(s),s}(x,y)\,c(y)\,m(dy)\Big].$$

Suppose that $V_g(x,s)$ satisfies the smooth fit condition. Then
$$\lim_{x\downarrow g(s)} \frac{V_g(x,s) - s}{R(x)-R(g(s))} = \frac{1}{R'(g(s))}\,\frac{\partial V_g}{\partial x}(x,s)\Big|_{x=g(s)+} = 0,$$
$$\lim_{x\downarrow g(s)} \frac{1}{R(x)-R(g(s))}\int_{g(s)}^{s} G_{g(s),s}(x,y)\,c(y)\,m(dy) = \frac{1}{R(s)-R(g(s))}\int_{g(s)}^{s}\big(R(s)-R(y)\big)\,c(y)\,m(dy).$$
Therefore we have
$$V_g(s,s) = s + \lim_{x\downarrow g(s)}\frac{R(s)-R(g(s))}{R(x)-R(g(s))}\int_{g(s)}^{s} G_{g(s),s}(x,y)\,c(y)\,m(dy) = s + \int_{g(s)}^{s}\big(R(s)-R(y)\big)\,c(y)\,m(dy).$$

Finally we obtain
$$V_g(x,s) = s + \int_{g(s)}^{x}\big(R(x)-R(y)\big)\,c(y)\,m(dy) \qquad (8)$$
for all $g(s)\le x\le s$. Now suppose that the function $V_g(x,s)$ is given by (8). Then it is easy to show that $V_g(x,s)$ is a solution of the Stefan problem (4)-(7) if and only if the boundary $g = g(s)$ belongs to $C^1$ and satisfies the equation
$$g'(s) = \frac{\sigma^2(g(s))\,R'(g(s))}{2\,c(g(s))\,\big(R(s)-R(g(s))\big)}. \qquad (9)$$

Observe that the equation (9) has a whole family of solutions. We need a criterion that enables us to choose the solution $g = g^*(s)$ which is the boundary of the stopping set $D^*$. We call a solution $g(s)$ of equation (9) admissible if $g(s)<s$ for all $s\ge0$.

Theorem [maximality principle]. The boundary $g = g^*(s)$ of the stopping set $D^*$ in the problem
$$V(x,s) = \sup_\tau \mathsf E_{x,s}\Big[S_\tau - \int_0^\tau c(X_t)\,dt\Big]$$
is the maximal admissible solution of the differential equation (9).

Theorem. Consider the stopping problem (3) for a diffusion process $X = (X_t)_{t\ge0}$ with $dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t$, where the supremum is taken over all Markov times $\tau$ such that
$$\mathsf E_{x,s}\int_0^\tau c(X_t)\,dt < \infty. \qquad (10)$$
Assume that the maximal admissible solution $g^*(s)$ of (9) exists. Then:
1) The value function $V(x,s)$ in problem (3) is finite and is given on $E$ by
$$V(x,s) = \begin{cases} s, & x\le g^*(s),\\[1mm] s + \displaystyle\int_{g^*(s)}^{x}\big(R(x)-R(y)\big)\,c(y)\,m(dy), & g^*(s)\le x\le s. \end{cases}$$
2) The Markov time $\tau^* = \inf\{t>0:\ X_t\le g^*(S_t)\}$ is optimal in problem (3) if it satisfies condition (10).

3) If there exists an optimal stopping time $\sigma$ in problem (3) such that $\mathsf E_{x,s}\int_0^\sigma c(X_t)\,dt<\infty$, then $\mathsf P_{x,s}(\tau^*\le\sigma) = 1$ for all $(x,s)$, and the time $\tau^*$ is also optimal in problem (3).
If equation (9) does not have a maximal admissible solution, then $V(x,s) = +\infty$ for all $(x,s)$ and there is no optimal stopping time in problem (3).

Theorem [verification theorem]. Assume that for a solution $V = V(x,s)$ of the Stefan problem (4)-(7) the following statements are true:
(i) $V(x,s)\ge s$, $(x,s)\in E$;
(ii) $V(x,s) = \mathsf E_{x,s}\big[S_{\tau_g} - \int_0^{\tau_g} c(X_t)\,dt\big]$, $(x,s)\in E$, for some Markov time $\tau_g = \inf\{t\ge0:\ X_t\le g(S_t)\}$ satisfying (10);
(iii) $V(x,s)\ge\mathsf E_{x,s} V(X_\tau, S_\tau)$ for any Markov time $\tau$ satisfying (10).
Then $V(x,s)$ coincides with the value function of problem (3) and $\tau_g$ is optimal.

2. Maximal inequalities for standard Brownian motion and its modulus. Martingale and «Stefan problem» approaches

Consider a standard Brownian motion $B = (B_t)_{t\ge0}$, $B_0 = 0$. This was the first process for which sharp maximal inequalities were established:

the square root inequality
$$\mathsf E\max_{0\le t\le\tau} B_t \le \sqrt{\mathsf E\tau}, \qquad (11)$$
the square root of two inequality
$$\mathsf E\max_{0\le t\le\tau}|B_t| \le \sqrt{2\,\mathsf E\tau}. \qquad (12)$$
Inequalities (11) and (12) are also called the Dubins-Jacka-Schwarz-Shiryaev inequalities.

Denote $S_t(B) = \max_{0\le u\le t} B_u$ and $S_t(|B|) = \max_{0\le u\le t}|B_u|$.

Martingale approach. First we prove inequality (11). For $c>0$ consider the stochastic process
$$Z_t = c\big[(S_t(B) - B_t)^2 - t\big] + \frac{1}{4c}, \qquad t\ge0.$$
By Lévy's theorem $\mathrm{Law}(S(B) - B) = \mathrm{Law}(|B|)$, and the process $(B_t^2 - t)_{t\ge0}$ is a martingale. Therefore $(Z_t)_{t\ge0}$ is also a martingale w.r.t. the natural filtration of $B$. It is easy to see that $\big(\sqrt c\,x - \frac{1}{2\sqrt c}\big)^2\ge0$; from this inequality it follows that $x - ct\le c(x^2 - t) + \frac{1}{4c}$ for all $x\in\mathbb R$, $t\ge0$. Thus for any $\tau\in\mathfrak M$ we get
$$\mathsf E\big[S_{\tau\wedge t}(B) - c(\tau\wedge t)\big] = \mathsf E\big[S_{\tau\wedge t}(B) - B_{\tau\wedge t} - c(\tau\wedge t)\big] \le \mathsf E Z_{\tau\wedge t} = \mathsf E Z_0 = \frac{1}{4c}.$$
Letting $t\to\infty$ and using Doob's optional sampling theorem we have $\mathsf E S_\tau(B)\le c\,\mathsf E\tau + \frac{1}{4c}$. Taking the infimum over $c>0$ on both sides we obtain (11).

Let us prove that the inequality $\mathsf E S_\tau(B)\le\sqrt{\mathsf E\tau}$ is sharp. For each $a>0$ consider the time
$$\tau_a = \inf\{t\ge0:\ S_t(B) - B_t = a\}.$$
We see that $\mathsf E S_{\tau_a}(B) = \mathsf E\big[S_{\tau_a}(B) - B_{\tau_a}\big] = a$. Since $\mathrm{Law}(\tau_a) = \mathrm{Law}\big(\inf\{t\ge0:\ |B_t| = a\}\big)$, from the Wald identities we get $a^2 = \mathsf E B^2_{\tau_a} = \mathsf E\tau_a$, so that $\mathsf E S_{\tau_a}(B) = \sqrt{\mathsf E\tau_a}$.

Corollary. For any continuous local martingale $M = (M_t)_{t\ge0}$, $M_0 = 0$, we have
$$\mathsf E\max_{0\le t\le T} M_t \le \sqrt{\mathsf E\langle M\rangle_T} \qquad (13)$$
for any $T>0$. Here $(\langle M\rangle_t)_{t\ge0}$ is the quadratic characteristic of $M$. This inequality follows from (11) and the Dambis-Dubins-Schwarz theorem. Indeed,
$$\mathsf E\max_{t\le T} M_t = \mathsf E\max_{t\le T} B_{\langle M\rangle_t} = \mathsf E\max_{t\le\langle M\rangle_T} B_t \le \sqrt{\mathsf E\langle M\rangle_T}.$$
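A quick Monte Carlo sanity check of this sharpness argument (an Euler discretization of $B$ with my own illustrative parameters): for $\tau_a$ the estimates of $\mathsf E S_{\tau_a}(B)$ and $\mathsf E\tau_a$ should be close to $a$ and $a^2$, so that the square root inequality is attained.

```python
# Monte Carlo check that E S_tau(B) ~ a and E tau ~ a^2 for
# tau_a = inf{t : max_{u<=t} B_u - B_t = a}   (Euler scheme for B).
import numpy as np

rng = np.random.default_rng(0)
a, dt, n_paths = 1.0, 1e-3, 1000
sup_vals, times = [], []
for _ in range(n_paths):
    B, running_max, t = 0.0, 0.0, 0.0
    while running_max - B < a:
        B += np.sqrt(dt) * rng.standard_normal()
        running_max = max(running_max, B)
        t += dt
    sup_vals.append(running_max)
    times.append(t)

print("E S_tau(B)  ~", np.mean(sup_vals), "(target a =", a, ")")
print("E tau       ~", np.mean(times), "(target a^2 =", a**2, ")")
print("sqrt(E tau) ~", np.sqrt(np.mean(times)))
```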

Let us prove the inequality $\mathsf E S_\tau(|B|)\le\sqrt{2\,\mathsf E\tau}$. Consider the continuous martingale
$$U_t = \mathsf E\big(|B_\tau| - \mathsf E|B_\tau|\ \big|\ \mathcal F^B_{t\wedge\tau}\big), \qquad t\ge0.$$
Applying (13) to $\max_{t\le T} U_t$ and letting $T\to+\infty$ we get $\mathsf E\max_{t\ge0} U_t\le\sqrt{\mathsf E\big(|B_\tau| - \mathsf E|B_\tau|\big)^2}$. Using this inequality we estimate $\mathsf E S_\tau(|B|)$:
$$\mathsf E\max_{0\le t\le\tau}|B_t| = \mathsf E\max_{t\ge0}|B_{t\wedge\tau}| = \mathsf E\max_{t\ge0}\big|\mathsf E\big(B_\tau\mid\mathcal F^B_{t\wedge\tau}\big)\big| \le \mathsf E\max_{t\ge0}\mathsf E\big(|B_\tau|\mid\mathcal F^B_{t\wedge\tau}\big)$$
$$= \mathsf E\max_{t\ge0} U_t + \mathsf E|B_\tau| \le \sqrt{\mathsf E\big(|B_\tau| - \mathsf E|B_\tau|\big)^2} + \mathsf E|B_\tau| \le \sqrt{\mathsf E\tau - \big(\mathsf E|B_\tau|\big)^2} + \mathsf E|B_\tau| \le \sqrt{2\,\mathsf E\tau}.$$
To get the last inequality in this chain we used the simple inequality $\sqrt{A - x^2} + x\le\sqrt{2A}$ for $0<x\le\sqrt A$.

Now let us show that the inequality $\mathsf E S_\tau(|B|)\le\sqrt{2\,\mathsf E\tau}$ is sharp. Consider the time $\tilde\tau_a = \inf\{t\ge0:\ S_t(|B|) - |B_t| = a\}$. It turns out that $\mathsf E\tilde\tau_a = 2a^2$ and $\mathsf E\max_{t\le\tilde\tau_a}|B_t| = 2a$.

«Stefan problem» approach. Basically, the proof of (11) and (12) is an application of the main theorem of Section 1 to the problem $V(x,s) = \sup_\tau\mathsf E_{x,s}\big[S_\tau - \int_0^\tau c(X_t)\,dt\big]$ in the case when $c(X_t)\equiv c>0$ and $X_t = B_t$ or $X_t = |B_t|$.

First, let us prove the inequality $\mathsf E S_\tau(B)\le\sqrt{\mathsf E\tau}$. In the case of Brownian motion $R(x) = x$, $m(dx) = 2\,dx$, $x\in\mathbb R$. According to the theorem, the equation for the boundary is
$$g'(s) = \frac{1}{2c\,(s - g(s))}.$$
The maximal admissible solution of this equation is $g^*(s) = s - \frac{1}{2c}$.
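The maximality principle can be seen numerically. A small sketch (with $c = 1$, using scipy's solve_ivp; the setup is my own illustration): integrating $g'(s) = 1/(2c(s-g))$ forward, solutions started below $g^*(s) = s - \frac{1}{2c}$ remain admissible, while a solution started above it runs into the diagonal $g = s$.

```python
# The boundary equation (9) for standard Brownian motion:  g'(s) = 1/(2c (s - g)).
# The maximal admissible solution is g*(s) = s - 1/(2c); solutions started below
# it stay admissible, a solution started above it runs into the diagonal g = s.
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0
rhs = lambda s, g: 1.0 / (2.0 * c * (s - g[0]))
s0, s1 = 1.0, 6.0

for delta in [0.0, -0.2, -1.0, 0.2]:            # offset from g*(s0) = s0 - 1/(2c)
    g0 = s0 - 1.0 / (2.0 * c) + delta
    near_diag = lambda s, g: s - g[0] - 1e-3    # event: g gets within 1e-3 of the diagonal
    near_diag.terminal = True
    sol = solve_ivp(rhs, (s0, s1), [g0], events=near_diag, max_step=0.01)
    status = "hits the diagonal" if sol.t_events[0].size else "admissible"
    print(f"g(s0) = g*(s0) {delta:+.1f}:  g({sol.t[-1]:.2f}) = {sol.y[0, -1]:.3f}   [{status}]")

print("maximal admissible solution at s =", s1, ":  g*(s) =", s1 - 1.0 / (2.0 * c))
```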

Therefore the value function $V(x,s) = \sup_\tau\mathsf E_{x,s}\big[S_\tau(B) - c\tau\big]$ for $0\le s - x\le\frac{1}{2c}$ equals
$$V(x,s) = s + 2c\int_{g^*(s)}^{x}(x-y)\,dy = c(x-s)^2 + x + \frac{1}{4c}.$$
Since we need the value $V(0,0)$, for any $\tau\in\mathfrak M$ we get
$$\mathsf E S_\tau(B) \le \inf_{c>0}\big\{V(0,0) + c\,\mathsf E\tau\big\} = \inf_{c>0}\Big\{\frac{1}{4c} + c\,\mathsf E\tau\Big\} = \sqrt{\mathsf E\tau}.$$
However, we cannot directly apply the method of Section 1 in the case $X_t = |B_t|$ to obtain the inequality $\mathsf E S_\tau(|B|)\le\sqrt{2\,\mathsf E\tau}$: the process $X_t = |B_t|$ cannot be represented in the form $dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t$ with continuous $b$ and $\sigma$. But we can consider the problem
$$W(x,s) = \sup_\tau\mathsf E_{x,s}\Big[s\vee\max_{0\le t\le\tau}|x + B_t| - c\tau\Big]$$
and reduce it to a Stefan problem.
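A short symbolic check (using sympy; this is a sketch, not part of the original argument) that the formula for $V(x,s)$ obtained above, together with $g^*(s) = s - \frac{1}{2c}$, indeed satisfies the Stefan conditions (4)-(7) for Brownian motion.

```python
# Symbolic verification: V(x,s) = c (x-s)^2 + x + 1/(4c) with g*(s) = s - 1/(2c)
# solves the Stefan problem (4)-(7) for standard Brownian motion (L_X = 1/2 d^2/dx^2).
import sympy as sp

x, s, c = sp.symbols("x s c", positive=True)
V = c * (x - s) ** 2 + x + 1 / (4 * c)
g = s - 1 / (2 * c)

print("(4)  1/2 V_xx - c        =", sp.simplify(sp.diff(V, x, 2) / 2 - c))    # 0
print("(5)  V_s at x = s        =", sp.simplify(sp.diff(V, s).subs(x, s)))    # 0
print("(6)  V - s at x = g*(s)  =", sp.simplify((V - s).subs(x, g)))          # 0
print("(7)  V_x at x = g*(s)    =", sp.simplify(sp.diff(V, x).subs(x, g)))    # 0
```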

The infinitesimal operator of $|B|$ equals $\mathbb L = \frac12\frac{d^2}{dx^2}$, $x>0$, with reflecting endpoint $x = 0$. Thus the Stefan problem in our case is
$$\frac{\partial^2 W}{\partial x^2}(x,s) = 2c, \quad x\ge0,\ g(s)<x\le s; \qquad \frac{\partial W}{\partial x}(0+,s) = 0, \quad s:\ g(s)<0;$$
$$\frac{\partial W}{\partial s}(x,s)\Big|_{x=s-} = 0; \qquad W(x,s)\big|_{x=g(s)+} = s; \qquad \frac{\partial W}{\partial x}(x,s)\Big|_{x=g(s)+} = 0.$$
The solution of this system is the function
$$W(x,s) = \begin{cases} s, & s - x\ge\frac{1}{2c},\\[1mm] c(x-s)^2 + x + \frac{1}{4c}, & s\ge\frac{1}{2c},\ s - x\le\frac{1}{2c},\\[1mm] c\,x^2 + \frac{1}{2c}, & 0\le s\le\frac{1}{2c}. \end{cases}$$
Since $W(0,0) = \frac{1}{2c}$, for each $\tau\in\mathfrak M$ we have
$$\mathsf E S_\tau(|B|) \le \inf_{c>0}\Big\{\frac{1}{2c} + c\,\mathsf E\tau\Big\} = \sqrt{2\,\mathsf E\tau}.$$

3. Maximal inequalities for skew Brownian motion. Solution to the corresponding Stefan problem

A process $X^\alpha = (X^\alpha_t)_{t\ge0}$ defined on a probability space $(\Omega,\mathcal F,\mathsf P)$ is called a skew Brownian motion if it satisfies the stochastic equation
$$X^\alpha_t = X^\alpha_0 + B_t + (2\alpha - 1)\,L^0_t(X^\alpha), \qquad (14)$$
where $L^0 = (L^0_t(X^\alpha))_{t\ge0}$ with $L^0_0(X^\alpha) = 0$ is the local time of $X^\alpha$ at zero. The skew Brownian motion with parameter $\alpha = 1/2$ has the same distribution as a standard Brownian motion, and with parameter $\alpha = 1$ as the modulus of a standard Brownian motion. Denote by $W^\alpha = (W^\alpha_t)_{t\ge0}$ the unique strong solution of (14) with $W^\alpha_0 = 0$.

Consider the optimal stopping problem
$$V(x,s) = \sup_\tau\mathsf E_{x,s}\Big[s\vee\max_{0\le t\le\tau}\big(x + W^\alpha_t\big) - c\tau\Big] \qquad (15)$$
with constant cost of observations $c>0$. We cannot directly apply the methods of Section 1 since $X_t = x + W^\alpha_t$ cannot be represented in the form $dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t$ with continuous $b$ and $\sigma$. However, we can write the analogue of the Stefan problem (4)-(7) for this optimal stopping problem. The infinitesimal operator of $X$ equals $\mathbb L = \frac12\frac{d^2}{dx^2}$ and is defined on the functions
$$\big\{f:\ f'' \text{ exists on } \mathbb R\setminus\{0\},\ f(0+) = f(0-),\ \lim_{|x|\to\infty} f(x) = 0\ \text{ and }\ \alpha f'(0+) = (1-\alpha)f'(0-)\big\}.$$

Therefore we get the Stefan problem for the value function:
$$\frac{\partial^2 V}{\partial x^2}(x,s) = 2c, \quad x\ne0,\ g(s)<x\le s; \qquad \alpha\,\frac{\partial V}{\partial x}(0+,s) = (1-\alpha)\,\frac{\partial V}{\partial x}(0-,s), \quad s:\ g(s)<0;$$
$$\frac{\partial V}{\partial s}(x,s)\Big|_{x=s-} = 0; \qquad V(x,s)\big|_{x=g(s)+} = s; \qquad \frac{\partial V}{\partial x}(x,s)\Big|_{x=g(s)+} = 0.$$
The solution of this system is given in the following theorem.

Theorem 1. The optimal stopping time $\tau^*_c$ in problem (15) exists and equals
$$\tau^* = \inf\{t\ge0:\ X_t\le g(S_t)\}.$$

The mapping $g = g(s)$, $s\ge0$, is given implicitly by
$$s = \begin{cases} g + \dfrac{1}{2c}, & g\ge0,\\[2mm] \dfrac{\beta^2 - 1}{2c\beta^2}\,e^{2c\beta g} + \dfrac{g}{\beta} + \dfrac{1}{2c\beta^2}, & g<0, \end{cases}$$
with the parameter $\beta = (1-\alpha)/\alpha$.
[Figure: the boundary $s = s(g)$ of the stopping set for $c = 1$ and $\alpha = 0.1, 0.2, \dots, 0.9$.]
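A short numerical sketch of this boundary (my own illustrative code): it evaluates $s(g)$ for several values of $\alpha$ and checks that the two branches agree at $g = 0$, where both give $\frac{1}{2c}$.

```python
# The stopping boundary for skew Brownian motion written as s = s(g),
# with beta = (1 - alpha)/alpha (Theorem 1); c = 1 for the illustration.
import numpy as np

def boundary_s(g, alpha, c=1.0):
    beta = (1.0 - alpha) / alpha
    if g >= 0:
        return g + 1.0 / (2.0 * c)
    return ((beta**2 - 1.0) / (2.0 * c * beta**2) * np.exp(2.0 * c * beta * g)
            + g / beta + 1.0 / (2.0 * c * beta**2))

for alpha in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(f"alpha = {alpha:.1f}:  s(0-) = {boundary_s(-1e-9, alpha):.6f},  "
          f"s(0+) = {boundary_s(0.0, alpha):.6f},  s(-1) = {boundary_s(-1.0, alpha):.4f}")
```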

If we consider the sets $D^* = \{(x,s)\in E:\ x\le g(s)\}$, $C^* = E\setminus D^*$, then the value function equals
$$V(x,s) = \begin{cases} s + c\,(x - g(s))^2, & (x,s)\in C^*,\ x\ge0,\ s\ge\frac{1}{2c}\ \text{ or }\ x<0,\ s<\frac{1}{2c},\\[1mm] s + c\,(x - g(s))^2 + 2c(1-\beta)\,x\,g(s), & (x,s)\in C^*,\ x\ge0,\ s<\frac{1}{2c},\\[1mm] s, & (x,s)\in D^*. \end{cases}$$
The proof of the theorem is based on finding the solution of the Stefan problem. In particular, the equation for the boundary $g = g(s)$ is
$$g'(s) = \begin{cases} \dfrac{1}{2c\,(s - g(s))}, & s:\ g(s)\ge0,\\[2mm] \dfrac{1}{2c\,(\beta s - g(s))}, & s:\ g(s)<0. \end{cases}$$
The general solution of this equation is $s(g) = a_0 e^{2cg} + g + \frac{1}{2c}$ for $g\ge0$ and $s(g) = b_0 e^{2c\beta g} + \frac{g}{\beta} + \frac{1}{2c\beta^2}$ for $g<0$.

In order to prove that the solution $V(x,s)$ of the Stefan problem coincides with the value function $V(x,s) = \sup_\tau\mathsf E_{x,s}\big[s\vee\max_{0\le t\le\tau}(x + W^\alpha_t) - c\tau\big]$ we use the following analogue of the Itô formula:
$$V(X_t, S_t) = V(X_0, S_0) + \int_0^t V'_x(X_u,S_u)\,dB_u + \int_0^t V'_s(X_u,S_u)\,dS_u + \frac12\int_0^t V''_{xx}(X_u,S_u)\,I(X_u\ne0)\,du$$
$$+ \frac{2\alpha-1}{2}\int_0^t\big[V'_x(0+,S_u) + V'_x(0-,S_u)\big]\,dL^0_u + \frac12\int_0^t\big[V'_x(0+,S_u) - V'_x(0-,S_u)\big]\,dL^0_u.$$
Note that the two local time terms together equal $\int_0^t\big[\alpha V'_x(0+,S_u) - (1-\alpha)V'_x(0-,S_u)\big]\,dL^0_u$, which vanishes due to the boundary condition $\alpha V'_x(0+,s) = (1-\alpha)V'_x(0-,s)$.

Once we know the value $V(0,0)$ it is possible to obtain the maximal inequalities.

Theorem 2 (Lyulko). For any Markov time $\tau\in\mathfrak M$ and any $\alpha\in(0,1)$ the following inequality holds:
$$\mathsf E\max_{0\le t\le\tau} W^\alpha_t \le M_\alpha\sqrt{\mathsf E\tau}, \qquad (16)$$
where $M_\alpha = \dfrac{\alpha(1 + A_\alpha)}{1-\alpha}$ and $A_\alpha$ is the unique solution of the equation
$$A_\alpha\,e^{A_\alpha + 1} = \frac{1 - 2\alpha}{\alpha^2}$$
such that $A_\alpha > -1$. The inequality (16) is sharp, i.e. for any $T>0$ there exists a Markov time $\tau^*$ with $\mathsf E\tau^* = T$ such that $\mathsf E\max_{0\le t\le\tau^*} W^\alpha_t = M_\alpha\sqrt{\mathsf E\tau^*}$.
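The constant $M_\alpha$ is easy to evaluate numerically. A minimal sketch (using scipy's brentq on the equation for $A_\alpha$ as reconstructed above) checks the two limiting cases: $M_{1/2} = 1$, the square root inequality, and $M_\alpha\to\sqrt2$ as $\alpha\to1$, the modulus of Brownian motion.

```python
# Numerical evaluation of M_alpha from Theorem 2:
#   A_alpha * exp(A_alpha + 1) = (1 - 2 alpha)/alpha^2,   A_alpha > -1,
#   M_alpha = alpha (1 + A_alpha)/(1 - alpha).
import numpy as np
from scipy.optimize import brentq

def M(alpha):
    rhs = (1.0 - 2.0 * alpha) / alpha**2
    # A * exp(A + 1) is increasing on (-1, inf), so the root is unique
    A = brentq(lambda A: A * np.exp(A + 1.0) - rhs, -1.0 + 1e-12, 60.0)
    return alpha * (1.0 + A) / (1.0 - alpha)

for alpha in [0.1, 0.3, 0.5, 0.7, 0.9, 0.999]:
    print(f"alpha = {alpha:5.3f}:  M_alpha = {M(alpha):.4f}")
print("sqrt(2) =", np.sqrt(2.0))
```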

Inequalities like (16) can be obtained not only for the maximum $\max_{0\le t\le\tau} W^\alpha_t$. Thus in [Zhitlukhin 2012] the following inequality for the range of a skew Brownian motion was stated:
$$\mathsf E\Big[\max_{0\le t\le\tau} W^\alpha_t - \min_{0\le t\le\tau} W^\alpha_t\Big] \le K_\alpha\sqrt{\mathsf E\tau},$$
where $K_\alpha = C_\alpha + C_{1-\alpha}$, the constant $C_\alpha$ is given by an explicit integral expression in terms of $\alpha$ and $D_\alpha$, and $D_\alpha$ is the unique negative solution of a certain transcendental equation (see [5]).
[Figure: the constants $M_\alpha$ and $K_\alpha$ as functions of $\alpha\in(0,1)$.]

4. Maximal inequalities for Bessel processes. Solution to the corresponding Stefan problem

A continuous nonnegative Markov process $X = (X_t(x))_{t\ge0}$, $x\ge0$, is called a Bessel process of dimension $\gamma\in\mathbb R$ ($X\in\mathrm{Bes}^\gamma(x)$) if its infinitesimal operator equals
$$\mathbb L_X = \frac12\Big(\frac{\gamma-1}{x}\,\frac{d}{dx} + \frac{d^2}{dx^2}\Big).$$
The endpoint $x=0$ is called a trap if $\gamma\le0$, instantaneously reflecting if $\gamma\in(0,2)$ and an entrance point if $\gamma\ge2$.

In the case $\gamma = n\in\mathbb N$ the Bessel process can be realized as the radial part of an $n$-dimensional Brownian motion:
$$X_t(x) = \Big(\sum_{i=1}^n\big(B^i_t + a_i\big)^2\Big)^{1/2},$$
where $a = (a_1, a_2,\dots,a_n)$ is a vector in $\mathbb R^n$ with norm $x = \sqrt{a_1^2+\dots+a_n^2}$, and $B^1, B^2,\dots,B^n$ are independent Brownian motions starting from zero. The Bessel process of dimension $\gamma=1$ is the modulus of a standard Brownian motion, $|x + B_t|$.

Consider the optimal stopping problem
$$V(x,s) = \sup_\tau\mathsf E_{x,s}\Big[s\vee\max_{0\le t\le\tau} X_t(x) - c\tau\Big], \qquad (**)$$
where the Markov times $\tau\in\mathfrak M$.

Theorem 3. Let $X\in\mathrm{Bes}^\gamma(x)$ with dimension $\gamma\in\mathbb R$ and let $c>0$. The optimal stopping time $\tau^*$ in problem $(**)$ exists and equals
$$\tau^* = \inf\{t\ge0:\ (X_t, S_t)\in D^*\}$$
with $X_t = X_t(x)$, $S_t = S_t(x,s) = s\vee\max_{0\le u\le t} X_u$, and the stopping set
$$D^* = \{(x,s):\ s\ge s^*,\ x\le g^*(s)\},$$
where $g^* = g^*(s)$ is the unique nonnegative solution of the equation
$$\frac{2c}{\gamma-2}\,g'(s)\,g(s)\Big[1 - \Big(\frac{g(s)}{s}\Big)^{\gamma-2}\Big] = 1 \qquad (17)$$
such that $g(s)\le s$ for $s\ge0$ and $\lim_{s\to\infty} g^*(s)/s = 1$, and $s^*$ is the root of the equation $g^*(s) = 0$. When $\gamma=2$ the equation (17) takes the form $2c\,g'(s)\,g(s)\ln\big(s/g(s)\big) = 1$.

Moreover, if we denote
$$C_1 = \{(x,s)\in\mathbb R_+\times\mathbb R_+:\ s>s^*,\ g^*(s)<x\le s\}, \qquad C_2 = \{(x,s)\in\mathbb R_+\times\mathbb R_+:\ 0\le x\le s\le s^*\}$$
and define the continuation set by $C^* = C_1\cup C_2$, then depending on the value of the parameter $\gamma$ the value function $V(x,s)$ equals:

if $\gamma>0$,
$$V(x,s) = \begin{cases} s, & (x,s)\in D^*,\\[1mm] s + \dfrac{c}{\gamma}\big(x^2 - g^{*2}(s)\big) + \dfrac{2c\,g^{*2}(s)}{\gamma(\gamma-2)}\Big[\Big(\dfrac{g^*(s)}{x}\Big)^{\gamma-2} - 1\Big], & (x,s)\in C_1,\\[2mm] \dfrac{c}{\gamma}\,x^2 + s^*, & (x,s)\in C_2; \end{cases}$$

if $\gamma=0$,
$$V(x,s) = \begin{cases} s, & (x,s)\in D^*,\\[1mm] s + \dfrac{c}{2}\big(g^{*2}(s) - x^2\big) + c\,x^2\ln\dfrac{x}{g^*(s)}, & (x,s)\in C^*; \end{cases}$$
if $\gamma<0$,
$$V(x,s) = \begin{cases} s, & (x,s)\in D^*,\\[1mm] s + \dfrac{c}{\gamma}\big(x^2 - g^{*2}(s)\big) + \dfrac{2c\,g^{*2}(s)}{\gamma(\gamma-2)}\Big[\Big(\dfrac{g^*(s)}{x}\Big)^{\gamma-2} - 1\Big], & (x,s)\in C^*. \end{cases}$$
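As a sanity check of the expression on $C_1$ above, a short symbolic computation (sympy; my own sketch) verifies that it satisfies $\mathbb L_X V = c$ together with instantaneous stopping and smooth fit at $x = g$, with $g = g^*(s)$ treated as a free constant for fixed $s$.

```python
# Check that V = s + (c/gamma)(x^2 - g^2) + 2 c g^2/(gamma (gamma-2)) * ((g/x)^(gamma-2) - 1)
# satisfies (1/2)((gamma-1)/x V_x + V_xx) = c,  V|_{x=g} = s,  V_x|_{x=g} = 0.
import sympy as sp

x, s, g, c, gamma = sp.symbols("x s g c gamma", positive=True)
V = s + c/gamma*(x**2 - g**2) + 2*c*g**2/(gamma*(gamma - 2))*((g/x)**(gamma - 2) - 1)

LV = sp.Rational(1, 2) * ((gamma - 1)/x * sp.diff(V, x) + sp.diff(V, x, 2))
print("L_X V - c      =", sp.simplify(LV - c))                     # 0
print("V - s at x = g =", sp.simplify((V - s).subs(x, g)))         # 0
print("V_x   at x = g =", sp.simplify(sp.diff(V, x).subs(x, g)))   # 0
```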

Using this theorem we can obtain the maximal inequalities for Bessel processes.
If $\gamma\le0$ then the point $x=0$ is a trap; therefore $X_t(0)\equiv0$ for $t\ge0$ and maximal inequalities do not make sense.
If $\gamma>0$ then from the theorem it follows that $V(0,0) = s^*$.
Denote $V(x,s) = V^\gamma_c(x,s)$ and $s^* = s^*(c,\gamma)$. Since Bessel processes are self-similar, $\mathrm{Law}(X_t(x),\ t\ge0) = \mathrm{Law}\big(c^{-1/2}X_{ct}(c^{1/2}x),\ t\ge0\big)$, the value function $V^\gamma_c(x,s)$ is also self-similar, i.e.
$$c\,V^\gamma_c(x,s) = V^\gamma_1(cx, cs).$$
Hence $s^*(c,\gamma) = s_1(\gamma)/c$. Therefore we get the inequalities
$$\mathsf E\max_{0\le t\le\tau} X_t(0) \le \inf_{c>0}\big\{V^\gamma_c(0,0) + c\,\mathsf E\tau\big\} = \inf_{c>0}\big\{s_1(\gamma)/c + c\,\mathsf E\tau\big\} = \sqrt{4\,s_1(\gamma)\,\mathsf E\tau}.$$

Theorem 4 (Dubins-Shepp-Shiryaev). Let $X\in\mathrm{Bes}^\gamma(0)$, $\gamma>0$. Then for any Markov time $\tau\in\mathfrak M$ the following sharp maximal inequality holds:
$$\mathsf E\max_{0\le t\le\tau} X_t(0) \le \sqrt{4\,s_1(\gamma)\,\mathsf E\tau},$$
where $s_1(\gamma)$ is the root of the equation $g^*(s) = 0$ (for $c=1$) and $s_1(\gamma)/\gamma\to\frac14$ as $\gamma\to\infty$.
Observe that in the case $\gamma=1$ we have $s_1(1) = 1/2$, and therefore we recover the maximal inequality for the modulus of a standard Brownian motion: $\mathsf E\max_{0\le t\le\tau}|B_t|\le\sqrt{2\,\mathsf E\tau}$.

5. Doob maximal inequalities

Theorem 5. Let $M = (M_t)_{t\ge0}$ be a local martingale on a filtered probability space $(\Omega, (\mathcal F_t)_{t\ge0}, \mathsf P)$. Then for any $p>0$ there exist universal constants $c_p$ and $C_p$ such that
$$c_p\,\mathsf E[M]^{p/2}_\tau \le \mathsf E\max_{0\le t\le\tau}|M_t|^p \le C_p\,\mathsf E[M]^{p/2}_\tau, \qquad (18)$$
where $([M]_t)_{t\ge0}$ is the quadratic variation of $M$. The inequalities (18) are called the Burkholder-Davis-Gundy inequalities. In the case when $M_t = B_t$ is a standard Brownian motion we get
$$c_p\,\mathsf E\tau^{p/2} \le \mathsf E\max_{0\le t\le\tau}|B_t|^p \le C_p\,\mathsf E\tau^{p/2}. \qquad (19)$$
Note that for $p\ne2$ the exact values of the constants $c_p$ and $C_p$ for which the inequalities (19) become sharp are still not known.

Some particular cases of the Burkholder-Davis-Gundy inequalities:
Davis inequalities ($p=1$): $c_1\,\mathsf E\sqrt\tau \le \mathsf E\max_{0\le t\le\tau}|B_t| \le C_1\,\mathsf E\sqrt\tau$;
Doob inequalities ($p=2$): $c_2\,\mathsf E\tau \le \mathsf E\max_{0\le t\le\tau}B^2_t \le C_2\,\mathsf E\tau$.
Consider the case $p=1$. One possible way to obtain the exact values of $c_1$, $C_1$ is to solve the optimal stopping problem
$$V(c) = \sup_\tau\mathsf E\Big[\max_{0\le t\le\tau}|B_t| - c\sqrt\tau\Big], \qquad (20)$$
where $c>0$ and $\tau$ is a Markov time such that $\mathsf E\sqrt\tau<\infty$.

Problem (20) can be formulated in a standard way for the 3-dimensional Markov process $Z_t = (t, X_t, S_t)$, $X_t = |B_t|$, $S_t = \max_{u\le t}|B_u|$. But this problem is nonlinear and we cannot reduce its dimensionality; the same situation occurs for any $p\ne2$. In the case $p=2$ the corresponding optimal stopping problem $\sup_\tau\mathsf E\big[\max_{t\le\tau}B^2_t - c\tau\big]$ is linear and we can obtain the solution explicitly. As a consequence we obtain the Doob maximal inequalities
$$\mathsf E\tau \le \mathsf E\max_{0\le t\le\tau}B^2_t \le 4\,\mathsf E\tau, \qquad (21)$$
where $\tau$ is a Markov time such that $\mathsf E\tau<\infty$.

Let us prove inequality (21) and show that it is sharp. Denote $S_t(B^2) = \max_{0\le u\le t}B^2_u$. The lower bound for $\mathsf E S_\tau(B^2)$ follows from the Wald identity:
$$\mathsf E S_\tau(B^2) \ge \mathsf E B^2_\tau = \mathsf E\tau.$$
To show that this inequality is sharp it is enough to consider the time $\tau_T = \inf\{t\ge0:\ |B_t| = \sqrt T\}$. Then $\mathsf E\tau_T = \mathsf E B^2_{\tau_T} = T$ and $\mathsf E S_{\tau_T}(B^2) = T$.
In order to prove the upper bound $\mathsf E\max_{0\le t\le\tau}B^2_t\le4\,\mathsf E\tau$, consider the family of stopping times
$$\sigma_{\lambda,\varepsilon} = \inf\Big\{t>0:\ \max_{0\le s\le t}|B_s| \ge \lambda|B_t| + \varepsilon\Big\}, \qquad \lambda,\varepsilon>0.$$
It is known that $\mathsf E\sigma_{\lambda,\varepsilon}^{p/2}<\infty$ if and only if $\lambda<p/(p-1)$.

Therefore, if $\lambda\in(0,2)$ and the upper bound holds with a constant $K>0$, we have
$$\mathsf E\max_{0\le t\le\sigma_{\lambda,\varepsilon}} B^2_t = \lambda^2\,\mathsf E B^2_{\sigma_{\lambda,\varepsilon}} + 2\lambda\varepsilon\,\mathsf E|B_{\sigma_{\lambda,\varepsilon}}| + \varepsilon^2 \le K\,\mathsf E B^2_{\sigma_{\lambda,\varepsilon}}. \qquad (22)$$
Divide both sides of (22) by $\mathsf E B^2_{\sigma_{\lambda,\varepsilon}}$ and let $\lambda\uparrow2$. Since $\mathsf E B^2_{\sigma_{\lambda,\varepsilon}} = \mathsf E\sigma_{\lambda,\varepsilon}$ and
$$\frac{\mathsf E|B_{\sigma_{\lambda,\varepsilon}}|}{\mathsf E B^2_{\sigma_{\lambda,\varepsilon}}} \le \frac{1}{\sqrt{\mathsf E\sigma_{\lambda,\varepsilon}}}\ \longrightarrow\ 0 \qquad\text{as }\lambda\uparrow2,$$
from (22) we get
$$K \ge \lambda^2 + 2\lambda\varepsilon\,\frac{\mathsf E|B_{\sigma_{\lambda,\varepsilon}}|}{\mathsf E B^2_{\sigma_{\lambda,\varepsilon}}} + \frac{\varepsilon^2}{\mathsf E B^2_{\sigma_{\lambda,\varepsilon}}}\ \longrightarrow\ 4.$$
Therefore $K=4$ is the best possible constant in the upper bound for $\mathsf E S_\tau(B^2)$.
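A rough Monte Carlo illustration of this argument (random walk discretization; the parameters are my own choice): for $\sigma_{\lambda,\varepsilon}$ with $\lambda$ approaching 2, the ratio $\mathsf E\max B^2/\mathsf E\sigma$ creeps up towards the best constant 4.

```python
# Monte Carlo: ratio E[max_{t<=sigma} B_t^2] / E[sigma] for
# sigma_{lambda,eps} = inf{t : max_{s<=t} |B_s| >= lambda |B_t| + eps}.
import numpy as np

rng = np.random.default_rng(1)

def one_path(lam, eps, dt=5e-3, t_cap=100.0):
    B, run_max, t = 0.0, 0.0, 0.0
    while run_max < lam * abs(B) + eps and t < t_cap:
        B += np.sqrt(dt) * rng.standard_normal()
        run_max = max(run_max, abs(B))
        t += dt
    return run_max ** 2, t

eps, n_paths = 0.5, 300
for lam in [1.2, 1.5, 1.8]:
    maxes, times = zip(*(one_path(lam, eps) for _ in range(n_paths)))
    print(f"lambda = {lam:.1f}:  E max B^2 / E sigma ~ "
          f"{np.mean(maxes) / np.mean(times):.2f}   (best constant: 4)")
```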

TOPIC II: Sharp maximal inequalities for discrete time processes

1. Maximal inequalities for the modulus of a simple symmetric random walk

In this section time takes discrete values, i.e. $t = n = 0,1,2,\dots$. Consider the simple symmetric random walk $X_n = S_n = \xi_1 + \dots + \xi_n$, $X_0 = S_0 = 0$, where $\xi_1,\dots,\xi_n,\dots$ are i.i.d. random variables with $\mathsf P(\xi_1 = 1) = \mathsf P(\xi_1 = -1) = 1/2$. Denote the current maxima of $X$ and $|X|$ by $M_n(S) = \max_{0\le k\le n} S_k$ and $M_n(|S|) = \max_{0\le k\le n}|S_k|$.

In order to obtain the maximal inequalities for $(|S_n|)_{n\ge0}$ and $(S_n)_{n\ge0}$ consider the following optimal stopping problems:
$$V(c) = \sup_{\tau\in\mathfrak M}\mathsf E\Big[\max_{0\le k\le\tau}|S_k| - c\tau\Big] \qquad\text{and}\qquad W(c) = \sup_{\tau\in\mathfrak M}\mathsf E\Big[\max_{0\le k\le\tau}S_k - c\tau\Big].$$
For any nonnegative integer $l$ define the stopping times
$$\tau_l = \begin{cases}\inf\{k>n:\ M_k(|S|) - |S_k| = l\}, & \text{if } m - |s| < l,\\ n, & \text{if } m - |s|\ge l,\end{cases} \qquad \sigma_l = \begin{cases}\inf\{k>n:\ S_k\le0,\ M_k(|S|) - |S_k| = l\}, & \text{if } m - |s| < l,\\ n, & \text{if } m - |s|\ge l,\end{cases}$$
where $(n,s,m)$ is the initial state ($S_n = s$, $M_n(|S|) = m$),

and the function $Q_l = Q_l(n,s,m,c)$ such that
$$Q_l(n,s,m,c) = \sup_{\tau\in\mathfrak M_l}\mathsf E_{s,m}\big[M_\tau(|S|) - c\tau\big],$$
where the set of stopping times is $\mathfrak M_l = \{\tau_l,\sigma_l:\ l\in\mathbb Z_+\}$. If the conditions
1) $Q_l(n,s,m,c)\ge m - cn$,
2) $Q_l(n,s,m,c)\ge\mathsf E\,Q_l\big(n+1,\ s+\xi_{n+1},\ \max\{m, |s+\xi_{n+1}|\},\ c\big)$ (excessivity)
are satisfied, then $Q_l(n,s,m,c) = \sup_{\tau\ge n}\mathsf E_{s,m}\big[M_\tau(|S|) - c\tau\big]$, i.e. the supremum over all stopping times is attained at stopping times of the special form $\tau_l$ and $\sigma_l$. Namely, if $l\in\big[\frac{1}{2c}-\frac12,\ \frac{1}{2c}\big]$ then the supremum is attained at $\tau_l$; if $l\in\big[\frac{1}{2c}-1,\ \frac{1}{2c}-\frac12\big]$ then it is attained at $\sigma_l$.

Take an arbitrary $l\in\mathbb N$ and compute $\mathsf E\tau_l$ and $\mathsf E M_{\tau_l}(|S|)$. Represent $\tau_l$ as a sum $\tau_l = \tau^1 + \tau^2$, where
$$\tau^1 = \inf\{k\ge0:\ |S_k| = l\}, \qquad \tau^2 = \inf\Big\{k\ge0:\ \max_{0\le i\le k}\big(S_{i+\tau^1} - S_{\tau^1}\big) - \big(S_{k+\tau^1} - S_{\tau^1}\big) = l\Big\}.$$
Due to the Wald identities for the random walk we have $\mathsf E\tau^1 = \mathsf E S^2_{\tau^1} = l^2$. Also note that the distribution of $\tau^2$ coincides with the distribution of the time $\inf\{k\ge0:\ M_k(S) - S_k = l\}$. This Markov time can be represented as a sum of $M_{\tau^2}(S) + 1$ i.i.d. random variables with the distribution of $\tau_{l,1} = \inf\{k\ge0:\ S_k = -l \text{ or } S_k = 1\}$. Therefore, since $\mathsf E M_{\tau^2}(S) = \mathsf E\big[M_{\tau^2}(S) - S_{\tau^2}\big] = l$, we get
$$\mathsf E\tau^2 = \mathsf E\big[M_{\tau^2}(S) + 1\big]\,\mathsf E\tau_{l,1} = l(l+1).$$
Here we used the Wald identities $\mathsf E S_{\tau_{l,1}} = 0$ and $\mathsf E S^2_{\tau_{l,1}} = \mathsf E\tau_{l,1}$ in order to prove that $\mathsf E\tau_{l,1} = l$.

Finally we have $\mathsf E\tau_l = \mathsf E\tau^1 + \mathsf E\tau^2 = l^2 + l(l+1) = l(2l+1)$ and
$$\mathsf E M_{\tau_l}(|S|) = \mathsf E\max_{0\le k\le\tau^1}|S_k| + \mathsf E\max_{0\le k\le\tau^2}\big(S_{k+\tau^1} - S_{\tau^1}\big) = 2l,$$
i.e.
$$\mathsf E\tau_l = l(2l+1), \qquad \mathsf E M_{\tau_l}(|S|) = 2l.$$
From this system we find that $\mathsf E M_{\tau_l}(|S|) = \dfrac{\sqrt{8\,\mathsf E\tau_l + 1} - 1}{2}$.

Theorem 6 (Dubins-Schwarz). For any Markov time $\tau\in\mathfrak M$ the following sharp maximal inequality holds:
$$\mathsf E\max_{0\le n\le\tau}|S_n| \le \frac{\sqrt{8\,\mathsf E\tau + 1} - 1}{2}. \qquad (23)$$
If we consider the Markov time $\tau^* = \inf\{n\ge0:\ \max_{0\le k\le n}|S_k| - |S_n| = N\}$ for any $N\in\mathbb N$, then (23) becomes an equality.
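A direct simulation check of Theorem 6 (illustrative code): for the drawdown time $\tau^* = \inf\{n:\ \max_{k\le n}|S_k| - |S_n| = N\}$ the empirical values of $\mathsf E\max|S_n|$ and $\mathsf E\tau^*$ match $2N$ and $N(2N+1)$, so that (23) holds with equality.

```python
# Monte Carlo check of Theorem 6 for the simple symmetric random walk:
# tau* = inf{n : max_{k<=n} |S_k| - |S_n| = N} attains equality in (23).
import numpy as np

rng = np.random.default_rng(2)

def one_path(N):
    S, run_max, n = 0, 0, 0
    while run_max - abs(S) < N:
        S += rng.choice((-1, 1))
        run_max = max(run_max, abs(S))
        n += 1
    return run_max, n

N, n_paths = 5, 4000
maxes, taus = zip(*(one_path(N) for _ in range(n_paths)))
E_max, E_tau = np.mean(maxes), np.mean(taus)
print("E max|S|                  ~", E_max, "(theory: 2N =", 2 * N, ")")
print("E tau                     ~", E_tau, "(theory: N(2N+1) =", N * (2 * N + 1), ")")
print("(sqrt(8 E tau + 1) - 1)/2 =", (np.sqrt(8 * E_tau + 1) - 1) / 2)
```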

2. Maximal inequalities for a simple symmetric random walk

Consider the optimal stopping problem
$$V(c) = \sup_{\tau\in\mathfrak M}\mathsf E\Big[\max_{0\le k\le\tau}S_k - c\tau\Big].$$

Theorem 7. Let $i_0$ be the nonnegative integer with $i_0 + \frac12\le\frac{1}{2c}<i_0+\frac32$ (and $i_0 = 0$ if $\frac{1}{2c}<\frac12$), and put $\Delta_1 = \frac{1}{2c} - (i_0 + \frac12)$, $\Delta_2 = (i_0 + \frac32) - \frac{1}{2c}$. The optimal stopping time $\tau^*_c$ and the value function $V(c)$ in this problem equal
$$\tau^*_c = \begin{cases}\inf\{k\ge0:\ |S_k - \tfrac12| = i_0 + \tfrac12\}, & \text{if }\Delta_1\le\Delta_2,\\ \inf\{k\ge0:\ |S_k - \tfrac12| = i_0 + \tfrac32\}, & \text{if }\Delta_1>\Delta_2,\end{cases} \qquad V(c) = \begin{cases} i_0 - c\,i_0(i_0+1), & \text{if }\Delta_1\le\Delta_2,\\ (i_0+1) - c\,(i_0+1)(i_0+2), & \text{if }\Delta_1>\Delta_2. \end{cases}$$

Proof. According to the discrete version of Lévy's theorem [Fujita, Mischenko],
$$\mathrm{Law}\big(\max S - S,\ \max S\big) = \mathrm{Law}\big(|S - \tfrac12| - \tfrac12,\ L(S)\big),$$
where $L(S) = (L_n(S))_{n\ge0}$ and $L_n(S)$ is the number of crossings of the level $\tfrac12$ by the random walk on $[0,n]$. Rewriting the problem and using the Wald identities we have
$$\mathsf E\big[M_\tau(S) - c\tau\big] = \mathsf E\big[M_\tau(S) - S_\tau\big] - c\,\mathsf E S^2_\tau = \mathsf E\big[|S_\tau - \tfrac12| - \tfrac12\big] - c\,\mathsf E S^2_\tau.$$
Since $S^2_\tau = (S_\tau - \tfrac12)^2 + S_\tau - \tfrac14$, we can rewrite the last expression as
$$\mathsf E\big[|S_\tau - \tfrac12|\big] - \tfrac12 - c\,\mathsf E S^2_\tau = \mathsf E\big[|S_\tau - \tfrac12| - c\,(S_\tau - \tfrac12)^2\big] + \frac c4 - \frac12. \qquad (24)$$
Observe that the resulting expression does not depend on $\tau$ explicitly; there is only dependence on $S_\tau - \tfrac12$. That is why the method we use is called the method of space change.

Consider the function $f(x) = x - cx^2$, $x\ge0$. It attains its maximum at the point $x_0 = \frac{1}{2c}$, and therefore $x - cx^2\le f\big(\frac{1}{2c}\big) = \frac{1}{4c}$. Hence from (24) we get
$$\sup_{\tau\in\mathfrak M}\mathsf E\Big[\max_{0\le n\le\tau}S_n - c\tau\Big] \le \frac{1}{4c} + \frac c4 - \frac12.$$
However, this inequality may fail to be sharp if $\frac{1}{2c}$ does not belong to the set $E = \{k + \frac12\}_{k\ge0}$ of values of the process $|S - \frac12|$.
[Figure: the function $f(x) = x - cx^2$ with its maximum $\frac{1}{4c}$ at $x = \frac{1}{2c}$ and the neighbouring lattice points of $E$.]

Nevertheless, it is clear that the maximum of $\mathsf E\big[|S_\tau - \frac12| - c(S_\tau - \frac12)^2\big]$ is attained at the point of $E$ closest to $\frac{1}{2c}$. With $i_0$ as in Theorem 7, the values of the optimal stopping time $\tau^*_c$ and of the value function $V(c)$ depend on the relation between the two distances $\Delta_1 = \frac{1}{2c} - (i_0 + \frac12)$ and $\Delta_2 = (i_0 + \frac32) - \frac{1}{2c}$:
$$\tau^*_c = \begin{cases}\inf\{k\ge0:\ |S_k - \tfrac12| = i_0 + \tfrac12\}, & \text{if }\Delta_1\le\Delta_2,\\ \inf\{k\ge0:\ |S_k - \tfrac12| = i_0 + \tfrac32\}, & \text{if }\Delta_1>\Delta_2,\end{cases} \qquad V(c) = \begin{cases} f\big(i_0 + \tfrac12\big) + \frac c4 - \frac12, & \text{if }\Delta_1\le\Delta_2,\\ f\big(i_0 + \tfrac32\big) + \frac c4 - \frac12, & \text{if }\Delta_1>\Delta_2. \end{cases}$$

Theorem 8. For any Markov time $\tau\in\mathfrak M$ the following inequality holds:
$$\mathsf E\max_{0\le n\le\tau}S_n \le \frac{\sqrt{4\,\mathsf E\tau + 1} - 1}{2}. \qquad (25)$$
If for any $N\in\mathbb N$ we consider the Markov time $\tau^* = \inf\{n\ge0:\ \max_{0\le k\le n}S_k - S_n = N\}$, then (25) becomes an equality.

Proof. Use the identity (24) which we have already proved:
$$\mathsf E\max_{0\le n\le\tau}S_n \le \inf_{c>0}\Big\{c\Big(\mathsf E\tau + \frac14\Big) + \frac{1}{4c}\Big\} - \frac12 = \frac{\sqrt{4\,\mathsf E\tau + 1} - 1}{2},$$
which gives us exactly (25).

Now let us show that (25) is sharp. Due to the discrete version of Lévy's theorem, the time $\tau^* = \inf\{n\ge0:\ \max_{0\le k\le n}S_k - S_n = N\}$ coincides in distribution with
$$\inf\{n\ge0:\ |S_n - \tfrac12| - \tfrac12 = N\} = \inf\{n\ge0:\ S_n = -N \text{ or } S_n = N+1\} = \tau_{-N,N+1}.$$
Using the Wald identities we can check that $\mathsf E\tau^* = \mathsf E\tau_{-N,N+1} = N(N+1)$. On the other hand,
$$\mathsf E M_{\tau^*}(S) = \mathsf E\big[M_{\tau^*}(S) - S_{\tau^*}\big] = N = \frac{\sqrt{4N(N+1)+1} - 1}{2},$$
i.e. (25) becomes an equality.
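A matching simulation check of Theorem 8 (illustrative code): for $\tau^* = \inf\{n:\ \max_{k\le n}S_k - S_n = N\}$ the empirical values of $\mathsf E\max S_n$ and $\mathsf E\tau^*$ match $N$ and $N(N+1)$, which is exactly equality in (25).

```python
# Monte Carlo check of Theorem 8: the drawdown time of the walk itself,
# tau* = inf{n : max_{k<=n} S_k - S_n = N}, attains equality in (25).
import numpy as np

rng = np.random.default_rng(3)

def one_path(N):
    S, run_max, n = 0, 0, 0
    while run_max - S < N:
        S += rng.choice((-1, 1))
        run_max = max(run_max, S)
        n += 1
    return run_max, n

N, n_paths = 5, 4000
maxes, taus = zip(*(one_path(N) for _ in range(n_paths)))
E_max, E_tau = np.mean(maxes), np.mean(taus)
print("E max S                   ~", E_max, "(theory: N =", N, ")")
print("E tau                     ~", E_tau, "(theory: N(N+1) =", N * (N + 1), ")")
print("(sqrt(4 E tau + 1) - 1)/2 =", (np.sqrt(4 * E_tau + 1) - 1) / 2)
```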

References
[1] Peskir G., Shiryaev A. Optimal Stopping and Free-Boundary Problems. Basel: Birkhäuser, 2006.
[2] Dubins L., Shepp L., Shiryaev A. Optimal stopping rules and maximal inequalities for Bessel processes // Theory Probab. Appl.
[3] Lyulko Ya. Exact inequalities for the maximum of a skew Brownian motion // Moscow University Mathematics Bulletin.
[4] Dubins L., Schwarz G. A sharp inequality for sub-martingales and stopping times // Astérisque.
[5] Zhitlukhin M. V. A maximal inequality for skew Brownian motion // Statist. Decisions.
[6] Graversen S. E., Peskir G. On Doob's maximal inequality for Brownian motion // Stoch. Processes Appl.
[7] Peskir G., Shiryaev A. Maximal inequalities for reflected Brownian motion with drift // Theory Probab. Math. Statist.
[8] Fujita T. A random walk analogue of Lévy's theorem // Studia Scientiarum Mathematicarum Hungarica.
