Optimal Prediction of the Ultimate Maximum of Brownian Motion
Jesper Lund Pedersen
University of Copenhagen

Abstract. At time 0 we start to observe a Brownian path. Based upon the information, which is continuously updated through the observation of the path, a stopping time is determined such that the path at that time is as close as possible to its unknown ultimate maximum over a finite time interval. The closeness is measured by a q-mean or by a probability distance. This can be formulated as an optimal stopping problem. The method of proof relies upon a representation of a conditional expectation of the gain process and the principle of smooth fit at a single point.

1. Introduction

The ultimate maximum of a Brownian motion observed during a finite time interval is unknown at any point of time in that interval and only becomes known at the terminal time. At time 0 we start to observe the Brownian path. The problem is to determine a stopping time, based only on the information accumulated to date, that secures that the path at that time is as close as possible to the ultimate maximum over the given time interval. The closeness is measured by a q-mean or by a probability distance in this paper. The problem can be interpreted as predicting the unknown ultimate maximum. This has applications to financial mathematics regarding decisions on anticipated market movements without knowing the exact date of the optimal occurrence.

In mathematical terms the problem is formulated in the following way. Let $B = (B_t)_{0 \le t \le 1}$ be a standard Brownian motion started at zero and let $(\mathcal{F}_t)_{0 \le t \le 1}$ be the natural filtration generated by $B$. Denote the maximum process associated with $B$ by $S_t = \max_{0 \le r \le t} B_r$. For fixed $q > 0$ and $\varepsilon > 0$, the problem is to compute the two value functions

(1.1)  $V_q = \inf_{0 \le \tau \le 1} \mathsf{E}\, |S_1 - B_\tau|^q$

(1.2)  $W_\varepsilon = \sup_{0 \le \tau \le 1} \mathsf{P}( S_1 - B_\tau \le \varepsilon )$

where $\tau$ is a stopping time of $B$, and to find an optimal stopping time in each of the two optimal stopping problems, that is, a stopping time for which the optimum is attained.

The measured distances between $S_1$ and $B_\tau$ are expressed in terms of

(1.3)  $D(s-x) = |s-x|^q$  and  $D(s-x) = 1_{[0,\varepsilon]}(s-x)$

which in this paper are called the q-mean distance function and the probability distance function, respectively.

The author is supported by a Steno grant from the Danish Natural Science Research Council.
2000 Mathematics Subject Classification. Primary 60G40, 60J65. Secondary 60J60.
Key words and phrases. Brownian motion, ultimate maximum, optimal stopping, Lévy's distributional theorem, free-boundary problem, smooth fit at a single point, viscosity solution.
The gain processes in the two problems are not adapted to the filtration, and hence the problems fall outside the class of stopping problems studied in general optimal stopping theory (see [8, Chapter 3]). Problem (1.1) was initially solved in Graversen, Peskir & Shiryaev [3] in the case of the mean-square distance, that is, $D(s-x) = (s-x)^2$. Research in optimal stopping problems of functions of the Brownian motion and its associated maximum process was started by Jacka [4] and later by Dubins, Shepp & Shiryaev [2].

The main aim of the present paper is twofold. First, the paper extends the results of [3], which cover the case of the mean-square distance. Second, the paper introduces a new type of problem based on measuring the distance between $S_1$ and $B_\tau$ by a probability distance. Explicit formulas are derived for the value functions, and the optimal stopping times are displayed for both ways of measuring the distance.

The rest of this section is devoted to establishing a basic connection between the two problems and standard finite-horizon optimal stopping problems, which are simpler to work with. The next result with its proof is an extension of the prediction result in [3, Section 3]. Let

$\varphi(y) = \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}$  and  $\Phi(y) = \int_{-\infty}^{y} \varphi(u)\, du$  $(y \in \mathbb{R})$

denote the density and the distribution function of a standard normal variable, respectively.

Proposition 1.1. Let $G : [0,\infty) \to \mathbb{R}$ be a Borel function and fix $0 < t < 1$. Then the conditional expectation $\mathsf{E}\big( G(S_1 - B_t) \mid \mathcal{F}_t \big)$ is given by

$\mathsf{E}\big( G(S_1 - B_t) \mid \mathcal{F}_t \big) = G(S_t - B_t)\, F_{1-t}(S_t - B_t) + \int_{S_t - B_t}^{\infty} G(u)\, f_{1-t}(u)\, du$

where

$f_t(y) = \frac{2}{\sqrt{t}}\, \varphi\Big( \frac{y}{\sqrt{t}} \Big)$  and  $F_t(y) = 2\, \Phi\Big( \frac{y}{\sqrt{t}} \Big) - 1$  $(y \ge 0)$

are, respectively, the density and the distribution function of $S_t$.

Proof. Fix $0 < t < 1$. The stationary independent increments of $B$ give that

$\mathsf{E}\big( G(S_1 - B_t) \mid \mathcal{F}_t \big) = \mathsf{E}\Big( G\big( \max\{ S_t,\, \max_{t \le r \le 1} B_r \} - B_t \big) \,\Big|\, \mathcal{F}_t \Big) = \mathsf{E}\Big( G\big( \max\{ s - x,\, \tilde S_{1-t} \} \big) \Big)\Big|_{s = S_t,\, x = B_t}$

where $\tilde S_{1-t} = \max_{t \le r \le 1} (B_r - B_t)$ is independent of $\mathcal{F}_t$ and distributed as $S_{1-t}$. Since the density and the distribution function of $S_{1-t}$ are $f_{1-t}$ and $F_{1-t}$, the above expectation for $s \ge x$ is

$\mathsf{E}\, G\big( \max\{ s - x,\, S_{1-t} \} \big) = G(s-x)\, F_{1-t}(s-x) + \int_{s-x}^{\infty} G(u)\, f_{1-t}(u)\, du .$

Substituting this formula in the above equation, the result follows.

Remark 1.2. It is only the stationary independent increments of $B$ and the finite expectation of $S_1$ that are used to prove Proposition 1.1. The result can therefore be extended to processes with these two properties.
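Proposition 1.1 can be sanity-checked by Monte Carlo. The sketch below (our own illustration, not part of the paper; all function names are ours) takes $G(u) = u^2$, for which the map $\psi(d) = d^2 F_{1-t}(d) + \int_d^\infty u^2 f_{1-t}(u)\,du$ has the closed form $\psi(d) = d^2(2\Phi(a)-1) + 2\sqrt{r}\,d\,\varphi(a) + 2r(1-\Phi(a))$ with $r = 1-t$ and $a = d/\sqrt{r}$, and compares $\mathsf{E}\,G(S_1 - B_t)$ with $\mathsf{E}\,\psi(S_t - B_t)$ on simulated paths. The grid maximum slightly underestimates the true maximum, so only rough agreement should be expected.

```python
import math, random

def Phi(y): return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))
def phi(y): return math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)

def check_proposition(t=0.5, n_steps=600, n_paths=4000, seed=0):
    # Compare E G(S_1 - B_t) with E psi(S_t - B_t) for G(u) = u^2,
    # where psi is the deterministic map of Proposition 1.1 in closed form.
    r = 1.0 - t
    def psi(d):
        a = d / math.sqrt(r)
        return (d * d * (2.0 * Phi(a) - 1.0)
                + 2.0 * math.sqrt(r) * d * phi(a)
                + 2.0 * r * (1.0 - Phi(a)))
    rng = random.Random(seed)
    dt = 1.0 / n_steps; sq = math.sqrt(dt); k_t = int(t * n_steps)
    lhs = rhs = 0.0
    for _ in range(n_paths):
        b = s = 0.0; b_t = s_t = 0.0
        for i in range(1, n_steps + 1):
            b += sq * rng.gauss(0.0, 1.0)
            if b > s: s = b
            if i == k_t: b_t, s_t = b, s    # state observed at time t
        lhs += (s - b_t) ** 2               # sample of G(S_1 - B_t)
        rhs += psi(s_t - b_t)               # sample of psi(S_t - B_t)
    return lhs / n_paths, rhs / n_paths
```

The two averages should agree up to Monte Carlo noise and discretisation bias.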
Recalling the function $D$ in (1.3), using that $\mathsf{E}\, D(S_1 - B_\tau) = \mathsf{E}\big[ \mathsf{E}\big( D(S_1 - B_\tau) \mid \mathcal{F}_\tau \big) \big]$ and then applying Proposition 1.1, simple calculations give that problems (1.1) and (1.2) can be written as

$V_q = \inf_{0 \le \tau \le 1} \mathsf{E}\Big[ (1-\tau)^{q/2}\, H\Big( \frac{S_\tau - B_\tau}{\sqrt{1-\tau}} \Big) \Big]$

$W_\varepsilon = \sup_{0 \le \tau \le 1} \mathsf{E}\Big[ F_{1-\tau}(\varepsilon)\, 1_{[0,\varepsilon]}(S_\tau - B_\tau) \Big]$

where $H$ is given in (2.2) below. Thus (1.1) and (1.2) are rewritten as ordinary optimal stopping problems, that is, the gain processes are adapted to the filtration. The above optimal stopping problems can be further simplified. The key fact that $(S_t - B_t)_{0 \le t \le 1}$ and $(|B_t|)_{0 \le t \le 1}$ are equal in law by Lévy's distributional theorem (see [7, Theorem 2.3, Chapter VI]), together with facts from general optimal stopping theory, shows that the stopping problems are equivalent to evaluating

$V_q = \inf_{0 \le \tau \le 1} \mathsf{E}\Big[ (1-\tau)^{q/2}\, H\Big( \frac{|B_\tau|}{\sqrt{1-\tau}} \Big) \Big]$

$W_\varepsilon = \sup_{0 \le \tau \le 1} \mathsf{E}\Big[ F_{1-\tau}(\varepsilon)\, 1_{[0,\varepsilon]}(|B_\tau|) \Big] .$

These are standard finite-horizon optimal stopping problems and are inherently two-dimensional.

Remark 1.3. The idea in [3] to solve problem (1.1) with $q = 2$ is the following: using a stochastic integral representation of the ultimate maximum $S_1$, and because $D(s-x) = s^2 - 2sx + x^2$, problem (1.1) can be rewritten as an equivalent path-dependent integral optimal stopping problem. This approach only works for this particular $D$. However, the idea indicates that for other choices of $D$ it will prove useful to condition the gain process on the filtration, as is done above.

2. Prediction by a q-mean distance

The first main result is contained in the next theorem and is the solution of problem (1.1). For $a, b \in \mathbb{R}$, let

$M(a, b, x) = 1 + \frac{a}{b}\, x + \frac{a(a+1)}{b(b+1)}\, \frac{x^2}{2!} + \cdots$

denote the Kummer confluent hypergeometric function (see [1, Chapter 13]).

Theorem 2.1. Consider the optimal stopping problem (1.1) and let $q > 1$. The value function is given by

$V_q = e^{z_q^2/2}\, H(z_q) \,\Big/\, M\Big( \tfrac{1+q}{2}, \tfrac{1}{2}, \tfrac{z_q^2}{2} \Big)$

where $z_q$ is the unique strictly positive root of the equation

(2.1)  $\frac{H'(z)}{H(z)} + z = (1+q)\, z\, \frac{ M\big( \tfrac{3+q}{2}, \tfrac{3}{2}, \tfrac{z^2}{2} \big) }{ M\big( \tfrac{1+q}{2}, \tfrac{1}{2}, \tfrac{z^2}{2} \big) }$

and $z \mapsto H(z)$ is given by

(2.2)  $H(z) = z^q + 2q \int_z^{\infty} u^{q-1}\, \big( 1 - \Phi(u) \big)\, du$  for $z \ge 0$.

The optimal stopping time is given by (see Figure 2)

(2.3)  $\tau_* = \inf\{\, 0 \le t \le 1 : S_t - B_t \ge z_q \sqrt{1-t} \,\}.$
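The root $z_q$ can be computed numerically. The sketch below (our own illustration, pure standard library; all names are ours) implements $H$ from (2.2) in the equivalent form $H(z) = z^q(2\Phi(z)-1) + 2\int_z^\infty u^q \varphi(u)\,du$, the derivative $H'(z) = q z^{q-1}(2\Phi(z)-1)$, a truncated series for the Kummer function $M(a,b,x)$, and bisection on the smooth-fit equation $H'(z)/H(z) + z = (1+q)\, z\, M(\frac{3+q}{2},\frac{3}{2},\frac{z^2}{2}) / M(\frac{1+q}{2},\frac{1}{2},\frac{z^2}{2})$.

```python
import math

def Phi(y): return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))
def phi(y): return math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)

def kummer_M(a, b, x, terms=60):
    # Truncated power series for the Kummer function M(a, b, x).
    s, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * x / ((b + n) * (n + 1))
        s += term
    return s

def H(z, q, n=2000, width=10.0):
    # H(z) = z^q (2 Phi(z) - 1) + 2 \int_z^\infty u^q phi(u) du, via Simpson's rule.
    h = width / n
    f = lambda u: (u ** q) * phi(u)
    integral = sum(h / 6.0 * (f(z + i*h) + 4.0 * f(z + (i + 0.5)*h) + f(z + (i+1)*h))
                   for i in range(n))
    return (z ** q) * (2.0 * Phi(z) - 1.0) + 2.0 * integral

def Hprime(z, q):
    return q * z ** (q - 1.0) * (2.0 * Phi(z) - 1.0)

def smooth_fit(z, q):
    # Left side minus right side of equation (2.1).
    x = 0.5 * z * z
    rhs = (1.0 + q) * z * kummer_M((3.0 + q) / 2.0, 1.5, x) / kummer_M((1.0 + q) / 2.0, 0.5, x)
    return Hprime(z, q) / H(z, q) + z - rhs

def solve_zq(q, lo=0.5, hi=2.0):
    # Bisection for the unique positive root of (2.1) in [lo, hi].
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if smooth_fit(lo, q) * smooth_fit(mid, q) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For $q = 2$ equation (2.1) reduces, after elementary algebra, to $4\Phi(z) - 2z\varphi(z) - 3 = 0$ (the equation of [3]), and the value function collapses to $V_2 = 2\Phi(z_2) - 1$; both identities give an independent check on the root $z_2 \approx 1.12$.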
Figure 1. The left drawing is a computer simulation of a Brownian path $B_t(\omega)$ for $t \in [0,1]$. The right drawing is the maximum process $S_t(\omega)$ associated with the Brownian path.

Figure 2. A drawing of the optimal stopping strategies (2.3) for different values of $q > 1$ for the Brownian path from Figure 1. Numerical computations show that $z_4 \approx 1.348$, $z_2 \approx 1.123$, $z_{1.5} \approx 1.069$ and $z_{1.1} \approx 1.026$.

Remark 2.2. The case of a general time interval $[0,T]$ reduces to the case of the unit interval stated above. Indeed, the Brownian scaling property implies that

$\inf_{0 \le \tau \le T} \mathsf{E}\, |S_T - B_\tau|^q = T^{q/2} \inf_{0 \le \tau \le 1} \mathsf{E}\, |S_1 - B_\tau|^q = T^{q/2}\, V_q .$

The same argument shows that the optimal stopping time is given by

$\tau_* = \inf\{\, 0 \le t \le T : S_t - B_t \ge z_q \sqrt{T-t} \,\}.$

Proof. Let $q > 1$ be given and fixed. Recall from Section 1 that problem (1.1) is equivalent to the standard finite-horizon optimal stopping problem

(2.4)  $V_q = \inf_{0 \le \tau \le 1} \mathsf{E}\Big[ (1-\tau)^{q/2}\, H\Big( \frac{|B_\tau|}{\sqrt{1-\tau}} \Big) \Big].$
The method of deterministic time-change (see [6]) can be applied due to the form of the gain function in problem (2.4). Let $T_t = 1 - e^{-2t}$ be the time-change and let $(Z_t)_{t \ge 0}$ be the time-changed process given by

$Z_t = \frac{B_{T_t}}{\sqrt{1 - T_t}} = e^{t}\, B_{1 - e^{-2t}} .$

Then $Z$ is the strong solution of the stochastic differential equation

(2.5)  $dZ_t = Z_t\, dt + \sqrt{2}\, d\beta_t$

which can be verified by Itô's formula, where $(\beta_t)_{t \ge 0}$ is the Brownian motion given by

$\beta_t = \frac{1}{\sqrt{2}} \int_0^{T_t} \frac{dB_u}{\sqrt{1-u}} .$

For more details see [3] and [6]. Therefore $Z$ is a diffusion with the infinitesimal generator

(2.6)  $L_Z = z\, \frac{\partial}{\partial z} + \frac{\partial^2}{\partial z^2}$  for $z \in \mathbb{R}$.

Observe that $Z$ is a non-recurrent diffusion, that is, $|Z_t| \to \infty$ P-a.s. as $t \to \infty$. Since the time-change $t \mapsto T_t$ is strictly increasing, if $\sigma$ is a stopping time of $Z$ then $\tau = T_\sigma$ is a stopping time of $B$, and vice versa. By the foregoing facts it is straightforward that

$(1 - T_\sigma)^{q/2}\, H\Big( \frac{|B_{T_\sigma}|}{\sqrt{1 - T_\sigma}} \Big) = e^{-q\sigma}\, H(|Z_\sigma|)$

where $H$ is given in (2.2). Problem (2.4) thus reduces to computing

$V_q = \inf_{\sigma} \mathsf{E}\, e^{-q\sigma}\, H(|Z_\sigma|)$

where the infimum is taken over all stopping times $\sigma$ of $Z$. This is a one-dimensional optimal stopping problem and it can be solved by a standard approach. Therefore introduce the problem

(2.7)  $V(z) = \inf_{\sigma} \mathsf{E}_z\, e^{-q\sigma}\, H(|Z_\sigma|)$  for $z \in \mathbb{R}$,

where $Z_0 = z$ under $\mathsf{P}_z$ and the infimum is taken as above.

The first step in solving problem (2.7) is to apply heuristic arguments to establish a candidate for the value function and a candidate for the optimal stopping time. From general optimal stopping theory, the stopping time

(2.8)  $\sigma_* = \inf\{\, t > 0 : |Z_t| \ge z_q \,\}$

should be optimal, where $z_q > 0$ is the optimal stopping point to be found. Due to the stochastic differential equation (2.5), and since the domain of continued observation is $(-z_q, z_q)$, the value function $V$ should be even. To compute the value function $z \mapsto V(z)$ and to determine $z_q$, in view of (2.7) and (2.8) it is natural to formulate the following system:

(2.9)  $L_Z V(z) = q\, V(z)$  for $z \in (-z_q, z_q)$

(2.10)  $V(\pm z_q) = H(z_q)$  (instantaneous stopping)

(2.11)  $V'(\pm z_q) = \pm H'(z_q)$  (smooth fit)

with $L_Z$ as in (2.6).
The system (2.9)-(2.11) is a free-boundary (Stefan) problem.
The general solution to (2.9) is

$V(z) = C_1\, e^{-z^2/2}\, M\Big( \tfrac{1+q}{2}, \tfrac{1}{2}, \tfrac{z^2}{2} \Big) + C_2\, z\, e^{-z^2/2}\, M\Big( \tfrac{2+q}{2}, \tfrac{3}{2}, \tfrac{z^2}{2} \Big)$

where $C_1$ and $C_2$ are unknown constants. As noted above, the value function should be even, so that $C_2 = 0$ and hence

(2.12)  $V(z) = C_1\, e^{-z^2/2}\, M\Big( \tfrac{1+q}{2}, \tfrac{1}{2}, \tfrac{z^2}{2} \Big)$

for some $C_1$ to be found. The two conditions (2.10) and (2.11) determine $z_q$ and $C_1$ uniquely. Note that $M'(a,b,x) = \frac{a}{b}\, M(a+1, b+1, x)$, and taking logarithms on both sides of (2.12) and using (2.10) and (2.11), elementary calculations give that $z_q$ is the strictly positive root of equation (2.1) and that

$C_1 = e^{z_q^2/2}\, H(z_q) \Big/ M\Big( \tfrac{1+q}{2}, \tfrac{1}{2}, \tfrac{z_q^2}{2} \Big).$

These heuristic arguments give that the candidate for the value function is

$V(z) = \begin{cases} H(z_q)\, e^{(z_q^2 - z^2)/2}\, M\big( \tfrac{1+q}{2}, \tfrac{1}{2}, \tfrac{z^2}{2} \big) \Big/ M\big( \tfrac{1+q}{2}, \tfrac{1}{2}, \tfrac{z_q^2}{2} \big) & \text{if } |z| < z_q \\ H(|z|) & \text{if } |z| \ge z_q \end{cases}$

and the candidate for the optimal stopping time is $\sigma_{z_q} = \inf\{\, t > 0 : |Z_t| \ge z_q \,\}$.

It remains to verify that $z \mapsto V(z)$ coincides with the value function given in (2.7) and that $\sigma_{z_q}$ is an optimal stopping time. Note that $\sigma_{z_q} < \infty$ P-a.s. since $Z$ is non-recurrent, and in fact $\sigma_{z_q}$ has finite expectation, so $\mathsf{E}_z\, \sigma_{z_q} < \infty$ as well. The latter will be used later in the proof. The function $z \mapsto V(z)$ is $C^2$ everywhere except at $\pm z_q$ where it is $C^1$, and the Lebesgue measure of those $t > 0$ for which $Z_t = \pm z_q$ is zero. Then the Itô-Tanaka formula and (2.5) yield that

(2.13)  $e^{-qt}\, V(Z_t) = V(z) + \int_0^t e^{-qu}\, (L_Z - q) V(Z_u)\, 1_{\{ |Z_u| \ne z_q \}}\, du + M_t$

where

$M_t = \sqrt{2} \int_0^t e^{-qu}\, V'(Z_u)\, d\beta_u$

is a continuous local martingale, and hence $e^{-qt}\, V(Z_t) \ge V(z) + M_t$ for all $t$, since $(L_Z - q) V(z) \ge 0$ for $|z| \ge z_q$. Let $\sigma$ be any stopping time of $Z$ and choose a localization sequence $\{\gamma_n\}$ of bounded stopping times for $M$. Then

$\mathsf{E}_z\, e^{-q(\sigma \wedge \gamma_n)}\, H(|Z_{\sigma \wedge \gamma_n}|) \ge \mathsf{E}_z\, e^{-q(\sigma \wedge \gamma_n)}\, V(Z_{\sigma \wedge \gamma_n}) \ge V(z) + \mathsf{E}_z\, M_{\sigma \wedge \gamma_n} = V(z)$

for all $n$, where the first inequality follows from $V(z) \le H(|z|)$ for all $z$. Letting $n \to \infty$, using Fatou's lemma and finally taking the infimum over all stopping times shows that

(2.14)  $V_*(z) \ge V(z)$

is valid, where $V_*$ denotes the value function in (2.7). To prove equality in (2.14), and that $\sigma_{z_q}$ is optimal, it is enough to verify that

(2.15)  $V(z) = \mathsf{E}_z\, e^{-q \sigma_{z_q}}\, H\big( |Z_{\sigma_{z_q}}| \big).$
Equation (2.13) yields that

$e^{-q \sigma_{z_q}}\, H(z_q) = e^{-q \sigma_{z_q}}\, V(Z_{\sigma_{z_q}}) = V(z) + M_{\sigma_{z_q}}$

and taking expectations on both sides implies that $V(z) = H(z_q)\, \mathsf{E}_z\, e^{-q \sigma_{z_q}}$, because $\mathsf{E}_z\, M_{\sigma_{z_q}} = 0$. Indeed, by the Burkholder-Davis-Gundy inequality and the fact that

$\mathsf{E}_z \int_0^{\sigma_{z_q}} e^{-2qu}\, |V'(Z_u)|^2\, du \le C\, \mathsf{E}_z\, \sigma_{z_q} < \infty$

it follows that $\mathsf{E}_z\, M_{\sigma_{z_q}} = 0$, where $C$ is a constant (see also Remark 2.5 below). The conclusion is that

$V_q = V(0) = e^{z_q^2/2}\, H(z_q) \Big/ M\Big( \tfrac{1+q}{2}, \tfrac{1}{2}, \tfrac{z_q^2}{2} \Big).$

Transforming $\sigma_{z_q}$ back to problem (1.1), $\tau_*$ from (2.3) is optimal.

Remark 2.3. If $0 < q \le 1$, the value function is given by

$V_q = \mathsf{E}\, S_1^q = \sqrt{2^q/\pi}\; \Gamma\Big( \frac{1+q}{2} \Big)$

and an optimal stopping time is $\tau_* \equiv 0$. For $0 < q < 1$ the calculations needed to compute (2.7) are similar to the case $q > 1$ and the details will be omitted. The following formulas are valid. The value function in (2.7) is

$V(z) = \sqrt{2/\pi}\; \Gamma(1+q)\; e^{z^2/4}\, D_{-(1+q)}(|z|)$

where

$D_{-(1+q)}(z) = \frac{e^{-z^2/4}}{\Gamma(1+q)} \int_0^{\infty} u^{q}\, e^{-zu - u^2/2}\, du$

is the parabolic cylinder function (see [1, Chapter 19]). The optimal stopping time in (2.7) is given by $\sigma_* = \inf\{\, t > 0 : Z_t = 0 \,\}$. For $q = 1$, problem (1.1) is trivial since for any stopping time $\tau$ the optional sampling theorem implies that $\mathsf{E}( S_1 - B_\tau ) = \mathsf{E}\, S_1$. Therefore any stopping time is optimal.

A result from general optimal stopping theory states that (2.3) is the minimal optimal stopping time for problem (1.1), that is, if $\tau^{\#}$ is another optimal stopping time for (1.1) then $\tau_* \le \tau^{\#}$ P-a.s. One would expect that the minimal optimal stopping time $\tau_q$ converges to the minimal optimal stopping time $\tau_1 \equiv 0$ for $q \downarrow 1$, but this is not true in this case. Indeed, $z_q \to z_{1+} > 0$ for $q \downarrow 1$, where $z_q$ is the unique strictly positive root of (2.1). Thus

$\tau_q \to \inf\{\, 0 \le t \le 1 : S_t - B_t \ge z_{1+} \sqrt{1-t} \,\} > \tau_1 \equiv 0$  for $q \downarrow 1$.

This is illustrated in Figures 1 and 2 where $z_q$ is numerically calculated for different values of $q$.

Remark 2.4. If $D(s-x) = (s-x)^2$, the following problem is equivalent to problem (1.1):

$\inf_{0 \le \tau \le 1} \mathrm{Var}( S_1 - B_\tau ) = V_2 - ( \mathsf{E}\, S_1 )^2 = V_2 - 2/\pi .$

In this case $B_{\tau_*}$ is the optimal predictor (estimator) of the ultimate maximum $S_1$ that minimises the variance of the error.
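The rule (2.3) can be simulated directly. The sketch below (our own illustration; the value $z_2 \approx 1.123$ is the numerically computed root used here as an assumption) stops a discretised Brownian path the first time $S_t - B_t \ge z\sqrt{1-t}$ and estimates the mean-square error $\mathsf{E}(S_1 - B_{\tau_*})^2$ and the mean $\mathsf{E}\,\tau_*$; up to discretisation bias these should be close to $V_2 \approx 0.74$ and $z_2^2/(1+z_2^2) \approx 0.56$.

```python
import math, random

def simulate_rule(z, n_steps=800, n_paths=3000, seed=1):
    # Monte Carlo of the stopping rule (2.3): stop when S_t - B_t >= z sqrt(1-t).
    # Returns estimates of E (S_1 - B_tau)^2 and E tau.
    rng = random.Random(seed)
    dt = 1.0 / n_steps; sq = math.sqrt(dt)
    se_sum = tau_sum = 0.0
    for _ in range(n_paths):
        b = s = 0.0; tau = None; b_tau = 0.0
        for i in range(1, n_steps + 1):
            b += sq * rng.gauss(0.0, 1.0)
            if b > s: s = b
            t = i * dt
            if tau is None and s - b >= z * math.sqrt(max(1.0 - t, 0.0)):
                tau, b_tau = t, b          # the rule triggers (always by t = 1)
        if tau is None:
            tau, b_tau = 1.0, b
        se_sum += (s - b_tau) ** 2
        tau_sum += tau
    return se_sum / n_paths, tau_sum / n_paths
```

The grid maximum underestimates the true maximum, so the estimates sit slightly off the theoretical values; the check is qualitative rather than exact.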
Remark 2.5. The argument used to verify (2.15) extends to a more general setting and leads to the following explicit formula for the Laplace transform of $\sigma_{z_q} = \inf\{\, t > 0 : |Z_t| \ge z_q \,\}$. For $\lambda > 0$, define the function $l_\lambda(z) = \mathsf{E}_z\, e^{-\lambda \sigma_{z_q}}$. General Markov process theory gives that $z \mapsto l_\lambda(z)$ solves (2.9) with $q = \lambda$ and satisfies $l_\lambda(\pm z_q) = 1$. The argument quoted above gives

$\mathsf{E}_z\, e^{-\lambda \sigma_{z_q}} = \begin{cases} e^{(z_q^2 - z^2)/2}\, M\big( \tfrac{1+\lambda}{2}, \tfrac{1}{2}, \tfrac{z^2}{2} \big) \Big/ M\big( \tfrac{1+\lambda}{2}, \tfrac{1}{2}, \tfrac{z_q^2}{2} \big) & \text{if } |z| < z_q \\ 1 & \text{if } |z| \ge z_q \end{cases}$

for $\lambda > 0$. Since $(S_t - B_t)$ and $(|B_t|)$ are equal in law, the stopping time $\tau_*$ from (2.3) is distributed as the stopping time $\tilde\tau = \inf\{\, 0 \le t \le 1 : |B_t| \ge z_q \sqrt{1-t} \,\}$. This observation, together with Brownian scaling and the time-change, shows that $\mathsf{E}\, (1 - \tau_*)^{\lambda/2} = \mathsf{E}\, e^{-\lambda \sigma_{z_q}}$ with $Z_0 = 0$. Thus it is not difficult to get that

$\mathsf{E}\, (1 - \tau_*)^{\lambda/2} = e^{z_q^2/2} \Big/ M\Big( \tfrac{1+\lambda}{2}, \tfrac{1}{2}, \tfrac{z_q^2}{2} \Big).$

For the special cases $\lambda = 2$ and $\lambda = 4$ the formula reads

$\mathsf{E}\, (1 - \tau_*) = \frac{1}{1 + z_q^2}$  and  $\mathsf{E}\, (1 - \tau_*)^2 = \frac{1}{1 + 2 z_q^2 + z_q^4/3}$

and then it is easy to calculate

$\mathsf{E}\, \tau_* = \frac{z_q^2}{1 + z_q^2}$  and  $\mathrm{Var}\, \tau_* = \frac{2 z_q^4}{(3 + 6 z_q^2 + z_q^4)(1 + z_q^2)^2} .$

The expectation and variance of $\tau_*$ are also calculated in [3] by a different method.

3. Prediction by a probability distance

Recall from Section 1 that problem (1.2) is equivalent to the finite-horizon optimal stopping problem

$W_\varepsilon = \sup_{0 \le \tau \le 1} \mathsf{E}\Big[ F_{1-\tau}(\varepsilon)\, 1_{[0,\varepsilon]}(|B_\tau|) \Big].$

The method of time-change does not simplify this problem. Therefore introduce the problem

(3.1)  $W(t, x) = \sup_{0 \le \tau \le 1-t} \mathsf{E}_x\Big[ F_{1-t-\tau}(\varepsilon)\, 1_{[0,\varepsilon]}(|B_\tau|) \Big]$

for $0 \le t \le 1$ and $x \in \mathbb{R}$, where $B_0 = x$ under $\mathsf{P}_x$ and $\tau$ is a stopping time of $B$. Denote the first passage time of the reflected Brownian motion $|B|$ by

(3.2)  $\tau_\varepsilon = \inf\{\, t > 0 : |B_t| = \varepsilon \,\}.$

For any stopping time $\tau \le 1-t$, let $\bar\tau$ be the first passage time after $\tau$ of $|B|$ to the level $\varepsilon$, i.e. $\bar\tau = \inf\{\, t > \tau : |B_t| = \varepsilon \,\}$. Then $\bar\tau$ is a stopping time of $B$, and in view of the gain function $F_{1-t-\tau}(\varepsilon)\, 1_{[0,\varepsilon]}(|B_\tau|)$ it is clear that

$\mathsf{E}_x\Big[ F_{1-t-\tau}(\varepsilon)\, 1_{[0,\varepsilon]}(|B_\tau|) \Big] \le \mathsf{E}_x\Big[ F_{1-t-\bar\tau \wedge (1-t)}(\varepsilon)\, 1_{[0,\varepsilon]}\big( |B_{\bar\tau \wedge (1-t)}| \big) \Big].$

The conclusion to draw is that in problem (3.1) it is only optimal to stop when $|B_\tau| = \varepsilon$ on the set $\{ \tau < 1-t \}$.
Therefore the principle of smooth fit is not satisfied in the usual sense, that is, the value function $W$ is not $C^1$ at all points of the boundary of the domain of continued observation (see [8, Chapter 3.8]). This is due to the discontinuous gain function.
In order to establish a candidate for the solution of problem (3.1), general theory combined with the Brownian scaling property suggests that the strategy should be to defer stopping until the remaining time is $1 - t_\varepsilon$ and then stop the first time that $|B|$ equals $\varepsilon$, that is,

$\tau_* = \inf\{\, t_\varepsilon < s \le 1 : |B_s| = \varepsilon \,\}$

should be optimal, where $t_\varepsilon$ is to be found. The crucial point in establishing a candidate for the optimal strategy is how to determine $t_\varepsilon$. A priori this is not clear. Since the gain function is continuous in the time variable $t$, the guessed shape of the domain of continued observation leads, by an intuitive argument, to the requirement that the value function should be smooth at the point $(t_\varepsilon, \varepsilon)$. This intuitive argument is called the principle of smooth fit at a single point. The principle provides a method to determine $t_\varepsilon$ and is the key idea in the approach to solving problem (3.1).

To determine $t_\varepsilon$, define the function

(3.3)  $W^\varepsilon(t, x) = \mathsf{E}_x\Big[ F_{1-t-\tau_\varepsilon \wedge (1-t)}(\varepsilon)\, 1_{[0,\varepsilon]}\big( |B_{\tau_\varepsilon \wedge (1-t)}| \big) \Big]$  for $(t,x) \in [0,1] \times \mathbb{R}$

where $\tau_\varepsilon$ is given in (3.2). In general $x \mapsto W^\varepsilon(t,x)$ is only continuous at $x = \varepsilon$. Let $t_\varepsilon$ be the point in the time interval $[0,1]$ where $x \mapsto W^\varepsilon(t_\varepsilon, x)$ is differentiable at $x = \varepsilon$. If $x \mapsto W^\varepsilon(t,x)$ is not differentiable at $x = \varepsilon$ for any $t \in [0,1]$, then set $t_\varepsilon = 0$. The Brownian scaling property then implies that $t_\varepsilon = 1 - (\varepsilon/\varepsilon^*)^2$, where $\varepsilon^* > 0$ satisfies that $x \mapsto W^{\varepsilon^*}(0, x)$ is differentiable at $x = \varepsilon^*$.

The hitting density $h_x^\varepsilon$ of $\tau_\varepsilon$ in (3.2) (see [2, Section 6]) is given by

$h_x^\varepsilon(y) = \sum_{k=-\infty}^{\infty} (-1)^k\, \frac{x + (2k+1)\varepsilon}{y^{3/2}}\, \varphi\Big( \frac{x + (2k+1)\varepsilon}{\sqrt{y}} \Big)$  if $|x| < \varepsilon$

$h_x^\varepsilon(y) = \frac{|x| - \varepsilon}{y^{3/2}}\, \varphi\Big( \frac{|x| - \varepsilon}{\sqrt{y}} \Big)$  if $|x| > \varepsilon$

for $y > 0$, and it is then possible to derive an equation for $\varepsilon^*$. Indeed, computing the function $W^\varepsilon(t,x)$ by means of the density $h_x^\varepsilon$ gives that

(3.4)  $W^\varepsilon(t, x) = \mathsf{P}_x( \tau_\varepsilon > 1-t ) + \int_0^{1-t} F_{1-t-y}(\varepsilon)\, h_x^\varepsilon(y)\, dy$

for $|x| < \varepsilon$, and

(3.5)  $W^\varepsilon(t, x) = \int_0^{1-t} F_{1-t-y}(\varepsilon)\, h_x^\varepsilon(y)\, dy$

for $|x| > \varepsilon$.
10 for x>ε. More calculations give that where Δ ε t = lim x ε Δ ε + t = lim x ε W ε x W ε x t Λ ε F t y ε t = y 3/2 Λ ε 2 t = 2 t k= =2Λε t+2λ ε 2 t Δ ε + t 2 2π t t = lim F t y ε z y 3/2 k k= 2k +ε k ϕ. t Therefore, ε is the unique positive root of the equation 3.6 Δ ε + + =Λ ε + Λ ε 2. 2π z2 z ϕ y dy y 2k +ε2 2k +ε ϕ dy y y Numerical calculation gives that ε.7. So the candidate for the optimal strategy is 3.7 τ =inf{ t ε <s t : B s = ε } and the associated candidate for the value function is W t, x =E x F t τ ε [,ε] B τ. If t t ε then τ = τ ε t and hence W t, x =W ε t, x. If t<t ε, strong Markov property and Itô s formula imply that W t, ε =E ε W ε t ε,b ε t t = W ε t ε = W ε t ε where this last equality following from,ε+e ε t,ε E ε t ε t ε t W ε L B W ε t ε,b u { Bu ε} du W ε t t ε,b u du L B W ε t, x+ t, x = t for t<and x ε which is a fact from general Markov process theory. It was used that the infinitesimal generator of B t isgivenby Hence L B = 2 2 x 2. W ε t t, ε t=t W = t, ε = F t ε ε ε t t=t t t=t ε and W t, x issmoothatthepointt, x =t ε,ε. In other words, the candidate for the value function satisfies the principle of smooth fit at a single point.
The proposed solution to problem (3.1), transformed back to the initial problem (1.2), is stated in the following theorem.

Theorem 3.1. Consider the optimal stopping problem (1.2) and let $\varepsilon > 0$. Let $t_\varepsilon$ be the point in the time interval $[0,1]$ such that $x \mapsto W^\varepsilon(t_\varepsilon, x)$ is differentiable at $x = \varepsilon$, where $W^\varepsilon(t,x)$ is given in (3.4) and (3.5). If $x \mapsto W^\varepsilon(t,x)$ is not differentiable at $x = \varepsilon$ for any $t \in [0,1]$, then set $t_\varepsilon = 0$. Then $t_\varepsilon = \max\big( 1 - (\varepsilon/\varepsilon^*)^2,\, 0 \big)$, where $\varepsilon^*$ is the positive root of equation (3.6) (numerical calculation gives $\varepsilon^* \approx 1.07$).

(i) If $t_\varepsilon > 0$, then the value function is given by

$W_\varepsilon = \frac{2}{\sqrt{t_\varepsilon}} \int_0^{\infty} W^\varepsilon(t_\varepsilon, y)\, \varphi\Big( \frac{y}{\sqrt{t_\varepsilon}} \Big)\, dy .$

(ii) If $t_\varepsilon = 0$, then the value function is given by

$W_\varepsilon = W^\varepsilon(0, 0) = \mathsf{P}( \tau_\varepsilon > 1 ) + 2\varepsilon \sum_{k=0}^{\infty} (-1)^k (2k+1) \int_0^1 F_{1-y}(\varepsilon)\, y^{-3/2}\, \varphi\Big( \frac{(2k+1)\varepsilon}{\sqrt{y}} \Big)\, dy .$

In both cases the optimal strategy is to defer stopping until the remaining time is $1 - t_\varepsilon$ and then stop the first time that $S_t - B_t$ equals $\varepsilon$ (see Figure 3), that is,

$\tau_* = \inf\{\, t_\varepsilon < t \le 1 : S_t - B_t = \varepsilon \,\}$  (with $\inf \emptyset = 1$).

Figure 3. A drawing of optimal stopping strategies for different values of $\varepsilon$ for the Brownian path from Figure 1.

Remark 3.2. The case of a general time interval $[0,T]$ reduces to the case of the unit interval stated above. Indeed, the Brownian scaling property implies that

$\sup_{0 \le \tau \le T} \mathsf{P}( S_T - B_\tau \le \varepsilon ) = \sup_{0 \le \tau \le 1} \mathsf{P}\big( S_1 - B_\tau \le \varepsilon/\sqrt{T} \big) = W_{\varepsilon/\sqrt{T}}$

and the optimal strategy is given by $\tau_* = \inf\{\, t_T < t \le T : S_t - B_t = \varepsilon \,\}$, where $t_T = \max\big( T - (\varepsilon/\varepsilon^*)^2,\, 0 \big)$.
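The strategy of Theorem 3.1 can also be simulated. In continuous time, stopping at $S_\tau - B_\tau = \varepsilon$ succeeds exactly when the maximum never increases after $\tau$; the sketch below (our own illustration; the parameters $\varepsilon \approx 1.02$ and $t_\varepsilon \approx 0.09$ are the reconstructed numerical values and should be treated as assumptions) uses that equivalent success criterion, which is robust to the discrete overshoot of the barrier.

```python
import math, random

def prob_strategy(eps, t_eps, n_steps=800, n_paths=3000, seed=3):
    # Monte Carlo of the strategy in Theorem 3.1: after time t_eps, stop the first
    # time S_t - B_t reaches eps (forced stop at t = 1 otherwise); estimate
    # P(S_1 - B_tau <= eps).  Success at a barrier hit <=> no new maximum after tau.
    rng = random.Random(seed)
    dt = 1.0 / n_steps; sq = math.sqrt(dt)
    succ = 0
    for _ in range(n_paths):
        b = s = 0.0; b_tau = s_tau = None
        for i in range(1, n_steps + 1):
            b += sq * rng.gauss(0.0, 1.0)
            if b > s: s = b
            if b_tau is None and i * dt > t_eps and s - b >= eps:
                b_tau, s_tau = b, s        # stop at the (discrete) barrier hit
        if b_tau is None:
            if s - b <= eps: succ += 1     # forced stop at the terminal time
        elif s <= s_tau + 1e-12:           # maximum never increased after tau
            succ += 1
    return succ / n_paths
```

With these parameters the estimate lands in the low-to-mid 0.9 range, broadly consistent with the 95% prediction level discussed in Example 3.3.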
Proof. The idea in showing that the proposed solution is correct is to apply the verification theorem in [5, Theorem 3.1]. The verification theorem is based on viscosity solution techniques and requires a continuous gain function. Therefore an approximation of $F_{1-t}(\varepsilon)\, 1_{[0,\varepsilon]}(|x|)$ is necessary. Some notation from [5] will be adapted. Let $\varepsilon > 0$ be given and fixed.

As a starting point, let $S = [0,1) \times \mathbb{R}$, and for $(t,x) \in S$ define the continuous gain function

$G_n(t, x) = F_{1-t}(\varepsilon)\, e^{-n (|x| - \varepsilon)^+}$

and the associated value function

(3.8)  $W_n(t, x) = \sup_{0 \le \tau \le 1-t} \mathsf{E}_x\, G_n( t + \tau, B_\tau )$

for $n \ge 1$, where the supremum is taken over all stopping times $\tau \le 1-t$ of $B$. Note that $t$ can take negative values, and when needed in the sequel the previously defined functions can easily be extended. The gain function $G_n$ is chosen such that $G_n(t,x) = F_{1-t}(\varepsilon)$ for $|x| \le \varepsilon$ and $G_n(t,x) \to F_{1-t}(\varepsilon)\, 1_{[0,\varepsilon]}(|x|)$ for $n \to \infty$. Furthermore,

(3.9)  $\Big( \frac{\partial}{\partial t} + L_B \Big) G_n(t, x) > 0$  for $|x| \ne \varepsilon$

and hence it is only optimal to stop when $|B_\tau| = \varepsilon$ on $\{ \tau < 1-t \}$. This is similar to problem (3.1), and the optimal strategies must have the same structure.

In a similar vein as for problem (3.1), define the function

$J_n^\varepsilon(t, x) = \mathsf{E}_x\, G_n\big( t + \tau_\varepsilon \wedge (1-t),\, B_{\tau_\varepsilon \wedge (1-t)} \big)$

where $\tau_\varepsilon$ is given in (3.2). Observe that $J_n^\varepsilon$ dominates the gain function $G_n$, as can be seen from Itô's formula and (3.9). Let $t_n < 1$ be the point where $x \mapsto J_n^\varepsilon(t_n, x)$ is differentiable at $x = \varepsilon$. Set $t_0 = 1 - (\varepsilon/\varepsilon^*)^2$, which is the point where $x \mapsto W^\varepsilon(t_0, x)$ is differentiable at $x = \varepsilon$ (note that $t_0$ is negative when $\varepsilon > \varepsilon^*$). Since $J_n^\varepsilon$ dominates $W^\varepsilon$, and $J_n^\varepsilon(t,x) = W^\varepsilon(t,x)$ for $|x| \le \varepsilon$, it follows that $t_n > t_0$ and $t_n \to t_0$ for $n \to \infty$.

The heuristic argument, the principle of smooth fit at a single point, gives that the candidate for the optimal strategy in problem (3.8) is

$\tau_n = \inf\{\, t_n < s \le 1 : |B_s| = \varepsilon \,\}$

and the associated candidate for the value function is $J_n(t, x) = \mathsf{E}_x\, G_n( \tau_n, B_{\tau_n} )$, with the appropriate time arguments. The arguments exposed above for $(t,x) \mapsto W_*(t,x)$ give that $(t,x) \mapsto J_n(t,x)$ is smooth at the point $(t,x) = (t_n, \varepsilon)$. Itô's formula and the definition of $t_n$ yield that $J_n$ dominates $J_n^\varepsilon$, and hence it also dominates $G_n$.
As mentioned at the start of the proof, the verification theorem in [5] based on viscosity solutions is applied next to establish that the candidates indeed are the solution to problem (3.8). For fixed $n$, introduce the following variational inequality associated with problem (3.8) (see [5]):

(3.10)  $\min\Big\{ -\Big( \frac{\partial}{\partial t} + L_B \Big) J(t,x),\; J(t,x) - G_n(t,x) \Big\} = 0$  for $(t,x) \in S$

(3.11)  $J(t, x) = G_n(t, x)$  for $(t,x) \in \partial S$.

Let $J : \bar S \to \mathbb{R}$ be a continuous function. $J$ is a viscosity solution of the variational inequality (3.10) and (3.11) if the following two conditions hold:
1. $J$ is a viscosity subsolution of (3.10) and (3.11), that is, $J$ satisfies (3.11), and if for any $j \in C^{1,2}(S)$ and any $(t_0,x_0) \in S$ such that $j \ge J$ and $j(t_0,x_0) = J(t_0,x_0)$, then

(3.12)  $\min\Big\{ -\Big( \frac{\partial}{\partial t} + L_B \Big) j(t_0,x_0),\; j(t_0,x_0) - G_n(t_0,x_0) \Big\} \le 0 .$

2. $J$ is a viscosity supersolution of (3.10) and (3.11), that is, $J$ satisfies (3.11), and if for any $j \in C^{1,2}(S)$ and any $(t_0,x_0) \in S$ such that $j \le J$ and $j(t_0,x_0) = J(t_0,x_0)$, then

(3.13)  $\min\Big\{ -\Big( \frac{\partial}{\partial t} + L_B \Big) j(t_0,x_0),\; j(t_0,x_0) - G_n(t_0,x_0) \Big\} \ge 0 .$

The next step is to verify that $J_n$ is a viscosity solution of the variational inequality.

Subsolution: Since $J_n(t,x) = G_n(t,x)$ for $t_n \le t < 1$ and $x = \pm\varepsilon$, and $(\partial/\partial t + L_B) J_n(t,x) = 0$ otherwise, it is clear that (3.12) is satisfied. Thus $J_n$ is a subsolution.

Supersolution: Let $j \in C^{1,2}(S)$ be given such that $j \le J_n$ and $j(t_0,x_0) = J_n(t_0,x_0)$. As above, if $t_0 < t_n$ or $|x_0| \ne \varepsilon$, it is trivial that (3.13) holds. If $t_n < t_0 < 1$ and $|x_0| = \varepsilon$, there does not exist any $C^{1,2}$ function $j$ such that $j(t_0, \varepsilon) = J_n(t_0, \varepsilon)$ and $j \le J_n$, due to

$\frac{\partial J_n}{\partial x}(t_0, x)\Big|_{x = \varepsilon-} > \frac{\partial J_n}{\partial x}(t_0, x)\Big|_{x = \varepsilon+} .$

Hence (3.13) holds trivially. If $(t_0,x_0) = (t_n, \varepsilon)$, then recall that $J_n$ is continuously differentiable at $(t_n, \varepsilon)$. The function $J_n - j$ has a local minimum at $(t_n, \varepsilon)$, so by the first-order conditions

$\frac{\partial j}{\partial t}(t_n, \varepsilon) = \frac{\partial J_n}{\partial t}(t_n, \varepsilon)$  and  $\frac{\partial j}{\partial x}(t_n, \varepsilon) = \frac{\partial J_n}{\partial x}(t_n, \varepsilon)$

and by the second-order conditions

$\lim_{x \to \varepsilon} L_B J_n(t_n, x) \ge L_B\, j(t_n, \varepsilon).$

Using the fact that $(\partial/\partial t + L_B) J_n(t_n, x) = 0$ for $|x| \ne \varepsilon$, it follows immediately that $(\partial/\partial t + L_B)\, j(t_n, \varepsilon) \le 0$ and (3.13) is fulfilled. Hence $J_n$ is a supersolution.

The conclusion is that $J_n$ is the viscosity solution of the variational inequality (3.10) and (3.11). The verification theorem in [5, Theorem 3.1] then provides that the candidate $J_n$ is the value function of the optimal stopping problem (3.8). Moreover, $\tau_n$ is the optimal stopping time.

Finally, let $t = 0$ and $x = 0$. Notice that $G_n( \tau_n, B_{\tau_n} ) \to F_{1-\tau_*}(\varepsilon)\, 1_{[0,\varepsilon]}(|B_{\tau_*}|)$ P-a.s., and by monotone convergence $J_n(0,0) \to W_*(0,0)$ for $n \to \infty$. It is clear that

$W(0,0) \le W_n(0,0) = \mathsf{E}\, G_n( \tau_n, B_{\tau_n} )$

for all $n$. Letting $n \to \infty$, it follows that

$W(0,0) \le W_*(0,0) = \mathsf{E}\Big[ F_{1-\tau_*}(\varepsilon)\, 1_{[0,\varepsilon]}(|B_{\tau_*}|) \Big].$

The conclusion is that the candidate $W_*(0,0)$ is the value function in the optimal stopping problem (3.1) at $t = x = 0$, and $\tau_*$ in (3.7) is the optimal strategy.
Since $W_\varepsilon = W(0,0)$, transforming the optimal strategy back to the initial problem (1.2) proves the theorem.

Example 3.3. Numerical computations of the value function as a function of the error $\varepsilon$ are presented in Figure 4. From the figure one sees that if, for example, $\varepsilon = 0.75$, then

$W_{0.75} = \sup_{0 \le \tau \le 1} \mathsf{P}( S_1 - B_\tau \le 0.75 )$

and the optimal strategy is illustrated in Figure 3 with $t_{0.75} = 1 - (0.75/\varepsilon^*)^2$. Conversely, if the prediction of the ultimate maximum is to be made with a given probability, say 95% (that is, $W_\varepsilon = 0.95$), then the maximal error is $\varepsilon \approx 1.02$ and the optimal strategy is illustrated in Figure 3 with $t_\varepsilon \approx 0.09$.
Figure 4. A drawing of the value function $W_\varepsilon$ as a function of the error $\varepsilon$.

References

[1] Abramowitz, M. and Stegun, I. A. (1964). Handbook of Mathematical Functions. National Bureau of Standards.
[2] Dubins, L. E., Shepp, L. A. and Shiryaev, A. N. (1993). Optimal stopping rules and maximal inequalities for Bessel processes. Theory Probab. Appl.
[3] Graversen, S. E., Peskir, G. and Shiryaev, A. N. (2001). Stopping Brownian motion without anticipation as close as possible to its ultimate maximum. Theory Probab. Appl.
[4] Jacka, S. D. (1991). Optimal stopping and best constants for Doob-like inequalities I: The case p = 1. Ann. Probab.
[5] Øksendal, B. and Reikvam, K. (1998). Viscosity solutions of optimal stopping problems. Stochastics Stochastics Rep.
[6] Pedersen, J. L. and Peskir, G. (2000). Solving non-linear optimal stopping problems by the method of time-change. Stochastic Anal. Appl.
[7] Revuz, D. and Yor, M. (1999). Continuous Martingales and Brownian Motion. Third edition. Springer.
[8] Shiryaev, A. N. (1978). Optimal Stopping Rules. Springer.

Jesper Lund Pedersen
Department of Applied Mathematics and Statistics, University of Copenhagen
Universitetsparken 5, DK-2100 Copenhagen, Denmark
jesper@stat.ku.dk
More informationOptimal Stopping and Maximal Inequalities for Poisson Processes
Optimal Stopping and Maximal Inequalities for Poisson Processes D.O. Kramkov 1 E. Mordecki 2 September 10, 2002 1 Steklov Mathematical Institute, Moscow, Russia 2 Universidad de la República, Montevideo,
More informationLimit theorems for multipower variation in the presence of jumps
Limit theorems for multipower variation in the presence of jumps Ole E. Barndorff-Nielsen Department of Mathematical Sciences, University of Aarhus, Ny Munkegade, DK-8 Aarhus C, Denmark oebn@imf.au.dk
More informationOn the submartingale / supermartingale property of diffusions in natural scale
On the submartingale / supermartingale property of diffusions in natural scale Alexander Gushchin Mikhail Urusov Mihail Zervos November 13, 214 Abstract Kotani 5 has characterised the martingale property
More informationSolution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have
362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications
More informationLecture 22 Girsanov s Theorem
Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n
More informationLecture 12. F o s, (1.1) F t := s>t
Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let
More informationIntroduction to Random Diffusions
Introduction to Random Diffusions The main reason to study random diffusions is that this class of processes combines two key features of modern probability theory. On the one hand they are semi-martingales
More informationBrownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539
Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory
More informationOn the principle of smooth fit in optimal stopping problems
1 On the principle of smooth fit in optimal stopping problems Amir Aliev Moscow State University, Faculty of Mechanics and Mathematics, Department of Probability Theory, 119992, Moscow, Russia. Keywords:
More informationA note on the growth rate in the Fazekas Klesov general law of large numbers and on the weak law of large numbers for tail series
Publ. Math. Debrecen 73/1-2 2008), 1 10 A note on the growth rate in the Fazekas Klesov general law of large numbers and on the weak law of large numbers for tail series By SOO HAK SUNG Taejon), TIEN-CHUNG
More informationExercises in stochastic analysis
Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with
More information1. Stochastic Processes and filtrations
1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S
More informationConvergence at first and second order of some approximations of stochastic integrals
Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456
More informationMAXIMAL COUPLING OF EUCLIDEAN BROWNIAN MOTIONS
MAXIMAL COUPLING OF EUCLIDEAN BOWNIAN MOTIONS ELTON P. HSU AND KAL-THEODO STUM ABSTACT. We prove that the mirror coupling is the unique maximal Markovian coupling of two Euclidean Brownian motions starting
More informationApplications of Ito s Formula
CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale
More informationLECTURE 2: LOCAL TIME FOR BROWNIAN MOTION
LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in
More informationON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME
ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems
More informationThe Pedestrian s Guide to Local Time
The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments
More informationConstrained Optimal Stopping Problems
University of Bath SAMBa EPSRC CDT Thesis Formulation Report For the Degree of MRes in Statistical Applied Mathematics Author: Benjamin A. Robinson Supervisor: Alexander M. G. Cox September 9, 016 Abstract
More informationSTOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES
STOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES ERHAN BAYRAKTAR AND MIHAI SÎRBU Abstract. We adapt the Stochastic Perron s
More informationSquared Bessel Process with Delay
Southern Illinois University Carbondale OpenSIUC Articles and Preprints Department of Mathematics 216 Squared Bessel Process with Delay Harry Randolph Hughes Southern Illinois University Carbondale, hrhughes@siu.edu
More informationSome SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen
Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text
More informationarxiv: v1 [math.pr] 11 Jan 2013
Last-Hitting Times and Williams Decomposition of the Bessel Process of Dimension 3 at its Ultimate Minimum arxiv:131.2527v1 [math.pr] 11 Jan 213 F. Thomas Bruss and Marc Yor Université Libre de Bruxelles
More informationOn the American Option Problem
Math. Finance, Vol. 5, No., 25, (69 8) Research Report No. 43, 22, Dept. Theoret. Statist. Aarhus On the American Option Problem GORAN PESKIR 3 We show how the change-of-variable formula with local time
More informationHarmonic Functions and Brownian motion
Harmonic Functions and Brownian motion Steven P. Lalley April 25, 211 1 Dynkin s Formula Denote by W t = (W 1 t, W 2 t,..., W d t ) a standard d dimensional Wiener process on (Ω, F, P ), and let F = (F
More informationOn the quantiles of the Brownian motion and their hitting times.
On the quantiles of the Brownian motion and their hitting times. Angelos Dassios London School of Economics May 23 Abstract The distribution of the α-quantile of a Brownian motion on an interval [, t]
More informationOn pathwise stochastic integration
On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic
More informationWorst Case Portfolio Optimization and HJB-Systems
Worst Case Portfolio Optimization and HJB-Systems Ralf Korn and Mogens Steffensen Abstract We formulate a portfolio optimization problem as a game where the investor chooses a portfolio and his opponent,
More informationDudley s representation theorem in infinite dimensions and weak characterizations of stochastic integrability
Dudley s representation theorem in infinite dimensions and weak characterizations of stochastic integrability Mark Veraar Delft University of Technology London, March 31th, 214 Joint work with Martin Ondreját
More informationOn a class of stochastic differential equations in a financial network model
1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University
More information1 Brownian Local Time
1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =
More informationApplications of Optimal Stopping and Stochastic Control
Applications of and Stochastic Control YRM Warwick 15 April, 2011 Applications of and Some problems Some technology Some problems The secretary problem Bayesian sequential hypothesis testing the multi-armed
More informationSolutions to the Exercises in Stochastic Analysis
Solutions to the Exercises in Stochastic Analysis Lecturer: Xue-Mei Li 1 Problem Sheet 1 In these solution I avoid using conditional expectations. But do try to give alternative proofs once we learnt conditional
More informationOn continuous time contract theory
Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem
More informationOn Distributions Associated with the Generalized Lévy s Stochastic Area Formula
On Distributions Associated with the Generalized Lévy s Stochastic Area Formula Raouf Ghomrasni Abstract A closed-form ression is obtained for the conditional probability distribution of t R s ds given
More informationSTOPPING AT THE MAXIMUM OF GEOMETRIC BROWNIAN MOTION WHEN SIGNALS ARE RECEIVED
J. Appl. Prob. 42, 826 838 (25) Printed in Israel Applied Probability Trust 25 STOPPING AT THE MAXIMUM OF GEOMETRIC BROWNIAN MOTION WHEN SIGNALS ARE RECEIVED X. GUO, Cornell University J. LIU, Yale University
More informationThe Cameron-Martin-Girsanov (CMG) Theorem
The Cameron-Martin-Girsanov (CMG) Theorem There are many versions of the CMG Theorem. In some sense, there are many CMG Theorems. The first version appeared in ] in 944. Here we present a standard version,
More informationMA8109 Stochastic Processes in Systems Theory Autumn 2013
Norwegian University of Science and Technology Department of Mathematical Sciences MA819 Stochastic Processes in Systems Theory Autumn 213 1 MA819 Exam 23, problem 3b This is a linear equation of the form
More informationLecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.
Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal
More informationMulti-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form
Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Yipeng Yang * Under the supervision of Dr. Michael Taksar Department of Mathematics University of Missouri-Columbia Oct
More informationThe Wiener Disorder Problem with Finite Horizon
Stochastic Processes and their Applications 26 11612 177 1791 The Wiener Disorder Problem with Finite Horizon P. V. Gapeev and G. Peskir The Wiener disorder problem seeks to determine a stopping time which
More informationfor all f satisfying E[ f(x) ] <.
. Let (Ω, F, P ) be a probability space and D be a sub-σ-algebra of F. An (H, H)-valued random variable X is independent of D if and only if P ({X Γ} D) = P {X Γ}P (D) for all Γ H and D D. Prove that if
More informationMan Kyu Im*, Un Cig Ji **, and Jae Hee Kim ***
JOURNAL OF THE CHUNGCHEONG MATHEMATICAL SOCIETY Volume 19, No. 4, December 26 GIRSANOV THEOREM FOR GAUSSIAN PROCESS WITH INDEPENDENT INCREMENTS Man Kyu Im*, Un Cig Ji **, and Jae Hee Kim *** Abstract.
More informationOptimal Stopping Problems and American Options
Optimal Stopping Problems and American Options Nadia Uys A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master
More informationMAXIMUM PROCESS PROBLEMS IN OPTIMAL CONTROL THEORY
MAXIMUM PROCESS PROBLEMS IN OPTIMAL CONTROL THEORY GORAN PESKIR Receied 16 January 24 and in reised form 23 June 24 Gien a standard Brownian motion B t ) t and the equation of motion dx t = t dt+ 2dBt,wesetS
More informationA new approach for investment performance measurement. 3rd WCMF, Santa Barbara November 2009
A new approach for investment performance measurement 3rd WCMF, Santa Barbara November 2009 Thaleia Zariphopoulou University of Oxford, Oxford-Man Institute and The University of Texas at Austin 1 Performance
More information(A n + B n + 1) A n + B n
344 Problem Hints and Solutions Solution for Problem 2.10. To calculate E(M n+1 F n ), first note that M n+1 is equal to (A n +1)/(A n +B n +1) with probability M n = A n /(A n +B n ) and M n+1 equals
More informationLectures in Mathematics ETH Zürich Department of Mathematics Research Institute of Mathematics. Managing Editor: Michael Struwe
Lectures in Mathematics ETH Zürich Department of Mathematics Research Institute of Mathematics Managing Editor: Michael Struwe Goran Peskir Albert Shiryaev Optimal Stopping and Free-Boundary Problems Birkhäuser
More informationOptimal Stopping and Applications
Optimal Stopping and Applications Alex Cox March 16, 2009 Abstract These notes are intended to accompany a Graduate course on Optimal stopping, and in places are a bit brief. They follow the book Optimal
More informationConstrained Dynamic Optimality and Binomial Terminal Wealth
To appear in SIAM J. Control Optim. Constrained Dynamic Optimality and Binomial Terminal Wealth J. L. Pedersen & G. Peskir This version: 13 March 018 First version: 31 December 015 Research Report No.
More informationOptimal Stopping Games for Markov Processes
SIAM J. Control Optim. Vol. 47, No. 2, 2008, (684-702) Research Report No. 15, 2006, Probab. Statist. Group Manchester (21 pp) Optimal Stopping Games for Markov Processes E. Ekström & G. Peskir Let X =
More informationPREDICTABLE REPRESENTATION PROPERTY OF SOME HILBERTIAN MARTINGALES. 1. Introduction.
Acta Math. Univ. Comenianae Vol. LXXVII, 1(28), pp. 123 128 123 PREDICTABLE REPRESENTATION PROPERTY OF SOME HILBERTIAN MARTINGALES M. EL KADIRI Abstract. We prove as for the real case that a martingale
More informationStochastic integral. Introduction. Ito integral. References. Appendices Stochastic Calculus I. Geneviève Gauthier.
Ito 8-646-8 Calculus I Geneviève Gauthier HEC Montréal Riemann Ito The Ito The theories of stochastic and stochastic di erential equations have initially been developed by Kiyosi Ito around 194 (one of
More information1 Basics of probability theory
Examples of Stochastic Optimization Problems In this chapter, we will give examples of three types of stochastic optimization problems, that is, optimal stopping, total expected (discounted) cost problem,
More informationChange-point models and performance measures for sequential change detection
Change-point models and performance measures for sequential change detection Department of Electrical and Computer Engineering, University of Patras, 26500 Rion, Greece moustaki@upatras.gr George V. Moustakides
More informationErnesto Mordecki 1. Lecture III. PASI - Guanajuato - June 2010
Optimal stopping for Hunt and Lévy processes Ernesto Mordecki 1 Lecture III. PASI - Guanajuato - June 2010 1Joint work with Paavo Salminen (Åbo, Finland) 1 Plan of the talk 1. Motivation: from Finance
More informationGARCH processes continuous counterparts (Part 2)
GARCH processes continuous counterparts (Part 2) Alexander Lindner Centre of Mathematical Sciences Technical University of Munich D 85747 Garching Germany lindner@ma.tum.de http://www-m1.ma.tum.de/m4/pers/lindner/
More informationCOVARIANCE IDENTITIES AND MIXING OF RANDOM TRANSFORMATIONS ON THE WIENER SPACE
Communications on Stochastic Analysis Vol. 4, No. 3 (21) 299-39 Serials Publications www.serialspublications.com COVARIANCE IDENTITIES AND MIXING OF RANDOM TRANSFORMATIONS ON THE WIENER SPACE NICOLAS PRIVAULT
More informationMaximal Coupling of Euclidean Brownian Motions
Commun Math Stat 2013) 1:93 104 DOI 10.1007/s40304-013-0007-5 OIGINAL ATICLE Maximal Coupling of Euclidean Brownian Motions Elton P. Hsu Karl-Theodor Sturm eceived: 1 March 2013 / Accepted: 6 March 2013
More informationLarge time behavior of reaction-diffusion equations with Bessel generators
Large time behavior of reaction-diffusion equations with Bessel generators José Alfredo López-Mimbela Nicolas Privault Abstract We investigate explosion in finite time of one-dimensional semilinear equations
More informationStochastic Calculus (Lecture #3)
Stochastic Calculus (Lecture #3) Siegfried Hörmann Université libre de Bruxelles (ULB) Spring 2014 Outline of the course 1. Stochastic processes in continuous time. 2. Brownian motion. 3. Itô integral:
More informationAN EXTREME-VALUE ANALYSIS OF THE LIL FOR BROWNIAN MOTION 1. INTRODUCTION
AN EXTREME-VALUE ANALYSIS OF THE LIL FOR BROWNIAN MOTION DAVAR KHOSHNEVISAN, DAVID A. LEVIN, AND ZHAN SHI ABSTRACT. We present an extreme-value analysis of the classical law of the iterated logarithm LIL
More informationWeak convergence and large deviation theory
First Prev Next Go To Go Back Full Screen Close Quit 1 Weak convergence and large deviation theory Large deviation principle Convergence in distribution The Bryc-Varadhan theorem Tightness and Prohorov
More informationPavel V. Gapeev, Neofytos Rodosthenous On the drawdowns and drawups in diffusion-type models with running maxima and minima
Pavel V. Gapeev, Neofytos Rodosthenous On the drawdowns and drawups in diffusion-type models with running maxima and minima Article (Accepted version) (Refereed) Original citation: Gapeev, Pavel V. and
More informationAn approach to the Biharmonic pseudo process by a Random walk
J. Math. Kyoto Univ. (JMKYAZ) - (), An approach to the Biharmonic pseudo process by a Random walk By Sadao Sato. Introduction We consider the partial differential equation u t 4 u 8 x 4 (.) Here 4 x 4
More informationFundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales
Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time
More informationResearch Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting Switching
Discrete Dynamics in Nature and Society Volume 211, Article ID 549651, 12 pages doi:1.1155/211/549651 Research Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting
More informationJump-type Levy Processes
Jump-type Levy Processes Ernst Eberlein Handbook of Financial Time Series Outline Table of contents Probabilistic Structure of Levy Processes Levy process Levy-Ito decomposition Jump part Probabilistic
More informationROOT S BARRIER, VISCOSITY SOLUTIONS OF OBSTACLE PROBLEMS AND REFLECTED FBSDES
OOT S BAIE, VISCOSITY SOLUTIONS OF OBSTACLE POBLEMS AND EFLECTED FBSDES HAALD OBEHAUSE AND GONÇALO DOS EIS Abstract. Following work of Dupire 005, Carr Lee 010 and Cox Wang 011 on connections between oot
More informationThe equivalence of two tax processes
The equivalence of two ta processes Dalal Al Ghanim Ronnie Loeffen Ale Watson 6th November 218 arxiv:1811.1664v1 [math.pr] 5 Nov 218 We introduce two models of taation, the latent and natural ta processes,
More informationStochastic Differential Equations.
Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)
More informationStochastic optimal control with rough paths
Stochastic optimal control with rough paths Paul Gassiat TU Berlin Stochastic processes and their statistics in Finance, Okinawa, October 28, 2013 Joint work with Joscha Diehl and Peter Friz Introduction
More information