The Wiener Sequential Testing Problem with Finite Horizon
Research Report No. 434, 2003, Dept. Theoret. Statist. Aarhus (18 pp) P. V. Gapeev and G. Peskir. We present a solution of the Bayesian problem of sequential testing of two simple hypotheses about the mean value of an observed Wiener process on a time interval with finite horizon. The method of proof is based on reducing the initial optimal stopping problem to a parabolic free-boundary problem where the continuation region is determined by two continuous curved boundaries. By means of the change-of-variable formula containing the local time of a diffusion process on curves, we show that the optimal boundaries can be characterized as the unique solution of a coupled system of two nonlinear integral equations.

1. Introduction

The problem of sequential testing of two simple hypotheses about the mean value of an observed Wiener process seeks to determine, as soon as possible and with minimal error probability, which of two given values is the true mean. The problem admits two different formulations (cf. Wald []). In the Bayesian formulation it is assumed that the unknown mean has a given distribution; in the variational formulation no probabilistic assumption about the unknown mean is made a priori. In this paper we study only the Bayesian formulation. The history of the problem is long and we mention only a few points, starting with Wald and Wolfowitz [1]-[], who used the Bayesian approach to prove the optimality of the sequential probability ratio test (SPRT) in the variational problem for i.i.d. sequences of observations. Dvoretzky, Kiefer and Wolfowitz [1] stated without proof that if the continuous-time log-likelihood ratio process has stationary independent increments, then the SPRT remains optimal in the variational problem.
Mikhalevich [9] and Shiryaev [17] (see also [18; Chapter IV]) derived an explicit solution of the Bayesian and variational problems for a Wiener process with infinite horizon by reducing the initial optimal stopping problem to a free-boundary problem for a differential operator. A complete proof of the statement from [1] (under some mild assumptions) was given by Irle and Schmitz [4]. An explicit solution of the Bayesian and variational problems for a Poisson process with infinite horizon was derived in [15] by reducing the initial

Footnote: Network in Mathematical Physics and Stochastics (funded by the Danish National Research Foundation). Mathematics Subject Classification. Primary 60G40, 35R35, 45G15. Secondary 62C10, 62L15, 60J60. Key words and phrases: sequential testing, Wiener process, optimal stopping, finite horizon, parabolic free-boundary problem, a coupled system of nonlinear Volterra integral equations of the second kind, curved boundary, a change-of-variable formula with local time on curves.
optimal stopping problem to a free-boundary problem for a differential-difference operator. The main aim of the present paper is to derive a solution of the Bayesian problem for a Wiener process with finite horizon. It is known that optimal stopping problems for Markov processes with finite horizon are inherently two-dimensional and thus analytically more difficult than those with infinite horizon. A standard approach for handling such a problem is to formulate a free-boundary problem for the (parabolic) operator associated with the (continuous) Markov process (see e.g. [8], [3], [19], [5], [1]). Since solutions to such free-boundary problems are rarely known explicitly, the question often reduces to proving the existence and uniqueness of a solution to the free-boundary problem, which then leads to the optimal stopping boundary and the value function of the optimal stopping problem. In some cases the optimal stopping boundary has been characterized as a unique solution of a system of (at least) countably many nonlinear integral equations (see e.g. [5; Theorem 4.3]). A method of linearization was suggested in [11] with the aim of proving that a single equation from such a system may suffice to characterize the optimal stopping boundary uniquely. A complete proof of the latter fact in the case of a specific optimal stopping problem was given in [13] (see also [14]). In the present paper we reduce the initial Bayesian problem to a finite-horizon optimal stopping problem for a diffusion process and a non-smooth gain function, where the continuation region is determined by two continuous curved boundaries. In order to find an analytic expression for the boundaries we formulate an equivalent parabolic free-boundary problem for the infinitesimal operator of the strong Markov a posteriori probability process.
By means of the method of proof proposed in [11] and [13], and using the change-of-variable formula from [1], we show that the optimal stopping boundaries can be uniquely determined from a coupled system of nonlinear Volterra integral equations of the second kind. This also leads to an explicit formula for the value (risk) function in terms of the optimal stopping boundaries. The main result of the paper is stated in Theorem 2.1. The optimal sequential procedure in the initial Bayesian problem is displayed more explicitly in Remark 2.2. A simple numerical method for calculating the optimal boundaries is presented in Remark 2.3.

2. Solution of the Bayesian problem

In the Bayesian formulation of the problem with finite horizon (see [18; Chapter IV] for the infinite-horizon case) it is assumed that we observe a trajectory of the Wiener process X = (X_t)_{0≤t≤T} with drift θμ, where the random variable θ may be 1 or 0 with probability π or 1−π, respectively.

2.1. For a precise probabilistic formulation of the Bayesian problem it is convenient to assume that all our considerations take place on a probability space (Ω, F, P_π) where the probability measure P_π has the following structure:

(2.1)  P_π = π P_1 + (1−π) P_0

for π ∈ [0, 1]. Let θ be a random variable taking the two values 1 and 0 with probabilities P_π[θ = 1] = π and P_π[θ = 0] = 1−π, and let W = (W_t)_{0≤t≤T} be a standard Wiener process started at zero under P_π. It is assumed that θ and W are independent.
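To make the setup concrete, here is a minimal simulation sketch of the model just described: θ is drawn with P_π[θ = 1] = π independently of W, and the observed process X_t = θμt + σW_t (introduced in the next paragraph) is sampled on a time grid. All parameter values and the grid size below are illustrative assumptions, not quantities from the paper:

```python
import math
import random

def sample_path(pi, mu=1.0, sigma=1.0, T=1.0, n=1000, rng=random):
    """Draw theta ~ Bernoulli(pi) independently of W, then sample
    X_t = theta*mu*t + sigma*W_t on an equidistant grid of n steps."""
    theta = 1 if rng.random() < pi else 0
    dt = T / n
    x, path = 0.0, [0.0]
    for _ in range(n):
        # increment of X over one step: drift (if theta = 1) plus Gaussian noise
        x += theta * mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return theta, path
```

Averaging any functional of the path over many such draws realizes the decomposition P_π = πP_1 + (1−π)P_0 empirically.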
It is further assumed that we observe a process X = (X_t)_{0≤t≤T} of the form:

(2.2)  X_t = θμt + σW_t

where μ ≠ 0 and σ² > 0 are given and fixed. Thus P_π[X ∈ · | θ = i] = P_i[X ∈ ·] is the distribution law of a Wiener process with drift iμ and diffusion coefficient σ² > 0 for i = 0, 1, so that π and 1−π play the role of a priori probabilities of the statistical hypotheses:

(2.3)  H_1 : θ = 1  and  H_0 : θ = 0

respectively. Based on continuous observation of X, our task is to test sequentially the hypotheses H_1 and H_0 with minimal loss. For this, we consider a sequential decision rule (τ, d), where τ is a stopping time of the observed process X (i.e. a stopping time with respect to the natural filtration F_t^X = σ(X_s : 0 ≤ s ≤ t) generated by X for 0 ≤ t ≤ T), and d is an F_τ^X-measurable random variable taking values 0 and 1. After stopping the observation at time τ, the terminal decision function d indicates which hypothesis should be accepted according to the following rule: if d = 1 we accept H_1, and if d = 0 we accept H_0. The problem then consists of computing the risk function:

(2.4)  V(π) = inf_{(τ,d)} E_π[ τ + a I(d = 0, θ = 1) + b I(d = 1, θ = 0) ]

and finding the optimal decision rule (τ*, d*) at which the infimum in (2.4) is attained. Here E_π[τ] is the average loss due to the cost of observation, and a P_π[d = 0, θ = 1] + b P_π[d = 1, θ = 0] is the average loss due to a wrong terminal decision, where a > 0 and b > 0 are given constants.

2.2. By means of standard arguments (see [18]) one can reduce the Bayesian problem (2.4) to the optimal stopping problem:

(2.5)  V(π) = inf_{0≤τ≤T} E_π[ τ + aπ_τ ∧ b(1−π_τ) ]

for the a posteriori probability process π_t = P_π[θ = 1 | F_t^X] for 0 ≤ t ≤ T with P_π[π_0 = π] = 1. Setting c = b/(a+b), the optimal decision function is then given by d = 1 if π_τ ≥ c and d = 0 if π_τ < c.

2.3.
It can be shown (see [18]) that the likelihood ratio process (φ_t)_{0≤t≤T}, defined as the Radon-Nikodym derivative:

(2.6)  φ_t = d(P_1 | F_t^X) / d(P_0 | F_t^X)

admits the following representation:

(2.7)  φ_t = exp( (μ/σ²)( X_t − (μ/2) t ) )

while the a posteriori probability process (π_t)_{0≤t≤T} can be expressed as:

(2.8)  π_t = ( (π/(1−π)) φ_t ) / ( 1 + (π/(1−π)) φ_t )
and hence solves the stochastic differential equation:

(2.9)  dπ_t = (μ/σ) π_t (1−π_t) dW̄_t   (π_0 = π)

where the innovation process (W̄_t)_{0≤t≤T}, defined by:

(2.10)  W̄_t = (1/σ) ( X_t − μ ∫_0^t π_s ds )

is a standard Wiener process (see also [7; Chapter IX]). Using (2.7) and (2.8) it can be verified that (π_t)_{0≤t≤T} is a time-homogeneous (strong) Markov process under P_π with respect to its natural filtration. As the latter clearly coincides with (F_t^X)_{0≤t≤T}, it is also clear that the infimum in (2.5) can equivalently be taken over all stopping times of (π_t)_{0≤t≤T}.

2.4. In order to solve the problem (2.5), let us consider the extended optimal stopping problem for the Markov process (t, π_t)_{0≤t≤T} given by:

(2.11)  V(t, π) = inf_{0≤τ≤T−t} E_{t,π}[ G(t+τ, π_{t+τ}) ]

where P_{t,π}[π_t = π] = 1, i.e. P_{t,π} is a probability measure under which the diffusion process (π_{t+s})_{0≤s≤T−t} solving (2.9) starts at π, the infimum in (2.11) is taken over all stopping times τ of (π_{t+s})_{0≤s≤T−t}, and we set G(t, π) = t + aπ ∧ b(1−π) for (t, π) ∈ [0, T] × [0, 1]. Since G is bounded and continuous on [0, T] × [0, 1], it is possible to apply a version of Theorem 3 in [18; page 17] for a finite time horizon and by statement (2) of that theorem conclude that an optimal stopping time exists in (2.11).

2.5. Let us now determine the structure of the optimal stopping time in the problem (2.11).

(i) It follows from (2.9) that the scale function of (π_t)_{t≥0} is given by S(x) = x for x ∈ [0, 1] and the speed measure of (π_t)_{t≥0} is given by m(dx) = (2σ²/μ²) dx/(x²(1−x)²) for x ∈ (0, 1). Hence the Green function of (π_t)_{t≥0} on [π_0, π_1] ⊂ (0, 1) is given by G_{π_0,π_1}(x, y) = (π_1−x)(y−π_0)/(π_1−π_0) for π_0 ≤ y ≤ x and G_{π_0,π_1}(x, y) = (π_1−y)(x−π_0)/(π_1−π_0) for x ≤ y ≤ π_1. Set H(π) = aπ ∧ b(1−π) for π ∈ [0, 1] and let d* = H(c). Take ε ∈ (0, d*) and denote by π_0 = π_0(ε) and π_1 = π_1(ε) the unique points 0 < π_0 < c < π_1 < 1 satisfying H(π_0) = H(π_1) = d* − ε. Let σ_ε = inf{ t > 0 : π_t ∉ (π_0, π_1) } and set σ_ε^T = σ_ε ∧ T.
Then σ_ε and σ_ε^T are stopping times and it is easily verified that:

(2.12)  E_c[σ_ε^T] ≤ E_c[σ_ε] = ∫_{π_0}^{π_1} G_{π_0,π_1}(c, y) m(dy) ≤ Kε²

for some K > 0 large enough (not depending on ε). Similarly, we find that:

(2.13)  E_c[H(π_{σ_ε^T})] = E_c[ H(π_{σ_ε}) I(σ_ε < T) ] + E_c[ H(π_T) I(σ_ε ≥ T) ] ≤ d* − ε + d* P_c[σ_ε > T] ≤ d* − ε + (d*/T) E_c[σ_ε] ≤ d* − ε + Lε²

where L = d*K/T. Combining (2.12) and (2.13) we see that:

(2.14)  E_c[ G(σ_ε^T, π_{σ_ε^T}) ] = E_c[ σ_ε^T + H(π_{σ_ε^T}) ] ≤ d* − ε + (K+L)ε²
for all ε ∈ (0, d*). Choosing ε > 0 in (2.14) small enough, we see that E_c[G(σ_ε^T, π_{σ_ε^T})] < d* = G(0, c). Using the fact that G(t, π) = t + H(π) is linear in t, and that T > 0 above is arbitrary, this shows that it is never optimal to stop in (2.11) when π_{t+s} = c for 0 ≤ s < T−t. In other words, all points (t, c) for 0 ≤ t < T belong to the continuation region:

(2.15)  C = { (t, π) ∈ [0, T) × [0, 1] : V(t, π) < G(t, π) }.

(ii) Recalling the solution to the problem (2.5) in the case of infinite horizon, where the stopping time τ̃ = inf{ t > 0 : π_t ∉ (A, B) } is optimal and 0 < A < c < B < 1 are uniquely determined from the system (4.85) in [18; page 185], we see that all points (t, π) for 0 ≤ t ≤ T with either 0 ≤ π ≤ A or B ≤ π ≤ 1 belong to the stopping region. Moreover, since π ↦ V(t, π) with 0 ≤ t ≤ T given and fixed is concave on [0, 1] (this is easily deduced using the same arguments as in [6; page 15] or [18; page 168]), it follows directly from the previous two conclusions about the continuation and stopping regions that there exist functions g_0 and g_1 satisfying 0 < A ≤ g_0(t) < c < g_1(t) ≤ B < 1 for all 0 ≤ t < T such that the continuation region is an open set of the form:

(2.16)  C = { (t, π) ∈ [0, T) × [0, 1] : π ∈ (g_0(t), g_1(t)) }

and the stopping region is the closure of the set:

(2.17)  D = { (t, π) ∈ [0, T) × [0, 1] : π ∈ [0, g_0(t)) ∪ (g_1(t), 1] }.

(Below we will show that V is continuous, so that C is indeed open. We will also see that g_0(T) = g_1(T) = c.)

(iii) Since the problem (2.11) is time-homogeneous, in the sense that G(t, π) = t + H(π) is linear in t and H depends on π only, it follows that the map t ↦ V(t, π) − t is increasing on [0, T]. Hence if (t, π) belongs to C for some π ∈ (0, 1) and we take any other 0 ≤ t' < t ≤ T, then V(t', π) − G(t', π) = V(t', π) − t' − H(π) ≤ V(t, π) − t − H(π) = V(t, π) − G(t, π) < 0, showing that (t', π) belongs to C as well. From this we may conclude in (2.16)-(2.17) that the boundary t ↦ g_0(t) is increasing and the boundary t ↦ g_1(t) is decreasing on [0, T].
(iv) Let us finally observe that the value function V from (2.11) and the boundaries g_0 and g_1 from (2.16)-(2.17) also depend on T, and let us denote them here by V^T, g_0^T and g_1^T, respectively. Using the fact that T ↦ V^T(t, π) is a decreasing function on [t, ∞) and V^T(t, π) = G(t, π) for all π ∈ [0, g_0^T(t)] ∪ [g_1^T(t), 1], we conclude that if T < T', then 0 ≤ g_0^{T'}(t) ≤ g_0^T(t) < c < g_1^T(t) ≤ g_1^{T'}(t) ≤ 1 for all 0 ≤ t ≤ T. Letting T' in the previous expression go to ∞, we get that 0 < A ≤ g_0^T(t) < c < g_1^T(t) ≤ B < 1 with A = lim_{T'→∞} g_0^{T'}(t) and B = lim_{T'→∞} g_1^{T'}(t) for all t ≥ 0, where A and B are the optimal stopping points of the infinite-horizon problem referred to above.

2.6. Let us now show that the value function (t, π) ↦ V(t, π) is continuous on [0, T] × [0, 1]. For this it is enough to prove that:

(2.18)  π ↦ V(t_0, π) is continuous at π_0

(2.19)  t ↦ V(t, π) is continuous at t_0 uniformly over π ∈ [π_0 − δ, π_0 + δ]
6 for each (t, π ) [, T ] [, 1] with some δ > small enough (it may depend on π ). Since (.18) follows by the fact that π V (t, π) is concave on [, 1], it remains to establish (.19). For this, let us fix arbitrary t 1 < t T and < π < 1, and let τ 1 = τ (t 1, π) denote the optimal stopping time for V (t 1, π). Set τ = τ 1 (T t ) and note since t V (t, π) is increasing on [, T ] and τ τ 1 that we have: (.) V (t, π) V (t 1, π) E π [(t + τ ) + H(π t +τ )] E π [(t 1 + τ 1 ) + H(π t1 +τ 1 )] (t t 1 ) + E π [H(π t +τ ) H(π t1 +τ 1 )] where we recall that H(π) = aπ b(1 π) for π [, 1]. Observe further that: (.1) E π [H(π t +τ ) H(π t1 +τ 1 )] = 1 i= 1 + ( 1) i (1 π) E i [h(ϕ τ ) h(ϕ τ1 )] where for each π, 1 given and fixed the function h is defined by: (( ) π / ( (.) h(x) = H 1 π x 1 + π )) 1 π x for all x >. Then for any < x 1 < x given and fixed it follows by the mean value theorem (note that h is C 1 on, except one point) that there exists ξ [x 1, x ] such that: (.3) h(x ) h(x 1 ) h (ξ) (x x 1 ) where the derivative h at ξ satisfies: (( (.4) h (ξ) = π H 1 π ξ ) / ( 1 + π 1 π ξ )) π(1 π) π(1 π) K (1 π + πξ) (1 π) = K π 1 π with some K > large enough. On the other hand, the explicit expression (.7) yields: ( (.5) ϕ τ ϕ τ1 = ϕ τ 1 ϕ ) ( ( )) τ 1 µ = ϕ τ 1 exp ϕ τ σ (X τ 1 X τ ) µ σ (τ 1 τ ) and thus the strong Markov property (stationary independent increments) together with the representation (.) and the fact that τ 1 τ t t 1 implies: (.6) E i ( ϕ τ ϕ τ1 ) [ ( )) ] µ = E i ϕ τ (1 exp σ (W τ 1 W τ ) ( 1) i µ σ (τ 1 τ ) [ ( ) µ ]] = E i [ϕ τ E i 1 exp σ (W τ 1 W τ ) ( 1) i µ F σ (τ X 1 τ ) τ [ ( ) ] µ E i [ϕ τ ] E i sup exp t t t 1 σ W t + µ σ t 1 6
for i = 0, 1. Since it easily follows that:

(2.27)  E_i[φ_{τ_2}] = E_i[ exp( (μ/σ) W_{τ_2} − (−1)^i (μ²/(2σ²)) τ_2 ) ] ≤ exp( (μ²/σ²)(T − t_2) ) ≤ exp( (μ²/σ²) T )

from (2.20)-(2.27) we get:

(2.28)  E_i( | h(φ_{τ_2}) − h(φ_{τ_1}) | ) ≤ K (π/(1−π)) E_i( |φ_{τ_2} − φ_{τ_1}| ) ≤ K (π/(1−π)) L(t_2 − t_1)

where the function L is defined by:

(2.29)  L(t_2 − t_1) = exp( (μ²/σ²) T ) E_i[ sup_{0≤t≤t_2−t_1} | 1 − exp( (μ/σ) W_t + (μ²/(2σ²)) t ) | ].

Therefore, combining (2.28) with (2.20)-(2.21) above, we obtain:

(2.30)  0 ≤ V(t_2, π) − V(t_1, π) ≤ (t_2 − t_1) + K (π/(1−π)) L(t_2 − t_1)

whence, by virtue of the fact that L(t_2 − t_1) → 0 in (2.29) as t_2 − t_1 ↓ 0, we easily conclude that (2.19) holds. In particular, this shows that the instantaneous-stopping conditions (2.50) are satisfied.

2.7. In order to prove that the smooth-fit conditions (2.51) hold, i.e. that π ↦ V(t, π) is C¹ at g_0(t) and g_1(t), let us fix a point (t, π) ∈ [0, T) × (0, 1) lying on the boundary g_0, so that π = g_0(t). Then for all ε > 0 such that π < π + ε < c we have:

(2.31)  ( V(t, π+ε) − V(t, π) ) / ε ≤ ( G(t, π+ε) − G(t, π) ) / ε

and hence, taking the limit in (2.31) as ε ↓ 0, we get:

(2.32)  (∂⁺V/∂π)(t, π) ≤ (∂G/∂π)(t, π)

where the right-hand derivative in (2.32) exists (and is finite) by virtue of the concavity of π ↦ V(t, π) on [0, 1]. Note that the latter will also be proved independently below.

Let us now fix some ε > 0 such that π < π + ε < c and consider the stopping time τ_ε = τ*(t, π+ε), which is optimal for V(t, π+ε). Note that τ_ε is the first exit time of the process (π_{t+s})_{0≤s≤T−t} from the set C in (2.16). Then by (2.21) and (2.28) it follows, using the mean value theorem, that there exist ξ_i ∈ [π, π+ε] such that:

(2.33)  V(t, π+ε) − V(t, π) ≥ E_{π+ε}[ G(t+τ_ε, π_{t+τ_ε}) ] − E_π[ G(t+τ_ε, π_{t+τ_ε}) ] = Σ_{i=0}^{1} E_i[ S_i(π+ε) − S_i(π) ] = ε Σ_{i=0}^{1} E_i[ S_i'(ξ_i) ]

where the function S_i is defined by:

(2.34)  S_i(π) = ((1 + (−1)^i(1−2π))/2) G( t+τ_ε, ((π/(1−π)) φ_{τ_ε}) / (1 + (π/(1−π)) φ_{τ_ε}) )
and its derivative S_i' at ξ_i is given by:

(2.35)  S_i'(ξ_i) = (−1)^{i+1} G( t+τ_ε, ((ξ_i/(1−ξ_i)) φ_{τ_ε}) / (1 + (ξ_i/(1−ξ_i)) φ_{τ_ε}) ) + ((1 + (−1)^i(1−2ξ_i))/2) (∂G/∂π)( t+τ_ε, ((ξ_i/(1−ξ_i)) φ_{τ_ε}) / (1 + (ξ_i/(1−ξ_i)) φ_{τ_ε}) ) φ_{τ_ε} / (1−ξ_i+ξ_i φ_{τ_ε})²

for i = 0, 1. Since g_0 is increasing, it is easily verified using (2.7)-(2.8) and the fact that t ↦ (±μ/(2σ)) t is a lower function for the standard Wiener process W that τ_ε ↓ 0 (P_i-a.s.) and thus φ_{τ_ε} → 1 (P_i-a.s.) as ε ↓ 0 for i = 0, 1. Hence we easily find:

(2.36)  S_i'(ξ_i) → (−1)^{i+1} G(t, π) + ((1 + (−1)^i(1−2π))/2) (∂G/∂π)(t, π)   (P_i-a.s.)

as ε ↓ 0, and clearly |S_i'(ξ_i)| ≤ K_i with some K_i > 0 large enough for i = 0, 1. It thus follows from (2.33), using (2.36), that:

(2.37)  ( V(t, π+ε) − V(t, π) ) / ε ≥ Σ_{i=0}^{1} E_i[ S_i'(ξ_i) ] → (∂G/∂π)(t, π)

as ε ↓ 0 by the dominated convergence theorem. This, combined with (2.31) above, proves that (∂⁺V/∂π)(t, π) exists and equals (∂G/∂π)(t, π). The smooth fit at the boundary g_1 is proved analogously.

2.8. We proceed by proving that the boundaries g_0 and g_1 are continuous on [0, T] and that g_0(T) = g_1(T) = c.

(i) Let us first show that the boundaries g_0 and g_1 are right-continuous on [0, T]. For this, fix t ∈ [0, T) and consider a sequence t_n ↓ t as n → ∞. Since g_i is monotone, the right-hand limit g_i(t+) exists for i = 0, 1. Because (t_n, g_i(t_n)) ∈ D̄ for all n ≥ 1 and D̄ is closed, we see that (t, g_i(t+)) ∈ D̄ for i = 0, 1. Hence by (2.17) we see that g_0(t+) ≤ g_0(t) and g_1(t+) ≥ g_1(t). The reverse inequalities follow obviously from the fact that g_0 is increasing and g_1 is decreasing on [0, T], thus proving the claim.

(ii) Suppose that at some point t_0 ∈ (0, T) the function g_1 makes a jump, i.e. let g_1(t_0−) > g_1(t_0) ≥ c. Let us fix a point t' < t_0 close to t_0 and consider the half-open region R ⊂ C given by the curved trapezoid formed by the vertices (t', g_1(t')), (t_0, g_1(t_0−)), (t_0, π') and (t', π'), with any π' fixed arbitrarily in the interval (g_1(t_0), g_1(t_0−)). Observe that the strong Markov property implies that the value function V from (2.11) is C^{1,2} on C.
Note also that the gain function G is C^{1,2} on R̄, so that by the Newton-Leibniz formula, using (2.50) and (2.51), it follows that:

(2.38)  V(t, π) − G(t, π) = ∫_π^{g_1(t)} ∫_u^{g_1(t)} ( ∂²V/∂π² − ∂²G/∂π² )(t, v) dv du

for all (t, π) ∈ R. Let us fix some (t, π) ∈ C and take an arbitrary ε > 0 such that (t+ε, π) ∈ C. Then, denoting by τ_ε = τ*(t+ε, π) the optimal stopping time for V(t+ε, π), we have:

(2.39)  ( V(t+ε, π) − V(t, π) ) / ε ≥ ( E_{t+ε,π}[ G(t+ε+τ_ε, π_{t+ε+τ_ε}) ] − E_{t,π}[ G(t+τ_ε, π_{t+τ_ε}) ] ) / ε = (1/ε) E_π[ G(t+ε+τ_ε, π_{τ_ε}) − G(t+τ_ε, π_{τ_ε}) ] = 1
and thus, taking the limit in (2.39) as ε ↓ 0, we get:

(2.40)  (∂V/∂t)(t, π) ≥ (∂G/∂t)(t, π) = 1

at each (t, π) ∈ C. Since the strong Markov property implies that the value function V from (2.11) solves the equation (2.49), using (2.40) we obtain:

(2.41)  (∂²V/∂π²)(t, π) = − (2σ²/μ²) (1/(π²(1−π)²)) (∂V/∂t)(t, π) ≤ − ε (2σ²/μ²)

for all t' ≤ t < t_0 and all π' ≤ π < g_1(t_0−) with ε > 0 small enough. Hence by (2.38), using that ∂²G/∂π² = 0, we get:

(2.42)  V(t, π') − G(t, π') ≤ − ε (σ²/μ²) (g_1(t) − π')² → − ε (σ²/μ²) (g_1(t_0−) − π')²

as t ↑ t_0. This implies that V(t_0, π') < G(t_0, π'), which contradicts the fact that (t_0, π') belongs to the stopping region D̄. Thus g_1(t_0−) = g_1(t_0), showing that g_1 is continuous at t_0 and thus on [0, T] as well. A similar argument shows that the function g_0 is continuous on [0, T].

(iii) We finally note that the method of proof from the previous part (ii) also implies that g_0(T) = g_1(T) = c. To see this, we may let t_0 = T and likewise suppose that g_1(T−) > c. Then, repeating the arguments presented above word for word, we arrive at a contradiction with the fact that V(T, π) = G(T, π) for all π ∈ [c, g_1(T−)], thus proving the claim.

2.9. Summarizing the facts proved in Subsections 2.5-2.8 above, we may conclude that the following exit time is optimal in the extended problem (2.11):

(2.43)  τ* = inf{ 0 ≤ s ≤ T−t : π_{t+s} ∉ (g_0(t+s), g_1(t+s)) }

(the infimum of an empty set being equal to T−t), where the two boundaries (g_0, g_1) satisfy the following properties (see Figure 1 below):

(2.44)  g_0 : [0, T] → [0, 1] is continuous and increasing

(2.45)  g_1 : [0, T] → [0, 1] is continuous and decreasing

(2.46)  0 < A ≤ g_0(t) < c < g_1(t) ≤ B < 1 for all 0 ≤ t < T

(2.47)  g_i(T) = c for i = 0, 1

where A and B satisfying 0 < A < c < B < 1 are the optimal stopping points for the infinite-horizon problem uniquely determined from the system of transcendental equations (4.85) in [18; page 185]. Standard arguments imply that the infinitesimal operator L of the process (t, π_t)_{0≤t≤T} acts on a function f ∈ C^{1,2}([0, T) × [0, 1]) according to the rule:

(2.48)  (Lf)(t, π) = ( ∂f/∂t + (μ²/(2σ²)) π²(1−π)² ∂²f/∂π² )(t, π)
Figure 1. A computer drawing of the optimal stopping boundaries g_0 and g_1 from Theorem 2.1. In the case shown it is optimal to accept the hypothesis H_1.

for all (t, π) ∈ [0, T) × [0, 1]. In view of the facts proved above, we are thus naturally led to formulate the following free-boundary problem for the unknown value function V from (2.11) and the unknown boundaries (g_0, g_1) from (2.16)-(2.17):

(2.49)  (LV)(t, π) = 0 for (t, π) ∈ C

(2.50)  V(t, π)|_{π=g_0(t)+} = t + a g_0(t),  V(t, π)|_{π=g_1(t)−} = t + b(1 − g_1(t))

(2.51)  (∂V/∂π)(t, π)|_{π=g_0(t)+} = a,  (∂V/∂π)(t, π)|_{π=g_1(t)−} = −b

(2.52)  V(t, π) < G(t, π) for (t, π) ∈ C

(2.53)  V(t, π) = G(t, π) for (t, π) ∈ D

where C and D are given by (2.16) and (2.17), the instantaneous-stopping conditions (2.50) are satisfied for all 0 ≤ t ≤ T, and the smooth-fit conditions (2.51) are satisfied for all 0 ≤ t < T. Note that the superharmonic characterization of the value function (see [] and [18]) implies that V from (2.11) is the largest function satisfying (2.49)-(2.50) and (2.52)-(2.53).
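The exit time (2.43) can be illustrated numerically with a simple Euler scheme for the SDE (2.9). The boundaries g_0, g_1 are taken here as given function inputs (the constant boundaries used in the example below are purely illustrative placeholders, not solutions of the system derived later), and the step count is an arbitrary choice:

```python
import math
import random

def first_exit(pi0, g0, g1, t0=0.0, T=1.0, mu=1.0, sigma=1.0, n=2000, rng=random):
    """Euler scheme for d(pi) = (mu/sigma) pi (1-pi) dW-bar started at (t0, pi0);
    stop as soon as pi leaves (g0(t), g1(t)) and return (stopping time, stopped value).
    If no exit occurs before T, return (T, final value), matching the convention
    that the infimum of an empty set equals T - t."""
    dt = (T - t0) / n
    t, pi = t0, pi0
    for _ in range(n):
        if not (g0(t) < pi < g1(t)):
            return t, pi
        pi += (mu / sigma) * pi * (1.0 - pi) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        pi = min(max(pi, 0.0), 1.0)   # keep the discretized path inside [0, 1]
        t += dt
    return T, pi
```

Starting the scheme outside the two boundaries returns immediately, reflecting that such points belong to the stopping region.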
2.10. Making use of the facts proved above, we are now ready to formulate the main result of the paper. Below we set ϕ(x) = (1/√(2π)) e^{−x²/2} and Φ(x) = ∫_{−∞}^x ϕ(y) dy for x ∈ R.

Theorem 2.1. In the Bayesian problem (2.4)-(2.5) of testing two simple hypotheses (2.3), the optimal decision rule (τ*, d*) is explicitly given by:

(2.54)  τ* = inf{ 0 ≤ t ≤ T : π_t ∉ (g_0(t), g_1(t)) }

(2.55)  d* = 1 (accept H_1) if π_{τ*} = g_1(τ*);  d* = 0 (accept H_0) if π_{τ*} = g_0(τ*)

where the two boundaries (g_0, g_1) can be characterized as the unique solution of the coupled system of nonlinear integral equations:

(2.56)  E_{t,g_i(t)}[ aπ_T ∧ b(1−π_T) ] = a g_i(t) ∧ b(1−g_i(t)) + ∫_0^{T−t} Σ_{j=0}^{1} (−1)^j P_{t,g_i(t)}[ π_{t+u} ≤ g_j(t+u) ] du   (i = 0, 1)

for 0 ≤ t ≤ T, satisfying (2.44)-(2.47) [see Figure 1 above]. More explicitly, the six terms in the system (2.56) read as follows:

(2.57)  E_{t,g_i(t)}[ aπ_T ∧ b(1−π_T) ]
= g_i(t) ∫_{−∞}^{∞} [ a g_i(t) exp( μz√(T−t)/σ + μ²(T−t)/(2σ²) ) ∧ b(1−g_i(t)) ] / [ 1 − g_i(t) + g_i(t) exp( μz√(T−t)/σ + μ²(T−t)/(2σ²) ) ] ϕ(z) dz
+ (1−g_i(t)) ∫_{−∞}^{∞} [ a g_i(t) exp( μz√(T−t)/σ − μ²(T−t)/(2σ²) ) ∧ b(1−g_i(t)) ] / [ 1 − g_i(t) + g_i(t) exp( μz√(T−t)/σ − μ²(T−t)/(2σ²) ) ] ϕ(z) dz

(2.58)  P_{t,g_i(t)}[ π_{t+u} ≤ g_j(t+u) ]
= g_i(t) Φ( (σ/(μ√u)) log( (g_j(t+u)(1−g_i(t))) / ((1−g_j(t+u)) g_i(t)) ) − (μ/(2σ)) √u )
+ (1−g_i(t)) Φ( (σ/(μ√u)) log( (g_j(t+u)(1−g_i(t))) / ((1−g_j(t+u)) g_i(t)) ) + (μ/(2σ)) √u )

for 0 < u ≤ T−t with 0 ≤ t < T and i, j = 0, 1. [Note that in the case a = b we have c = 1/2 and the system (2.56) reduces to one equation only, since g_1 = 1 − g_0 by symmetry.]

Proof. (i) The existence of boundaries (g_0, g_1) satisfying (2.44)-(2.47) such that τ* from (2.54) is optimal in (2.4)-(2.5) was proved in Subsections 2.5-2.9 above. By the change-of-variable formula from [1] it follows that the boundaries (g_0, g_1) solve the system (2.56) (cf. (2.62)-(2.64) below). Thus it remains to show that the system (2.56) has no other solution in the class of functions (h_0, h_1) satisfying (2.44)-(2.47).
Let us thus assume that two functions (h_0, h_1) satisfying (2.44)-(2.47) solve the system (2.56), and let us show that these two functions (h_0, h_1) must then coincide with the optimal
boundaries (g_0, g_1). For this, let us introduce the function:

(2.59)  V^h(t, π) = U^h(t, π) if (t, π) ∈ C_h;  V^h(t, π) = G(t, π) if (t, π) ∈ D_h

where the function U^h is defined by:

(2.60)  U^h(t, π) = E_{t,π}[ G(T, π_T) ] − ∫_0^{T−t} P_{t,π}[ (t+u, π_{t+u}) ∈ D_h ] du

for all (t, π) ∈ [0, T) × [0, 1], and the sets C_h and D_h are defined as in (2.16) and (2.17) with h_i instead of g_i for i = 0, 1. Note that (2.60) with G(t, π) instead of U^h(t, π) on the left-hand side coincides with (2.56) when π = g_i(t) and h_j = g_j for i, j = 0, 1. Since (h_0, h_1) solve (2.56), this shows that V^h is continuous on [0, T) × [0, 1]. We need to verify that V^h coincides with the value function V from (2.11) and that h_i equals g_i for i = 0, 1.

(ii) Using standard arguments based on the strong Markov property (or verifying directly), it follows that V^h, i.e. U^h, is C^{1,2} on C_h and that:

(2.61)  (LV^h)(t, π) = 0 for (t, π) ∈ C_h.

Moreover, since U^h_π is continuous on [0, T) × (0, 1) (which is readily verified using the explicit expressions (2.57) and (2.58) above with π instead of g_i(t) and h_j instead of g_j for i, j = 0, 1), we see that V^h_π is continuous on C̄_h. Finally, since h_0(t) ∈ (0, c) and h_1(t) ∈ (c, 1), we see that V^h, i.e. G, is C^{1,2} on D_h. Therefore, with (t, π) ∈ [0, T) × (0, 1) given and fixed, the change-of-variable formula from [1] can be applied, and in this way we get:

(2.62)  V^h(t+s, π_{t+s}) = V^h(t, π) + ∫_0^s (LV^h)(t+u, π_{t+u}) I( π_{t+u} ≠ h_0(t+u), π_{t+u} ≠ h_1(t+u) ) du + M_s^h + (1/2) Σ_{i=0}^{1} ∫_0^s Δ_π V^h_π(t+u, π_{t+u}) I( π_{t+u} = h_i(t+u) ) dℓ_u^{h_i}

for 0 ≤ s ≤ T−t, where Δ_π V^h_π(t+u, h_i(t+u)) = V^h_π(t+u, h_i(t+u)+) − V^h_π(t+u, h_i(t+u)−), the process (ℓ_s^{h_i})_{0≤s≤T−t} is the local time of (π_{t+s})_{0≤s≤T−t} at the boundary h_i given by:

(2.63)  ℓ_s^{h_i} = P_{t,π}-lim_{ε↓0} (1/(2ε)) ∫_0^s I( h_i(t+u) − ε < π_{t+u} < h_i(t+u) + ε ) (μ²/σ²) π²_{t+u}(1−π_{t+u})² du

for i = 0, 1, and (M_s^h)_{0≤s≤T−t}, defined by M_s^h = ∫_0^s V^h_π(t+u, π_{t+u}) I( π_{t+u} ≠ h_0(t+u), π_{t+u} ≠ h_1(t+u) ) (μ/σ) π_{t+u}(1−π_{t+u}) dW̄_u, is a martingale under P_{t,π}.
Setting s = T−t in (2.62) and taking the P_{t,π}-expectation, using that V^h satisfies (2.61) on C_h and equals G on D_h, we get:

(2.64)  E_{t,π}[ G(T, π_T) ] = V^h(t, π) + ∫_0^{T−t} P_{t,π}[ (t+u, π_{t+u}) ∈ D_h ] du + (1/2) F(t, π)
where (by the continuity of the integrand) the function F is given by:

(2.65)  F(t, π) = Σ_{i=0}^{1} ∫_0^{T−t} Δ_π V^h_π(t+u, h_i(t+u)) d_u E_{t,π}[ ℓ_u^{h_i} ]

for all (t, π) ∈ [0, T) × [0, 1]. Thus from (2.64) and (2.59) we see that:

(2.66)  F(t, π) = 0 if (t, π) ∈ C_h;  F(t, π) = 2( U^h(t, π) − G(t, π) ) if (t, π) ∈ D_h

where the function U^h is given by (2.60).

(iii) From (2.66) we see that if we are to prove that:

(2.67)  π ↦ V^h(t, π) is C¹ at h_i(t)

for each 0 ≤ t < T given and fixed and i = 0, 1, then it will follow that:

(2.68)  U^h(t, π) = G(t, π) for all (t, π) ∈ D_h.

On the other hand, if we know that (2.68) holds, then, using the general facts obtained directly from the definition (2.59) above:

(2.69)  (∂/∂π)( U^h(t, π) − G(t, π) )|_{π=h_0(t)} = V^h_π(t, h_0(t)+) − V^h_π(t, h_0(t)−) = Δ_π V^h_π(t, h_0(t))

(2.70)  (∂/∂π)( U^h(t, π) − G(t, π) )|_{π=h_1(t)} = V^h_π(t, h_1(t)−) − V^h_π(t, h_1(t)+) = −Δ_π V^h_π(t, h_1(t))

for all 0 ≤ t < T, we see that (2.67) holds too. The equivalence of (2.67) and (2.68) suggests that, instead of dealing with the equation (2.66) in order to derive (2.67) above, we may rather concentrate on establishing (2.68) directly.

To derive (2.68), first note that, using standard arguments based on the strong Markov property (or verifying directly), it follows that U^h is C^{1,2} on D_h and that:

(2.71)  (LU^h)(t, π) = 1 for (t, π) ∈ D_h.

It follows that (2.62) can be applied with U^h instead of V^h, and this yields:

(2.72)  U^h(t+s, π_{t+s}) = U^h(t, π) + ∫_0^s I( (t+u, π_{t+u}) ∈ D_h ) du + N_s^h

using (2.61) and (2.71), as well as that Δ_π U^h_π(t+u, h_i(t+u)) = 0 for all 0 ≤ u ≤ s and i = 0, 1, since U^h_π is continuous. In (2.72) we have N_s^h = ∫_0^s U^h_π(t+u, π_{t+u}) I( π_{t+u} ≠ h_0(t+u), π_{t+u} ≠ h_1(t+u) ) (μ/σ) π_{t+u}(1−π_{t+u}) dW̄_u, and (N_s^h)_{0≤s≤T−t} is a martingale under P_{t,π}. Next note that (2.62) applied to G instead of V^h yields:

(2.73)  G(t+s, π_{t+s}) = G(t, π) + ∫_0^s I( π_{t+u} ≠ c ) du − ((a+b)/2) ℓ_s^c + M̃_s
using that LG = 1 off [0, T] × {c}, as well as that Δ_π G_π(t+u, c) = −b − a for 0 ≤ u ≤ s. In (2.73) we have M̃_s = ∫_0^s G_π(t+u, π_{t+u}) I( π_{t+u} ≠ c ) (μ/σ) π_{t+u}(1−π_{t+u}) dW̄_u = ∫_0^s [ a I(π_{t+u} < c) − b I(π_{t+u} > c) ] (μ/σ) π_{t+u}(1−π_{t+u}) dW̄_u, and (M̃_s)_{0≤s≤T−t} is a martingale under P_{t,π}.

For 0 < π ≤ h_0(t) or h_1(t) ≤ π < 1, consider the stopping time:

(2.74)  σ_h = inf{ 0 ≤ s ≤ T−t : π_{t+s} ∈ [h_0(t+s), h_1(t+s)] }.

Then, using that U^h(t, h_i(t)) = G(t, h_i(t)) for all 0 ≤ t < T and i = 0, 1 (since (h_0, h_1) solve (2.56)), and that U^h(T, π) = G(T, π) for all 0 ≤ π ≤ 1, we see that U^h(t+σ_h, π_{t+σ_h}) = G(t+σ_h, π_{t+σ_h}). Hence from (2.72) and (2.73), using the optional sampling theorem (see e.g. [16; Chapter II, Theorem 3.2]), we find:

(2.75)  U^h(t, π) = E_{t,π}[ U^h(t+σ_h, π_{t+σ_h}) ] − E_{t,π}[ ∫_0^{σ_h} I( (t+u, π_{t+u}) ∈ D_h ) du ]
= E_{t,π}[ G(t+σ_h, π_{t+σ_h}) ] − E_{t,π}[ ∫_0^{σ_h} I( (t+u, π_{t+u}) ∈ D_h ) du ]
= G(t, π) + E_{t,π}[ ∫_0^{σ_h} I( π_{t+u} ≠ c ) du ] − E_{t,π}[ ∫_0^{σ_h} I( (t+u, π_{t+u}) ∈ D_h ) du ] = G(t, π)

since π_{t+u} ≠ c and (t+u, π_{t+u}) ∈ D_h for all 0 ≤ u < σ_h. This establishes (2.68), and thus (2.67) holds as well.

It may be noted that a shorter but somewhat less revealing proof of (2.68) [and (2.67)] can be obtained by verifying directly (using the Markov property only) that the process:

(2.76)  U^h(t+s, π_{t+s}) − ∫_0^s I( (t+u, π_{t+u}) ∈ D_h ) du

is a martingale under P_{t,π} for 0 ≤ s ≤ T−t. This verification moreover shows that the martingale property of (2.76) does not require that h_0 and h_1 be continuous and monotone (but only measurable). Taken together with the rest of the proof below, this shows that the claim of uniqueness for the equation (2.56) holds in the class of continuous functions h_0 and h_1 from [0, T] to R such that 0 < h_0(t) < c and c < h_1(t) < 1 for all 0 < t < T.

(iv) Let us consider the stopping time:

(2.77)  τ_h = inf{ 0 ≤ s ≤ T−t : π_{t+s} ∉ (h_0(t+s), h_1(t+s)) }.
Observe that, by virtue of (2.67), the identity (2.62) can be written as:

(2.78)  V^h(t+s, π_{t+s}) = V^h(t, π) + ∫_0^s I( (t+u, π_{t+u}) ∈ D_h ) du + M_s^h

with (M_s^h)_{0≤s≤T−t} a martingale under P_{t,π}. Thus, inserting τ_h into (2.78) in place of s and taking the P_{t,π}-expectation, by means of the optional sampling theorem we get:

(2.79)  V^h(t, π) = E_{t,π}[ G(t+τ_h, π_{t+τ_h}) ]
for all (t, π) ∈ [0, T) × [0, 1]. Then, comparing (2.79) with (2.11), we see that:

(2.80)  V(t, π) ≤ V^h(t, π)

for all (t, π) ∈ [0, T) × [0, 1].

(v) Let us now show that g_0 ≤ h_0 and h_1 ≤ g_1 on [0, T]. For this, recall that by the same arguments as for V^h we also have:

(2.81)  V(t+s, π_{t+s}) = V(t, π) + ∫_0^s I( (t+u, π_{t+u}) ∈ D ) du + M_s^g

where (M_s^g)_{0≤s≤T−t} is a martingale under P_{t,π}. Fix some (t, π) belonging to both D and D_h (first below g_0 and h_0, and then above g_1 and h_1) and consider the stopping time:

(2.82)  σ_g = inf{ 0 ≤ s ≤ T−t : π_{t+s} ∈ [g_0(t+s), g_1(t+s)] }.

Inserting σ_g into (2.78) and (2.81) in place of s and taking the P_{t,π}-expectation, by means of the optional sampling theorem we get:

(2.83)  E_{t,π}[ V^h(t+σ_g, π_{t+σ_g}) ] = G(t, π) + E_{t,π}[ ∫_0^{σ_g} I( (t+u, π_{t+u}) ∈ D_h ) du ]

(2.84)  E_{t,π}[ V(t+σ_g, π_{t+σ_g}) ] = G(t, π) + E_{t,π}[σ_g].

Hence by means of (2.80) we see that:

(2.85)  E_{t,π}[ ∫_0^{σ_g} I( (t+u, π_{t+u}) ∈ D_h ) du ] ≥ E_{t,π}[σ_g]

whence, by virtue of the continuity of h_i and g_i on (0, T) for i = 0, 1, it readily follows that D ⊆ D_h, i.e. g_0(t) ≤ h_0(t) and h_1(t) ≤ g_1(t) for all 0 ≤ t ≤ T.

(vi) Finally, we show that h_i coincides with g_i for i = 0, 1. For this, let us assume that there exists some t ∈ (0, T) such that g_0(t) < h_0(t) or h_1(t) < g_1(t), and take an arbitrary π from (g_0(t), h_0(t)) or (h_1(t), g_1(t)), respectively. Then, inserting τ* = τ*(t, π) from (2.43) into (2.78) and (2.81) in place of s and taking the P_{t,π}-expectation, by means of the optional sampling theorem we get:

(2.86)  E_{t,π}[ G(t+τ*, π_{t+τ*}) ] = V^h(t, π) + E_{t,π}[ ∫_0^{τ*} I( (t+u, π_{t+u}) ∈ D_h ) du ]

(2.87)  E_{t,π}[ G(t+τ*, π_{t+τ*}) ] = V(t, π).

Hence by means of (2.80) we see that:

(2.88)  E_{t,π}[ ∫_0^{τ*} I( (t+u, π_{t+u}) ∈ D_h ) du ] ≤ 0

which is clearly impossible by the continuity of h_i and g_i for i = 0, 1. We may therefore conclude that V^h defined in (2.59) coincides with V from (2.11) and that h_i is equal to g_i for i = 0, 1. This completes the proof of the theorem.
16 Remark.. Note that without loss of generality it can be assumed that µ > in (.). In this case the optimal decision rule (.54)-(.55) can be equivalently written as follows: (.89) (.9) τ = inf{ t T X t / b π (t), bπ 1 { (t) } 1 (accept H 1 ) if X τ = b π d = 1(τ ) (accept H ) if X τ = b π (τ ) where we set: (.91) b π i (t) = σ µ log ( 1 π π ) g i (t) + µ 1 g i (t) t for t [, T ], π [, 1] and i =, 1. The result proved above shows that the following sequential procedure is optimal. Observe X t for t [, T ] and stop the observation as soon as X t becomes either greater than b π 1(t) or smaller than b π (t) for some t [, T ]. In the first case conclude that the drift equals µ, and in the second case conclude that the drift equals. Remark.3. In the preceding procedure we need to know the boundaries (b π, b π 1) i.e. the boundaries (g, g 1 ). We proved above that (g, g 1 ) is a unique solution of the system (.56). This system cannot be solved analytically but can be dealt with numerically. The following simple method can be used to illustrate the latter (better methods are needed to achieve higher precision around the singularity point t = T and to increase the speed of calculation). Set t k = kh for k =, 1,..., n where h = T/n and denote: (.9) (.93) J(t, g i (t)) = E t,gi (t)[aπ T b(1 π T )] ag i (t) b(1 g i (t)) 1 K(t, g i (t); t + u, g (t + u), g 1 (t + u)) = ( 1) j P t,gi (t)[π t+u g j (t + u)] j= for i =, 1 upon recalling the explicit expressions (.57) and (.58) above. Note that K always depends on both g and g 1. Then the following discrete approximation of the integral equations (.56) is valid: n 1 (.94) J(t k, g i (t k )) = K(t k, g i (t k ); t l+1, g (t l+1 ), g 1 (t l+1 )) h (i =, 1) l=k for k =, 1,..., n 1. Setting k = n 1 and g (t n ) = g 1 (t n ) = c we can solve the system of two equations (.94) numerically and get numbers g (t n 1 ) and g 1 (t n 1 ). 
Setting k = n-2 and using the values g_0(t_{n-1}), g_0(t_n), g_1(t_{n-1}), g_1(t_n) we can solve (2.94) numerically and get numbers g_0(t_{n-2}) and g_1(t_{n-2}). Continuing the recursion we obtain g_i(t_n), g_i(t_{n-1}), ..., g_i(t_1), g_i(t_0) as an approximation of the optimal boundary g_i at the points T, T-h, ..., h, 0 for i = 0, 1 (cf. Figure 1 above).
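The backward recursion above can be sketched in code. The sketch below is ours, not the authors': it assumes the standard Gaussian-mixture representation of the posterior process \pi under P_{t,\pi} (the role played by the explicit expressions (2.57)-(2.58), which are not reproduced in this excerpt), takes the common terminal value to be c = b/(a+b) (the kink of a\pi \wedge b(1-\pi)), and replaces a proper root finder by a crude grid search over each side of c. The mapping (2.91) from g_i to the boundaries b^\pi_i for the observed process is included as b_pi. All function names are illustrative.

```python
import math


def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def cdf_pi(pi, u, y, gamma):
    """P_{t,pi}[pi_{t+u} <= y], writing pi_{t+u} = pi e^Z / (pi e^Z + 1 - pi)
    with the log-likelihood increment Z a Gaussian mixture: N(+gamma*u/2, gamma*u)
    with weight pi and N(-gamma*u/2, gamma*u) with weight 1 - pi; gamma = mu^2/sigma^2."""
    if y <= 0.0:
        return 0.0
    if y >= 1.0:
        return 1.0
    z = math.log(y * (1.0 - pi) / ((1.0 - y) * pi))   # threshold for Z
    s = math.sqrt(gamma * u)
    return (pi * Phi((z - gamma * u / 2.0) / s)
            + (1.0 - pi) * Phi((z + gamma * u / 2.0) / s))


def J(pi, u, a, b, gamma, m=400):
    """J in (2.92): E_{t,pi}[a pi_T ^ b(1-pi_T)] - a pi ^ b(1-pi), u = T - t,
    computed by midpoint quadrature over the same mixture variable Z."""
    if u <= 0.0:
        return 0.0
    s = math.sqrt(gamma * u)
    lo, w = -8.0 * s, 16.0 * s / m
    total = 0.0
    for k in range(m):
        z = lo + (k + 0.5) * w
        dens = (pi * math.exp(-(z - gamma * u / 2.0) ** 2 / (2.0 * gamma * u))
                + (1.0 - pi) * math.exp(-(z + gamma * u / 2.0) ** 2 / (2.0 * gamma * u))
                ) / (s * math.sqrt(2.0 * math.pi))
        p = pi * math.exp(z) / (pi * math.exp(z) + 1.0 - pi)
        total += min(a * p, b * (1.0 - p)) * dens * w
    return total - min(a * pi, b * (1.0 - pi))


def boundaries(T, a, b, gamma, n=8, grid=50):
    """Backward recursion for (2.94): returns g0, g1 on the grid t_k = k*T/n.
    Terminal value c = b/(a+b) is an assumption; the scalar equation for each
    boundary is 'solved' by grid search, so accuracy near t = T is poor."""
    h = T / n
    c = b / (a + b)
    g0, g1 = [c] * (n + 1), [c] * (n + 1)
    for k in range(n - 1, -1, -1):
        def residual(p):
            # |J - sum_{l=k}^{n-1} K(t_k, p; t_{l+1}, g0, g1) h|, with K as in (2.93)
            rhs = sum((cdf_pi(p, (l + 1 - k) * h, g0[l + 1], gamma)
                       - cdf_pi(p, (l + 1 - k) * h, g1[l + 1], gamma)) * h
                      for l in range(k, n))
            return abs(J(p, (n - k) * h, a, b, gamma) - rhs)
        g0[k] = min((c * j / (grid + 1) for j in range(1, grid + 1)), key=residual)
        g1[k] = min((c + (1 - c) * j / (grid + 1) for j in range(1, grid + 1)), key=residual)
    return g0, g1


def b_pi(g, t, pi, mu, sigma):
    """The transform (2.91) from a posterior boundary value g_i(t) to the
    boundary b^pi_i(t) for the observed process X."""
    return (sigma ** 2 / mu) * math.log((1.0 - pi) * g / (pi * (1.0 - g))) + (mu / 2.0) * t
```

As the remark above stresses, this naive scheme is only an illustration; a dedicated root finder and a refined treatment of the singularity at t = T are needed for precise boundaries.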
References

[1] Dvoretzky, A., Kiefer, J. and Wolfowitz, J. (1953). Sequential decision problems for processes with continuous time parameter. Testing hypotheses. Ann. Math. Statist. 24 (254-264).

[2] Dynkin, E. B. (1963). The optimum choice of the instant for stopping a Markov process. Soviet Math. Dokl. 4 (627-629).

[3] Grigelionis, B. I. and Shiryaev, A. N. (1966). On Stefan's problem and optimal stopping rules for Markov processes. Theory Probab. Appl. 11 (612-631).

[4] Irle, A. and Schmitz, N. (1984). On the optimality of the SPRT for processes with continuous time parameter. Math. Operationsforsch. Statist. Ser. Statist. 15 (91-104).

[5] Jacka, S. D. (1991). Optimal stopping and the American put. Math. Finance 1 (1-14).

[6] Lehmann, E. L. (1959). Testing Statistical Hypotheses. Wiley, New York.

[7] Liptser, R. S. and Shiryaev, A. N. (1977). Statistics of Random Processes I. Springer, Berlin.

[8] McKean, H. P. Jr. (1965). Appendix: A free boundary problem for the heat equation arising from a problem of mathematical economics. Ind. Management Rev. 6 (32-39).

[9] Mikhalevich, V. S. (1958). A Bayes test of two hypotheses concerning the mean of a normal process (in Ukrainian). Visnik Kiiv. Univ. 1 (101-104).

[10] Myneni, R. (1992). The pricing of the American option. Ann. Appl. Probab. 2 (1-23).

[11] Pedersen, J. L. and Peskir, G. (2002). On nonlinear integral equations arising in problems of optimal stopping. Proc. Functional Anal. VII (Dubrovnik 2001), Various Publ. Ser. 46 (159-175).

[12] Peskir, G. (2002). A change-of-variable formula with local time on curves. Research Report No. 428, Dept. Theoret. Statist. Aarhus (17 pp).

[13] Peskir, G. (2002). On the American option problem. Research Report No. 431, Dept. Theoret. Statist. Aarhus (13 pp).

[14] Peskir, G. (2003). The Russian option: Finite horizon. Research Report No. 433, Dept. Theoret. Statist. Aarhus (16 pp). To appear in Finance Stoch.

[15] Peskir, G. and Shiryaev, A. N. (2000). Sequential testing problems for Poisson processes. Ann. Statist. 28 (837-859).
[16] Revuz, D. and Yor, M. (1999). Continuous Martingales and Brownian Motion. Springer, Berlin.
[17] Shiryaev, A. N. (1967). Two problems of sequential analysis. Cybernetics 3 (63-69).

[18] Shiryaev, A. N. (1978). Optimal Stopping Rules. Springer, Berlin.

[19] van Moerbeke, P. (1976). On optimal stopping and free-boundary problems. Arch. Rational Mech. Anal. 60 (101-148).

[20] Wald, A. (1947). Sequential Analysis. Wiley, New York.

[21] Wald, A. and Wolfowitz, J. (1948). Optimum character of the sequential probability ratio test. Ann. Math. Statist. 19 (326-339).

[22] Wald, A. and Wolfowitz, J. (1950). Bayes solutions of sequential decision problems. Ann. Math. Statist. 21 (82-99).

Pavel V. Gapeev
Russian Academy of Sciences
Institute of Control Sciences
Profsoyuznaya Str., Moscow, Russia
gapeev@cniica.ru

Goran Peskir
Department of Mathematical Sciences
University of Aarhus, Denmark
Ny Munkegade, DK-8000 Aarhus
home.imf.au.dk/goran
goran@imf.au.dk