Compound Poisson disorder problem

Savas Dayanik, Princeton University, Department of Operations Research and Financial Engineering, and Bendheim Center for Finance, Princeton, NJ

Semih Onur Sezer, Princeton University, Department of Operations Research and Financial Engineering, Princeton, NJ

This paper is dedicated to our teacher and mentor, Professor Erhan Çınlar, on the occasion of his 65th birthday.

In the compound Poisson disorder problem, the arrival rate and/or the jump distribution of some compound Poisson process changes suddenly at an unknown and unobservable time. The problem is to detect this change (or disorder) time as quickly as possible. A sudden regime shift may require that counter-measures be taken promptly, and a quickest detection rule can help with those efforts. We describe the complete solution of the compound Poisson disorder problem under several standard Bayesian risk measures. The solution methods are feasible for numerical implementation and are illustrated on examples.

Key words: Poisson disorder problem; quickest detection; compound Poisson processes; optimal stopping
MSC2000 Subject Classification: Primary: 62L10; Secondary: 62L15, 62C10, 60G40
OR/MS subject classification: Primary: Statistics: Bayesian, estimation; Secondary: Dynamic programming/optimal control: applications

1. Introduction. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space hosting a compound Poisson process
$$X_t = X_0 + \sum_{k=1}^{N_t} Y_k, \qquad t \ge 0. \tag{1}$$
Jumps arrive according to a standard Poisson process $N = \{N_t;\, t\ge 0\}$ at some rate $\lambda_0 > 0$. The marks $Y_1, Y_2, \ldots$ attached to the jumps are i.i.d. $\mathbb{R}^d$-valued random variables with common distribution $\nu_0$, independent of the arrival process $N$. The process $X$ may represent customer orders arriving in batches at a multi-product service system, claims of various sizes filed with an insurance company, or sizes of electronic files requested for download from a network server.

Suppose that, at an unknown and unobservable time $\theta$, the initial arrival rate $\lambda_0$ and mark distribution $\nu_0$ of the process $X$ change suddenly to $\lambda_1$ and $\nu_1$, respectively. This regime shift at the disorder time $\theta$ may be detrimental to the underlying system unless certain counter-measures are taken quickly. For example, optimal inventory levels, insurance premiums, or the number of network servers may need to be revised as soon as the regime changes in order to maintain profitability, avoid bankruptcy, or ensure network availability. The objective of this paper is to detect the disorder time $\theta$ as quickly as possible, so as to give decision makers an opportunity to react to the regime change on a timely basis.

We assume that $\lambda_0$, $\lambda_1$, $\nu_0$, and $\nu_1$ are known, and that the disorder time $\theta$ is a random variable with prior distribution
$$\mathbb{P}\{\theta = 0\} = \pi \quad\text{and}\quad \mathbb{P}\{\theta > t\} = (1-\pi)e^{-\lambda t}, \quad t \ge 0, \qquad \text{for some } \pi\in[0,1),\ \lambda>0.$$
The disorder time $\theta$ is unobservable, and we need a quickest detection rule adapted to the history $\mathbb{F}$ of the observation process $X$ in (1). More precisely, we would like to find a stopping time $\tau$ of the process $X$ whose Bayes risk
$$R_\tau(\pi) \triangleq \mathbb{P}\{\tau < \theta\} + c\, \mathbb{E}(\tau - \theta)^+, \qquad \pi \in [0,1),\ \tau \in \mathbb{F}, \tag{2}$$
is smallest, where $x^+ \triangleq \max\{x, 0\}$ and $c > 0$ is the detection delay cost per unit time. If an $\mathbb{F}$-stopping time $\tau$ attains the minimum Bayes risk
$$U(\pi) \triangleq \inf_{\tau \in \mathbb{F}} R_\tau(\pi), \qquad \pi \in [0,1),$$
then it is called a Bayes-optimal alarm time; it optimally resolves the tradeoff between the false-alarm frequency $\mathbb{P}\{\tau < \theta\}$ and the expected detection delay cost $c\,\mathbb{E}(\tau-\theta)^+$.
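The observation model (1) with a disorder is easy to simulate. The following minimal Python sketch is illustrative only: the parameter values and the two mark samplers are assumptions, not data from the paper.

```python
import numpy as np

# Simulate the observation process of (1): a compound Poisson process whose arrival
# rate and mark distribution switch from (lam0, nu0) to (lam1, nu1) at the disorder
# time theta with prior P{theta=0}=pi, P{theta>t}=(1-pi)exp(-lam*t).
rng = np.random.default_rng(0)

def simulate_disorder_path(lam=0.1, lam0=1.0, lam1=2.0, pi=0.0, horizon=20.0,
                           sample_nu0=lambda r: r.exponential(0.5),
                           sample_nu1=lambda r: r.gamma(3.0, 0.5)):
    """Return (theta, jump_times, marks) of the observation process X on [0, horizon]."""
    theta = 0.0 if rng.random() < pi else rng.exponential(1.0 / lam)
    jump_times, marks = [], []

    def add_arrivals(start, end, rate, sampler):
        t = start
        while True:
            t += rng.exponential(1.0 / rate)      # exponential inter-arrival times
            if t >= end:
                return
            jump_times.append(t)
            marks.append(sampler(rng))

    add_arrivals(0.0, min(theta, horizon), lam0, sample_nu0)   # pre-disorder regime
    if theta < horizon:
        add_arrivals(theta, horizon, lam1, sample_nu1)         # post-disorder regime
    return theta, np.array(jump_times), np.array(marks)

theta, times, marks = simulate_disorder_path()
print(f"disorder at t={theta:.2f}; {len(times)} jumps observed by t=20")
```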

All of the early work dealt with the simple Poisson disorder problem. In that problem, and in the notation above, the observation process is the counting process $N$, whose rate changes at some unobservable time $\theta$ from a known constant $\lambda_0$ to another known constant $\lambda_1$. While the question is the same, namely, to detect the disorder time $\theta$ as quickly as possible, the information carried by the marks $Y_1, Y_2, \ldots$ is ignored completely. This omission is understandable given the difficulty of the problem: the simple Poisson disorder problem was solved completely by Peskir and Shiryaev only recently, more than thirty years after it was first formulated by Galchuk and Rozovskii. In the meantime, partial solutions and new insights were provided. Most notably, Davis showed that quickest detection rules should not differ much if they are to minimize one of the standard Bayes risks, namely one of $R^1$, $R^2$ (the same as $R$ of (2)), or $R^3$ in
$$R^1_\tau(\pi) \triangleq \mathbb{P}\{\tau < \theta - \varepsilon\} + c\,\mathbb{E}(\tau-\theta)^+, \qquad R^2_\tau(\pi) \triangleq \mathbb{P}\{\tau<\theta\} + c\,\mathbb{E}(\tau-\theta)^+,$$
$$R^3_\tau(\pi) \triangleq \mathbb{E}(\theta-\tau)^+ + c\,\mathbb{E}(\tau-\theta)^+, \qquad R^4_\tau(\pi) \triangleq \mathbb{P}\{\tau<\theta\} + c\,\mathbb{E}\big[e^{\alpha(\tau-\theta)^+}-1\big], \tag{3}$$
where $\varepsilon$, $c$, and $\alpha$ are known positive constants (see also Shiryaev). Recently, Bayraktar and Dayanik solved the simple Poisson disorder problem with the Bayes risk $R^4$ in (3), whose exponential detection-delay penalty makes it more suitable for financial applications. Later, Bayraktar, Dayanik, and Karatzas showed that the measure $R^4$ is also a standard Bayes risk if the latter notion is redefined suitably, and they gave a general solution method for standard problems.

Gapeev has recently succeeded, for the first time, in incorporating the observed marks $Y_1, Y_2, \ldots$ into an optimal decision rule in order to detect the disorder time more quickly and accurately. He provided the full solution for the following very special instance of the compound Poisson disorder problem: before and after the disorder time $\theta$, the real-valued marks $Y_1, Y_2, \ldots$ have exponential distributions, and the expected mark sizes are the same as the corresponding jump arrival rates. Namely, the mark distributions are
$$\nu_i(A) = \int_A \frac{1}{\lambda_i}\, e^{-y/\lambda_i}\, dy, \qquad A \in \mathcal{B}(\mathbb{R}_+),\ i = 0, 1, \tag{4}$$
where $\lambda_0$ and $\lambda_1$ are the arrival rates of the jumps (i.e., of the counting process $N$ in (1)) before and after the disorder, respectively.

The main contribution of our paper is the complete solution of the compound Poisson disorder problem in its full generality. For every pair of arrival rates $\lambda_0$, $\lambda_1$ and mark distributions $\nu_0$, $\nu_1$, we describe explicitly a quickest detection rule. These rules depend on an $\mathbb{F}$-adapted odds-ratio process $\Phi = \{\Phi_t;\, t\ge 0\}$; see Section 2. At every $t\ge 0$, the random variable $\Phi_t$ is the conditional odds ratio of the event $\{\theta\le t\}$ that the disorder has happened at or before time $t$, given the past and present observations $\mathcal{F}_t$ of the process $X$. For a suitable constant $\xi > 0$, the first crossing time $U = \inf\{t\ge 0 : \Phi_t \ge \xi\}$ of the process $\Phi$ turns out to be a quickest detection rule: the Bayes risk $R_U$ in (2) of $U$ is the smallest among all stopping times of the process $X$. The critical threshold $\xi$ can be calculated numerically, and the quickest detection rule $U$ is suitable for online implementation, since $\Phi_t$, $t\ge 0$, can be updated by a recursive formula; see Section 2. We also show that every compound Poisson disorder problem with one of the standard Bayes risks in (3) can be solved in the same way.

Our probabilistic methods are different from the analytical methods of all of the previously cited work.
The latter attacked Poisson disorder problems by studying analytical properties of related free-boundary integro-differential equations. Instead, we study very carefully the sample paths of the process $\Phi$, which turn out to be piecewise deterministic and Markovian. A general characterization of the stopping times of jump processes allows us to approximate the minimum Bayes risk successively. This approximation is the key to our computational and theoretical results.

In the next section we give a precise description of the compound Poisson disorder problem and show how to reduce it to an optimal stopping problem for a suitable Markov process. In Sections 3 and 4, we introduce successive approximations of the value function of the optimal stopping problem and establish the key results for an efficient numerical method, which is presented in Section 5. We illustrate this method on several old and new examples and briefly discuss some extensions in Section 6. Finally, we establish in Section 7 the connection between our method and the method of variational inequalities as applied to the compound Poisson disorder problem. Appendix A contains some basic derivations and long proofs.

2. Model and problem description. Starting with a reference probability measure, we shall first construct a model containing all of the random elements of our problem with the correct probability laws.

Model. Let $(\Omega, \mathcal{F}, \mathbb{P}_0)$ be a probability space hosting the following independent stochastic elements:
(i) a standard Poisson process $N = \{N_t;\, t\ge 0\}$ with arrival rate $\lambda_0$,
(ii) independent and identically distributed $\mathbb{R}^d$-valued random variables $Y_1, Y_2, \ldots$ with common distribution $\nu_0(B) \triangleq \mathbb{P}_0\{Y_1 \in B\}$ for every set $B$ in the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R}^d)$, and $\nu_0(\{0\}) = 0$,
(iii) a random variable $\theta$ with the distribution
$$\mathbb{P}_0\{\theta = 0\} = \pi \quad\text{and}\quad \mathbb{P}_0\{\theta > t\} = (1-\pi)e^{-\lambda t}, \qquad t\ge 0,\ \lambda > 0. \tag{6}$$
Let $X = \{X_t;\, t\ge 0\}$ be the process defined by (1), with the jump times
$$\sigma_n \triangleq \inf\{t > \sigma_{n-1} : X_t \ne X_{t-}\}, \quad n\ge 1, \qquad \sigma_0 \equiv 0, \tag{7}$$
and let $\mathbb{F} = \{\mathcal{F}_t\}_{t\ge 0}$ be the augmentation of its natural filtration $\sigma(X_s,\, s\le t)$, $t\ge 0$, with the $\mathbb{P}_0$-null sets. Then the process $X$ is a $(\mathbb{P}_0, \mathbb{F})$-compound Poisson process with arrival rate $\lambda_0$ and jump distribution $\nu_0$.

Let $\lambda_1 > 0$ be a constant, and let $\nu_1$ be a probability measure on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$ absolutely continuous with respect to the distribution $\nu_0$. In general, every probability measure $\nu_1$ is the sum of two components, one singular and the other absolutely continuous with respect to $\nu_0$. If necessary, the distribution $\nu_1$ is replaced with its component which is absolutely continuous with respect to the measure $\nu_0$; this is without loss of generality, as explained by Poor. Then the Radon-Nikodym derivative
$$f(y) \triangleq \frac{d\nu_1}{d\nu_0}(y), \qquad y \in \mathbb{R}^d, \tag{8}$$
of $\nu_1$ with respect to $\nu_0$ exists and is a $\nu_0$-a.e. nonnegative Borel function.

We shall denote by $\mathbb{G} = \{\mathcal{G}_t\}_{t\ge 0}$, $\mathcal{G}_t \triangleq \mathcal{F}_t \vee \sigma(\theta)$, $t\ge 0$, the enlargement of the filtration $\mathbb{F}$ with the $\sigma$-algebra $\sigma(\theta)$ generated by $\theta$. Let us define a new probability measure $\mathbb{P}$ on the measurable space $(\Omega, \bigvee_{s\ge 0}\mathcal{G}_s)$ locally in terms of the Radon-Nikodym derivatives
$$\left.\frac{d\mathbb{P}}{d\mathbb{P}_0}\right|_{\mathcal{G}_t} = Z_t \triangleq 1_{\{t<\theta\}} + 1_{\{t\ge\theta\}}\, e^{-(\lambda_1-\lambda_0)(t-\theta)} \prod_{k=N_\theta+1}^{N_t} \frac{\lambda_1}{\lambda_0}\, f(Y_k), \qquad t\ge 0, \tag{9}$$
where $N_\theta$ is the number of arrivals in the time interval $[0, \theta]$. If the disorder time $\theta$ is known, then each random variable $Z_t$ is simply the likelihood ratio of the interarrival times $\sigma_1, \sigma_2-\sigma_1, \ldots$ and the jump sizes $Y_1, Y_2, \ldots$ observed at or before time $t$. Under $\mathbb{P}$, the interarrival times and jump sizes are conditionally independent and have the desired conditional distributions given $\theta$: the rate of the exponentially distributed interarrival times and the distribution of the jump sizes change at time $\theta$ from $\lambda_0$ and $\nu_0$ to $\lambda_1$ and $\nu_1$, respectively. See also Appendix A.1 for another justification by means of an absolutely continuous change of measure for point processes. Finally, because $Z_0 = 1$ almost surely and the probability measures $\mathbb{P}$ and $\mathbb{P}_0$ coincide on $\mathcal{G}_0 = \sigma(\theta)$, the distribution of $\theta$ is the same under $\mathbb{P}$ and $\mathbb{P}_0$. Hence, under the probability measure $\mathbb{P}$ defined by (9), the process $X$ and the random variable $\theta$ have the same properties as in the setup of the disorder problem described in the introduction.

Problem description. In the remainder, we shall work with the concrete model described above. The random variable $\theta$ is the unobservable disorder time and must be detected as quickly as possible as the history $\mathbb{F}$ of the observation process $X$ unfolds. The admissible detection rules are the stopping times of the filtration $\mathbb{F}$. Our problem is to find the smallest Bayes risk $U$ by minimizing, over all stopping rules $\tau$ of the filtration $\mathbb{F}$, the tradeoff $R_\tau$ in (2) between the false-alarm frequency and the expected detection delay cost.
If this infimum is attained, then we also want to describe explicitly a stopping rule with the minimum Bayes risk.

In the remainder of this section, we shall formulate the quickest-detection problem as an optimal stopping problem for a suitable Markov process; see (16) below. In later sections, we solve this optimal stopping problem completely and identify an optimal stopping rule.

One may check, as in Bayraktar, Dayanik, and Karatzas (Proposition 2.1), that the Bayes risk in (2) can be expressed as
$$R_\tau(\pi) = (1-\pi) + c(1-\pi)\, \mathbb{E}_0\!\left[\int_0^\tau e^{-\lambda t}\Big(\Phi_t - \frac{\lambda}{c}\Big)\, dt\right], \qquad \pi\in[0,1),\ \tau\in\mathbb{F}, \tag{10}$$
in terms of the $\mathbb{F}$-adapted odds-ratio process
$$\Phi_t \triangleq \frac{\mathbb{P}\{\theta\le t \mid \mathcal{F}_t\}}{\mathbb{P}\{\theta> t\mid \mathcal{F}_t\}}, \qquad t\ge 0. \tag{11}$$
For every $t\ge 0$, the random variable $\Phi_t$ is the conditional odds ratio of the event that the disorder happened at or before time $t$, given the history $\mathcal{F}_t$ of the process $X$. In (10), the expectation $\mathbb{E}_0$ is taken with respect to $\mathbb{P}_0$, and the probability measure $\mathbb{P}$ in (11) is defined by the absolutely continuous change of measure in (9). In Appendix A.2, we show that the process $\Phi = \{\Phi_t;\, t\ge 0\}$ of (11) is a piecewise-deterministic Markov process (Davis). If we define
$$a \triangleq \lambda - \lambda_1 + \lambda_0, \qquad \phi_d \triangleq \begin{cases} -\lambda/a, & a\ne 0,\\ \infty, & a = 0,\end{cases} \qquad x(t,\phi) \triangleq \begin{cases} \phi_d + e^{at}(\phi-\phi_d), & a\ne 0,\\ \phi + \lambda t, & a = 0, \end{cases} \qquad t\in\mathbb{R},\ \phi\in\mathbb{R}, \tag{12}$$
and $\sigma_n$, $n\ge 1$, are the jump times in (7) of the process $X$, then we get
$$\Phi_t = x\big(t-\sigma_n,\, \Phi_{\sigma_n}\big), \quad t\in[\sigma_n,\sigma_{n+1}), \qquad \Phi_{\sigma_n} = \frac{\lambda_1}{\lambda_0}\, f(Y_n)\, \Phi_{\sigma_n-}, \quad n\ge 1. \tag{13}$$
Namely, the process $\Phi$ follows one of the deterministic curves $t\mapsto x(t,\phi)$, $\phi\in\mathbb{R}$, of (12) between consecutive jumps of $X$ and is updated instantaneously at every jump of $X$ as in (13); see also Figure 1 on page 9. The $(\mathbb{P}_0,\mathbb{F})$-infinitesimal generator of the process $\Phi$ coincides on the collection of continuously differentiable functions $h:\mathbb{R}_+\to\mathbb{R}$ with the first-order integro-differential operator (see Appendix A.3)
$$\mathcal{A}h(x) = (\lambda + a x)\, h'(x) + \lambda_0 \int_{\mathbb{R}^d}\Big[ h\Big(\frac{\lambda_1}{\lambda_0}\, f(y)\, x\Big) - h(x)\Big]\, \nu_0(dy), \qquad x\in\mathbb{R}_+. \tag{14}$$
Finally, the minimum Bayes risk is given by
$$U(\pi) = (1-\pi) + c(1-\pi)\, V\Big(\frac{\pi}{1-\pi}\Big), \qquad \pi\in[0,1), \tag{15}$$
in terms of the value function
$$V(\phi) \triangleq \inf_{\tau\in\mathbb{F}} \mathbb{E}^\phi_0\!\left[\int_0^\tau e^{-\lambda t}\, g(\Phi_t)\, dt\right], \qquad \phi\in\mathbb{R}_+, \tag{16}$$
of a discounted optimal stopping problem with running cost
$$g(\phi) \triangleq \phi - \frac{\lambda}{c}, \qquad \phi\in\mathbb{R}_+, \tag{17}$$
and discount rate $\lambda>0$ for the piecewise-deterministic Markov process $\Phi$ in (13). In (16), the expectation $\mathbb{E}^\phi_0$ is taken with respect to the probability measure $\mathbb{P}_0$ with $\mathbb{P}_0\{\Phi_0=\phi\}=1$. Thus, our problem becomes to calculate the value function $V$ in (16) and to find an optimal stopping rule if the infimum is attained. Our approach is direct and is very well suited to piecewise-deterministic Markov processes. The solution is described in Section 5 in terms of single-jump operators, after the key results are established in Sections 3 and 4.
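The recursion (12)-(13) is easy to implement online. The following sketch advances $\Phi$ deterministically between jumps and applies the multiplicative update at each observed jump; the parameter names and the assumed likelihood-ratio function f are illustrative.

```python
import math

def x_flow(t, phi, lam, lam0, lam1):
    """Deterministic flow x(t, phi) of the odds-ratio process between jumps, as in (12)."""
    a = lam - lam1 + lam0
    if a == 0.0:
        return phi + lam * t
    phi_d = -lam / a
    return phi_d + math.exp(a * t) * (phi - phi_d)

def odds_ratio_path(jump_times, marks, f, phi0, lam, lam0, lam1):
    """Values of Phi just after each observed jump, per the recursion (13)."""
    phi, last_t, out = phi0, 0.0, []
    for t, y in zip(jump_times, marks):
        phi = x_flow(t - last_t, phi, lam, lam0, lam1)   # run the flow since the last jump
        phi *= (lam1 / lam0) * f(y)                      # multiplicative update at the jump
        out.append(phi)
        last_t = t
    return out

# Example use with the simulated path from the introduction and an assumed ratio f:
# phis = odds_ratio_path(times, marks, f=lambda y: 1.0, phi0=0.0,
#                        lam=0.1, lam0=1.0, lam1=2.0)
```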

We adopt the direct approach instead of its widely used alternative, the method of variational inequalities. In the latter method, the value function $V$ in (16) is expected to satisfy the variational inequalities
$$\min\big\{\mathcal{A}v(\phi) - \lambda v(\phi) + g(\phi),\ -v(\phi)\big\} = 0, \qquad \phi\in\mathbb{R}_+, \tag{18}$$
in some suitable sense, and $V$ may then be identified by solving (18) subject to certain boundary conditions. However, solving (18) is very difficult because of the unfavorable analytical properties of the singular integro-differential operator $\mathcal{A}$ in (14). Our direct approach not only provides the complete solution of the original optimal stopping problem (16), but also shows, as a by-product, that $V$ is indeed the unique solution of the variational inequalities in (18); see Section 7.

3. A useful approximation and its single-jump analysis. Let us introduce the family of optimal stopping problems
$$V_n(\phi) \triangleq \inf_{\tau\in\mathbb{F}} \mathbb{E}^\phi_0\!\left[\int_0^{\tau\wedge\sigma_n} e^{-\lambda t}\, g(\Phi_t)\, dt\right], \qquad \phi\in\mathbb{R}_+,\ n\in\mathbb{N}, \tag{19}$$
obtained from (16) by stopping the odds-ratio process $\Phi$ at the $n$th jump time $\sigma_n$ of the observation process $X$. Since the running cost $g$ in (17) is bounded from below by the constant $-\lambda/c$, the expectation in (19) is well defined for every stopping time $\tau\in\mathbb{F}$. In fact, $-1/c \le V_n \le 0$ for every $n\in\mathbb{N}$. Since the sequence $(\sigma_n)_{n\ge1}$ of jump times of the process $X$ is increasing almost surely, the sequence $(V_n)_{n\ge1}$ is decreasing. Therefore, $\lim_{n\to\infty} V_n$ exists everywhere. It is also obvious that $V_n \ge V$, $n\in\mathbb{N}$.

Proposition 3.1 As $n\to\infty$, the sequence $V_n(\phi)$ converges to $V(\phi)$ uniformly in $\phi\in\mathbb{R}_+$. In fact, for every $n\in\mathbb{N}$ and $\phi\in\mathbb{R}_+$, we have
$$-\frac1c \left(\frac{\lambda_0}{\lambda+\lambda_0}\right)^{\!n} \le V(\phi) - V_n(\phi) \le 0. \tag{20}$$

Proof. Since $g(\phi)\ge-\lambda/c$ for every $\phi\ge0$, we have
$$\mathbb{E}^\phi_0\!\left[\int_0^\tau e^{-\lambda s}g(\Phi_s)\,ds\right] \ge \mathbb{E}^\phi_0\!\left[\int_0^{\tau\wedge\sigma_n} e^{-\lambda s}g(\Phi_s)\,ds\right] - \frac1c\,\mathbb{E}^\phi_0\, e^{-\lambda\sigma_n}, \qquad \tau\in\mathbb{F},\ n\in\mathbb{N}.$$
Under $\mathbb{P}_0$, the $n$th jump time $\sigma_n$ has the Erlang distribution with parameters $n$ and $\lambda_0$, so $\mathbb{E}_0 e^{-\lambda\sigma_n} = (\lambda_0/(\lambda+\lambda_0))^n$. Taking the infimum of both sides over $\tau\in\mathbb{F}$ gives the first inequality in (20).

The uniform approximation in Proposition 3.1 is fast and accurate. On the other hand, the functions $V_n$ can be found easily by an iterative algorithm. We shall calculate the $V_n$'s by adapting to our problem a method of Gugerli and Davis. Developed for the optimal stopping of general piecewise-deterministic Markov processes with an undiscounted terminal reward, the results of U. Gugerli and M. Davis do not apply here immediately: since the total discounted running cost over the infinite horizon has infinite expectation, there is no obvious transformation of our problem into those studied by U. Gugerli and M. Davis.

Let us start by defining the following operators acting on bounded Borel functions $w:\mathbb{R}_+\to\mathbb{R}$:
$$Jw(t,\phi) \triangleq \mathbb{E}^\phi_0\!\left[\int_0^{t\wedge\sigma_1} e^{-\lambda u}\, g(\Phi_u)\, du + 1_{\{t\ge\sigma_1\}}\, e^{-\lambda\sigma_1}\, w(\Phi_{\sigma_1})\right], \qquad t\in[0,\infty], \tag{21}$$
$$J_t w(\phi) \triangleq \inf_{u\in[t,\infty]} Jw(u,\phi), \qquad t\in[0,\infty]. \tag{22}$$
The special structure of the stopping times of jump processes (see Lemma A.1 below) implies
$$J_0 w(\phi) = \inf_{\tau\in\mathbb{F}} \mathbb{E}^\phi_0\!\left[\int_0^{\tau\wedge\sigma_1} e^{-\lambda t}\, g(\Phi_t)\, dt + 1_{\{\tau\ge\sigma_1\}}\, e^{-\lambda\sigma_1}\, w(\Phi_{\sigma_1})\right].$$
Relying on the strong Markov property of the process $X$ at its first jump time $\sigma_1$, one expects that the value function $V$ of (16) satisfies the equation $V = J_0 V$. In Proposition 3.6 below, we show that this is indeed the case. In fact, if we define $v_n:\mathbb{R}_+\to\mathbb{R}$, $n\in\mathbb{N}$, sequentially by
$$v_0 \equiv 0 \quad\text{and}\quad v_{n+1} \triangleq J_0 v_n, \qquad n\in\mathbb{N}, \tag{23}$$

then every $v_n$ is bounded and identical to $V_n$ of (19), and $\lim_{n\to\infty} v_n$ exists and equals the value function $V$ of (16); see Corollary 3.4 and Proposition 3.6.

Under $\mathbb{P}_0$, the first jump time $\sigma_1$ of the process $X$ has the exponential distribution with rate $\lambda_0$. Using the Fubini theorem and (13), we can write (21) as
$$Jw(t,\phi) = \int_0^t e^{-(\lambda+\lambda_0)u}\,\big[g + \lambda_0\, Sw\big]\big(x(u,\phi)\big)\, du, \qquad t\in[0,\infty], \tag{24}$$
where the function $x(\cdot,\phi)$ is given by (12), and $S$ is the linear operator
$$Sw(x) \triangleq \int_{\mathbb{R}^d} w\Big(\frac{\lambda_1}{\lambda_0}\, f(y)\, x\Big)\, \nu_0(dy), \qquad x\in\mathbb{R}_+, \tag{25}$$
defined on the collection of bounded functions $w:\mathbb{R}_+\to\mathbb{R}$.

Remark 3.2 Using the explicit form of $x(u,\phi)$ in (12), it is easy to check that the integrand in (24) is absolutely integrable on $\mathbb{R}_+$. Therefore,
$$\lim_{t\to\infty} Jw(t,\phi) = Jw(\infty,\phi) < \infty,$$
and the mapping $t\mapsto Jw(t,\phi):[0,\infty]\to\mathbb{R}$ is continuous. Therefore, the infimum $J_t w(\phi)$ in (22) is attained for every $t\in[0,\infty]$.

Lemma 3.3 For every bounded Borel function $w:\mathbb{R}_+\to\mathbb{R}$, the mapping $J_0 w$ is bounded. If we define $\|w\| \triangleq \sup_{\phi\in\mathbb{R}_+}|w(\phi)| < \infty$, then
$$-\frac{1}{\lambda+\lambda_0}\Big(\frac{\lambda}{c} + \lambda_0\,\|w\|\Big) \le J_0 w(\phi) \le 0, \qquad \phi\in\mathbb{R}_+. \tag{26}$$
If the function $w$ is concave, then so is $J_0 w$. If $w_1 \le w_2$ are real-valued bounded Borel functions on $\mathbb{R}_+$, then $J_0 w_1 \le J_0 w_2$. Namely, the operator $J_0$ preserves boundedness, concavity, and monotonicity.

Proof. The lower bound in (26) follows from the lower bound $-\lambda/c$ on the running cost $g$ in (17). The concavity and the monotonicity can be checked directly.

Corollary 3.4 Every $v_n$, $n\in\mathbb{N}$, of (23) is bounded and concave, and $-1/c \le \cdots \le v_{n+1} \le v_n \le \cdots \le v_1 \le v_0 \equiv 0$. The limit
$$v(\phi) \triangleq \lim_{n\to\infty} v_n(\phi), \qquad \phi\in\mathbb{R}_+, \tag{27}$$
exists and is bounded, concave, and nondecreasing. Both $v_n:\mathbb{R}_+\to\mathbb{R}$, $n\in\mathbb{N}$, and $v:\mathbb{R}_+\to\mathbb{R}$ are continuous and nondecreasing. Their left and right derivatives are bounded on every compact subset of $\mathbb{R}_+$.

Proof. By definition, $v_0\equiv0$ is bounded, concave, and nondecreasing. The conclusions follow by induction on $n$ from Lemma 3.3, the properties of concave functions, and the monotonicity of the functions $x(t,\cdot)$ (for every fixed $t\in\mathbb{R}_+$) and $g$ in (12) and (17), and of the operator $S$ in (25).

The next proposition describes some $\varepsilon$-optimal stopping rules for each problem in (19). In conjunction with Proposition 3.6 below, it is the basic building block of the numerical scheme described in Section 5. Its proof is presented in Section A.4.

Proposition 3.5 For every $n\in\mathbb{N}$, the functions $v_n$ of (23) and $V_n$ of (19) coincide. For every $\varepsilon\ge 0$, let
$$r_n^\varepsilon(\phi) \triangleq \inf\big\{s\in[0,\infty] : Jv_n(s,\phi) \le J_0 v_n(\phi) + \varepsilon\big\}, \qquad n\in\mathbb{N},\ \phi\in\mathbb{R}_+,$$
$$S_1^\varepsilon \triangleq r_0^\varepsilon(\Phi_0), \qquad\text{and}\qquad S_{n+1}^\varepsilon \triangleq \begin{cases} r_n^{\varepsilon/2}(\Phi_0), & \text{if } \sigma_1 > r_n^{\varepsilon/2}(\Phi_0),\\ \sigma_1 + S_n^{\varepsilon/2}\circ\theta_{\sigma_1}, & \text{if } \sigma_1 \le r_n^{\varepsilon/2}(\Phi_0),\end{cases} \qquad n\ge 1, \tag{28}$$
where $\theta_s$ is the shift operator on $\Omega$: $X_t\circ\theta_s = X_{s+t}$. Then
$$\mathbb{E}^\phi_0\!\left[\int_0^{S_n^\varepsilon} e^{-\lambda t}\, g(\Phi_t)\, dt\right] \le v_n(\phi) + \varepsilon, \qquad n\ge 1,\ \varepsilon\ge 0. \tag{29}$$
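The single-jump operators (24)-(25) are straightforward to evaluate numerically. A minimal sketch follows; the representation of $\nu_0$ by nodes and weights (e.g., Monte Carlo samples with equal weights), the NumPy-vectorized function w, and all names are assumptions made for illustration.

```python
import numpy as np

def S_operator(w, f, y_nodes, y_weights, lam0, lam1):
    """(Sw)(x) = integral of w(lam1*f(y)*x/lam0) against nu0(dy), as in (25)."""
    ratios = (lam1 / lam0) * np.array([f(y) for y in y_nodes])
    wts = np.asarray(y_weights, dtype=float)
    return lambda x: float(np.sum(wts * w(ratios * x)))

def J_operator(w, t, phi, f, y_nodes, y_weights, lam, lam0, lam1, c, n_grid=2000):
    """Jw(t, phi) of (24): trapezoidal integration along the deterministic flow x(u, phi)."""
    Sw = S_operator(w, f, y_nodes, y_weights, lam0, lam1)
    u = np.linspace(0.0, t, n_grid)
    a = lam - lam1 + lam0
    x_u = (phi + lam * u) if a == 0.0 else (-lam / a + np.exp(a * u) * (phi + lam / a))
    integrand = np.exp(-(lam + lam0) * u) * (
        (x_u - lam / c) + lam0 * np.array([Sw(x) for x in x_u]))
    du = u[1] - u[0]
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * du))
```

With $w = v_n$ represented by interpolation on a grid, minimizing $Jv_n(t,\cdot)$ over a time grid reproduces the recursion (23); a full grid scheme of this kind is sketched in Section 5.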

Proposition 3.6 We have $v(\phi) = V(\phi)$ for every $\phi\in\mathbb{R}_+$. Moreover, $V$ is the largest nonpositive solution $U$ of the equation $U = J_0 U$.

Proof. Corollary 3.4 and Propositions 3.1 and 3.5 imply that $v(\phi) = \lim_n v_n(\phi) = \lim_n V_n(\phi) = V(\phi)$ for every $\phi\in\mathbb{R}_+$. Next, let us show that $V = J_0 V$. Since $(v_n)_{n\ge0}$ and $(Jv_n)_{n\ge0}$ are decreasing, the bounded convergence theorem gives
$$V(\phi) = \lim_{n\to\infty} v_{n+1}(\phi) = \inf_n J_0 v_n(\phi) = \inf_{t\in[0,\infty]} \lim_{n\to\infty} Jv_n(t,\phi) = \inf_{t\in[0,\infty]} JV(t,\phi) = J_0 V(\phi).$$
If $U = J_0 U$ and $U \le v_0\equiv0$, then repeated applications of $J_0$ to both sides of the last inequality and the monotonicity of $J_0$ (see Lemma 3.3) imply $U \le V$.

The next lemma and its immediate corollary characterize the smallest deterministic optimal stopping times $r_n \equiv r_n^0$, $n\in\mathbb{N}$, of Proposition 3.5 in a way familiar from the general theory of optimal stopping: $r_n(\phi)$ is the first time the continuous path $t\mapsto x(t,\phi)$ enters the stopping region $\{x\in\mathbb{R}_+ : V_{n+1}(x) = 0\}$.

Lemma 3.7 Let $w:\mathbb{R}_+\to\mathbb{R}$ be a bounded function. For every $t\in\mathbb{R}_+$ and $\phi\in\mathbb{R}_+$,
$$J_t w(\phi) = Jw(t,\phi) + e^{-(\lambda+\lambda_0)t}\, J_0 w\big(x(t,\phi)\big). \tag{30}$$

Corollary 3.8 Let
$$r_n(\phi) \triangleq \inf\big\{s\in[0,\infty] : Jv_n(s,\phi) = J_0 v_n(\phi)\big\} \tag{31}$$
be the same as $r_n^\varepsilon(\phi)$ of Proposition 3.5 with $\varepsilon = 0$. Then
$$r_n(\phi) = \inf\big\{t > 0 : v_{n+1}\big(x(t,\phi)\big) = 0\big\} \qquad (\inf\emptyset \equiv \infty). \tag{32}$$

Remark 3.9 For every $t\in[0, r_n(\phi)]$, we have $J_t v_n(\phi) = J_0 v_n(\phi) = v_{n+1}(\phi)$. Substituting $w = v_n$ in (30) then gives the dynamic programming equation for the family $\{v_k\}_{k\in\mathbb{N}}$: for every $\phi\in\mathbb{R}_+$ and $n\in\mathbb{N}$,
$$v_{n+1}(\phi) = Jv_n(t,\phi) + e^{-(\lambda+\lambda_0)t}\, v_{n+1}\big(x(t,\phi)\big), \qquad t\in[0, r_n(\phi)]. \tag{33}$$

Remark 3.10 Since $V$ is bounded and $V = J_0 V$ by Proposition 3.6, Lemma 3.7 gives
$$J_t V(\phi) = JV(t,\phi) + e^{-(\lambda+\lambda_0)t}\, V\big(x(t,\phi)\big), \qquad t\in\mathbb{R}_+,$$
for every $\phi\in\mathbb{R}_+$. If we define
$$r(\phi) \triangleq \inf\big\{t\ge 0 : JV(t,\phi) = J_0 V(\phi)\big\}, \qquad \phi\in\mathbb{R}_+,$$
then the same arguments as in the proof of Corollary 3.8, with obvious changes, give
$$r(\phi) = \inf\big\{t > 0 : V\big(x(t,\phi)\big) = 0\big\}, \qquad \phi\in\mathbb{R}_+, \tag{34}$$
$$V(\phi) = JV(t,\phi) + e^{-(\lambda+\lambda_0)t}\, V\big(x(t,\phi)\big), \qquad t\in[0, r(\phi)]. \tag{35}$$

Let us define the $\mathbb{F}$-stopping times
$$U_\varepsilon \triangleq \inf\{t\ge 0 : V(\Phi_t) \ge -\varepsilon\}, \qquad \varepsilon\ge 0. \tag{36}$$
The next proposition shows that for the problem (16) the stopping time $U_0 = \inf\{t\ge0 : V(\Phi_t) = 0\}$ is optimal, and that the stopping times $U_\varepsilon$ in (36), $\varepsilon\ge 0$, are $\varepsilon$-optimal in the sense of (37).

Proposition 3.11 For every $\varepsilon\ge 0$, the stopping time $U_\varepsilon$ in (36) is an $\varepsilon$-optimal stopping time for the optimal stopping problem (16); i.e.,
$$\mathbb{E}^\phi_0\!\left[\int_0^{U_\varepsilon} e^{-\lambda s}\, g(\Phi_s)\, ds\right] \le V(\phi) + \varepsilon \qquad\text{for every } \phi\in\mathbb{R}_+. \tag{37}$$
The proof in Section A.4 makes use of the local martingales described by the next proposition, which will also be needed in Section 7, where we show that the value function $V$ is the unique solution of the variational inequalities in (18).

Proposition 3.12 The process
$$M_t \triangleq e^{-\lambda t}\, V(\Phi_t) + \int_0^t e^{-\lambda s}\, g(\Phi_s)\, ds, \qquad t\ge 0, \tag{38}$$
is a $(\mathbb{P}_0, \mathbb{F})$-local martingale. For every $n\in\mathbb{N}$, $\varepsilon\ge 0$, and $\phi\in\mathbb{R}_+$, we have $\mathbb{E}^\phi_0 M_0 = \mathbb{E}^\phi_0 M_{U_\varepsilon\wedge\sigma_n}$; i.e.,
$$V(\phi) = \mathbb{E}^\phi_0\!\left[ e^{-\lambda(U_\varepsilon\wedge\sigma_n)}\, V\big(\Phi_{U_\varepsilon\wedge\sigma_n}\big) + \int_0^{U_\varepsilon\wedge\sigma_n} e^{-\lambda s}\, g(\Phi_s)\, ds\right]. \tag{39}$$

4. Sample paths and bounds on the optimal alarm time. A brief study of the sample paths of the sufficient statistic $\Phi$ in (13) gives simple lower and upper bounds on the optimal alarm time $U_0$ of (36). In several special cases the lower bound becomes optimal. The upper bound, on the other hand, always has finite Bayes risk.

Recall from Section 2 that the sufficient statistic $\Phi$ follows the deterministic curves $t\mapsto x(t,\phi)$, $\phi\in\mathbb{R}_+$, of (12) when the observation process $X$ does not jump. At every jump of the process $X$, the motion of $\Phi$ restarts on a different curve. Between jumps, the process $\Phi$ reverts to the mean level $\phi_d$ if $\phi_d$ is positive, and grows unboundedly otherwise; see Figure 1. A jump at time $t$ of the process $\Phi$ is in the forward direction if $f(Y_{N_t})\,\lambda_1/\lambda_0 \ge 1$ and in the backward direction otherwise.

Since the running cost $g(\phi) = \phi - \lambda/c$ in (17) is negative on the interval $\phi\in[0, \lambda/c)$, the maximum $\tau\vee\underline\tau$ of any stopping rule $\tau$ and
$$\underline\tau \triangleq \inf\{t\ge 0 : \Phi_t \ge \lambda/c\} \tag{40}$$
gives a lower expected discounted total running cost than $\tau$ does:
$$\mathbb{E}^\phi_0\!\left[\int_0^{\tau\vee\underline\tau} e^{-\lambda t}\, g(\Phi_t)\,dt\right] = \mathbb{E}^\phi_0\!\left[\int_0^{\tau} e^{-\lambda t}\, g(\Phi_t)\,dt\right] + \mathbb{E}^\phi_0\!\left[1_{\{\underline\tau>\tau\}}\int_\tau^{\underline\tau} e^{-\lambda t}\, g(\Phi_t)\,dt\right] \le \mathbb{E}^\phi_0\!\left[\int_0^{\tau} e^{-\lambda t}\, g(\Phi_t)\,dt\right]$$
for every $\phi\in\mathbb{R}_+$. Therefore, the infimum in (16) can be taken over the stopping times $\{\tau\in\mathbb{F} : \tau\ge\underline\tau\}$ without any loss, and $\underline\tau$ in (40) is a lower bound on the optimal alarm time.

Proposition 4.1 Suppose that $f(y)\,\lambda_1/\lambda_0 \ge 1$ for every $y\in\mathbb{R}^d$. If $\phi_d < 0$ or $0 < \lambda/c \le \phi_d$ in (12), then the stopping rule $\underline\tau$ of (40) is optimal for the problem (16).

By Proposition 3.11, the stopping time $U_0 = \inf\{t\ge0 : V(\Phi_t) = 0\}$ is always optimal for the problem (16). Next we show that $U_0$ is bounded almost surely between $\underline\tau$ in (40) and
$$\overline\tau \triangleq \inf\{t\ge 0 : \Phi_t \ge \overline\xi\}, \tag{41}$$
where $\overline\xi \ge \lambda/c$ is a suitable explicit constant determined by the model parameters.

Proposition 4.2 We always have $U_0\in[\underline\tau, \overline\tau]$ almost surely, and
$$[\lambda/c, \infty) \supseteq \{\phi\in\mathbb{R}_+ : v_1(\phi) = 0\} \supseteq \{\phi\in\mathbb{R}_+ : V(\phi) = 0\} \supseteq [\overline\xi, \infty). \tag{42}$$

From (10), we find that the Bayes risk of the upper bound $\overline\tau$ in (41),
$$R_{\overline\tau}(\pi) = (1-\pi) + c(1-\pi)\, \mathbb{E}_0\!\left[\int_0^{\overline\tau} e^{-\lambda t}\Big(\Phi_t - \frac{\lambda}{c}\Big)\,dt\right] \le (1-\pi) + c(1-\pi)\,\frac1\lambda\Big(\overline\xi - \frac\lambda c\Big),$$
is finite. Since $\mathbb{E}\,\overline\tau \le \mathbb{E}(\overline\tau-\theta)^+ + \mathbb{E}\theta \le (1/c)\,R_{\overline\tau}(\pi) + 1/\lambda < \infty$, the stopping time $\overline\tau$ is finite $\mathbb{P}$-almost surely.
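Proposition 3.11 together with the bounds above says that the optimal alarm $U_0$ is a first crossing time of a constant level by $\Phi$ (the level $\lambda/c$ in the special cases of Proposition 4.1; more generally the level $\xi_\infty$ computed in Section 5). A hedged sketch of such a first-crossing detector over a piecewise-deterministic path, reusing x_flow and the assumed names from the earlier sketches:

```python
import math

def flow_hit_time(phi, threshold, lam, lam0, lam1):
    """Time for the deterministic flow x(., phi) of (12) to reach `threshold`
    (math.inf if it never does).  Assumes phi < threshold."""
    a = lam - lam1 + lam0
    if a == 0.0:
        return (threshold - phi) / lam
    phi_d = -lam / a
    ratio = (threshold - phi_d) / (phi - phi_d)
    s = math.log(ratio) / a if ratio > 0 else math.inf
    return s if s > 0 else math.inf

def first_crossing_alarm(jump_times, marks, f, phi0, threshold, lam, lam0, lam1):
    """First time the odds-ratio path (12)-(13) reaches `threshold` (math.inf if never)."""
    phi, last_t = phi0, 0.0
    for t, y in zip(jump_times, marks):
        if phi >= threshold:
            return last_t
        s = flow_hit_time(phi, threshold, lam, lam0, lam1)
        if last_t + s <= t:                               # crossed between jumps
            return last_t + s
        phi = x_flow(t - last_t, phi, lam, lam0, lam1)    # flow to the next jump
        phi *= (lam1 / lam0) * f(y)                       # jump update (13)
        last_t = t
    if phi >= threshold:
        return last_t
    return last_t + flow_hit_time(phi, threshold, lam, lam0, lam1)
```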

[Figure 1 about here: sample paths of the process $\Phi$ (plots of $t$ versus $\Phi_t(\omega)$); panels (a) $0<\lambda/c\le\phi_d$, (b) $0<\phi_d<\lambda/c$, (c) $\phi_d<0$.]

Figure 1: The sample paths of the process $\Phi$ in (13). If the quantity $\phi_d$ in (12) is positive, then it is the mean-reversion level for the process $\Phi$: between successive jumps, the process reverts to the level $\phi_d$, as in (a). If, however, $\phi_d < 0$, then the process increases unboundedly between jumps, as in (c). In general, the process $\Phi$ may jump in both directions in all of these cases (compare this with the sample paths of a similar statistic in the standard Poisson disorder problem; see Bayraktar, Dayanik, and Karatzas).

5. The solution. By Propositions 3.1 and 3.5, the value function $V$ of the optimal stopping problem (16) is approximated uniformly by the decreasing sequence of functions $\{v_n\}_{n\ge0}$ defined sequentially by $v_0\equiv 0$ and
$$v_{n+1}(\phi) = \inf_{t\in[0,\infty]} Jv_n(t,\phi) = \inf_{t\in[0,\infty]} \int_0^t e^{-(\lambda+\lambda_0)u}\,\big[g + \lambda_0\, Sv_n\big]\big(x(u,\phi)\big)\, du, \qquad n\ge 0, \tag{43}$$
where $S$ is the operator in (25). The sequence $\{v_n\}_{n\ge0}$ converges to $V$ pointwise at an exponential rate, and the explicit bound in (20) determines the number $n$ of iterations of (43) needed in order to achieve any desired accuracy: for any given $\varepsilon > 0$, we have
$$\frac1c\left(\frac{\lambda_0}{\lambda+\lambda_0}\right)^{\!n+1} < \varepsilon \quad\Longrightarrow\quad 0 \le v_{n+1}(\phi) - V(\phi) < \varepsilon \quad\text{for every } \phi\in\mathbb{R}_+. \tag{44}$$
For every integer $n+1$ as in (44), the stopping rule $S_{n+1} \equiv S_{n+1}^0$ of Proposition 3.5 is $\varepsilon$-optimal for the problem (16):
$$\mathbb{E}^\phi_0\!\left[\int_0^{S_{n+1}} e^{-\lambda t}\, g(\Phi_t)\, dt\right] - V(\phi) < \varepsilon \qquad\text{for every } \phi\in\mathbb{R}_+.$$
The stopping time $S_{n+1}$ is determined collectively by the jump times $\sigma_1,\ldots,\sigma_{n+1}$ of the observation process $X$ and by the smallest minimizers $r_n, r_{n-1},\ldots, r_0$ of the deterministic optimization problems in (43); see (28) and (31): we wait until the earlier of the first jump at $\sigma_1$ and the time $r_n(\Phi_0)$. If $r_n(\Phi_0)$ occurs first, then we stop; otherwise, we reset the clock and continue to wait until the earlier of the next jump at $\sigma_2-\sigma_1 = \sigma_1\circ\theta_{\sigma_1}$ and the time $r_{n-1}(\Phi_{\sigma_1})$. If $r_{n-1}(\Phi_{\sigma_1})$ occurs first, then we stop; otherwise, we reset the clock and continue to wait until the earlier of the next jump at $\sigma_3-\sigma_2 = \sigma_1\circ\theta_{\sigma_2}$ and the time $r_{n-2}(\Phi_{\sigma_2})$, and so on. We stop at the $(n+1)$st jump time $\sigma_{n+1}$ if we have not stopped yet.

The original definition of the time $r_n$, $n\ge0$, in (28) obscures its simple meaning. Let us introduce the stopping and continuation regions
$$\Gamma_n \triangleq \{\phi\in\mathbb{R}_+ : v_n(\phi) = 0\},\quad C_n \triangleq \mathbb{R}_+\setminus\Gamma_n,\ n\ge 1, \qquad\text{and}\qquad \Gamma \triangleq \{\phi\in\mathbb{R}_+ : v(\phi) = 0\},\quad C \triangleq \mathbb{R}_+\setminus\Gamma, \tag{45}$$
respectively. By Corollary 3.8, the deterministic time
$$r_n(\phi) = \inf\big\{t > 0 : x(t,\phi)\in\Gamma_{n+1}\big\}, \qquad n\ge 0, \tag{46}$$
is the first entrance time of the continuous, deterministic path $t\mapsto x(t,\phi)$ into the stopping region $\Gamma_{n+1}$. Clearly, a concrete characterization of the stopping regions $\Gamma_{n+1}$, $n\ge0$, will ease the calculation of the entrance times $r_n$, $n\ge0$, and of an $\varepsilon$-optimal alarm time $S_{n+1}$ as described above. Moreover, the function $v_{n+1}$ is already known on the set $\Gamma_{n+1}$ (it equals zero identically), so the location and shape of the region $C_{n+1} = \mathbb{R}_+\setminus\Gamma_{n+1}$ help a better implementation of (43). Since the sequence of nonpositive functions $\{v_n\}_{n\ge0}$ decreases to $v$, Proposition 4.2 implies that
$$[\lambda/c,\infty) \supseteq \Gamma_1 \supseteq \Gamma_2 \supseteq\cdots\supseteq \Gamma_{n+1}\supseteq\cdots\supseteq\Gamma \supseteq [\overline\xi,\infty), \qquad [0,\lambda/c) \subseteq C_1\subseteq C_2\subseteq\cdots\subseteq C_{n+1}\subseteq\cdots\subseteq C \subseteq [0,\overline\xi), \tag{47}$$
where $\overline\xi$ is the explicit threshold in (41) for the upper bound on the optimal alarm time $U_0$. Therefore, the deterministic problems in (43) need to be solved only for $\phi\in[0,\overline\xi)$. The smallest minimizer $r_n(\phi)$ in (46) of the problem (43) is less than or equal to $\overline r_n(\phi) \triangleq \inf\{t>0 : x(t,\phi)\ge\overline\xi\}$, and the infimum in (43) may be taken over the interval $t\in[0,\overline r_n(\phi)]$ only, without any loss. Let us define
$$\xi_n \triangleq \inf\{\phi\in\mathbb{R}_+ : v_n(\phi) = 0\},\ n\ge 1, \qquad\text{and}\qquad \xi_\infty \triangleq \inf\{\phi\in\mathbb{R}_+ : v(\phi) = 0\}. \tag{48}$$

Proposition 5.1 We have $\lambda/c \le \xi_1\le\xi_2\le\cdots\le\xi_n\le\cdots\le\xi_\infty\le\overline\xi$, and
$$\Gamma_n = [\xi_n,\infty),\ n\ge1, \qquad\text{and}\qquad \Gamma = [\xi_\infty,\infty). \tag{49}$$
Moreover, $\xi_n\uparrow\xi_\infty$ as $n\to\infty$. The functions $v_n$, $n\ge1$, and $v$ are strictly increasing on $C_n = [0,\xi_n)$, $n\ge1$, and on $C = [0,\xi_\infty)$, respectively.

Proof. By (47), we have $\lambda/c\le\xi_n\le\xi_\infty\le\overline\xi$ for every $n\ge1$, and the sequence $(\xi_n)_{n\ge1}$ is increasing. Since the nonpositive functions $v_n$, $n\ge1$, and $v$ are increasing and continuous by Corollary 3.4, the identities in (49) follow. Because the functions are also concave, they are strictly increasing on the corresponding continuation regions. Because $(\xi_n)_{n\ge1}$ is increasing, the limit $\tilde\xi\triangleq\lim_n\xi_n$ satisfies $\tilde\xi\ge\xi_k$, i.e., $\tilde\xi\in\Gamma_k$ and $v_k(\tilde\xi)=0$, for every $k\ge1$. Therefore, $v(\tilde\xi)=\lim_k v_k(\tilde\xi)=0$ and $\tilde\xi\in\Gamma$, i.e., $\tilde\xi\ge\xi_\infty$. Hence $\xi_\infty = \tilde\xi = \lim_n\xi_n$.

The structure of the problems in (43) helps to lay out a concrete iterative solution algorithm; see Figure 2. Suppose that $v_n$ has already been calculated for some $n\ge 0$, and $v_{n+1}$ is to be found next. The infimum in (43) is not attained before the curve $t\mapsto x(t,\phi)$ leaves the region
$$A_n \triangleq \big\{\phi\in\mathbb{R}_+ : [g + \lambda_0\, Sv_n](\phi) < 0\big\} = [0, \alpha_n), \qquad \alpha_n \triangleq \inf\big\{x\in\mathbb{R}_+ : [g+\lambda_0\, Sv_n](x) \ge 0\big\}, \qquad n\ge 0, \tag{50}$$
whose boundary point $\alpha_n$ can be calculated immediately, since $v_n$ is known. In (50), the identity $A_n = [0,\alpha_n)$ follows from the fact that the mapping $x\mapsto [g+\lambda_0\, Sv_n](x):\mathbb{R}_+\to\mathbb{R}$ is strictly increasing and continuous, with limits $-\lambda/c + \lambda_0\, v_n(0) < 0$ and $+\infty$ as $x$ goes to $0$ and $+\infty$, respectively. Now the unknown boundary $\xi_{n+1}$ of the continuation region $C_{n+1} = [0,\xi_{n+1})$ and the function $v_{n+1}(\phi)$ for $\phi\in C_{n+1}$ can be found from the relation between the known $\alpha_n$ in (50) and $\phi_d$ in (12):

Step 0. Set $n = 0$ and $v_0\equiv 0$. Calculate $\overline\xi$ of (41).
Step 1. Find $\alpha_n$ of (50) by a bisection search in $[0,\overline\xi]$. If $\phi_d\notin(0,\alpha_n]$, then set $\xi_{n+1}$ equal to $\alpha_n$, and calculate on $\mathbb{R}_+$ the function
$$v_{n+1}(\phi) = \begin{cases} Jv_n\big(r_n(\phi), \phi\big), & \phi<\xi_{n+1},\\ 0, & \phi\ge\xi_{n+1},\end{cases} \qquad\text{where}\quad r_n(\phi) = \begin{cases} \dfrac1a\,\ln\dfrac{\xi_{n+1}-\phi_d}{\phi-\phi_d}, & a\ne 0,\\[1ex] \dfrac{\xi_{n+1}-\phi}{\lambda}, & a = 0. \end{cases} \tag{51}$$
If $\phi_d\in(0,\alpha_n]$, then set $\xi_{n+1}$ equal to the unique root of the strictly increasing mapping $\phi\mapsto Jv_n(\infty,\phi)$ of (52). The root can be found by another bisection search in $[\alpha_n,\overline\xi]$. Calculate on $\mathbb{R}_+$ the function
$$v_{n+1}(\phi) = \begin{cases} Jv_n(\infty,\phi), & \phi<\xi_{n+1},\\ 0, & \phi\ge\xi_{n+1}.\end{cases}$$
Step 2. For the $(n+1)$st problem in (19), set the stopping region $\Gamma_{n+1}$ equal to $[\xi_{n+1},\infty)$ and the value function $V_{n+1}$ equal to $v_{n+1}$. Increase $n$ by one and go to Step 1.

Figure 2: The solution of (16) by iterative approximations. In Step 2, the relation (44) may be used as a stopping rule for the iterations, in order to obtain arbitrarily close approximations $v_{n+1}$ of the value function $V$ of (16).

Case I: $\phi_d\notin(0,\alpha_n]$. The curve $t\mapsto x(t,\phi)$, $\phi\in\mathbb{R}_+$, leaves the interval $[0,\alpha_n)$ and never comes back; see (12) and Figure 1 on page 9. Therefore, $C_{n+1} = A_n$ (i.e., $\xi_{n+1} = \alpha_n$), and
$$v_{n+1}(\phi) = Jv_n\big(r_n(\phi),\phi\big) = \int_0^{r_n(\phi)} e^{-(\lambda+\lambda_0)u}\,\big[g+\lambda_0\, Sv_n\big]\big(x(u,\phi)\big)\, du,$$
where $r_n(\phi)$ in (46) becomes the first exit time of $t\mapsto x(t,\phi)$ from $A_n = [0,\alpha_n)$.

Case II: $\phi_d\in(0,\alpha_n]$. As $t\to+\infty$, we have $x(t,\phi)\to\phi_d$ monotonically. Therefore, the infimum in (43) is attained at either $t=0$ or $t=+\infty$. The continuous function
$$\phi\mapsto Jv_n(+\infty,\phi) = \int_0^{+\infty} e^{-(\lambda+\lambda_0)t}\,\big[g+\lambda_0\, Sv_n\big]\big(x(t,\phi)\big)\, dt : \mathbb{R}_+\to\mathbb{R} \tag{52}$$
is strictly increasing, and $Jv_n(+\infty,\alpha_n) < 0 < \lim_{\phi\to\infty} Jv_n(+\infty,\phi) = +\infty$. Therefore, the mapping $\phi\mapsto Jv_n(+\infty,\phi)$ has a unique root, and this root is $\xi_{n+1} > \alpha_n$, since $\min\{0, Jv_n(+\infty,\phi)\} = v_{n+1}(\phi)$ is negative for $\phi\in[0,\xi_{n+1})$ and zero for $\phi\in[\xi_{n+1},\infty)$.

The algorithm is summarized in Figure 2. It is implemented to solve several numerical examples in Section 6. We close this section with a summary of the discussion above; the following corollary will be needed later when we describe how smooth the value function $V$ is. Part (i) below was proved in the discussion of Cases I and II above; the proof of part (ii) is very similar.

Corollary 5.2 Recall that the continuation regions $\{C_n\}_{n\ge1}$ and $C$, the sets $\{A_n\}_{n\ge0}$, and the numbers $\{\xi_n\}_{n\ge1}$, $\xi_\infty$, $\{\alpha_n\}_{n\ge0}$ are defined as in (45), (48), and (50). Analogously, let us introduce
$$\alpha \triangleq \inf\big\{x\in\mathbb{R}_+ : [g+\lambda_0\, SV](x) \ge 0\big\}, \qquad A \triangleq \big\{\phi\in\mathbb{R}_+ : [g+\lambda_0\, SV](\phi) < 0\big\} = [0,\alpha). \tag{53}$$
The identity $A = [0,\alpha)$ follows from the fact that the mapping $x\mapsto[g+\lambda_0\, SV](x):\mathbb{R}_+\to\mathbb{R}$ is strictly increasing and continuous, with limits $-\lambda/c + \lambda_0\, V(0) < 0$ and $+\infty$ as $x$ goes to $0$ and $+\infty$, respectively. Moreover, the following hold:
(i) If $\phi_d\notin C_{n+1} = [0,\xi_{n+1})$, then $C_{n+1} = A_n = [0,\alpha_n)$ and $[g+\lambda_0\, Sv_n](\xi_{n+1}) = [g+\lambda_0\, Sv_n](\alpha_n) = 0$. If $\phi_d\in C_{n+1} = [0,\xi_{n+1})$, then $A_n\subseteq C_{n+1}$ and
$$v_{n+1}(\phi) = Jv_n(+\infty,\phi) = \int_0^{+\infty} e^{-(\lambda+\lambda_0)t}\,\big[g+\lambda_0\, Sv_n\big]\big(x(t,\phi)\big)\, dt, \qquad \phi\in C_{n+1}.$$
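A minimal grid-based sketch of the iteration (43) is given below. It is cruder than the two-case scheme of Figure 2 (it simply minimizes over a time grid instead of exploiting Cases I and II); the grid sizes, the finite horizon standing in for $t = +\infty$, the Monte Carlo representation (y_nodes, y_weights) of $\nu_0$, and the argument xi_bar (an upper bound on the optimal threshold, as in (41)) are assumptions made for illustration.

```python
import numpy as np

def value_iteration(f, y_nodes, y_weights, lam, lam0, lam1, c, xi_bar,
                    n_phi=400, n_u=600, horizon=60.0, tol=1e-6, max_iter=200):
    """Iterate v_{n+1}(phi) = inf_t J v_n(t, phi) of (43) on a grid over [0, xi_bar]."""
    a = lam - lam1 + lam0
    phis = np.linspace(0.0, xi_bar, n_phi)          # v vanishes identically above xi_bar
    u = np.linspace(0.0, horizon, n_u)
    du = u[1] - u[0]
    disc = np.exp(-(lam + lam0) * u)
    if a == 0.0:                                    # deterministic flow x(u, phi) of (12)
        flow = phis[:, None] + lam * u[None, :]
    else:
        phi_d = -lam / a
        flow = phi_d + np.exp(a * u)[None, :] * (phis[:, None] - phi_d)
    ratios = (lam1 / lam0) * np.array([f(y) for y in y_nodes])    # jump factors of (13)
    wts = np.asarray(y_weights, dtype=float)

    v = np.zeros(n_phi)                             # v_0 = 0
    for _ in range(max_iter):
        Sv = np.zeros_like(flow)                    # S v_n along the flow, as in (25)
        for r, wgt in zip(ratios, wts):
            args = np.clip(r * flow, 0.0, xi_bar).ravel()
            Sv += wgt * np.interp(args, phis, v).reshape(flow.shape)
        integrand = disc[None, :] * ((flow - lam / c) + lam0 * Sv)
        steps = 0.5 * (integrand[:, 1:] + integrand[:, :-1]) * du
        J = np.concatenate([np.zeros((n_phi, 1)), np.cumsum(steps, axis=1)], axis=1)
        v_new = np.minimum(0.0, J.min(axis=1))      # infimum over the time grid
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    in_stop = v >= -1e-9                            # estimated stopping region on the grid
    xi = phis[np.argmax(in_stop)] if in_stop.any() else xi_bar
    return phis, v, xi
```

For the discrete example of Section 6.1, for instance, y_nodes would be [1, 2, 3, 4, 5], y_weights the $\nu_0$-probabilities, and f(y) the ratio $\nu_1(\{y\})/\nu_0(\{y\})$; the returned xi approximates the optimal threshold $\xi_\infty$.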

(ii) If $\phi_d\notin C = [0,\xi_\infty)$, then $C = A = [0,\alpha)$ and $[g+\lambda_0\, SV](\xi_\infty) = [g+\lambda_0\, SV](\alpha) = 0$. If $\phi_d\in C = [0,\xi_\infty)$, then $A\subseteq C$ and
$$V(\phi) = JV(+\infty,\phi) = \int_0^{+\infty} e^{-(\lambda+\lambda_0)t}\,\big[g+\lambda_0\, SV\big]\big(x(t,\phi)\big)\, dt, \qquad \phi\in C.$$

6. Examples and extensions. In Section 6.1, we provide numerical examples with discrete and absolutely continuous jump distributions. The methods of the previous sections apply to quickest detection problems with the other standard Bayes risk measures; a few necessary minor changes are explained, and numerical examples are given, in Section 6.2. Finally, we revisit in Section 6.4 Gapeev's very special compound Poisson disorder problem.

6.1 Numerical examples. In the first example, the jump sizes are discrete. The jump distributions $\nu_0$ and $\nu_1$ before and after the disorder are supported on the set $\{1,2,3,4,5\}$; see the upper left panel in Figure 3 on page 13. The jump distribution is right-skewed before the disorder (histogram with heavy outline in the background) and left-skewed after the disorder (histogram with filled bars in the foreground); the mode of the jump distribution increases after the disorder. After having set the parameters $c$ (cost per unit delay time), $\lambda$ (disorder arrival rate), and $\lambda_0$ (arrival rate of observations before the disorder), the quickest-detection problem has been solved for three different arrival rates $\lambda_1$ of observations after the disorder; see the upper panels (b)-(d) in Figure 3: (b) $\lambda_1 = \lambda_0/2$ (observations arrive at a lower rate after the disorder), (c) $\lambda_1 = \lambda_0$ (the arrival rate does not change), and (d) $\lambda_1 = 2\lambda_0$ (observations arrive at a higher rate after the disorder). Each of the panels (b)-(d) displays the successive approximations $V_1, V_2, \ldots$ of the value function $V$ of (16). The successive approximations $V_1, V_2, \ldots$ are the same as the functions in (19) and are calculated iteratively by the algorithm in Figure 2. The algorithm is terminated, for each of (b), (c), and (d), when the largest difference between the two most recent approximations becomes negligible, and the last approximation displayed in each panel serves as the approximation of $V$. In (b), the proximity of the last approximation $V_n$ to $V$ implies that the disorder time will be spotted essentially optimally by the arrival of the $n$th observation, with a negligible sacrifice of the optimal Bayes risk; see also (20). Similar conclusions hold in (c) and (d).

Given that everything else is the same, we expect the minimum Bayes risk to be smaller when the pre- and post-disorder arrival rates of observations are different than when they are the same. Intuitively, if the arrival rates before and after the disorder differ, then the interarrival times between observations carry useful information for the quickest detection of the disorder time. In light of the relation (15) between the Bayes risk $U$ and the value function $V$, this intuitive remark is confirmed empirically by a comparison of case (c) with cases (b) and (d): the value functions in cases (b) and (d), where $\lambda_1\ne\lambda_0$, are smaller than that in case (c), where $\lambda_1=\lambda_0$. The difference is more striking between (d) and (c) than between (b) and (c). This is perhaps because case (b), unlike case (d), is deprived of useful additional information about the jump sizes because of the slower arrival rate of observations after the disorder. Finally, the rightmost vertical bar at the edge of each panel marks the critical threshold $\xi_\infty$ in (49), which determines the optimal alarm time: declare an alarm as soon as the odds-ratio process $\Phi$ in (13) leaves the interval $[0,\xi_\infty)$.
In the second example, the jump-size distributions before and after the disorder are absolutely continuous. Before the disorder, the jump sizes are exponentially distributed with some rate $\mu$. After the disorder, they have a gamma distribution whose second parameter $\mu$ is the same as the rate of the exponential distribution. The quickest detection problem is solved for three increasing values of the shape parameter; in Figure 3, see panel (e) for a comparison of the probability density functions and panels (f)-(h) for the successive approximations $V_1, V_2, \ldots$ in each of the three cases. In all of these cases, the arrival rate of observations before and after the disorder is kept the same (i.e., $\lambda_1 = \lambda_0$); thus, only the observed jump sizes carry useful information for the quick detection of the disorder time.

Intuitively, if the jump distributions before and after the disorder concentrate on more distinct (or disjoint) subsets, then the disorder can be spotted more accurately, and the Bayes risk becomes smaller. The numerical results (e)-(h) confirm this expectation: as the shape parameter increases, the post-disorder jump distribution shifts to the right, away from the pre-disorder jump distribution; at the same time, the value function $V$, and hence (by (15)) the Bayes risk $U$, becomes uniformly smaller.
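For the second example, the likelihood ratio (8) is available in closed form. A small sketch follows, assuming the rate parametrization $\mu$ for both the pre-disorder exponential and the post-disorder gamma density; the shape k and all numerical values are placeholders.

```python
from math import gamma as gamma_fn

def likelihood_ratio_exp_vs_gamma(mu, k):
    """f = d(nu1)/d(nu0) for nu0 = Exponential(rate mu), nu1 = Gamma(shape k, rate mu);
    the exponential factors cancel, leaving f(y) = (mu*y)**(k-1) / Gamma(k)."""
    return lambda y: (mu * y) ** (k - 1) / gamma_fn(k)

# Plugging the ratio into the grid iteration sketched in Section 5, with nu0 represented
# by Monte Carlo samples (all values below are placeholders, not the paper's data):
# import numpy as np
# rng = np.random.default_rng(1)
# y_nodes = rng.exponential(1.0 / 2.0, size=2000)            # mu = 2
# y_weights = np.full(y_nodes.size, 1.0 / y_nodes.size)
# f = likelihood_ratio_exp_vs_gamma(mu=2.0, k=3.0)
# phis, v, xi = value_iteration(f, y_nodes, y_weights, lam=0.1, lam0=1.0, lam1=1.0,
#                               c=0.2, xi_bar=5.0)
```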

[Figure 3 about here. Panels: (a) discrete jump distributions; (b) $\lambda_1/\lambda_0 = 1/2$; (c) $\lambda_1/\lambda_0 = 1$; (d) $\lambda_1/\lambda_0 = 2$; (e) continuous jump distributions, Exponential($\mu$) versus Gamma($k$,$\mu$) with $\mu = 2$; (f)-(h) Gamma($k$,$\mu$) post-disorder distributions with increasing shape parameter $k$, $\lambda_1 = \lambda_0$.]

Figure 3: The solutions of the compound Poisson disorder problems with the Bayes risk in (2) ($c = 0.2$, $\lambda = 0.1$). Top row: the jump distributions before and after the disorder are discrete. In (a), their probability mass functions are sketched (shaded is the post-disorder probability mass function). The number of iterations ("jumps") and the successive approximations $v_n$ are reported when the ratio $\lambda_1/\lambda_0$ equals (b) 1/2, (c) 1, and (d) 2. Bottom row: before the disorder, the jumps are exponentially distributed with rate $\mu = 2$; after the disorder, the jumps have Gamma($k$,$\mu$) distributions with increasing shape parameters $k$ in panels (f)-(h); see (e) for sketches of their probability density functions. In all of these cases, $\lambda_1 = \lambda_0$. The optimal thresholds are indicated by the vertical bars at the upper and lower edges of the panels; see also Figure 4.

6.2 Standard Poisson disorder problems. The Bayes risk of (2) is the second of the four standard Bayes risks in (3). The risk measures in (3) are called standard by Bayraktar, Dayanik, and Karatzas, following Davis, since they have essentially the same representation
$$R_\tau(\pi\,|\,\alpha, k, \gamma, \beta) \triangleq \gamma(\pi) + \beta(\pi)\, \mathbb{E}_0\!\left[\int_0^\tau e^{-\lambda t}\big(\Phi^\alpha_t - k\big)\, dt\right], \qquad \pi\in[0,1), \tag{54}$$
for some known constants $\alpha\ge 0$, $k>0$ and functions $\gamma$, $\beta$ from $[0,1)$ into $\mathbb{R}_+$, where the generalized odds-ratio process is defined by
$$\Phi^\alpha_t \triangleq \frac{\mathbb{E}\big[e^{\alpha(t-\theta)}\, 1_{\{\theta\le t\}}\,\big|\,\mathcal{F}_t\big]}{\mathbb{P}\{\theta>t\mid\mathcal{F}_t\}}, \qquad t\ge 0,\ \alpha\ge 0. \tag{55}$$

[Figure 4 about here: panels (b)-(d) and (f)-(h), matching the cases of Figure 3.]

Figure 4: The critical thresholds $\xi_n$, $n = 1, 2, \ldots$, for the compound Poisson disorder problems considered in Figure 3.

When $\alpha = 0$, the process $\Phi^\alpha$ of (55) becomes the same as the odds-ratio process $\Phi$ of (11). If we redefine the parameter $a$ in (12) as $a \triangleq \lambda + \alpha - \lambda_1 + \lambda_0$, then for every $\alpha\ge 0$ the process $\Phi^\alpha = \{\Phi^\alpha_t;\, t\ge 0\}$ has the same dynamics as in (13):
$$\Phi^\alpha_t = x\big(t-\sigma_n,\, \Phi^\alpha_{\sigma_n}\big), \quad t\in[\sigma_n,\sigma_{n+1}), \qquad \Phi^\alpha_{\sigma_n} = \frac{\lambda_1}{\lambda_0}\, f(Y_n)\, \Phi^\alpha_{\sigma_n-}, \quad n\ge 1.$$
See Bayraktar, Dayanik, and Karatzas (Proposition 2.1) for the proof of the following result.

Proposition 6.1 For every $\pi\in[0,1)$ and stopping time $\tau\in\mathbb{F}$, we have
$$R^i_\tau(\pi) = R_\tau(\pi\,|\,\alpha_i, k_i, \gamma_i, \beta_i) \qquad\text{for every } i = 1, 2, 3, 4, \tag{56}$$
where $\alpha_1 = \alpha_2 = \alpha_3 = 0$, $\alpha_4 = \alpha$; $k_1 = (\lambda/c)\,e^{-\lambda\varepsilon}$, $k_2 = \lambda/c$, $k_3 = 1/c$, $k_4 = \lambda/(c\alpha)$; and
$$\gamma_1(\pi) = (1-\pi)\,e^{-\lambda\varepsilon},\quad \gamma_2(\pi) = 1-\pi,\quad \gamma_3(\pi) = \frac{1-\pi}{\lambda},\quad \gamma_4(\pi) = 1-\pi,$$
$$\beta_1(\pi) = c(1-\pi),\quad \beta_2(\pi) = c(1-\pi),\quad \beta_3(\pi) = c(1-\pi),\quad \beta_4(\pi) = c\alpha(1-\pi).$$

For $i = 2$, the identity in (54), (56) is the same as the representation (10), which was the key to the solution. Therefore, the solution of the compound Poisson disorder problem with any standard Bayes risk in (3) remains the same after a few obvious changes. The minimum Bayes risk $U(\pi) = \inf_{\tau\in\mathbb{F}} R_\tau(\pi\,|\,\alpha,k,\gamma,\beta)$, $\pi\in[0,1)$, is given by
$$U(\pi) = \gamma(\pi) + \beta(\pi)\, V\Big(\frac{\pi}{1-\pi}\Big), \qquad \pi\in[0,1),$$

[Figure 5 about here: two rows of panels for $\lambda_1/\lambda_0 = 2$ and $\lambda_1/\lambda_0 = 1/2$; columns (a) $R^2$, (b) $R^1$, (c) $R^3$, (d) $R^4$.]

Figure 5: As in Bayraktar, Dayanik, and Karatzas, we take $c = 0.2$, $\lambda = 0.1$, and $\nu_0\equiv\nu_1\triangleq\delta_{\{1\}}$. For each case, the rate $\lambda_1$ is determined according to the ratio $\lambda_1/\lambda_0$ given at the beginning of the same row. In every column, the disorder problem is solved for one of the four penalties: linear, linear with $\varepsilon$-tolerance, expected miss, and exponential. The number of iterations ("jumps") before convergence and the successive approximations $v_n$ of the value function $V$ are displayed for the eight cases. In every case, the optimal threshold $\xi_n$ for each subproblem $v_n$ is indicated by a vertical bar on both the top and bottom edges of the panels; see also Figure 6.

in terms of the value function
$$V(\phi) \triangleq \inf_{\tau\in\mathbb{F}} \mathbb{E}^\phi_0\!\left[\int_0^\tau e^{-\lambda t}\, g\big(\Phi^\alpha_t\big)\, dt\right], \qquad \phi\in\mathbb{R}_+,$$
of a discounted optimal stopping problem with running cost $g(\phi) \triangleq \phi - k$, $\phi\in\mathbb{R}_+$, and discount rate $\lambda>0$ for the piecewise-deterministic Markov process $\Phi^\alpha$ of (55). The successive approximations $\{V_n\}_{n\ge1}$ of the value function $V$, defined as in (19), are uniformly decreasing; and since $g\ge -k$, we have
$$-\frac{k}{\lambda}\left(\frac{\lambda_0}{\lambda+\lambda_0}\right)^{\!n} \le V(\phi) - V_n(\phi) \le 0.$$
The results of Sections 3-5 remain valid in this general case.

Figure 5 illustrates the solutions of standard Poisson disorder problems for each of the four standard Bayes risk measures in (3). For comparison, the parameters are chosen to be the same as in Bayraktar, Dayanik, and Karatzas, whose methods are unable to detect a change in the jump-size distribution and can therefore use only the count data on the number of arrivals to detect the disorder. On the other hand, the method of Sections 3-5 can be told to ignore the jump-size information completely and to use the number of arrivals only, by setting the density function $f$ in (8) identically equal to one (more precisely, the jump distributions $\nu_0$ and $\nu_1$ are replaced with the Dirac measure $\delta_{\{1\}}$ at one on $\mathbb{R}_+$), so that the process $X$ becomes the same as the counting process $N$ in (1). In Figure 5, the rightmost vertical bars at the edges of the panels mark the critical thresholds of the quickest alarm rules and agree with those reported by Bayraktar, Dayanik, and Karatzas.

[Figure 6 about here: panels (a) $R^2$, (b) $R^1$, (c) $R^3$, (d) $R^4$ for the two ratios $\lambda_1/\lambda_0$ considered in Figure 5.]

Figure 6: The critical thresholds $\xi_n$, $n = 1, 2, \ldots$, for the standard Poisson disorder problems considered in Figure 5.

6.3 Reducing the Bayes risk by observing marks in addition to arrival times. Suppose that in the examples (b)-(d) of Figure 3 the observations of the marks are unavailable, and one has to use only the data on the arrival times in order to detect the disorder time. How do the optimal Bayes risks and the optimal strategies differ? For different values of the ratio $\lambda_1/\lambda_0$, the value function of (16) is calculated in the presence and in the absence of the mark data and displayed in the first row of Figure 7. In the absence of the mark data, the compound Poisson disorder problem reduces to the standard Poisson disorder problem, and the solutions of the latter are recalled from Figure 5(a) for $\lambda_1/\lambda_0 = 1/2$ and $2$. If $\lambda_1/\lambda_0 = 1$ and the mark data are absent, then
(i) the sufficient statistic $\Phi$ in (13) becomes the increasing deterministic process
$$\Phi_t = x(t, \Phi_0) = -1 + e^{\lambda t}(\Phi_0 + 1), \qquad t\ge 0 \qquad (\lambda_1 = \lambda_0,\ f\equiv 1), \tag{57}$$
(ii) the optimal thresholds in (48)-(49) become $\xi_1 = \xi_2 = \cdots = \xi_\infty = \lambda/c$, and
(iii) the optimal alarm time $t(\Phi_0) \triangleq \inf\{t\ge 0 : \Phi_t \ge \lambda/c\}$ is also deterministic,
$$t(\Phi_0) = \left(\frac1\lambda\,\ln\frac{1+\lambda/c}{1+\Phi_0}\right)^{\!+} \qquad\text{and}\qquad V(\phi) = \frac{1+\phi}{\lambda}\,\ln\frac{1+\lambda/c}{1+\phi} + \frac{c\phi-\lambda}{c\lambda}, \qquad \phi\in[0,\lambda/c),$$
with $V(\phi) = 0$ for $\phi\ge\lambda/c$. The latter expression is used to draw, in Figure 7(b), the graph of the value function $V$ of (16) corresponding to the case without mark observations.

The first row of Figure 7 shows that the reduction in the Bayes risk obtained by using the observations of the marks in addition to those of the arrival times can be significant. Moreover, this reduction tends to grow as the number of arrivals, and hence the additional information carried by the accompanying mark data, increases with increasing rate $\lambda_1$ for fixed $\lambda_0$. Finally, observe from (57) that the arrival times carry no information about the disorder time if the arrival rate is not expected to change (i.e., $\lambda_1 = \lambda_0$), in which case the observations of the marks become all the more crucial for early detection of the disorder and for a lower Bayes risk; see Figure 7(b).

Since every stopping time of the arrival process $N = \{N_t;\, t\ge0\}$ in (1) is also a stopping time of $X = \{X_t;\, t\ge0\}$, the value function $V$ of (16) in the presence of mark observations is always at least as small as the same function in the absence of mark observations. Therefore, the thresholds $\{\xi_n;\, n=1,2,\ldots\}$ and $\xi_\infty$ in (48) are always at least as large in the presence of mark observations as those in the absence of mark observations. This fact is confirmed by the illustrations in the second row of Figure 7, where the thresholds $\{\xi_n;\, n=1,2,\ldots\}$ are displayed for each case up to the termination of the algorithm in Figure 2. Note that this fact does not imply that an optimal alarm in the presence of mark observations is always declared earlier than that in the absence of mark observations: not only the critical thresholds $\xi_\infty$, but also the dynamics of the sufficient statistic $\Phi$ in (12)-(13), are different in the presence (i.e., nontrivial $f$) and in the absence (i.e., $f\equiv1$) of the mark observations. Therefore, the relation between the optimal alarm times is not obvious.
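For completeness, the closed form of $V$ in (iii) follows by direct integration of (10) along the deterministic path (57); a short verification, under the same no-mark, $\lambda_1=\lambda_0$ assumptions:
$$V(\phi) = \int_0^{t(\phi)} e^{-\lambda t}\Big(\Phi_t-\frac{\lambda}{c}\Big)dt = \int_0^{t(\phi)}\Big[(1+\phi)-\Big(1+\frac{\lambda}{c}\Big)e^{-\lambda t}\Big]dt = (1+\phi)\,t(\phi)+\Big(1+\frac{\lambda}{c}\Big)\frac{e^{-\lambda t(\phi)}-1}{\lambda},$$
and substituting $e^{-\lambda t(\phi)} = (1+\phi)/(1+\lambda/c)$ from (iii) gives $V(\phi) = \frac{1+\phi}{\lambda}\ln\frac{1+\lambda/c}{1+\phi} + \frac{c\phi-\lambda}{c\lambda}$ on $[0,\lambda/c)$.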

[Figure 7 about here: panels (b) $\lambda_1/\lambda_0 = 1/2$, (c) $\lambda_1/\lambda_0 = 1$, (d) $\lambda_1/\lambda_0 = 2$, each shown without and with mark data.]

Figure 7: The value function $V$ of (16) (first row) and the thresholds $\{\xi_n;\, n=1,2,\ldots\}$ of (48) up to the termination of the algorithm in Figure 2 (second row), displayed in the presence and in the absence of mark observations. The data are the same as those of Figure 3(b), (c), and (d): $c = 0.2$, $\lambda = 0.1$, and the discrete mark distributions $\nu_0$, $\nu_1$ are as in Section 6.1.

6.4 Compound Poisson disorder problem with exponential jumps. Gapeev recently solved in full a very special compound Poisson disorder problem: before and after the disorder, the jump sizes are exponentially distributed, and their common expected values coincide with the arrival rates of the jumps in the corresponding regimes. Namely, the jump-size distributions are as in (4), and the Radon-Nikodym derivative in (8) becomes
$$f(y) = \frac{d\nu_1}{d\nu_0}(y) = \frac{\lambda_0}{\lambda_1}\, \exp\Big\{\Big(\frac{1}{\lambda_0}-\frac{1}{\lambda_1}\Big)y\Big\}, \qquad y\in\mathbb{R}_+. \tag{58}$$
Below, Gapeev's conclusions (his Theorem 4) are obtained by using the general methods of this paper.

If $\lambda_0 < \lambda_1$ and $a = \lambda - \lambda_1 + \lambda_0 \ge -c$, then $f(y)\,\lambda_1/\lambda_0 \ge 1$ for every $y$ and either $\phi_d < 0$ or $0 < \lambda/c \le \phi_d$. Therefore, Proposition 4.1 applies, and the stopping time $\underline\tau$ in (40) is optimal. Gapeev works with the posterior probability process
$$\Pi_t \triangleq \mathbb{P}\{\theta\le t\mid\mathcal{F}_t\} = \frac{\Phi_t}{1+\Phi_t}, \qquad t\ge 0,$$

and the optimal stopping rule $\underline\tau$ can be rewritten as
$$\underline\tau = \inf\Big\{t\ge 0 : \Pi_t \ge \frac{\lambda}{\lambda+c}\Big\}.$$
If either ($\lambda_0 < \lambda_1$ and $a < -c$) or $\lambda_1 < \lambda_0$, then the stopping rule $U_0 = \inf\{t\ge0 : \Phi_t\ge\xi_\infty\} = \inf\{t\ge0 : \Pi_t\ge\xi_\infty/(1+\xi_\infty)\}$ of (36) and (48)-(49) is optimal by Propositions 3.11 and 5.1. If $\lambda_1 < \lambda_0$, then $\phi_d < 0$, the value function $V$ in (16) is continuously differentiable on $\mathbb{R}_+$ by Lemma 7.1 below, and $V'(\xi_\infty) = 0$.

7. Differentiability and variational inequalities. In this final section, the smoothness of the value function $V$ in (16) is studied. The function $V$ is shown to be piecewise continuously differentiable and to be the unique bounded solution of the variational inequalities in (18); see Lemma 7.1 and Proposition 7.2 below.

7.1 Differentiability of the value function. Since $V\equiv0$ on the stopping region $\Gamma = [\xi_\infty,\infty)$ by (45), (48), (49), it is obviously continuously differentiable on $(\xi_\infty,\infty)$. Its smoothness on $[0,\xi_\infty)$ is investigated below, separately in two cases, because of the different behavior of the functions $t\mapsto x(t,\phi)$, $\phi\in\mathbb{R}_+$, of (12) for $\phi_d\notin(0,\xi_\infty)$ and $\phi_d\in(0,\xi_\infty)$. We summarize our conclusions in Lemma 7.1. In both cases, it will be very useful to recall from Remark 3.10 and (34), (35), (45), (48), (49) that the value function $V$ satisfies a form of the dynamic programming equation; namely,
$$V(\phi) = JV(t,\phi) + e^{-(\lambda+\lambda_0)t}\, V\big(x(t,\phi)\big), \qquad t\in[0, r(\phi)], \tag{59}$$
$$r(\phi) = \inf\{t > 0 : x(t,\phi)\ge\xi_\infty\}, \qquad \phi\in\mathbb{R}_+. \tag{60}$$

Case I: $\phi_d\notin(0,\xi_\infty)$. Let us fix some $\phi\in[0,\xi_\infty)$ and define, for every $0 < h < \xi_\infty-\phi$,
$$T(h,\phi) \triangleq \inf\{t\ge 0 : x(t,\phi)\ge\phi+h\} = \begin{cases} \dfrac1a\,\ln\dfrac{\phi+h-\phi_d}{\phi-\phi_d}, & a\ne 0,\\[1ex] h/\lambda, & a = 0.\end{cases}$$
The second equality follows from (12). Because $T(h,\phi)\le r(\phi)$, replacing $t$ with $T(h,\phi)$ in (59) gives
$$V(\phi) = \int_0^{T(h,\phi)} e^{-(\lambda+\lambda_0)u}\,\big[g+\lambda_0\, SV\big]\big(x(u,\phi)\big)\, du + e^{-(\lambda+\lambda_0)T(h,\phi)}\, V(\phi+h). \tag{61}$$
Subtracting $V(\phi+h)$ from both sides and dividing by $h$ gives
$$\frac{V(\phi+h)-V(\phi)}{h} = -\frac1h\int_0^{T(h,\phi)} e^{-(\lambda+\lambda_0)u}\,\big[g+\lambda_0\, SV\big]\big(x(u,\phi)\big)\, du + \frac{1-e^{-(\lambda+\lambda_0)T(h,\phi)}}{h}\, V(\phi+h).$$
Since $V$ is concave by Corollary 3.4 and Proposition 3.6, it has right derivatives everywhere. As $h$ decreases to $0$, we obtain
$$\lim_{h\downarrow0}\frac{V(\phi+h)-V(\phi)}{h} = \Big[(\lambda+\lambda_0)\,V(\phi) - g(\phi) - \lambda_0\, SV(\phi)\Big]\,\frac{\partial T(h,\phi)}{\partial h}\bigg|_{h=0},$$
since the functions $V$ and $SV$ are bounded and continuous (by the bounded convergence theorem). Because
$$\frac{\partial T(h,\phi)}{\partial h}\bigg|_{h=0} = \begin{cases} \dfrac{1}{a(\phi-\phi_d)}, & a\ne 0,\\[1ex] 1/\lambda, & a = 0,\end{cases} \qquad\text{both equal to}\quad \frac{1}{\lambda+a\phi}, \tag{62}$$
is a continuous function of $\phi\in[0,\xi_\infty)$ (recall that $\phi_d\notin(0,\xi_\infty)$, so the denominator is bounded away from zero on $\phi\in[0,\xi_\infty)$), the right derivative of $V$ above is continuous on $\phi\in[0,\xi_\infty)$. Since $V$ is concave, this implies that $V$ is continuously differentiable on $[0,\xi_\infty)$, and the last two displays give the derivative
$$V'(\phi) = \frac{(\lambda+\lambda_0)\,V(\phi) - g(\phi) - \lambda_0\, SV(\phi)}{\lambda + a\phi}, \qquad \phi\in[0,\xi_\infty). \tag{63}$$
Finally, $V(\xi_\infty) = 0 = V'(\xi_\infty+)$ since $V\equiv0$ on $[\xi_\infty,\infty)$, and $[g+\lambda_0\, SV](\xi_\infty) = 0$ because of Corollary 5.2(ii) and $\phi_d\notin(0,\xi_\infty)$. The concavity of $V$ implies again that $V'(\xi_\infty)$ exists and equals zero. Hence the function $V$ is continuously differentiable everywhere on $\mathbb{R}_+$ if $\phi_d\notin(0,\xi_\infty)$.

Case II: $\phi_d\in(0,\xi_\infty)$. For every $\phi\in\mathbb{R}_+$, the function $t\mapsto x(t,\phi)$ converges monotonically to $\phi_d$ as $t$ increases to infinity. For every $\phi\in[0,\xi_\infty)$, we have $V(\phi) = JV(\infty,\phi)$ by Corollary 5.2(ii). If we redefine $T(h,\phi) \triangleq \inf\{t\ge0 : x(t,\phi)\le\phi-h\}$ for every $\phi\in(\phi_d,\xi_\infty)$ and $h>0$, then the same arguments as in the previous case show that $V$ is continuously differentiable, with the same derivative $V'(\phi)$ as in (63), on $\phi\in[0,\xi_\infty)\setminus\{\phi_d\}$.

Let us now show that the function $V$ is not differentiable at $\phi=\xi_\infty$. In terms of
$$W(\phi) \triangleq g(\phi) + \lambda_0\, SV(\phi), \qquad \phi\in\mathbb{R}_+,$$
one can write, using Corollary 5.2(ii), that
$$\frac{V(\xi_\infty)-V(\xi_\infty-h)}{h} = \int_0^\infty e^{-(\lambda+\lambda_0)u}\,\frac{W\big(x(u,\xi_\infty)\big)-W\big(x(u,\xi_\infty-h)\big)}{x(u,\xi_\infty)-x(u,\xi_\infty-h)}\cdot\frac{x(u,\xi_\infty)-x(u,\xi_\infty-h)}{h}\, du.$$
Since the functions $g$ and $V$ are increasing, so are $SV$ of (25) and $W$; moreover,
$$\frac{W\big(x(u,\xi_\infty)\big)-W\big(x(u,\xi_\infty-h)\big)}{x(u,\xi_\infty)-x(u,\xi_\infty-h)} \ge 0 \qquad\text{and}\qquad \frac{x(u,\xi_\infty)-x(u,\xi_\infty-h)}{h} = e^{au},$$
and Fatou's lemma gives
$$\liminf_{h\downarrow0}\frac{V(\xi_\infty)-V(\xi_\infty-h)}{h} \ge \int_0^\infty e^{-\lambda_1 u}\left[1 + \lambda_0\,\liminf_{h\downarrow0}\frac{SV\big(x(u,\xi_\infty)\big)-SV\big(x(u,\xi_\infty-h)\big)}{x(u,\xi_\infty)-x(u,\xi_\infty-h)}\right]du,$$
where we used $e^{-(\lambda+\lambda_0)u}e^{au} = e^{-\lambda_1 u}$. Since $SV$ is increasing, the limit infimum above is nonnegative, and
$$\liminf_{h\downarrow0}\frac{V(\xi_\infty)-V(\xi_\infty-h)}{h} \ge \int_0^\infty e^{-\lambda_1 u}\, du = \frac{1}{\lambda_1} > 0 = \lim_{h\downarrow0}\frac{V(\xi_\infty+h)-V(\xi_\infty)}{h}. \tag{64}$$
Hence, the left-hand and right-hand derivatives of $V$ are unequal at $\phi=\xi_\infty$, and $V$ is not differentiable at $\phi=\xi_\infty$.

On the other hand, the function $V$ may or may not be differentiable at $\phi=\phi_d$. Since $x(t,\phi_d) = \phi_d$ for every $t\ge0$ by (12), Corollary 5.2(ii) gives
$$\frac{V(\phi_d+h)-V(\phi_d)}{h} = \frac{1}{\lambda_1} + \lambda_0\int_0^\infty e^{-(\lambda+\lambda_0)u}\,\frac{SV\big(\phi_d+e^{au}h\big)-SV(\phi_d)}{h}\, du.$$
Because $SV$ is nondecreasing, Fatou's lemma gives
$$\liminf_{h\downarrow0}\frac{V(\phi_d+h)-V(\phi_d)}{h} \ge \frac{1}{\lambda_1} + \lambda_0\int_0^\infty e^{-(\lambda+\lambda_0)u}\,\liminf_{h\downarrow0}\frac{SV\big(\phi_d+e^{au}h\big)-SV(\phi_d)}{h}\, du. \tag{65}$$
We shall calculate the limit infimum on the right-hand side. In terms of the sets
$$A \triangleq \Big\{y\in\mathbb{R}^d : f(y)\,\frac{\lambda_1}{\lambda_0} = 1\Big\} \qquad\text{and}\qquad B \triangleq \Big\{y\in\mathbb{R}^d : f(y)\,\frac{\lambda_1}{\lambda_0}\,\phi_d = \xi_\infty\Big\}, \tag{66}$$
the definition (25) of $SV$ implies
$$\frac{SV\big(\phi_d+e^{au}h\big)-SV(\phi_d)}{h} = \int_{\mathbb{R}^d\setminus(A\cup B)}\!\nu_0(dy)\, \frac{V\big(\tfrac{\lambda_1}{\lambda_0}f(y)(\phi_d+e^{au}h)\big)-V\big(\tfrac{\lambda_1}{\lambda_0}f(y)\phi_d\big)}{h} + \int_A \nu_0(dy)\,\frac{V\big(\phi_d+e^{au}h\big)-V(\phi_d)}{h} + \int_B \nu_0(dy)\,\frac{V\big(\xi_\infty+\tfrac{\xi_\infty}{\phi_d}e^{au}h\big)-V(\xi_\infty)}{h}.$$
The last integral is equal to zero, because $V(\phi) = 0$ for every $\phi\ge\xi_\infty$. Since the concave and increasing function $V$ has bounded right derivatives by Corollary 3.4 and is continuously differentiable on $\mathbb{R}_+\setminus\{\phi_d,\xi_\infty\}$, the dominated convergence theorem implies that
$$\lim_{h\downarrow0}\frac{SV\big(\phi_d+e^{au}h\big)-SV(\phi_d)}{h} = e^{au}\left[\frac{\lambda_1}{\lambda_0}\int_{\mathbb{R}^d\setminus(A\cup B)}\nu_0(dy)\, f(y)\, V'\Big(\frac{\lambda_1}{\lambda_0}\,f(y)\,\phi_d\Big) + \nu_0(A)\,\lim_{h\downarrow0}\frac{V(\phi_d+h)-V(\phi_d)}{h}\right] \qquad\text{for every } u\in\mathbb{R}_+. \tag{67}$$


More information

The strictly 1/2-stable example

The strictly 1/2-stable example The strictly 1/2-stable example 1 Direct approach: building a Lévy pure jump process on R Bert Fristedt provided key mathematical facts for this example. A pure jump Lévy process X is a Lévy process such

More information

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.

More information

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales

Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Fundamental Inequalities, Convergence and the Optional Stopping Theorem for Continuous-Time Martingales Prakash Balachandran Department of Mathematics Duke University April 2, 2008 1 Review of Discrete-Time

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

Other properties of M M 1

Other properties of M M 1 Other properties of M M 1 Přemysl Bejda premyslbejda@gmail.com 2012 Contents 1 Reflected Lévy Process 2 Time dependent properties of M M 1 3 Waiting times and queue disciplines in M M 1 Contents 1 Reflected

More information

Quickest Detection With Post-Change Distribution Uncertainty

Quickest Detection With Post-Change Distribution Uncertainty Quickest Detection With Post-Change Distribution Uncertainty Heng Yang City University of New York, Graduate Center Olympia Hadjiliadis City University of New York, Brooklyn College and Graduate Center

More information

Applications of Optimal Stopping and Stochastic Control

Applications of Optimal Stopping and Stochastic Control Applications of and Stochastic Control YRM Warwick 15 April, 2011 Applications of and Some problems Some technology Some problems The secretary problem Bayesian sequential hypothesis testing the multi-armed

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

A User s Guide to Measure Theoretic Probability Errata and comments

A User s Guide to Measure Theoretic Probability Errata and comments A User s Guide to Measure Theoretic Probability Errata and comments Chapter 2. page 25, line -3: Upper limit on sum should be 2 4 n page 34, line -10: case of a probability measure page 35, line 20 23:

More information

Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication)

Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication) Appendix B for The Evolution of Strategic Sophistication (Intended for Online Publication) Nikolaus Robalino and Arthur Robson Appendix B: Proof of Theorem 2 This appendix contains the proof of Theorem

More information

CHAPTER 6. Differentiation

CHAPTER 6. Differentiation CHPTER 6 Differentiation The generalization from elementary calculus of differentiation in measure theory is less obvious than that of integration, and the methods of treating it are somewhat involved.

More information

Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form

Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Yipeng Yang * Under the supervision of Dr. Michael Taksar Department of Mathematics University of Missouri-Columbia Oct

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

Existence and Comparisons for BSDEs in general spaces

Existence and Comparisons for BSDEs in general spaces Existence and Comparisons for BSDEs in general spaces Samuel N. Cohen and Robert J. Elliott University of Adelaide and University of Calgary BFS 2010 S.N. Cohen, R.J. Elliott (Adelaide, Calgary) BSDEs

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

Sequential Hypothesis Testing under Stochastic Deadlines

Sequential Hypothesis Testing under Stochastic Deadlines Sequential Hypothesis Testing under Stochastic Deadlines Peter I. Frazier ORFE Princeton University Princeton, NJ 08544 pfrazier@princeton.edu Angela J. Yu CSBMB Princeton University Princeton, NJ 08544

More information

Figure 10.1: Recording when the event E occurs

Figure 10.1: Recording when the event E occurs 10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

Optimal stopping for non-linear expectations Part I

Optimal stopping for non-linear expectations Part I Stochastic Processes and their Applications 121 (2011) 185 211 www.elsevier.com/locate/spa Optimal stopping for non-linear expectations Part I Erhan Bayraktar, Song Yao Department of Mathematics, University

More information

Worst case analysis for a general class of on-line lot-sizing heuristics

Worst case analysis for a general class of on-line lot-sizing heuristics Worst case analysis for a general class of on-line lot-sizing heuristics Wilco van den Heuvel a, Albert P.M. Wagelmans a a Econometric Institute and Erasmus Research Institute of Management, Erasmus University

More information

DYNAMIC MODEL OF URBAN TRAFFIC AND OPTIMUM MANAGEMENT OF ITS FLOW AND CONGESTION

DYNAMIC MODEL OF URBAN TRAFFIC AND OPTIMUM MANAGEMENT OF ITS FLOW AND CONGESTION Dynamic Systems and Applications 26 (2017) 575-588 DYNAMIC MODEL OF URBAN TRAFFIC AND OPTIMUM MANAGEMENT OF ITS FLOW AND CONGESTION SHI AN WANG AND N. U. AHMED School of Electrical Engineering and Computer

More information

Properties of an infinite dimensional EDS system : the Muller s ratchet

Properties of an infinite dimensional EDS system : the Muller s ratchet Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.262 Discrete Stochastic Processes Midterm Quiz April 6, 2010 There are 5 questions, each with several parts.

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing.

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing. 5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint

More information

Harmonic Functions and Brownian motion

Harmonic Functions and Brownian motion Harmonic Functions and Brownian motion Steven P. Lalley April 25, 211 1 Dynkin s Formula Denote by W t = (W 1 t, W 2 t,..., W d t ) a standard d dimensional Wiener process on (Ω, F, P ), and let F = (F

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

Statistics: Learning models from data

Statistics: Learning models from data DS-GA 1002 Lecture notes 5 October 19, 2015 Statistics: Learning models from data Learning models from data that are assumed to be generated probabilistically from a certain unknown distribution is a crucial

More information

Optimal Execution Tracking a Benchmark

Optimal Execution Tracking a Benchmark Optimal Execution Tracking a Benchmark René Carmona Bendheim Center for Finance Department of Operations Research & Financial Engineering Princeton University Princeton, June 20, 2013 Optimal Execution

More information

Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems

Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems Jingnan Fan Andrzej Ruszczyński November 5, 2014; revised April 15, 2015 Abstract For controlled discrete-time

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Poisson Processes. Stochastic Processes. Feb UC3M

Poisson Processes. Stochastic Processes. Feb UC3M Poisson Processes Stochastic Processes UC3M Feb. 2012 Exponential random variables A random variable T has exponential distribution with rate λ > 0 if its probability density function can been written

More information

SEQUENTIAL CHANGE DETECTION REVISITED. BY GEORGE V. MOUSTAKIDES University of Patras

SEQUENTIAL CHANGE DETECTION REVISITED. BY GEORGE V. MOUSTAKIDES University of Patras The Annals of Statistics 28, Vol. 36, No. 2, 787 87 DOI: 1.1214/95367938 Institute of Mathematical Statistics, 28 SEQUENTIAL CHANGE DETECTION REVISITED BY GEORGE V. MOUSTAKIDES University of Patras In

More information

IEOR 6711, HMWK 5, Professor Sigman

IEOR 6711, HMWK 5, Professor Sigman IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1

Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1 Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Q = (c) Assuming that Ricoh has been working continuously for 7 days, what is the probability that it will remain working at least 8 more days?

Q = (c) Assuming that Ricoh has been working continuously for 7 days, what is the probability that it will remain working at least 8 more days? IEOR 4106: Introduction to Operations Research: Stochastic Models Spring 2005, Professor Whitt, Second Midterm Exam Chapters 5-6 in Ross, Thursday, March 31, 11:00am-1:00pm Open Book: but only the Ross

More information

Pavel V. Gapeev The disorder problem for compound Poisson processes with exponential jumps Article (Published version) (Refereed)

Pavel V. Gapeev The disorder problem for compound Poisson processes with exponential jumps Article (Published version) (Refereed) Pavel V. Gapeev The disorder problem for compound Poisson processes with exponential jumps Article (Published version) (Refereed) Original citation: Gapeev, Pavel V. (25) The disorder problem for compound

More information

Random variables. DS GA 1002 Probability and Statistics for Data Science.

Random variables. DS GA 1002 Probability and Statistics for Data Science. Random variables DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Motivation Random variables model numerical quantities

More information

A MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT

A MODEL FOR THE LONG-TERM OPTIMAL CAPACITY LEVEL OF AN INVESTMENT PROJECT A MODEL FOR HE LONG-ERM OPIMAL CAPACIY LEVEL OF AN INVESMEN PROJEC ARNE LØKKA AND MIHAIL ZERVOS Abstract. We consider an investment project that produces a single commodity. he project s operation yields

More information

ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS

ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS JOHANNES RUF AND JAMES LEWIS WOLTER Appendix B. The Proofs of Theorem. and Proposition.3 The proof

More information

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City

More information

Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise

Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise Robert J. Elliott 1 Samuel N. Cohen 2 1 Department of Commerce, University of South Australia 2 Mathematical Insitute, University

More information

On the sequential testing and quickest change-pointdetection problems for Gaussian processes

On the sequential testing and quickest change-pointdetection problems for Gaussian processes Pavel V. Gapeev and Yavor I. Stoev On the sequential testing and quickest change-pointdetection problems for Gaussian processes Article Accepted version Refereed Original citation: Gapeev, Pavel V. and

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Markov processes and queueing networks

Markov processes and queueing networks Inria September 22, 2015 Outline Poisson processes Markov jump processes Some queueing networks The Poisson distribution (Siméon-Denis Poisson, 1781-1840) { } e λ λ n n! As prevalent as Gaussian distribution

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

e - c o m p a n i o n

e - c o m p a n i o n OPERATIONS RESEARCH http://dx.doi.org/1.1287/opre.111.13ec e - c o m p a n i o n ONLY AVAILABLE IN ELECTRONIC FORM 212 INFORMS Electronic Companion A Diffusion Regime with Nondegenerate Slowdown by Rami

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

MATH 418: Lectures on Conditional Expectation

MATH 418: Lectures on Conditional Expectation MATH 418: Lectures on Conditional Expectation Instructor: r. Ed Perkins, Notes taken by Adrian She Conditional expectation is one of the most useful tools of probability. The Radon-Nikodym theorem enables

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

On the Converse Law of Large Numbers

On the Converse Law of Large Numbers On the Converse Law of Large Numbers H. Jerome Keisler Yeneng Sun This version: March 15, 2018 Abstract Given a triangular array of random variables and a growth rate without a full upper asymptotic density,

More information

A Barrier Version of the Russian Option

A Barrier Version of the Russian Option A Barrier Version of the Russian Option L. A. Shepp, A. N. Shiryaev, A. Sulem Rutgers University; shepp@stat.rutgers.edu Steklov Mathematical Institute; shiryaev@mi.ras.ru INRIA- Rocquencourt; agnes.sulem@inria.fr

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

The Azéma-Yor Embedding in Non-Singular Diffusions

The Azéma-Yor Embedding in Non-Singular Diffusions Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

I. ANALYSIS; PROBABILITY

I. ANALYSIS; PROBABILITY ma414l1.tex Lecture 1. 12.1.2012 I. NLYSIS; PROBBILITY 1. Lebesgue Measure and Integral We recall Lebesgue measure (M411 Probability and Measure) λ: defined on intervals (a, b] by λ((a, b]) := b a (so

More information

Exponential martingales: uniform integrability results and applications to point processes

Exponential martingales: uniform integrability results and applications to point processes Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda

More information

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974 LIMITS FOR QUEUES AS THE WAITING ROOM GROWS by Daniel P. Heyman Ward Whitt Bell Communications Research AT&T Bell Laboratories Red Bank, NJ 07701 Murray Hill, NJ 07974 May 11, 1988 ABSTRACT We study the

More information

1 Stochastic Dynamic Programming

1 Stochastic Dynamic Programming 1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future

More information

arxiv: v2 [math.pr] 4 Feb 2009

arxiv: v2 [math.pr] 4 Feb 2009 Optimal detection of homogeneous segment of observations in stochastic sequence arxiv:0812.3632v2 [math.pr] 4 Feb 2009 Abstract Wojciech Sarnowski a, Krzysztof Szajowski b,a a Wroc law University of Technology,

More information

THEOREMS, ETC., FOR MATH 516

THEOREMS, ETC., FOR MATH 516 THEOREMS, ETC., FOR MATH 516 Results labeled Theorem Ea.b.c (or Proposition Ea.b.c, etc.) refer to Theorem c from section a.b of Evans book (Partial Differential Equations). Proposition 1 (=Proposition

More information

Lebesgue Integration on R n

Lebesgue Integration on R n Lebesgue Integration on R n The treatment here is based loosely on that of Jones, Lebesgue Integration on Euclidean Space We give an overview from the perspective of a user of the theory Riemann integration

More information

Ernesto Mordecki 1. Lecture III. PASI - Guanajuato - June 2010

Ernesto Mordecki 1. Lecture III. PASI - Guanajuato - June 2010 Optimal stopping for Hunt and Lévy processes Ernesto Mordecki 1 Lecture III. PASI - Guanajuato - June 2010 1Joint work with Paavo Salminen (Åbo, Finland) 1 Plan of the talk 1. Motivation: from Finance

More information

Change-point models and performance measures for sequential change detection

Change-point models and performance measures for sequential change detection Change-point models and performance measures for sequential change detection Department of Electrical and Computer Engineering, University of Patras, 26500 Rion, Greece moustaki@upatras.gr George V. Moustakides

More information

Early Detection of a Change in Poisson Rate After Accounting For Population Size Effects

Early Detection of a Change in Poisson Rate After Accounting For Population Size Effects Early Detection of a Change in Poisson Rate After Accounting For Population Size Effects School of Industrial and Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive NW, Atlanta, GA 30332-0205,

More information