Poisson Disorder Problem with Exponential Penalty for Delay


MATHEMATICS OF OPERATIONS RESEARCH, Vol. 31, No. 2, May 2006, pp. 217–233. issn 0364-765X, eissn 1526-5471. © 2006 INFORMS

Poisson Disorder Problem with Exponential Penalty for Delay

Erhan Bayraktar, Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109, erhan@umich.edu
Savas Dayanik, Department of Operations Research and Financial Engineering, Princeton University, Princeton, New Jersey 08544, sdayanik@princeton.edu

We solve the Poisson disorder problem when the delay is penalized exponentially. Our objective is to detect as quickly as possible the unobservable time of the change (or disorder) in the intensity of a Poisson process. The disorder time delimits two different regimes in which one employs distinct strategies (e.g., investment, advertising, manufacturing). We seek a stopping rule that minimizes the frequency of false alarms and an exponential (unlike previous formulations, which use a linear) cost function of the detection delay. In financial applications, the exponential penalty is a more apt measure for the delay cost because of the compounding of investment growth. The Poisson disorder problem with a linear delay cost was studied by Peskir and Shiryaev [2002. Solving the Poisson disorder problem. Advances in Finance and Stochastics. Springer, Berlin, Germany], which is a limiting case of ours.

Key words: Poisson disorder problem; quickest detection; optimal stopping; differential-delay equations
MSC2000 subject classification: Primary: 62L10; secondary: 62L15, 62C10, 60G40
OR/MS subject classification: Primary: statistics: Bayesian, estimation; secondary: dynamic programming/optimal control: applications
History: Received March 25, 2005; revised August 29, 2005, and January 3, 2006.

1. Introduction. In this paper, we address a change-detection problem involving Poisson processes. Suppose that we observe a Poisson process X = (X_t)_{t≥0} whose intensity changes from λ_0 to λ_1 at some random time θ. The disorder time θ is unobservable, but has a known a priori probability distribution: it equals zero with probability π and has an exponential distribution with rate λ given that it is positive. The parameters λ, λ_0, λ_1 are known positive constants. The Poisson disorder problem is to detect the change-time θ by using the observations of X. Here, we are interested in the best detection rule, which minimizes the expected sum of the frequency of false alarms and an exponential cost function of the detection delay.

The change-detection and sequential hypothesis-testing problems about the drift of a Wiener process have been studied extensively; see, e.g., Shiryaev [17, 15, Chapter IV], Beibel [3, 2], Karatzas [8], Moustakides [10]. Similar questions for the intensity of a Poisson process have also drawn significant attention, because Poisson processes are often used to model abrupt changes, such as sudden price movements in stock markets, changes in credit ratings due to defaults, changes in the intensity of earthquakes, product failures in a manufacturing system, etc. In particular, the Poisson disorder problem with linear penalty functions of the delay has been investigated thoroughly; see, e.g., Galchuk and Rozovsky [7], Davis [6], Peskir and Shiryaev [12, 11]. However, in many applications the cost of the lost opportunity due to the detection delay grows exponentially with the delay time; it is therefore captured better by an exponential penalty function than by a linear one. As a simple motivating example from quality control, let us consider an assembly line whose finished products are continuously inspected for defects.
A sudden upward shift (e.g., from a low rate λ_0 to a high rate λ_1) in the rate of the number of defective items (the observation process X) may have an assignable cause and warrant an investigation at some fixed cost. A good control policy should balance the costs of false alarms (due to unnecessary inspections) and of detection delay (due to lost production time and raw materials, scrapping or recycling, etc.). A firm often measures its financial losses and gains by compounding them at its own internal rate of return (IRR). Let us denote our firm's IRR by α. Typically the production rate of an assembly line is constant, say one item per unit time. Suppose also that a defective item costs one dollar. If the production system goes out of control at time θ, and an alarm is given at some later time τ, then every dt units of defective items produced at time t ∈ [θ, τ] will cost e^{α(τ-t)} dt at the detection time τ. Therefore, the total cost of the detection delay equals

1_{τ>θ} ∫_θ^τ e^{α(τ-t)} dt = ∫_0^τ 1_{t>θ} e^{α(τ-t)} dt = ∫_0^{(τ-θ)^+} e^{αt} dt = (1/α)(e^{α(τ-θ)^+} - 1)

and is an exponential function of the detection delay time (τ-θ)^+. In addition, if each false alarm costs 1/c dollars on average, then an optimal alarm time τ should minimize the expected total inspection cost, which is proportional to

P{τ < θ} + (c/α) E[e^{α(τ-θ)^+} - 1];   (1)

i.e., the alarm time should solve a Poisson disorder problem with an exponential penalty for delay.
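To make the trade-off in (1) concrete, the following minimal Monte Carlo sketch (not from the paper) estimates the Bayes risk of the crude deterministic alarm rule τ ≡ t under the prior described above. The function name and all parameter values are illustrative assumptions.

```python
# Minimal Monte Carlo sketch (not from the paper): estimate the Bayes risk in (1)
# for the crude deterministic alarm rule tau = t.  Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def bayes_risk_fixed_alarm(t, pi, lam, c, alpha, n=200_000):
    """Estimate P(t < theta) + (c/alpha) * E[exp(alpha*(t - theta)^+) - 1]."""
    # Disorder time: theta = 0 with probability pi, otherwise Exp(lam).
    theta = np.where(rng.random(n) < pi, 0.0, rng.exponential(1.0 / lam, n))
    false_alarm = np.mean(t < theta)
    delay_penalty = np.mean(np.exp(alpha * np.maximum(t - theta, 0.0)) - 1.0)
    return false_alarm + (c / alpha) * delay_penalty

# Postponing the alarm lowers the false-alarm term but inflates the delay term.
for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(t, round(bayes_risk_fixed_alarm(t, pi=0.2, lam=0.1, c=2.0, alpha=1.0), 4))
```

The optimal rule derived in this paper adapts to the observed path of X rather than stopping at a fixed time; the point of the sketch is only the tension between the two terms of (1).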

The only alternative to the exponential delay penalty in the literature (see, e.g., Galchuk and Rozovsky [7], Davis [6], Peskir and Shiryaev [12]) is the linear delay penalty, as in

P{τ < θ} + c E(τ-θ)^+.   (2)

From a decision-theoretic point of view, the penalty E(τ-θ)^+ is the choice of a risk-neutral decision maker. Indeed, if the risks implied by the fluctuations in the delay time (τ-θ)^+ are important to a decision maker, then the linear penalty E(τ-θ)^+ falls short. On the other hand, the exponential penalty E[e^{α(τ-θ)^+} - 1] not only captures the variability of the delay time, but also reflects the risk sensitivity of a risk-averse decision maker; see Whittle [18, 19]. By adjusting the parameter α, the decision maker can tune the exponential penalty to his/her risk preferences. To see this, let us replace in (2) the linear penalty E(τ-θ)^+ with the exponential penalty (1/α)E[e^{α(τ-θ)^+} - 1] and use the identity (see, for example, Bensoussan [5, p. 54])

e^{αx} = 1 + αx + α²x² ∫_0^1 ∫_0^s e^{αrx} dr ds,  x ≥ 0,

to rewrite it. We obtain

P{τ < θ} + (c/α) E[e^{α(τ-θ)^+} - 1] = P{τ < θ} + c E(τ-θ)^+ + cα E[ ((τ-θ)^+)² ∫_0^1 ∫_0^s e^{αr(τ-θ)^+} dr ds ].   (3)

Hence, the exponential penalty on the left accounts for the losses in (2) as well as the effect of second-order terms in the delay time. For large values of α, every alarm time τ causing high variation in the delay time (τ-θ)^+ is punished severely according to (3). For small values of α, the punishment is lighter, and we retrieve the risk-neutral (linear) case (2) if we let α go to zero. Hence, the exponential penalty contains the linear penalty as a subcase and allows risk preferences to be added to the analysis by a natural mechanism.

The importance of the exponential delay penalty was recognized first by Poor [13], who solved a quickest-detection problem with exponential delay cost in the discrete-time setting. Later, Beibel [3] solved the Wiener disorder problem with the same cost function. The Poisson disorder problem with an exponential penalty function of the detection delay is studied for the first time in this paper, to the best of our knowledge.

To solve the Poisson disorder problem, we first show that it is equivalent to an optimal stopping problem for a two-dimensional jump process (Π, Φ) in [0, 1) × ℝ_+. For every t ≥ 0, Π_t is the conditional probability that the change occurred at or before time t given the past observations of X. On the other hand, Φ_t is essentially the likelihood-ratio process Λ_t ≜ Π_t/(1 - Π_t) with an adjustment, adapted to the history of X, reflecting the exponential detection delay cost; see (7). The optimal stopping problem is reduced to a free-boundary problem involving a differential-delay equation. By means of a key verification lemma, one solution of the free-boundary problem is identified as the value function of the optimal stopping problem. The optimal stopping rule turns out to be of threshold type for the process Φ regardless of Π: declare that the disorder happened at or before time t as soon as Φ_t exceeds a suitable threshold (constant over time). We characterize the optimal threshold and the value function of the optimal stopping problem. To calculate the threshold and the value function, we describe an efficient numerical method using a bisection search on the real line and a finite-difference method for differential-delay equations. Our systematic numerical method also complements the work of Peskir and Shiryaev [12], whose problem is a limiting case of ours.
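The identity used above (as reconstructed here; the original display is garbled in this transcription) is easy to verify numerically. The following sketch compares the two sides with a standard quadrature routine; the test values are arbitrary.

```python
# Numerical sanity check (sketch) of the identity, as reconstructed above,
#   exp(a*x) = 1 + a*x + a^2 * x^2 * int_0^1 int_0^s exp(a*r*x) dr ds,
# which splits the exponential penalty into the linear penalty plus a
# second-order remainder.  Test values are arbitrary.
import numpy as np
from scipy.integrate import dblquad

def check_identity(alpha, x):
    # Outer variable s runs over [0, 1]; inner variable r runs over [0, s].
    inner, _ = dblquad(lambda r, s: np.exp(alpha * r * x),
                       0.0, 1.0, lambda s: 0.0, lambda s: s)
    rhs = 1.0 + alpha * x + alpha**2 * x**2 * inner
    return np.exp(alpha * x), rhs

for alpha, x in [(0.5, 1.0), (2.0, 0.3), (1.0, 3.0)]:
    print(check_identity(alpha, x))   # the two entries agree up to quadrature error
```

Since the remainder in (3) carries the factor cα, letting α decrease to zero makes the second-order contribution vanish, which is the risk-neutral limit discussed above.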
Let us also mention that Beibel [3] reduced the Wiener disorder problem with an exponential penalty function to a similar optimal stopping problem and solved it as a generalized parking problem. Beibel's approach relies on the continuity of the paths of the process. For the Poisson disorder problem, the process Φ has jumps, and the related optimal stopping problem cannot be formulated as a generalized parking problem.

In the next section, we give a precise description of our problem and formulate the equivalent optimal stopping problem in (6)–(8). The latter is solved, and an optimal Bayes rule and the minimum Bayes cost function are determined in §3; see Propositions 3.1–3.4. To calculate the optimal decision rules, numerical methods are also described; they are illustrated on examples in Figures 2 and 3 in §3. Long proofs are presented in §4 and in the appendix. Let us remark that our analysis of the free-boundary problem might be useful for solving other quickest-detection problems; see, for example, Bayraktar et al. [1] for an application to standard Poisson disorder problems.

2. The problem. Let (Ω, F, P^π) be a probability space hosting two independent Poisson processes X^0 = (X^0_t)_{t≥0} and X^1 = (X^1_t)_{t≥0} with rates λ_0 and λ_1, respectively, and a random variable θ, independent of the processes X^0 and X^1, having the distribution

P^π{θ = 0} = π  and  P^π{θ > t} = (1 - π) e^{-λt},  t ≥ 0.

The processes X^0, X^1 and the random variable θ are unobservable. The observation process is

X_t ≜ ∫_0^t 1_{s≤θ} dX^0_s + ∫_0^t 1_{s>θ} dX^1_s,  t ≥ 0,   (4)

with the natural filtration F^X = (F^X_t)_{t≥0} (modified suitably to make it satisfy the usual conditions) and F^X_∞ ≜ ⋁_{t≥0} F^X_t. For every F^X-stopping time τ (sometimes, we write τ ∈ F^X), the associated Bayes error

R_τ(π) ≜ P^π{τ < θ} + c E^π[e^{α(τ-θ)^+} - 1]   (5)

is the sum of the probability P^π{τ < θ} of a false alarm and the expected exponential delay penalty c E^π[e^{α(τ-θ)^+} - 1] for some known positive constants c and α. The Poisson disorder problem is to find an F^X-stopping time τ as close to the disorder time θ as possible in the sense that, if such a stopping time exists, it achieves the minimum Bayes error

V(π) ≜ inf_τ R_τ(π),  π ∈ [0, 1),   (6)

where the infimum is taken over all F^X-stopping times τ. In fact, it is enough to take the infimum in (6) only over the F^X-stopping times having finite expectations (this will be useful later in establishing the relationship (17)). Indeed, if τ is an F^X-stopping time and E^π τ = +∞, then Jensen's inequality implies that

R_τ(π) ≥ c E^π[e^{α(τ-θ)^+} - 1] ≥ c (e^{α E^π(τ-θ)^+} - 1) = +∞ > 1 - π = R_0(π) ≥ V(π);

i.e., τ cannot attain the infimum in (6). In terms of the F^X-adapted processes

Π_t ≜ P^π{θ ≤ t | F^X_t}  and  Φ_t ≜ E^π[1_{θ≤t} e^{α(t-θ)^+} | F^X_t] / (1 - Π_t),  t ≥ 0,   (7)

the Bayes error R_τ(π) in (5) can be expressed as

R_τ(π) = 1 - π + E^π ∫_0^τ (1 - Π_s)(cα Φ_s - λ) ds   (8)

for every F^X-stopping time τ (see p. 231 for the proof). We interpret the process Φ in (7) as the weighted likelihood-ratio process, with the exponential delay cost as the weight, because of its resemblance to the well-known likelihood-ratio process Λ_t ≜ Π_t/(1 - Π_t), the sufficient statistic in many statistical detection and hypothesis-testing problems. In our problem, Φ also turns out to be the sufficient statistic in the sense that it completely determines the optimal detection rule.

Standard applications of Bayes' theorem and the chain rule (see §A.1 in the appendix) reveal the dynamics of the jump processes Π, Λ, and Φ as

dΠ_t = λ(1 - Π_t) dt + [(λ_1 - λ_0)Π_{t-}(1 - Π_{t-}) / (λ_0(1 - Π_{t-}) + λ_1 Π_{t-})] (dX_t - [λ_0(1 - Π_{t-}) + λ_1 Π_{t-}] dt),  Π_0 = π,   (9)

dΛ_t = [λ + (λ + λ_0 - λ_1)Λ_t] dt + (λ_1/λ_0 - 1) Λ_{t-} dX_t,  Λ_0 = π/(1 - π),   (10)

dΦ_t = [λ + (λ + α + λ_0 - λ_1)Φ_t] dt + (λ_1/λ_0 - 1) Φ_{t-} dX_t,  Φ_0 = π/(1 - π).   (11)

Evidently, the processes Π, Λ, and Φ are strongly Markovian. With the Bayes risk in its new form (8), the quickest-detection problem (6) is an optimal stopping problem for the two-dimensional Markovian jump process (Π, Φ). In the next section, we shall formulate the optimal stopping problem as a free-boundary problem involving the infinitesimal generator of (Π, Φ) and solve the latter.

3. Free-boundary problem and its solution. We start this section with an observation. For every real number φ, let us denote the exit time of Φ out of the interval [0, φ] by

τ_φ ≜ inf{t ≥ 0 : Φ_t ∉ [0, φ]},  inf ∅ ≜ +∞,   (12)

4 22 Mathematics of Operations Research 31(2), pp , 26 INFORMS and define for future reference { } a b + > /a if a d (13) r 1 / r /c > if a = The drift + a of in (11) changes its sign at = d ( d for drift); the sign of the integrand in (8) is determined by the function c whose sign changes at = r ( r for reward). As clearly seen from (8), the Bayes risk R decreases as long as the process stays in r. Therefore, it is not optimal to stop before leaves r. Lemma 3.1. If an F X -stopping time is optimal for (6, 8), then so is r, where r is as in (13). Proof. For every 1, (8) implies R = R r + Ɛ r 1 s c s ds R = V, i.e., R = V. r Hence, we can restrict our search for an optimal stopping time to those F X -stopping times satisfying r, -a.s. for all 1. From this observation and the behavior of the paths of the process, our first result follows. Proposition 3.1 (Case I). Let 1.If d < or < d r, then the stopping time r is optimal for (6, 8), where r and d areasin(13). Let and n inft > n 1 t t > be the nth jump time of for every n (by convention, inf =+). From (11), it is easy to obtain that { d + n 1 t = } d exp / d t n 1 d n 1 t< n n 1 + t d = (14) + and n = 1 / n n If d <, then the paths of the process always increase between jumps. If d >, then d is the mean-level to which the process reverts between jumps; the difference t d in (14) never vanishes before a jump, and n d for all n> almost surely. If d >, then the state d is an entrance boundary for : it can start at d and stays there until the first jump time, but never comes back to d after it leaves. If 1 >, then has positive jumps; d might be negative or positive; see Figure 1(a, b). If 1 =, then the process never jumps; since d is always defined and negative, the paths of increase. If 1 <, then has negative jumps; since d is always defined and negative, the paths of increase between jumps; see Figure 1(c). t t t Φ t (ω) Φ t (ω) Φ t (ω) φ r φ d φ r φ φ r φ φ r φ (a) λ 1 > λ, φ d > (b) λ 1 > λ, φ d < (c) λ 1 < λ (φ d < ) Figure 1. Sample paths of the process of (11, 14). The process has positive jumps if 1 > (a, b), and negative jumps if 1 < (c). The paths increase between jumps if d < (b, c). Note from (13) that d is always negative if 1 <.If d >, then reverts to the mean-level d between (positive) jumps (a).
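The piecewise-deterministic behavior of Φ just described (exponential flow toward or away from φ_d between the jumps of X, and a multiplication by λ_1/λ_0 at each jump of X) can be simulated exactly. The sketch below does so under the dynamics reconstructed in (11), with drift coefficient a = λ + α + λ_0 - λ_1; this value of a, the threshold, the parameter values, and the function names are assumptions of this transcription and are illustrative only, not the optimal φ* characterized later.

```python
# Sketch: simulate the observation process X of (4) and the weighted
# likelihood-ratio statistic Phi, then raise an alarm when Phi exceeds a
# threshold.  Between jumps of X, Phi is propagated by the exact solution of
# the linear ODE phi' = lam + a*phi, and at each jump of X it is multiplied by
# lam1/lam0, following the reconstructed dynamics (11) with
# a = lam + alpha + lam0 - lam1 (an assumption of this transcription).
# The threshold and all parameter values are illustrative, not the optimal
# ones characterized in Section 3.
import numpy as np

rng = np.random.default_rng(1)

def poisson_times(rate, t0, t1):
    """Jump times of a rate-`rate` Poisson process on the interval (t0, t1]."""
    times, t = [], t0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > t1:
            return times
        times.append(t)

def run_once(pi=0.2, lam=0.1, lam0=1.0, lam1=3.0, alpha=1.0,
             threshold=5.0, horizon=100.0):
    a = lam + alpha + lam0 - lam1
    theta = 0.0 if rng.random() < pi else rng.exponential(1.0 / lam)

    # Observed jumps: rate lam0 before the disorder, rate lam1 after it (eq. (4)).
    switch = min(theta, horizon)
    jumps = poisson_times(lam0, 0.0, switch) + poisson_times(lam1, switch, horizon)

    def flow(phi, dt):
        # Exact inter-jump solution of phi' = lam + a*phi.
        return phi + lam * dt if a == 0.0 else -lam / a + (phi + lam / a) * np.exp(a * dt)

    t, phi = 0.0, pi / (1.0 - pi)
    for s in jumps:
        phi = flow(phi, s - t) * (lam1 / lam0)   # drift to s-, then the jump of X at s
        t = s
        if phi >= threshold:                     # alarm checked at jump times only
            return theta, t                      # (disorder time, alarm time)
    return theta, np.inf                         # no alarm raised before the horizon

print(run_once())
```

Checking the alarm only at jump times is a simplification: when the inter-jump drift is upward, the crossing time inside an interarrival interval can also be solved for in closed form from the exponential flow.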

5 Mathematics of Operations Research 31(2), pp , 26 INFORMS 221 Proof of Proposition 3.1. In all three cases, the process does not return to the interval r, once it leaves; see Figure 1(a, b). Therefore, r is optimal. In the remainder, we focus on 1 >,< d < r (Case II), and 1 < (Case III). In both cases, the process returns to the interval r with positive probability after every exit (see Figure 1(a, c)); the optimal stopping rule for (6, 8) turns out in the form of of (12) for some suitable > r. On F, let the F X -adapted process be the unique solution of the stochastic differential equation d t = t dt + 1 / 1 t dx t = + (15) For every 1 +, we shall denote by the probability measure if = = 1. Let Ɛ be the expectation under, and introduce the auxiliary optimal stopping problem V inf Ɛ 1 s c s ds 1 + (16) where the infimum is taken over all F X -stopping times having finite -expectations. Note from (11) and (15) that the finite-dimensional distribution of X under /1 is the same as that of X under. Therefore, the value function of the original optimal stopping problem (6, 8) is given by V= 1 + V/1 1 (17) Ansat. For a suitable function g + and a real number > r, the value function V and the optimal continuation region for (16) are in the forms of V= 1 g 1 + and C = 1 (18) respectively. The first exit time of out of C 1 + is optimal for (16). It is obvious from (5), (6), and (8) that V 1 for every 1. If the ansat is true, then (17) and (18) imply that V= g/1 1 for every 1; namely, g is bounded, and 1 g + (19) The infinitesimal generator of coincides with the first order differential-difference operator in (67) acting on the functions in C Ifg C 1 +, then the free-boundary problem associated with (16, 18) becomes 1 gy= 1 cy y 1 (2) 1 gy= y 1 (21) It is easily checked that 1 gy= 1 +ayg y bgy+ gry for every y 1 +, where a, b, r are defined as in (13). Therefore, (2, 21) simplifies to the one-dimensional free-boundary problem (2, 21 ) below. The proof of the next lemma is given on page 232 after the supporting facts are established in A.2 of the appendix. Verification Lemma. Let g + be a bounded, continuous and piecewise continuously differentiable function such that + ayg y bgy + gry cy + y + (22) whenever g y exists. Then Vy 1 gy for every y 1 +. In addition, if g C + C 1 + \ d for some real number > r and + ayg y bgy + gry = cy + y d d (2 ) gy = y (21 ) then (18) holds, i.e., Vy= 1 gy for every y 1 + ; the stopping time inft t has finite -expectation and is optimal for (16).

6 222 Mathematics of Operations Research 31(2), pp , 26 INFORMS Proposition 3.2 (Cases II and III). There exist unique real number > r and unique function g + 1 in C + C 1 + \ d that satisfy (22), (2 ), (21 ) with instead of. The minimum Bayes error in (5) is V= g/1 1 and = t t is a minimum Bayes stopping rule. The next section is devoted to the proof of the existence and the uniqueness of and g. The rest of Proposition 3.2 follows from (17) and the verification lemma above. In the remainder of this section, we shall describe a numerical method to calculate and the function g. Case II: 1 > and < d < r. For every real number > d, denote by h d the unique solution in C d C 1 d of h y = lyh ry sgn + ay + ay b/a 1 cy y d (23) h y = y + (24) where ly sgn + ay + ay b/a 1 + ary b/a is well-defined for every y d because a< and r>1; see (13). Proposition 3.3. Let and g be as in Proposition 3.2. The function h is the only one among all h, > d such that f 1 y + ay b/a h y y d (25) By defining it on as the solution of the differential equation (23), its extension onto + (denoted also by h ) remains between the same bounds of (25)on +. We have gy = + ay b/a h y for every y +, and r < < r [ ] c b a + b a c r b/a+1 1 b r b/a 1 r b/a+1 1 The function I h d, r is continuous and strictly decreasing, and I =. Thanks to Proposition 3.3, one can find (and h on d ) bythebisection search in the interval r : Initially, let = r. For every n, let n be the mid-point of the interval n n ;if I n <, then set n+1 n+1 = n n, otherwise n+1 n+1 = n n. Then = n n n. Unfortunately, the solution h of (23), (24) is unavailable in closed form; however, it can be calculated on r fairly accurately by the finite difference methods. After and h on d are found, h on d can be calculated similarly from (23) by the continuation process (see, e.g., Bellman and Cooke [4, p. 47]). See Figure 2(a) for an illustration. 5 h φ * h φ4 φ φ* 5 φ 6 h φ4 5 φ5 φ 2 φ 1 h φ6 h φ3 φ 3 y 5 1 h φ2 h φ1 15 h φ * 2 hφ f 1 25 f φ d φ r φ d /r φ d φ r (a) (b) φ* φ y 6 7 Figure 2. Case II. (a) bisection search for, (b) comparison of h for different -values; see (23, 24). For both illustrations, the parameters are = 1, = 3, 1 = 6, = 1, and c = 2; we compute r = 2, d = 1, r = 5; see (13). In (a), the search for starts in r = with the tight upper bound of Proposition 3.3, and continues along the intervals r 1 r 2 r The mid-points of the intervals are 1 2 ; the lefthand (righthand, resp.) half of each interval is eliminated if I i h i d is positive (negative, resp.). The unique root of I = is found at = 616 up to three decimal points after 17 iterations. The figure in (b) illustrates that the function h is the separatrix for (23, 24): among all h, d, itisthe only function bounded between y-axis and y f 1 y of Proposition 3.3. Moreover, h d =, and h d is positive (negative, resp.) for every < (>, resp.); the functions g y + ay b/a h y explode near y = d for all but.ifa + b<, equivalently 1 <r /, then only h has a stable extension onto + ; the functions y h y above it (below it, resp.) increase to + (decrease to, resp.) as y decreases to d /r. (In both (a) and (b), the solutions h of the differential equation (23, 24) ofadvanced type are computed by the finite difference method.)
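The bisection-plus-finite-difference procedure described above can be organized as in the following schematic sketch. Because the right-hand side of the differential-delay equation (23) is garbled in this transcription, the derivative callback dde_rhs is left as a placeholder to be supplied by the reader; the grid size, tolerance, bracketing interval, and function names are likewise illustrative assumptions, not the paper's code.

```python
# Schematic scaffold (not the paper's code) for the numerical method described
# above: an outer bisection search for phi*, with each candidate evaluated by a
# finite-difference sweep for the differential-delay equation (23)-(24).  The
# right-hand side of (23) is garbled in this transcription, so `dde_rhs(y, h_ry)`
# is a placeholder returning h'(y) given the advanced value h(r*y); grid size,
# tolerance, and the bracketing interval are illustrative.
import numpy as np

def solve_h(phi, phi_d, r, dde_rhs, n_steps=20_000):
    """Explicit Euler sweep for h_phi on [phi_d, phi], stepping y downward.

    Boundary condition (24): h_phi(y) = 0 for y >= phi.  Since r = lam1/lam0 > 1
    in Case II, the advanced argument r*y always lies in the region where h_phi
    is already known, so the sweep can proceed from y = phi down to y = phi_d.
    """
    ys = np.linspace(phi_d, phi, n_steps)
    h = np.zeros_like(ys)

    def h_at(x):                               # h_phi at the advanced argument r*y
        return 0.0 if x >= phi else float(np.interp(x, ys, h))

    for i in range(n_steps - 1, 0, -1):
        dy = ys[i] - ys[i - 1]
        h[i - 1] = h[i] - dy * dde_rhs(ys[i], h_at(r * ys[i]))
    return ys, h

def find_phi_star(phi_r, phi_bar, phi_d, r, dde_rhs, tol=1e-6):
    """Bisection for the unique root of I(phi) = h_phi(phi_d) on (phi_r, phi_bar]."""
    lo, hi = phi_r, phi_bar
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        _, h = solve_h(mid, phi_d, r, dde_rhs)
        if h[0] < 0.0:      # I is strictly decreasing, so I(mid) < 0 puts phi* below mid
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The same scaffold applies, with minor changes, to Case III in the next subsection, where the equation is of retarded type (r = λ_1/λ_0 < 1) and is integrated forward from the candidate initial value β at zero rather than backward from the candidate threshold.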

7 Mathematics of Operations Research 31(2), pp , 26 INFORMS 223 Case III: 1 <. solution of For every real number, let + be the unique continuously differentiable y = + ay b/a 1 + ary b/a ry + cy y > (26) = (27) The differential equations in (23) and (26) are essentially the same (in the latter case, + ay is positive for every y + because a is positive). However, the solution h y of (23) is unique if it is initially described for all y r, whereas uniquely determines the solution of (26). Strictly speaking, (23) of Case II is a differential equation of advanced type (r = 1 / > 1), and (26) of Case III is a differential equation of retarded type (r = 1 / < 1), see Bellman and Cooke [4, p. 48]. The integral equation obtained from (26) also resembles a renewal equation; this will be useful in the next section. Proposition 3.4. Let and g be as in Proposition 3.2. Then gy = + ay b/a y for every y, where is the unique number satisfying both = = and f 1 y + ay b/a y y (28) We have r < < b/c and b/a < <. The function J max y b/c y, b/a is continuous and strictly increasing, and J =. One can find in Proposition 3.4 by bisection search in the interval = b/a : For every n, let n be the mid-point of n n.ifj n <, then let n+1 n+1 = n n, otherwise n+1 n+1 = n n. Then = n n n. See Figure 3(a). As the proof of Proposition 3.4 on page 229 reveals that the maximum at which J = max x b/c x is attained is unique in b/c. Remark 3.1. If the discount rate decreases to ero in such a way that C c remains constant, then the Bayes error R t of (5) converges for every t + to R t > t + CƐ t +, where the detection delay cost is proportional to the delay time. Peskir and Shiryaev [12] showed that the Poisson disorder problem V inf F X R accepts a Bayes optimal stopping rule in the form of inft t B for some suitable constant B 1. Our results are in agreement with their findings. As, we have t t t /1 t for every t + almost surely (see (1) and (11)); moreover, since C c remains constant, we have r = /C, d = /a with a = 1 + in (13). By Propositions 3.1 and 3.2, there is a suitable r such that inft t is a minimum Bayes stopping rule. Equivalently, = inft t B with B /1 +, since x x/1 + x, x + is increasing. Propositions 3.1, 3.3, and 3.4 describe how to find (therefore, B ) and the value function for different parameter ranges. For example, if 1, and either d < or< r d (equivalently, > 1 or C 1 ), then = r and B = / + C by Proposition 3.1 (compare with Peskir and Shiryaev [12, Theorem 4.1]) β1 β β3.1.4 β1.5 β.25 β* β2.75 β * β2 β 3.7 β2 f.6 1 f β* φ* f φ r φ* φ r (a) b/cα (b) 2 2 b/cα Figure 3. Case III. (a) bisection search for, (b) comparison of for different -values, see (26, 27). For both illustrations, the parameters are = 1, = 3, 1 = 15, = 1, and c = 2; we compute r = 1/2, d <, r = 5, see (13). By Proposition 3.4, r < < b/c = 2. The condition (28) on implies that b/a = f 1 h =. Therefore, our search for starts in b/a = 1 ; it continues along the intervals where 1 2 are the mid-points of the intervals; the lefthand (righthand, resp.) half of each interval is eliminated if J i max y b/c i y is negative (positive, resp.). The unique ero of J = in 1 is found after 1 iterations at = 581 up to three decimal points; J is attained at y = The figure in (b) displays the functions for = = The function is the smallest among all functions, b/a which intersect the y-axis.

8 224 Mathematics of Operations Research 31(2), pp , 26 INFORMS 4. Proofs of Propositions 3.2, 3.3, and 3.4. In Case II, we prove Propositions 3.2 and 3.3 simultaneously by showing that there is a suitable > d such that the solution h of (23, 24) with = is bounded as in (25), and the function gy + ay b/a h y, y + has the desired properties enlisted in Proposition 3.2. In Case III, we prove Propositions 3.2 and 3.4 similarly. For the proofs of several lemmas below, it will be crucial to notice that the family of the auxiliary functions f m y m + ay b/a y + m solves f y = blyf ry y + \ d (29) which resembles Equations (23) and (26) that every h and satisfy, respectively. For every m, it is also easy to check that f m x = f m R R x b + ay b/a 1 + ary b/a f m ry dy d x R< (3) 4.1. Case II: 1 > and < d < r. For every > d, let h d be the unique solution of (23, 24)inC d C 1 d. The proof of the existence and the uniqueness of the function h, bounded as in (25), for some > d is broken down into several lemmas below. Lemmas 4.3 and 4.4 identify the crucial property of the family of functions h : for any > r,ify h y violates one of the bounds in (25), then h d (see also Figure 2(b)). However, the continuous mapping I h d has unique root in r (identified with ) by Lemmas 4.1, 4.2, 4.5, and 4.6. Therefore, h becomes one (and the only) member of h >r between the bounds in (25). Lemma 4.1. If r A<B<, then h A x > h B x for every x d B. Proof. It is sufficient to consider max d B/r A<B; in general, A B/r n+1 B/r n for some n, and h A >h B/r n > >h B/r >h B on d B follows. By (23) and (24), the difference h AB h A h B is positive on max d B/r B. If d B/r B, then the proof is complete. Otherwise, h AB satisfies h AB x = h AB R h AB y = lyh AB ry d <y<a (31) R x + ay b/a 1 + ary b/a h AB ry dy d x R A (32) There exists a positive real number m 1 such that f m1 B/r = h AB B/r. Since f m1 of (29) is increasing, and h AB is decreasing on B/r B, f m1 dominates h AB over B/r B (see Figure 4(a)). Therefore, a comparison of (3) and (32) with R = B/r gives f m1 x<h AB x x B/r 2 B/r If d B/r 2 B/r, then the proof is complete. Otherwise, there exists a (positive) real number m 2 >m 1 such that f m2 B/r 2 = h AB B/r 2 >f m1 B/r 2. Since f m2 >f m1 >h AB on B/r B, the comparison of the differential equations in (29) and (31) givesf m 2 >h AB > onb/r2 B/r. Because f m2 B/r 2 = h AB B/r 2, this implies f m2 x>h AB x, for every x B/r 2 B/r (see Figure 4(a)). Therefore, a comparison of (3) and (32) with R = B/r 2 gives f m2 x<h AB x x B/r 3 B/r 2 Because d B/r n B/r n 1 for some n, from a finite induction the proof follows. f m2 AB f m1 r n r n 1 r 3 r 2 f 1 r φ φ r φ d f m1 d φ B r n φ d B r n 1 B r 3 B r 2 (a) φ r B r A B (b) f m2 Figure 4. Illustrations for the proofs of (a) Lemma 4.1 and (b) Lemma 4.4. The location of r is unimportant as long as d < r <A<B in (a), and d < r < in (b).

9 Mathematics of Operations Research 31(2), pp , 26 INFORMS 225 Lemma 4.2. If d A<B r, then h A x<h B x for every x d B. For every d r, h > on d. Proof. The proof of that h BA h B h A > on d B is similar to that of Lemma 4.1. For the second part, observe that d /r n+1 /r n for some n, and h /r n. Therefore, the first part implies h > h /r > >h /r n on d. Lemma 4.3. Let r. Ifh = for some d, then h > on d. Proof. Let r. Suppose that h has some eros in d, and let be the largest. By (23) and (24), h A < onmaxa/r r A for every A> r. Since r /r n+1 /r n for some n, Lemma 4.1 implies that >h /r n > >h /r >h on r. Therefore, d r.takeanya max d /r. With minor modifications to the proof of Lemma 4.2, it can be shown that h >h A on d ). Lemma 4.4. Let r. Ifh = f 1 of (29) for some d, then h <f 1 on d. Proof. The function d x h x f 1 x, x d satisfies d x = lxd rx sgn + ax + ax b/a 1 cx d x = + ax b/a x x d (33) Suppose that d has some eros in d, and let be the largest (note that d ). Since d > on, (33) implies that d > and d < on/r. If d /r, then the proof is complete. Otherwise, (33) implies R d x<d R + lyd ry dy d x<r (34) x There exists a negative real number m 1 such that f m1 /r = d /r. Since f m1 is decreasing, and d is increasing on /r, wehavef m1 <d on /r. Therefore, (34) with R = /r implies (see Figure 4(b)) d x<f m1 /r + /r x lf m1 r d = f m1 x x /r 2 /r If d /r 2 /r, then the proof is complete. Otherwise, there exists a negative real number m 2 <m 1 such that f m2 /r 2 = d /r 2 <f m1 /r 2. Since f m2 <f m1,wehavef m2 <d on /r. Therefore, (33) implies d x lxd rx blxf m2 rx = f m 2 x x /r 2 /r Since f m2 /r 2 = d /r 2, this implies f m2 <d on /r 2 /r. Therefore, (34) (with R = /r 2 ) implies d < f m2 on/r 3 /r 2. Since d /r n /r n 1 for some n, a finite induction completes the proof. Lemma 4.5. For every r [ c b a + b a c r b/a+1 1 b ] r b/a 1 > r b/a+1 1 r we have h < on d. Proof. If h /r f 1 /r < for some r, then h < on d by Lemmas 4.3 and 4.4. Therefore, Lemma 4.1 implies that h >h on d for every >.By(23, 24), we have h /r = + ax b/a 1 cx dx = k + a b/a + k/r + a/r b/a /r where k bc b a c/bb a. Therefore, h /r f 1 /r for some > r if and only if 1 + k/r/k + a b/a / + a/r b/a.astends to, the function on the righthand side decreases to r b/a. Finally, the solution of 1 + k/r/k = r b/a yields =. One can also see from < d < r that > r. Lemma 4.6. The function I h d, r is strictly decreasing and continuous.

10 226 Mathematics of Operations Research 31(2), pp , 26 INFORMS Proof. For any r A<B<, the mapping h AB h A h B is positive on d A by Lemma 4.1. Therefore, IA IB= h AB d >, i.e., I is strictly decreasing. Because h AB = h B ona, h AB is nonnegative on d ; therefore, h AB y = lyh AB ry, y d A, thanks to (23). Hence, h AB is nondecreasing on d A.Ifmax r B/r<A<B, then <IA IB= h AB d h AB A = B A + ay b/a 1 cy dy where the last equality follows from (23) and (27). Because the integral above goes to ero as B A decreases to ero, the continuity of I follows. Proof of Proposition 3.2 in Case II and Proposition 3.3. By Lemma 4.6, the function I introduced in Proposition 3.3 is continuous and strictly decreasing on r. Since I > for every d r by Lemma 4.2, and I< by Lemma 4.5, the intermediate value theorem implies that there is unique r such that I =. Lemmas 4.3 and 4.4 imply that and h are the only solutions of (23), (24) that also satisfy (25). (If >, then h d = I< = f 1 d, i.e., h violates the lower bound f 1 in some neighborhood of d.if d <<, then h d = I >, i.e., h violates the upper bound (namely, the nonpositivity) in some neighborhood of d.) We extend h onto + by defining it on d as the solution of h y = lyh ry sgn + ay + ay b/a 1 cy y d (35) where ly sgn + ay + ay b/a 1 + ary b/a as in (23). Because h on d and < d < r, (35) implies that h on d. Because h d =, this yields that h on d. Because f 1 h on d where f 1 is given by (29), (35) also implies h y lyf 1 ry sgn + ay + ay b/a 1 cy = blyf 1 ry sgn + ay + ay b/a 1 cy blyf 1 ry = f 1 y for all y d. Because f 1 d = h d =, this implies f 1 h on d. Thus, we showed f 1 y + ay b/a h y y + (36) The proof of Proposition 3.3, as well as that of Proposition 3.2 in Case II, is now complete because a direct computation using (23), (24) (with = ) and (35) shows that the continuous function + ay b/a h y y + \ d gy (37) lim + ax b/a h x y = d x d which is evidently in C 1 + \ d and bounded in 1 by (36), satisfies (together with ) the conditions of the verification lemma on page 221 (an application of L Hospital rule reveals that g d is finite) Case III: 1 <. In this case, note that a> and d <. Lemma 4.7 and its corollary establish the existence and the uniqueness of the solutions of (26), (27). Lemmas 4.8, 4.9, and 4.1 are essential for the existence of unique that lies between the bounds in (28). Lemma 4.11 hints that the search for the optimal stopping threshold in Propositions 3.2 and 3.4 can be confined into r b/c. Lemma 4.7. If f,, and are real-valued functions on + such that ft k, t T and max s t s t with T sds < for some positive constants k and T, then ut = ft+ t surs ds t T (38) has unique integrable solution u. Iff is continuous (resp., continuously differentiable, and is continuous), then u is continuous (resp., continuously differentiable). Moreover, if is bounded, so is u. Proof. The proof is a slight modification of that of the existence and uniqueness of the renewal equation (Bellman and Cooke [4, p. 217]).

11 Mathematics of Operations Research 31(2), pp , 26 INFORMS 227 Corollary 4.1. Equations (26), (27) have unique bounded and continuously differentiable solution on T, for every T>. Proof. We can write (26), (27) as x = fx+ yhry dy, with fx + ay b/a 1 cx dy and x + ax b/a 1 + arx b/a Because y is decreasing, x max y x y==1/ is bounded. Because is continuous (note that a>if 1 < ), and f is continuously differentiable, the conclusion follows from Lemma 4.7. Lemma 4.8. If >, then x > x for x T for every T>. Proof. By means of the family of functions f m in (29), we shall prove that h is positive on T for any fixed T>. Using (26), (27), we find that y = ly ry y T and = (39) where ly + ay b/a 1 + ray b/a as in (29). If m is chosen such that f m =, then (29) and (39) imply that f m = blf m = bl < l =. Hence f m < in some neighborhood of. Therefore, there are some m 1 >m and > such that f m1 > > on and f m1 =. The proof is complete if T. Otherwise, note that f m 1 x = blxf m1 rx blx rx < lx rx = x for every x /r, and <f m1 x = f m1 + f m 1 y dy < + y dy = x x /r If T /r, then the proof is complete. Otherwise, we choose m 2 >m 1 such that f m2 /r = /r > f m1 /r, see Figure 5(a). Since f m2 >f m1,wehavef m2 > on. Therefore, f m 2 x = blxf m2 rx blx rx < lx rx = x for every x /r /r 2, and <f m2 x = f m2 /r + f m 2 y dy < /r + y dy = x /r /r x /r /r 2 Because T /r n 1 /r n for some n, the proof follows by finite induction. Lemma 4.9. x is a continuous function of uniformly in x T for every T>. γ β ~ φ n 2 φ ~ φ n 1 φ φ ~ φ n φ γβ cα λ1 f m2 ε ε r β + λ b/a f m1 r ε r 2 r 2 r 3 d β f m1 (a) ε r n 1 r n 1 T T r n ε r n ~ φ n 2 r G s ~ φ n 1 r f m2 (b) (c) Figure 5. Illustrations for the proofs of (a) Lemma 4.8, (b) Lemma 4.1, and (c) Proposition 3.4.

12 228 Mathematics of Operations Research 31(2), pp , 26 INFORMS Proof. Let.By(26) and (27), x = ly ry dy for every x T. Because lx = + ax b/a 1 + arx b/a is decreasing and l = 1/, wehave x r d x T (4) After multiplying by exp x/r and taking the integrals of both sides, we obtain d exp x/r exp /r d. Therefore, (4) implies ( ) x 1 + /r exp /r exp /r d = exp x/r exp T /r x T Lemma 4.1. Let b/a. If = f 1 for some +, then x<f 1 x < for every x T and T>, where f 1 is as in (29) and f 1 = b/a. Proof. First, let b/a. Suppose that d x x f 1 x, x + has some eros, and let be the smallest (>). We shall prove that d < on T for every T>. By (26), (27), and (29), d x = lxd rx + ax b/a 1 cx x + and d = + b/a (41) Because d > on,(41) implies that d is decreasing on /r. Hence, d < on /r.ift /r, then the proof is complete. Otherwise, there exists some m 1 < such that f m1 /r = d /r. Since f m1 is increasing, and d is decreasing on /r, wehaved f m1 on /r. Therefore, f m 1 x = blxf m1 rx blxd rx > lxd rx + ax b/a 1 cx = d x for every x /r /r 2. Thus, >f m1 x = f m1 /r + f m 1 y dy > d /r + d y dt = d x /r /r x /r /r 2 If T /r /r 2, then the proof is complete. Otherwise, there exists m 2 <m 1 such that f m2 /r 2 = d /r 2 < f m1 /r 2, see Figure 5(b). Because f m2 <f m1,wehavef m2 <d,on /r. By(29) and (41), f m 2 x = blxf m2 rx blxd rx > lxd rx + ax b/a 1 cx = d x for every x /r /r 2. Because f m2 /r 2 = d /r 2, this implies that f m2 <d on /r /r 2. By using (29) and (41) once again, f m 2 x = blxf m2 rx blxd rx > lxd rx + ax b/a 1 cx = d x for every x /r 2 /r 3. Because f m2 /r 2 = d /r 2, this implies that >f m2 >d on /r 2 /r 3. Because T /r n 1 /r n for some n, a finite induction argument concludes the proof. If = b/a, then d = d + =, and d is decreasing on /r for some small >; the rest of the proof follows as above. Lemma The function Jmax x b/c x, b/a is continuous and strictly increasing. There exists unique b/a such that J =. The maximum J is attained in r b/c; at every r b/c where it is attained, we have = =. Proof. Lemmas 4.8 and 4.9 imply that J is strictly increasing and continuous on b/a. On the other hand, J > since = < +, and J b/a max x b/c f 1 x = f 1 b/c < by Lemma 4.1. By the intermediate value theorem, there exists unique b/a such that = J = max x b/c x. Because h is continuous by Lemma 4.9 and h = <, the maximum J is attained in b/c. If it is attained at some b/c, then = = is obvious. Suppose that b/c =. Then >f 1 on b/c by Lemma 4.1 (otherwise, there exists some b/c such that = f 1, and <f 1 < on b/c). By (26) and (29), [ ] x = + rx ax b/a 1 cx + ax b/a 1 b cx x b/c u f 1 rx where u infx + x f 1 x > b/c, because both and f 1 are continuous. However, this implies that max x u x = J = b/c = and b/c =. Finally, let b/c be the smallest number where J is attained. Then y < for every y<.if r, then (26) implies = + a b/a 1 + a b/a r + c >, which contradicts with =. Thus, > r.

13 Mathematics of Operations Research 31(2), pp , 26 INFORMS 229 Proof of Proposition 3.2 in Case III and Proposition 3.4. By Lemma 4.11, the function J has the properties enlisted in Proposition 3.4. As in the same lemma, let b/a and r b/c be such that = = J max x b/c x; by the choice of,wehave = and f 1 x + ay b/a < x x (42) (If f 1 = for some <, then >f 1 > on by Lemma 4.1, which contradicts with =.) We define + ay b/a y y gy (43) y The function g is continuous and continuously differentiable on + (because = =, g is differentiable at ), and is bounded in 1 by (42). The proofs of both propositions will be complete if we show that g and satisfy the conditions of the verification lemma on page 221. Using (26) with =, direct computation shows that g solves (2, 21 ) and satisfies the inequality (22) for every y /r (recall that > r ). It remains only to show that g satisfies (22) for y /r. Equivalently, because + ayg y bgy + gry+ cy = gry+ cy = + ary b/a ry + cy y y /r (44) by the definition of g and (26), it is sufficient to prove that is decreasing on /r. We define Gy + ay b/a y ky + ay b/a+1 G y and sy + ay b/a+2 G y y + (45) Because = =, and is a local maximum of, it is also a local maximum of G, and G = G = and G < (46) Note that g and G coincide on. Unlike g, G satisfies everywhere on y + + ayg y bgy + Gry + cy = (47) + ayg y b ag y + 1 G ry + c = (48) + ayg y b 2aG y + 1 rg ry = (49) where the last two equations follow from the first by differentiation. After they are multiplied by +ay b a/a and + ay b 2a/a, respectively, and their terms are rearranged by using k and s in (45), we obtain (51) and (52) below. In (5), we rewrite the dynamics (26) of in terms of G : y = + ay b/a 1 Gry + cy (5) k y = + ay b/a 1 G ry + c (51) s y = 1 r+ ay b/a+1 G ry = 1 r+ ay b/a+1 + ary b/a 2 sry (52) Because =, (5) implies Gr + c =. Thus, is decreasing on /r if the mapping y Gry + cy is increasing on /r, equivalently, its derivative 1 G ry + c is positive at every y /r. Therefore, the verification will be finalied under (46) when we show that if G = and G < then G y > c/ 1 for every y r (53) By the second equality in (52), the function s reverts itself to the mean-level y =. By (42), 1 G, and (45) imply that s<. Let, and < 1 < 2 < be the intersection points of s with y-axis (if there are finitely many of them, then we set the rest to +). Then (52) implies that s is increasing (decreasing, resp.) on n /r n+1 /r for every even (odd, resp.) n, and < 1 < 1 r < 2 < 2 r < 3 < 3 r < < n 1 < n 1 < r n < (54)

14 23 Mathematics of Operations Research 31(2), pp , 26 INFORMS Because s and G have the same signs, G is negative (positive, resp.) on n n+1 for every even (odd, resp.) n. Therefore, G is decreasing (increasing, resp.) on n n+1 for every even (odd, resp.) n ; see Figure 5(c). Let + be such that G = and G <. Then n n+1 for some even n, and G y G = for every y n.ifn =, then G > on and (53) follows. Suppose n 2. If G c/ 1 on n 1 n, then (53) holds because r r n n 1 by (54), and G > c/ 1 on n. For the rest, we suppose that G y = c/ 1 for some y n 1 n. Because G is increasing on n 1 n, it intersects with y = c/ 1 and y = exactly once, say at and, respectively. Then, n 1 < < n <. We prove (53) by showing that <r (indeed, this implies r ; because G > c/ 1 on,(53) follows). Observe that k in (45) and G have exactly the same eros and signs. Thus, k = k = and k >on. Therefore, k has at least one local maximum in. Hence, there exists at least one y such that (i) k y =, (ii) k >ony y, and (iii) k <ony y + for some >. Then (51) and (54) imply that there exists at least one r r n 2 n such that G = c/ 1 and for some > (55) G y < c/ 1 for y and G y > c/ 1 for y + In the interval n 1 n, is the only candidate for in (55). On the other hand, even if G intersects with y = c/ 1 on n 2 n 1, the intersection point cannot become in (55) since G is decreasing on n 2 n 1 ; see the dotted curve in Figure 5(c). Hence, must be in (55). Therefore, r and (53) is proved. For the proof of the uniqueness of, let us pick as the smallest of all numbers such that = =, and g be as in (43). By (46) and (53), we have <on /r, and (44) implies that (22) holds with strict inequality on /r. An application of the chain-rule (see Lemma A.1) shows that /r must be in the optimal stopping region. Suppose that there exists another + such that it is optimal to stop in. Then, we must have. Because gy < ony, it is optimal not to stop in ; therefore. Thus, =. Appendix A.1. The Bayesian analysis. Let u be the probability measure induced on F X by the finite-dimensional distribution of X of (4) given that = u. Then d u d F X t = 1 u>t + 1 u t L t L u and L t ( 1 ) Xt e 1 t ut (56) It is easily checked that F = F + 1 sf e s ds for every F F X. Moreover, the useful identity F > t = 1 t F e t F F X t t (57) follows from the equality F > t = t s F ds and that the measure s coincides with on Ft X for every s t. Using the generalied Bayes theorem (see, e.g., Shiryaev [16, p ], Liptser and Shiryaev [9, p ]), we obtain and s F X t = L t + 1 [ s t L t /L u e u du + e t e s t] L t + 1 [ t L st (58) t/l u e u du + e t] = F X t = L t L t + 1 [ t L t/l u e u du + e t] t (59) ds F X 1 L t = t /L s e s ds L t + 1 [ t L <s t (6) t/l u e u du + e t]

15 Mathematics of Operations Research 31(2), pp , 26 INFORMS 231 By substituting t for s in (58), we obtain t t F X t = t t 1 t = e t L t L t + 1 t L t/l u e u du L t + 1 [ t L (61) t/l u e u du + e t] [ t ] e u du t (62) L u Using (59) and (6), we also calculate [ ] e t[ L Ɛ 1 t e t + F X t + 1 t L t = t/l u e +u du ] L t + 1 [ t L t/l u e u du + e t] and t Ɛ [ ] 1 t e t + Ft X [ t ] = e +t L 1 t t e +u du t (63) L u Once noticed that L t in (56) obeys dl t /L t = 1 dt + 1 / 1dX t, t and L = 1, the chain rule gives (1) and (11) as the dynamics of t in (62) and t in (63), respectively. Another application of the chain rule to t = t /1 + t with (1) gives the dynamics of as in (9). Proof of (8). Let be an F X -stopping time. Since e + 1 = + the expectation of both sides gives Ɛ e + 1 = because of Fubini s theorem, that > s F X s > = = 1 = 1 e u du = 1 > e s ds = Ɛ ( 1>s Ɛ [ 1 s e s + F X s s < s ds = for every s and (7). But 1 >s 1 s e s + ds ]) ds = Ɛ 1 s s ds 1 s s1 e s ds 1 s se s ds = 1 s > sds Ɛ 1 s 1 s ds = 1 Ɛ 1 s ds (64) where the forth equality follows from (57) since s Fs X for every s. Finally, the sum of (64) and the preceding equation gives (8). Presented here for completeness, the formulas (58) (64) were derived for the first time by Shiryaev [14, 15, Chapter 4] and Beibel [3]. Appendix A.2. The chain-rule and the verification lemma. Needed at the end of this section on page 221 for the proof of the verification lemma, the lemmas in the remainder are concerned with the processes and of (15) on the probability space F for every 1 +, see page 221. They remain true when and in their statements and in their proofs are replaced with and, respectively, because (i) the probability measures and /1 are the same on F, and (ii) the triplet X under /1 has the same finite-dimensional distribution as that of the triplet under (compare (15) and (11)). Finally, observe that the innovations process X t X t t 1 s + 1 s ds t (65) is an F X -martingale under both and for every 1 and +. Lemma A.1. Let G 1 + be a continuous and piecewise continuously differentiable function in each coordinate. Then G t t = G + t G s s ds+ M t t (66)

16 232 Mathematics of Operations Research 31(2), pp , 26 INFORMS where M t t G s s dx s, t is an F X -local martingale; G G 1 / / G for every 1 +, and G G G G (67) at every 1 + where the partial derivatives G and G exist. If G is bounded, and is an F X -stopping time with finite -expectation, then Ɛ G = Ɛ G Ɛ G s s ds (68) Proof. Denote by c and c the continuous parts of the processes and. Let t = 1 t 1 t and 1 t + 1 t = 1 / 1 t t> t The processes and jump simultaneously with the jump sies t and t at every time t when X jumps. Standard application of the chain-rule with the dynamics of and in (9) and (15), respectively, gives G t t = G + G t s s ds+ M t where G and M t are as above. This proves (66). Suppose next that G is bounded, and is an F X -stopping time with finite expectation under. Because Ɛ G s s 1 s + 1 s ds sup 2Gp y 1 Ɛ < py the stopped process M t t is a closable F X -martingale under. Therefore, Ɛ M =. If we replace with t and take expectations of both sides in (66), then (68) follows. Lemma A.2. Let inft t be the first exit time of the process out of the interval, +. Then Ɛ y is finite for every y 1 +. Proof. If y, then Ɛ y =. Suppose that y<.by(15) and (65), we obtain for every t + t = = t t s ds+ { s + t ( / 1 s dx s } ) 1 s + 1 s s ds + M t (69) where M t t 1 / 1 s d X s is a stopped F X -martingale under y because t ( ) Ɛ y 1 1 s 1 s + 1 s ] ds 1 1 Ɛ y t < t + Therefore, Ɛ y M t = for every t +. After the last integrand in (69) is simplified, and the expectations of both sides are taken, we obtain t [ Ɛ y t y = Ɛ y + ( + + ) ] 1 2 s s ds Ɛ y t By (15), we have y t max 1 / 1 = 1; therefore, Ɛ y t Ɛ y t y/ max 1 / 1 y/ for every t +. When the limit of both sides is taken as t tends to infinity, the conclusion follows from the monotone convergence theorem. Proof of Verification Lemma. Suppose that g + is bounded, continuous and piecewise continuously differentiable. Let G y 1 gy for every y 1 +. For every 1 and y + where g y exists, we have 1 gy = 1 + ayg y bgy + gry. For every F X -stopping time with finite y -expectation (see pages 221 and 231), the last part of Lemma A.1 implies 1 gy = Ɛ y G = Ɛ y G Ɛ y 1 s + a s g s bg s + gr s ds Ɛ y 1 s c s ds y 1 + (7)

where the inequality follows from (22) and the fact that G is nonpositive. When we take the infimum of both sides in (70) over all F^X-stopping times τ with finite P^π_y-expectations, we obtain (1 - π)g(y) ≤ V(π, y) for every (π, y) ∈ [0, 1) × ℝ_+.

Suppose that the same function g above is in C(ℝ_+) ∩ C¹(ℝ_+ \ {φ_d}) for some real number φ̃ > φ_r and solves (20′), (21′). By Lemma A.2, the F^X-stopping time τ_φ̃ ≜ inf{t ≥ 0 : Φ_t ≥ φ̃} has finite P^π_y-expectation. When τ is replaced with τ_φ̃, (70) is still true, but now with an equality instead of the inequality (thanks to (20′) and (21′)). Together with the inequality obtained in the first part, we have

V(π, y) = (1 - π)g(y) = E^π_y ∫_0^{τ_φ̃} (1 - Π_s)(cα Φ_s - λ) ds,  (π, y) ∈ [0, 1) × ℝ_+,

and τ_φ̃ is optimal for (16).

Acknowledgments. Erhan Bayraktar is supported by the U.S. Office of Naval Research under Grant N and the U.S. Army Pantheon Project. Savas Dayanik is supported by the National Science Foundation under Grant NSF-DMI. We thank the associate editor and the anonymous referee for their careful reading and their suggestions, which improved our presentation.

References
[1] Bayraktar, Erhan, Savas Dayanik, Ioannis Karatzas. 2005. The standard Poisson disorder problem revisited. Stochastic Process. Appl. 115(9).
[2] Beibel, M. 1996. A note on Ritov's Bayes approach to the minimax property of the CUSUM procedure. Ann. Statist. 24(4).
[3] Beibel, M. 2000. A note on sequential detection with exponential penalty for the delay. Ann. Statist. 28(6).
[4] Bellman, Richard, Kenneth L. Cooke. 1963. Differential-Difference Equations. Academic Press, New York.
[5] Bensoussan, Alain. 1992. Stochastic Control of Partially Observable Systems. Cambridge University Press, Cambridge, UK.
[6] Davis, M. H. A. 1976. A note on the Poisson disorder problem. Banach Center Publ.
[7] Galchuk, L. I., B. L. Rozovskii. 1971. The disorder problem for a Poisson process. Theory Probab. Appl.
[8] Karatzas, Ioannis. 2003. A note on Bayesian detection of change-points with an expected miss criterion. Statist. Decisions 21(1).
[9] Liptser, Robert S., Albert N. Shiryaev. 2001. Statistics of Random Processes, I, expanded ed. Applications of Mathematics, Vol. 5. Springer-Verlag, Berlin, New York.
[10] Moustakides, G. V. 2004. Optimality of the CUSUM procedure in continuous time. Ann. Statist. 32(1).
[11] Peskir, G., A. N. Shiryaev. 2000. Sequential testing problems for Poisson processes. Ann. Statist. 28(3).
[12] Peskir, G., A. N. Shiryaev. 2002. Solving the Poisson disorder problem. Advances in Finance and Stochastics. Springer, Berlin.
[13] Poor, H. Vincent. 1998. Quickest detection with exponential penalty for delay. Ann. Statist. 26(6).
[14] Shiryaev, A. N. 1973. Statistical Sequential Analysis. American Mathematical Society, Providence, RI.
[15] Shiryaev, A. N. 1978. Optimal Stopping Rules. Springer-Verlag, New York.
[16] Shiryaev, A. N. Probability. Graduate Texts in Mathematics, Vol. 95. Springer-Verlag, New York.
[17] Shiryaev, A. N. 2002. Quickest detection problems in the technical analysis of the financial data. Mathematical Finance, Bachelier Congress 2000 (Paris). Springer Finance, Springer, Berlin, Germany.
[18] Whittle, Peter. 1990. Risk-Sensitive Optimal Control. Wiley-Interscience Series in Systems and Optimization. John Wiley & Sons Ltd., Chichester, UK.
[19] Whittle, Peter. 1996. Optimal Control. Wiley-Interscience Series in Systems and Optimization. John Wiley & Sons Ltd., Chichester, UK.


More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES STEFAN TAPPE Abstract. In a work of van Gaans (25a) stochastic integrals are regarded as L 2 -curves. In Filipović and Tappe (28) we have shown the connection

More information

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS (2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University

More information

Lecture 12. F o s, (1.1) F t := s>t

Lecture 12. F o s, (1.1) F t := s>t Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let

More information

A Representation of Excessive Functions as Expected Suprema

A Representation of Excessive Functions as Expected Suprema A Representation of Excessive Functions as Expected Suprema Hans Föllmer & Thomas Knispel Humboldt-Universität zu Berlin Institut für Mathematik Unter den Linden 6 10099 Berlin, Germany E-mail: foellmer@math.hu-berlin.de,

More information

Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process

Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Applied Mathematical Sciences, Vol. 4, 2010, no. 62, 3083-3093 Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Julia Bondarenko Helmut-Schmidt University Hamburg University

More information

F (x) = P [X x[. DF1 F is nondecreasing. DF2 F is right-continuous

F (x) = P [X x[. DF1 F is nondecreasing. DF2 F is right-continuous 7: /4/ TOPIC Distribution functions their inverses This section develops properties of probability distribution functions their inverses Two main topics are the so-called probability integral transformation

More information

Sample Spaces, Random Variables

Sample Spaces, Random Variables Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted

More information

MAXIMAL COUPLING OF EUCLIDEAN BROWNIAN MOTIONS

MAXIMAL COUPLING OF EUCLIDEAN BROWNIAN MOTIONS MAXIMAL COUPLING OF EUCLIDEAN BOWNIAN MOTIONS ELTON P. HSU AND KAL-THEODO STUM ABSTACT. We prove that the mirror coupling is the unique maximal Markovian coupling of two Euclidean Brownian motions starting

More information

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have

Solution for Problem 7.1. We argue by contradiction. If the limit were not infinite, then since τ M (ω) is nondecreasing we would have 362 Problem Hints and Solutions sup g n (ω, t) g(ω, t) sup g(ω, s) g(ω, t) µ n (ω). t T s,t: s t 1/n By the uniform continuity of t g(ω, t) on [, T], one has for each ω that µ n (ω) as n. Two applications

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

On an Effective Solution of the Optimal Stopping Problem for Random Walks

On an Effective Solution of the Optimal Stopping Problem for Random Walks QUANTITATIVE FINANCE RESEARCH CENTRE QUANTITATIVE FINANCE RESEARCH CENTRE Research Paper 131 September 2004 On an Effective Solution of the Optimal Stopping Problem for Random Walks Alexander Novikov and

More information

On Kusuoka Representation of Law Invariant Risk Measures

On Kusuoka Representation of Law Invariant Risk Measures MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 1, February 213, pp. 142 152 ISSN 364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/1.1287/moor.112.563 213 INFORMS On Kusuoka Representation of

More information

Data-Efficient Quickest Change Detection

Data-Efficient Quickest Change Detection Data-Efficient Quickest Change Detection Venu Veeravalli ECE Department & Coordinated Science Lab University of Illinois at Urbana-Champaign http://www.ifp.illinois.edu/~vvv (joint work with Taposh Banerjee)

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

arxiv: v1 [math.pr] 26 Mar 2008

arxiv: v1 [math.pr] 26 Mar 2008 arxiv:0803.3679v1 [math.pr] 26 Mar 2008 The game-theoretic martingales behind the zero-one laws Akimichi Takemura 1 takemura@stat.t.u-tokyo.ac.jp, http://www.e.u-tokyo.ac.jp/ takemura Vladimir Vovk 2 vovk@cs.rhul.ac.uk,

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Early Detection of a Change in Poisson Rate After Accounting For Population Size Effects

Early Detection of a Change in Poisson Rate After Accounting For Population Size Effects Early Detection of a Change in Poisson Rate After Accounting For Population Size Effects School of Industrial and Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive NW, Atlanta, GA 30332-0205,

More information

On Reflecting Brownian Motion with Drift

On Reflecting Brownian Motion with Drift Proc. Symp. Stoch. Syst. Osaka, 25), ISCIE Kyoto, 26, 1-5) On Reflecting Brownian Motion with Drift Goran Peskir This version: 12 June 26 First version: 1 September 25 Research Report No. 3, 25, Probability

More information

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Chapter 3 A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Abstract We establish a change of variable

More information

HOMEWORK ASSIGNMENT 6

HOMEWORK ASSIGNMENT 6 HOMEWORK ASSIGNMENT 6 DUE 15 MARCH, 2016 1) Suppose f, g : A R are uniformly continuous on A. Show that f + g is uniformly continuous on A. Solution First we note: In order to show that f + g is uniformly

More information

On Optimal Stopping Problems with Power Function of Lévy Processes

On Optimal Stopping Problems with Power Function of Lévy Processes On Optimal Stopping Problems with Power Function of Lévy Processes Budhi Arta Surya Department of Mathematics University of Utrecht 31 August 2006 This talk is based on the joint paper with A.E. Kyprianou:

More information

The Hilbert Transform and Fine Continuity

The Hilbert Transform and Fine Continuity Irish Math. Soc. Bulletin 58 (2006), 8 9 8 The Hilbert Transform and Fine Continuity J. B. TWOMEY Abstract. It is shown that the Hilbert transform of a function having bounded variation in a finite interval

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Correction to: Yield curve shapes and the asymptotic short rate distribution in affine one-factor models

Correction to: Yield curve shapes and the asymptotic short rate distribution in affine one-factor models Finance Stoch (218) 22:53 51 https://doi.org/1.17/s78-18-359-5 CORRECTION Correction to: Yield curve shapes and the asymptotic short rate distribution in affine one-factor models Martin Keller-Ressel 1

More information

A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM

A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM J. Appl. Prob. 49, 876 882 (2012 Printed in England Applied Probability Trust 2012 A NEW PROOF OF THE WIENER HOPF FACTORIZATION VIA BASU S THEOREM BRIAN FRALIX and COLIN GALLAGHER, Clemson University Abstract

More information

Statistical Models and Algorithms for Real-Time Anomaly Detection Using Multi-Modal Data

Statistical Models and Algorithms for Real-Time Anomaly Detection Using Multi-Modal Data Statistical Models and Algorithms for Real-Time Anomaly Detection Using Multi-Modal Data Taposh Banerjee University of Texas at San Antonio Joint work with Gene Whipps (US Army Research Laboratory) Prudhvi

More information

We have been going places in the car of calculus for years, but this analysis course is about how the car actually works.

We have been going places in the car of calculus for years, but this analysis course is about how the car actually works. Analysis I We have been going places in the car of calculus for years, but this analysis course is about how the car actually works. Copier s Message These notes may contain errors. In fact, they almost

More information

The Azéma-Yor Embedding in Non-Singular Diffusions

The Azéma-Yor Embedding in Non-Singular Diffusions Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES. 1. Introduction

ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES. 1. Introduction ON ADDITIVE TIME-CHANGES OF FELLER PROCESSES ALEKSANDAR MIJATOVIĆ AND MARTIJN PISTORIUS Abstract. In this note we generalise the Phillips theorem [1] on the subordination of Feller processes by Lévy subordinators

More information

General Theory of Large Deviations

General Theory of Large Deviations Chapter 30 General Theory of Large Deviations A family of random variables follows the large deviations principle if the probability of the variables falling into bad sets, representing large deviations

More information

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E,

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E, Tel Aviv University, 26 Analysis-III 9 9 Improper integral 9a Introduction....................... 9 9b Positive integrands................... 9c Special functions gamma and beta......... 4 9d Change of

More information

arxiv: v4 [math.mg] 23 May 2018

arxiv: v4 [math.mg] 23 May 2018 DOUBLE BUBBLES ON THE REAL LINE WITH LOG-CONVEX DENSITY arxiv:1708.0389v4 [math.mg] 3 May 018 ELIOT BONGIOVANNI, LEONARDO DI GIOSIA, ALEJANDRO DIAZ, JAHANGIR HABIB, ARJUN KAKKAR, LEA KENIGSBERG, DYLANGER

More information

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 2, pp. 401 412 (2013) http://campus.mst.edu/adsa Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes

More information

Properties of an infinite dimensional EDS system : the Muller s ratchet

Properties of an infinite dimensional EDS system : the Muller s ratchet Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population

More information

Pseudo-stopping times and the hypothesis (H)

Pseudo-stopping times and the hypothesis (H) Pseudo-stopping times and the hypothesis (H) Anna Aksamit Laboratoire de Mathématiques et Modélisation d'évry (LaMME) UMR CNRS 80712, Université d'évry Val d'essonne 91037 Évry Cedex, France Libo Li Department

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

INDEX POLICIES FOR DISCOUNTED BANDIT PROBLEMS WITH AVAILABILITY CONSTRAINTS

INDEX POLICIES FOR DISCOUNTED BANDIT PROBLEMS WITH AVAILABILITY CONSTRAINTS Applied Probability Trust (4 February 2008) INDEX POLICIES FOR DISCOUNTED BANDIT PROBLEMS WITH AVAILABILITY CONSTRAINTS SAVAS DAYANIK, Princeton University WARREN POWELL, Princeton University KAZUTOSHI

More information

Sums of exponentials of random walks

Sums of exponentials of random walks Sums of exponentials of random walks Robert de Jong Ohio State University August 27, 2009 Abstract This paper shows that the sum of the exponential of an oscillating random walk converges in distribution,

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS

ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS JOHANNES RUF AND JAMES LEWIS WOLTER Appendix B. The Proofs of Theorem. and Proposition.3 The proof

More information

Harmonic Functions and Brownian motion

Harmonic Functions and Brownian motion Harmonic Functions and Brownian motion Steven P. Lalley April 25, 211 1 Dynkin s Formula Denote by W t = (W 1 t, W 2 t,..., W d t ) a standard d dimensional Wiener process on (Ω, F, P ), and let F = (F

More information

Optimal exit strategies for investment projects. 7th AMaMeF and Swissquote Conference

Optimal exit strategies for investment projects. 7th AMaMeF and Swissquote Conference Optimal exit strategies for investment projects Simone Scotti Université Paris Diderot Laboratoire de Probabilité et Modèles Aléatories Joint work with : Etienne Chevalier, Université d Evry Vathana Ly

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

16 1 Basic Facts from Functional Analysis and Banach Lattices

16 1 Basic Facts from Functional Analysis and Banach Lattices 16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness

More information

North Carolina State University

North Carolina State University North Carolina State University MA 141 Course Text Calculus I by Brenda Burns-Williams and Elizabeth Dempster August 7, 2014 Section1 Functions Introduction In this section, we will define the mathematical

More information

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time

Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time David Laibson 9/30/2014 Outline Lectures 9-10: 9.1 Continuous-time Bellman Equation 9.2 Application: Merton s Problem 9.3 Application:

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

An essay on the general theory of stochastic processes

An essay on the general theory of stochastic processes Probability Surveys Vol. 3 (26) 345 412 ISSN: 1549-5787 DOI: 1.1214/1549578614 An essay on the general theory of stochastic processes Ashkan Nikeghbali ETHZ Departement Mathematik, Rämistrasse 11, HG G16

More information

Exponential martingales: uniform integrability results and applications to point processes

Exponential martingales: uniform integrability results and applications to point processes Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda

More information

1 Stochastic Dynamic Programming

1 Stochastic Dynamic Programming 1 Stochastic Dynamic Programming Formally, a stochastic dynamic program has the same components as a deterministic one; the only modification is to the state transition equation. When events in the future

More information

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift Proc. Math. Control Theory Finance Lisbon 27, Springer, 28, 95-112 Research Report No. 4, 27, Probab. Statist. Group Manchester 16 pp Predicting the Time of the Ultimate Maximum for Brownian Motion with

More information

Approximating diffusions by piecewise constant parameters

Approximating diffusions by piecewise constant parameters Approximating diffusions by piecewise constant parameters Lothar Breuer Institute of Mathematics Statistics, University of Kent, Canterbury CT2 7NF, UK Abstract We approximate the resolvent of a one-dimensional

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

Point Process Control

Point Process Control Point Process Control The following note is based on Chapters I, II and VII in Brémaud s book Point Processes and Queues (1981). 1 Basic Definitions Consider some probability space (Ω, F, P). A real-valued

More information

The Hardy Operator and Boyd Indices

The Hardy Operator and Boyd Indices The Hardy Operator and Boyd Indices Department of Mathematics, University of Mis- STEPHEN J MONTGOMERY-SMITH souri, Columbia, Missouri 65211 ABSTRACT We give necessary and sufficient conditions for the

More information

Jump-type Levy Processes

Jump-type Levy Processes Jump-type Levy Processes Ernst Eberlein Handbook of Financial Time Series Outline Table of contents Probabilistic Structure of Levy Processes Levy process Levy-Ito decomposition Jump part Probabilistic

More information

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

On a Class of Multidimensional Optimal Transportation Problems

On a Class of Multidimensional Optimal Transportation Problems Journal of Convex Analysis Volume 10 (2003), No. 2, 517 529 On a Class of Multidimensional Optimal Transportation Problems G. Carlier Université Bordeaux 1, MAB, UMR CNRS 5466, France and Université Bordeaux

More information

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2)

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Statistical analysis is based on probability theory. The fundamental object in probability theory is a probability space,

More information

Econ Slides from Lecture 1

Econ Slides from Lecture 1 Econ 205 Sobel Econ 205 - Slides from Lecture 1 Joel Sobel August 23, 2010 Warning I can t start without assuming that something is common knowledge. You can find basic definitions of Sets and Set Operations

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Spatial Ergodicity of the Harris Flows

Spatial Ergodicity of the Harris Flows Communications on Stochastic Analysis Volume 11 Number 2 Article 6 6-217 Spatial Ergodicity of the Harris Flows E.V. Glinyanaya Institute of Mathematics NAS of Ukraine, glinkate@gmail.com Follow this and

More information