SEQUENTIAL CHANGE DETECTION REVISITED
BY GEORGE V. MOUSTAKIDES, University of Patras


The Annals of Statistics 2008, Vol. 36, No. 2
© Institute of Mathematical Statistics, 2008

In sequential change detection, existing performance measures differ significantly in the way they treat the time of change. By modeling this quantity as a random time, we introduce a general framework capable of capturing and better understanding most well-known criteria and also propose new ones. For a specific new criterion that constitutes an extension to Lorden's performance measure, we offer the optimum structure for detecting a change in the constant drift of a Brownian motion and a formula for the corresponding optimum performance.

1. Introduction. Suppose we are observing sequentially a process $\{\xi_t\}_{t>0}$ which, up to and including time $\tau$, follows the probability measure $P_\infty$ and after $\tau$ switches to an alternative regime $P_0$. The parameter $\tau$ is the change-time and denotes the last time instant the process is under the nominal regime $P_\infty$. The goal is to detect the change of measures as soon as possible, using a sequential scheme. Any sequential test can be modeled as a stopping time (s.t.) $T$ adapted to the filtration $\{F_t\}_{t\ge 0}$, where $F_t=\sigma\{\xi_s,\ 0<s\le t\}$ for $t>0$, and $F_0$ is the trivial $\sigma$-algebra. We note that the process $\{\xi_t\}$ becomes available for $t>0$, while the change-time $\tau$ can take the values $0$ and $\infty$ as well. This is because with $\tau=0$ we would like to capture the case where all observations are under the alternative regime, whereas $\tau=\infty$ refers to the case where all observations are under the nominal regime. More generally, $P_\tau$ denotes the probability measure induced by the change occurring at $\tau$ and $E_\tau[\cdot]$ the corresponding expectation. In particular, if $X$ is an $F_\infty$-measurable random variable and $\tau=t$ a deterministic time of change, then we can write
$$E_t[X] = E_\infty\bigl[E_0[X \mid F_t]\bigr]. \tag{1.1}$$
In developing optimum change detection algorithms, the first step consists in defining a suitable performance measure.
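Before turning to performance measures, the observation model itself is easy to visualize in discrete time. The following sketch is purely illustrative (the paper's setting is more general; the function name, parameters and mean-shift example are invented): it generates observations that follow the nominal regime up to and including the change-time $\tau$ and the alternative regime afterward.

```python
import random

def simulate(tau, n, mu=1.0, seed=0):
    """Simulate n observations: standard Gaussian noise up to and including
    time tau (nominal regime), mean shifted to mu afterward (alternative)."""
    rng = random.Random(seed)
    return [rng.gauss(mu if t > tau else 0.0, 1.0) for t in range(1, n + 1)]

xs = simulate(tau=50, n=100)
pre, post = xs[:50], xs[50:]
```

With a fixed seed the path is reproducible, so the post-change segment exhibits the shifted mean while the pre-change segment does not.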
Existing criteria basically quantify the detection delay $(T-\tau)^+$, where $x^+=\max\{x,0\}$, by considering alternative versions of its average. An important role in these definitions is played by the underlying mathematical model for the change-time $\tau$. Currently we distinguish two major models for the change-time. The first, introduced by Shiryayev (1978), assumes that $\tau$ is random with a known (exponential)

Received November 2006; revised April 2007.
AMS 2000 subject classifications. Primary 62L10; secondary 62L15, 60G40.
Key words and phrases. Change-point, disorder problem, sequential detection.

prior. We can accompany this change-time model with the following performance measure:
$$J_S(T) = E_\tau[T-\tau \mid T>\tau],$$
that is, the average detection delay conditioned on the event that we stop after the change. Alternatively, we can consider $\tau$ to be deterministic and unknown and follow a worst-case scenario. There exist two possibilities. The first, proposed by Lorden (1971), considers the worst average delay
$$J_L(T) = \sup_\tau \operatorname{essup} E_\tau[(T-\tau)^+ \mid F_\tau],$$
conditioned on the least favorable observations before the change. The second, due to Pollak (1985), uses the worst average delay
$$J_P(T) = \sup_\tau E_\tau[T-\tau \mid T>\tau],$$
conditioned on the event that we stop after $\tau$. Shiryayev's Bayesian approach presents definite analytical advantages and has been the favorite underlying model in several existing optimality results, as in Poor (1998), Beibel (1996), Peskir and Shiryayev (2002), Karatzas (2003), Bayraktar and Dayanik (2006) and Bayraktar, Dayanik and Karatzas (2006). The two deterministic approaches, on the other hand, although analytically more involved, are clearly more tractable from a practical point of view since they do not make any limiting assumptions. As will become evident in Section 3, the three performance criteria can be ordered as follows: $J_S(T)\le J_P(T)\le J_L(T)$. Because of this property, there exist strong arguments against Lorden's measure as being overly pessimistic. Such claims, however, tend to be inconsistent with the fact that $J_L(T)$, whenever it can be optimized, gives rise to the CUSUM s.t., one of the most widely used change detection schemes in practice. Despite their similarity, Pollak's $J_P(T)$ and Lorden's $J_L(T)$ measures, as we are going to see, differ in a very essential way. In fact $J_P(T)$, although not obvious at this point, will be shown to be closer to Shiryayev's $J_S(T)$ measure than to Lorden's. In the next section we present a general approach for modeling the change-time $\tau$.
The three measures presented previously will turn out to be special cases of our general setting, corresponding to different levels and forms of prior knowledge. The understanding of their differences will give rise to a discussion concerning the suitability of each measure for the problem of interest and will explain, we believe in a convincing way, why Lorden's criterion, although seemingly more pessimistic than the other two, is more appropriate for the majority of change detection problems. Finally, we are going to introduce an additional criterion that constitutes an extension of Lorden's $J_L(T)$ performance measure. For this case, we will also provide the optimum test for detecting a change in the constant drift of a Brownian motion and a formula for the corresponding optimum performance.

2. A randomized change-time. Suppose that nature, at every time instant $t$, consults the available information $F_t$ and with some randomization probability decides whether it should continue using the nominal probability measure or switch to the alternative one. Consequently, let $\pi_t$ denote the randomization probability that there is a change at $t$, conditioned on the available information up to time $t$,

that is, $\pi_t = P[\tau=t \mid F_t]$. Clearly, $\pi_t$ is nonnegative and the process $\{\pi_t\}$ is $\{F_t\}$-adapted. We recall that the time $\tau$ is usually considered in the literature as the first time instant under the alternative regime. With the current setting this is no longer possible. Indeed, since there is a decision involved whether to change the statistics or not, this decision must be made before any data under the alternative regime are produced. Therefore, $\tau$ denotes the time we stop using the nominal regime.

Consider now a process $\{X_t\}_{t\ge 0}$, where $X_t$ is nonnegative and $F_\infty$-measurable (the process is not necessarily $\{F_t\}$-adapted). We would like to compute the expectation of the random variable $X_\tau$, which is the $\tau$-randomly-stopped version of $\{X_t\}$, but we are interested only in finite values of $\tau$. In other words, we would like to find $E_\tau[X_\tau \mid \tau<\infty]$. Using (1.1) and the fact that $\pi_t$ is $F_t$-measurable, we can write
$$E_\tau\bigl[X_\tau 1_{\{\tau<\infty\}}\bigr] = \sum_{t=0}^\infty E_t[X_t\pi_t] = \sum_{t=0}^\infty E_\infty\bigl[E_0[X_t \mid F_t]\,\pi_t\bigr].$$
Substituting $X_t=1$ in the previous relation, we obtain $P_\tau[\tau<\infty] = \sum_{t=0}^\infty E_\infty[\pi_t]$, which is an expression for the probability of stopping at a finite time. Combining the two outcomes leads to
$$E_\tau[X_\tau \mid \tau<\infty] = \frac{\sum_{t=0}^\infty E_\infty[E_0[X_t\mid F_t]\,\pi_t]}{\sum_{t=0}^\infty E_\infty[\pi_t]}.$$
From now on, and without loss of generality, we make the simplifying assumption that $P_\tau[\tau<\infty]=1$ (otherwise divide each $\pi_t$ by $P_\tau[\tau<\infty]$). Under this assumption, we have
$$E_\tau[X_\tau] = \sum_{t=0}^\infty E_\infty\bigl[E_0[X_t\mid F_t]\,\pi_t\bigr]. \tag{2.1}$$
Let us summarize our change-time model. We are given a time-increasing information structure (filtration) $\{F_t\}_{t\ge 0}$, with $F_0$ the trivial $\sigma$-algebra, and a sequence of $\{F_t\}$-adapted probabilities $\{\pi_t\}$. The quantity $\pi_t$ denotes the history-dependent randomization probability that $t$ is the last time instant we obtain information under the nominal probability $P_\infty$ and that at the next time instant the new information will follow the alternative measure $P_0$.
For a process $\{X_t\}$ with $X_t$ nonnegative and $F_\infty$-measurable, we define the expectation of the $\tau$-randomly-stopped process $X_\tau$, with respect to the measure induced by the change, with the help of (2.1).

2.1. Decomposition of the change-time statistics. The process $\{\pi_t\}$ can be decomposed as $\pi_t = \varpi_t p_t$, where $\{\varpi_t\}$ is a deterministic sequence of probabilities defined as
$$\varpi_t = E_\infty[\pi_t], \qquad\text{therefore}\qquad \sum_{t=0}^\infty \varpi_t = P_\tau[\tau<\infty] = 1; \tag{2.2}$$

and $\{p_t\}$ is a nonnegative $\{F_t\}$-adapted process defined, for $\varpi_t>0$, as
$$p_t = \frac{\pi_t}{\varpi_t}, \qquad\text{therefore}\qquad E_\infty[p_t]=1, \tag{2.3}$$
while for $\varpi_t=0$ we can arbitrarily set $p_t=1$. The quantity $\varpi_t$ expresses the aggregate probability that $\tau$ will stop at $t$, whereas $p_t$ describes how this probability is distributed among the possible events that can occur up to time $t$. Since $F_0$ is the trivial $\sigma$-algebra, $\pi_0$ is deterministic; therefore $\varpi_0=\pi_0$ and $p_0=1$. Clearly $\varpi_0$ expresses the probability that the change takes place before the statistician obtains any information.

3. Performance measure and optimization criterion. If $T$ is an $\{F_t\}$-adapted s.t. used by the statistician to detect the change, then we are interested in defining a measure that quantifies its performance. Following the idea of Lorden (1971) and Pollak (1985), we propose the use of $J(T)=E_\tau[T-\tau\mid T>\tau]$, namely, the average detection delay conditioned on the event that we stop after $\tau$. Of course this measure makes sense only for finite values of $\tau$, because a change at infinity is regarded as no change. Since $(T-t)^+$ and $1_{\{T>t\}}$ are nonnegative and $F_\infty$-measurable, by using (2.1) our measure can be written as
$$J(T) = \frac{\sum_{t=0}^\infty E_\infty\bigl[\pi_t E_0[(T-t)^+\mid F_t]\bigr]}{\sum_{t=0}^\infty E_\infty\bigl[\pi_t 1_{\{T>t\}}\bigr]} = \frac{\sum_{t=0}^\infty \varpi_t E_\infty\bigl[p_t E_0[(T-t)^+\mid F_t]\bigr]}{\sum_{t=0}^\infty \varpi_t E_\infty\bigl[p_t 1_{\{T>t\}}\bigr]}. \tag{3.1}$$
If we are interested in finding an optimum $T$, then we must minimize $J(T)$ with respect to $T$, controlling at the same time the rate of false alarms. Similarly to Lorden (1971) and Pollak (1985), we propose the following constrained optimization with respect to $T$:
$$\inf_T J(T) \quad\text{subject to}\quad E_\infty[T]\ge\gamma. \tag{3.2}$$
In other words, we minimize the conditional average detection delay, subject to the constraint that the average period between false alarms is no less than a given value $\gamma$. The performance measure, as we can see from (3.1), requires complete knowledge of the two processes $\{\varpi_t\}$ and $\{p_t\}$.
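The decomposition (2.2)–(2.3) can be checked mechanically on a finite toy example. The sketch below is purely illustrative (the sample space, the adapted probabilities and all names are invented): it enumerates a space of three fair coin flips under $P_\infty$, builds an $\{F_t\}$-adapted $\pi_t$, and verifies that $\varpi_t = E_\infty[\pi_t]$ and $p_t=\pi_t/\varpi_t$ satisfy $E_\infty[p_t]=1$ and reconstruct $\pi_t$. (The toy does not normalize $\sum_t\varpi_t$ to $1$, so only the decomposition identities are checked.)

```python
from itertools import product

# Toy sample space: three fair coin flips under the nominal measure P_inf.
paths = list(product([0, 1], repeat=3))
prob = {w: 1 / 8 for w in paths}

# An {F_t}-adapted randomization probability pi_t(w): at time t it may only
# depend on the first t flips w[:t] (illustrative choice).
def pi(t, w):
    return 0.1 + 0.2 * sum(w[:t])

T = 3
varpi = [sum(prob[w] * pi(t, w) for w in paths) for t in range(T)]    # (2.2)
p = [{w: pi(t, w) / varpi[t] for w in paths} for t in range(T)]      # (2.3)

# E_inf[p_t] should equal 1 for every t, and varpi_t * p_t should give back pi_t.
mean_p = [sum(prob[w] * p[t][w] for w in paths) for t in range(T)]
```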
In the next subsection we extend our definition to include cases where the statistics of $\tau$ are not exactly known or are limited to special cases.

3.1. Special cases and uncertainty classes. If $\{\varpi_t\}$, $\{p_t\}$ are not known exactly and instead we have available an uncertainty class $\mathcal{T}$ for $\tau$, then we can extend the definition of our performance measure by adopting a worst-case approach of the form $\sup_{\tau\in\mathcal{T}} J(T)$, while (3.2) can be replaced by the following min-max constrained optimization problem:
$$\inf_T \sup_{\tau\in\mathcal{T}} J(T) \quad\text{subject to}\quad E_\infty[T]\ge\gamma. \tag{3.3}$$
Next, we are going to identify the particular form of our criterion for specific change-time classes. In order to facilitate our presentation, we first introduce a technical lemma.

LEMMA 1. Let $\{\varpi_t\}$ and $\{p_t\}$ be the processes defined in Section 2.1, satisfying (2.2) and (2.3), respectively. If $\{a_t\}$, $\{b_t\}$ are two nonnegative deterministic sequences, then
$$\sup_{\{\varpi_t\}} \frac{\sum_{t=0}^\infty \varpi_t a_t}{\sum_{t=0}^\infty \varpi_t b_t} = \sup_t \frac{a_t}{b_t}, \tag{3.4}$$
where, for $a_t=b_t=0$, we define the ratio $a_t/b_t=0$. Furthermore, if $x_t, y_t$ are two nonnegative and $F_t$-measurable random variables, then
$$\sup_{p_t} \frac{E_\infty[p_t x_t]}{E_\infty[p_t y_t]} = \operatorname{essup} \frac{x_t}{y_t}, \tag{3.5}$$
where, as before, when $x_t=y_t=0$ we define the ratio $x_t/y_t=0$.

PROOF. To prove (3.4), notice that since $a_t \le \{\sup_t(a_t/b_t)\}\,b_t$, we conclude that for any sequence $\{\varpi_t\}$ we have
$$\frac{\sum_{t=0}^\infty \varpi_t a_t}{\sum_{t=0}^\infty \varpi_t b_t} \le \sup_t \frac{a_t}{b_t}. \tag{3.6}$$
The upper bound in (3.6) is attainable by a sequence $\{\varpi_t\}$ that places all its probability mass on the time instant(s) that attain the supremum. If the supremum is attained in the limit, then for every $\epsilon>0$ we can find a sequence $\{\varpi_t\}$, depending on $\epsilon$, such that the left-hand side in (3.6) is $\epsilon$-close to the right-hand side. Similar arguments apply for the proof of (3.5). Notice that, for every $p_t$ satisfying $E_\infty[p_t]=1$, the combination $E_\infty[p_t\,\cdot]$ defines a probability measure on $F_t$ which is absolutely continuous with respect to $P_\infty$. Since $x_t \le \{\operatorname{essup}(x_t/y_t)\}\,y_t$, $P_\infty$-a.s., this leads to
$$\frac{E_\infty[p_t x_t]}{E_\infty[p_t y_t]} \le \operatorname{essup}\frac{x_t}{y_t}.$$
The upper bound is attainable by a probability measure $E_\infty[p_t\,\cdot]$ that places all its mass on the event(s) that attain the essup, or we use limiting arguments if the essup is attained as a limit. □
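The identity (3.4) admits a direct finite-horizon check: concentrating the prior on an index attaining $\sup_t a_t/b_t$ achieves the bound, while any other prior stays below it. A small numerical sketch (the sequences are arbitrary illustrative values):

```python
# Numerical check of (3.4): sum(varpi*a)/sum(varpi*b) over probability
# vectors {varpi_t} is maximized by putting all mass on argmax_t a_t/b_t.
a = [1.0, 3.0, 2.0, 5.0]
b = [2.0, 1.5, 4.0, 2.5]

ratios = [x / y for x, y in zip(a, b)]
best = max(range(len(a)), key=lambda t: ratios[t])

# Degenerate prior concentrated on the maximizing index attains the bound.
varpi = [1.0 if t == best else 0.0 for t in range(len(a))]
lhs = sum(w * x for w, x in zip(varpi, a)) / sum(w * y for w, y in zip(varpi, b))

# Any other prior (uniform, say) cannot exceed sup_t a_t/b_t.
uniform = [0.25] * 4
other = sum(w * x for w, x in zip(uniform, a)) / sum(w * y for w, y in zip(uniform, b))
```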

Let us now proceed with the presentation of specific special cases and uncertainty classes regarding the two processes $\{\varpi_t\}$, $\{p_t\}$.

Case of known $\varpi_t$ and $p_t=1$. Here, by selecting $p_t=1$, we limit our general change-time model to the case where the probability that the change will occur at $t$ is independent of the observed history $F_t$. The corresponding performance measure simplifies to the following expression:
$$J_S(T) = J(T)\big|_{p_t=1} = \frac{\sum_{t=0}^\infty \varpi_t E_t[(T-t)^+]}{\sum_{t=0}^\infty \varpi_t P_\infty[T>t]}, \tag{3.7}$$
where we used (1.1) to replace $E_\infty[E_0[(T-t)^+\mid F_t]]$ with $E_t[(T-t)^+]$. There is no uncertainty class involved; we have simply limited the change-time $\tau$ to this special case. We recall that Shiryayev (1978) first introduced this model for the particular selection $\varpi_t=(1-\delta)\delta^t$.

Case of arbitrary $\varpi_t$ and $p_t=1$. We continue using the same model as in the previous case, but now we let $\{\varpi_t\}$ be an arbitrary sequence of probabilities satisfying, according to (2.2), $\sum_{t=0}^\infty \varpi_t = 1$. Using (3.4) from Lemma 1 and (1.1), it is straightforward to prove that
$$J_P(T) = \sup_{\{\varpi_t\}} J(T)\big|_{p_t=1} = \sup_t E_t[T-t\mid T>t]. \tag{3.8}$$
By considering arbitrary $\{\varpi_t\}$, we recover Pollak's performance measure. From the way $J_P(T)$ is defined, it is evident that $J_S(T)\le J_P(T)$. Regarding the minimization of $J_P(T)$ with respect to the s.t. $T$, Pollak (1985) proposed the solution of the constrained optimization problem in (3.3). As candidate optimum s.t. for i.i.d. observations he suggested the Shiryayev–Roberts stopping rule, for which he was able to demonstrate asymptotic optimality (as $\gamma\to\infty$). Regarding nonasymptotic optimality of the Shiryayev–Roberts s.t. with respect to this criterion, see Mei (2006).

Case of arbitrary $\varpi_t$ and arbitrary $p_t$. Here the probability to stop at time $t$ depends on the observed history $F_t$; we thus return to our general change-time model, but we assume complete lack of knowledge of the change-time probabilities.
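For the i.i.d. Gaussian mean-shift setting, the Shiryayev–Roberts rule mentioned above admits a compact recursion: $R_n=(1+R_{n-1})\Lambda_n$, with $\Lambda_n$ the likelihood ratio of the $n$-th sample, and an alarm at the first threshold crossing. The sketch below is an illustration under assumed parameters (all names, the shift $\mu=1$ and the threshold are invented), not part of the paper's analysis.

```python
import math, random

def shiryayev_roberts(xs, mu, thresh):
    """Shiryayev-Roberts statistic R_n = (1 + R_{n-1}) * L_n for a mean shift
    0 -> mu in i.i.d. N(.,1) data; returns the first n with R_n >= thresh."""
    R = 0.0
    for n, x in enumerate(xs, start=1):
        L = math.exp(mu * x - 0.5 * mu * mu)  # one-sample likelihood ratio
        R = (1.0 + R) * L
        if R >= thresh:
            return n
    return None

rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(30)] + [rng.gauss(1, 1) for _ in range(200)]
alarm = shiryayev_roberts(xs, mu=1.0, thresh=1000.0)
```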
In order to find the worst-case performance, we need to maximize $J(T)$ with respect to both processes $\{\varpi_t\}$ and $\{p_t\}$. We have the following lemma that treats this problem.

LEMMA 2. Let $\{\varpi_t\}$ and $\{p_t\}$ be defined as in Section 2.1, satisfying (2.2) and (2.3), respectively; then
$$J_L(T) = \sup_{\{\varpi_t\},\{p_t\}} J(T) = \sup_t \operatorname{essup} E_t[(T-t)^+\mid F_t]. \tag{3.9}$$

PROOF. Using (3.4) from Lemma 1, for any given sequence $\{p_t\}$ we have
$$\sup_{\{\varpi_t\}} J(T) = \sup_t \frac{E_\infty\bigl[p_t E_0[(T-t)^+\mid F_t]\bigr]}{E_\infty\bigl[p_t 1_{\{T>t\}}\bigr]}.$$
Using the fact that we can change the order of two consecutive maximizations, we have
$$\sup_{\{p_t\},\{\varpi_t\}} J(T) = \sup_{\{p_t\}}\sup_t \frac{E_\infty\bigl[p_t E_0[(T-t)^+\mid F_t]\bigr]}{E_\infty\bigl[p_t 1_{\{T>t\}}\bigr]} = \sup_t\sup_{p_t} \frac{E_\infty\bigl[p_t E_0[(T-t)^+\mid F_t]\bigr]}{E_\infty\bigl[p_t 1_{\{T>t\}}\bigr]} = \sup_t \operatorname{essup} E_0[(T-t)^+\mid F_t] = \sup_t\operatorname{essup} E_t[(T-t)^+\mid F_t],$$
where for the third equality we used (3.5) from Lemma 1 and for the last equality the fact that $E_0[\cdot\mid F_t]=E_t[\cdot\mid F_t]$. This concludes the proof. □

Here we recover Lorden's performance measure. It is clear that $J_P(T)\le J_L(T)$, since for Lorden's measure we maximize over $\{p_t\}$, while in $J_P(T)$ we consider $p_t=1$. As was demonstrated in Moustakides (1986) and Ritov (1990), solving the optimization problem in (3.3) for Lorden's criterion and for i.i.d. observations gives rise to the CUSUM test proposed by Page (1954). It is interesting to mention that Ritov (1990) based his proof of optimality on a change-time formulation similar to the one proposed here.

A slight variation of the previous uncertainty class consists in assuming that the change cannot occur outside a sequence $\{t_n\}_{n\ge 0}$ of known time instants. In other words, we have $\varpi_t=0$ if $t\notin\{t_0,t_1,\ldots\}$. This modifies the previous criterion in the following way:
$$J_{EL}(T) = \sup_{\{\varpi_{t_n}\},\{p_{t_n}\}} J(T) = \sup_n \operatorname{essup} E_{t_n}[(T-t_n)^+\mid F_{t_n}]. \tag{3.10}$$
With a more accurate description of the time instants where the change can occur, one might expect to improve detection as compared to the CUSUM test. This measure is presented here for the first time and will be treated in detail, and under a more interesting frame, in Section 4.

It is also possible to examine, under the general model, the case where $\{\varpi_t\}$ is known and $\{p_t\}$ unknown or, alternatively, $\{\varpi_t\}$ unknown and $\{p_t\}$ known.
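The CUSUM test just mentioned has an equally compact discrete-time form: the log-likelihood ratio is accumulated and reflected at zero, so the statistic stays nonnegative (a property contrasted later with the extended scheme of Section 4). A hedged sketch for an i.i.d. Gaussian mean shift, with arbitrary illustrative parameters:

```python
import math, random

def cusum(xs, mu, nu):
    """Page's CUSUM for a mean shift 0 -> mu in i.i.d. N(.,1) observations:
    y_n = max(y_{n-1} + s_n, 0), alarm at the first n with y_n >= nu."""
    y = 0.0
    for n, x in enumerate(xs, start=1):
        y = max(y + mu * x - 0.5 * mu * mu, 0.0)  # reflected log-LR increment
        if y >= nu:
            return n
    return None

rng = random.Random(2)
change = 40
xs = [rng.gauss(0, 1) for _ in range(change)] + [rng.gauss(1, 1) for _ in range(200)]
alarm = cusum(xs, mu=1.0, nu=8.0)
```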
Clearly, the first case could be regarded as an extension of Shiryayev's approach to the general change-time model proposed here. Unfortunately, both cases lead to rather complicated performance criteria; we therefore omit the corresponding analysis.

Discussion. From the preceding presentation it is evident that the three performance measures are ordered as $J_S(T)\le J_P(T)\le J_L(T)$, giving the impression that Lorden's criterion is more pessimistic than Shiryayev's and Pollak's. This conclusion, however, is misleading, since the underlying change-time model for the Shiryayev and Pollak criteria is completely different and significantly more limited than Lorden's. We recall that $J_S(T)$ and $J_P(T)$ rely on the assumption that the change at time $t$ is triggered with a probability that does not depend on the observed history $F_t$. In practice there are clearly applications where this assumption is false and where it is more realistic to assume that the observations supply at least some partial information about the events that can trigger the change. Therefore, whenever we adopt this logic, Lorden's performance measure becomes more suitable than Shiryayev's and Pollak's. In the same way that $J_P(T)$ is preferable to $J_S(T)$ when there is no prior knowledge of $\{\varpi_t\}$ [despite the fact that $J_P(T)$ is more pessimistic than $J_S(T)$], we can also argue that $J_L(T)$ is preferable to $J_S(T)$ and $J_P(T)$ for problems where we need to follow the general change-time model and there is no prior knowledge regarding the change-point mechanism. Even if we still insist that $J_L(T)$ is overly pessimistic, it has now become clear that $J_S(T)$ and $J_P(T)$ are not the right alternatives, since they correspond to a drastically different change-time model.

Our previous arguments also suggest a word of caution when evaluating or comparing performances through Monte Carlo simulations. Selecting the time of change in an arbitrary way that has no relation to the observation sequence is equivalent to adopting the restrictive change-time model with $p_t=1$. This, in turn, is expected to favor tests that rely on this specific selection.

4. Change at observable random times.
Let us now attempt a different parametrization of the change-time $\tau$. Suppose that in addition to the process $\{\xi_t\}_{t>0}$ we also observe a strictly increasing sequence of random times $\{\tau_n\}$, $n=1,2,\ldots.$ These times correspond to occurrences of random events that can trigger the change of measures. In other words, we make the assumption that the change can occur only at the observable time instants $\{\tau_n\}$. We would like to emphasize that we consider the flow of the observation sequence $\{\xi_t\}$ to be continuous and not synchronized in any sense with the random times $\{\tau_n\}$. It is, therefore, clear that detection can be performed at any time instant, that is, even between occurrences. There are interesting applications that can be modeled with this setup: for example, earthquake damage detection in structures, where earthquakes occurring at (observable) random times can trigger a change (damage), while detection is performed by continuously acquiring vibration measurements from the structure. A similar application is the detection of a change in financial data after events of major importance or, as reported by Rodionov and Overland (2005), the detection of regime shifts in sea ecosystems due to (observable) changes in the climate system.

Let us now relate our problem to the change-time model introduced in the previous section. Consider the strictly increasing sequence of occurrence times $\{\tau_n\}$, $n=1,2,\ldots.$ Since we assume that observations are available after time $0$, it is clear that $\tau_1>0$; therefore, we arbitrarily include $\tau_0=0$ into our sequence. Notice that $\tau_0$ does not necessarily correspond to a real occurrence. This term is needed to account for the case where the change took place before any observation was taken. If $N_t$ denotes the number of observed occurrences up to (and including) time $t$, that is,
$$N_t = \sup_n\{n : \tau_n \le t\},$$
then we can define our filtration $\{F_t\}$ through $F_t = \sigma\{\xi_s, N_s,\ 0<s\le t\}$, with $F_0$ the trivial $\sigma$-algebra. With this filtration the random times $\tau_n$ are transformed immediately into s.t. adapted to $\{F_t\}$ (since by consulting the history $F_t$ we can directly deduce whether $\tau_n\le t$ is true). The probability $\pi_t$ now takes the special form
$$\pi_t = \sum_{n=0}^\infty 1_{\{t=\tau_n\}}\bar\pi_n = \sum_{n=0}^{N_t} 1_{\{t=\tau_n\}}\bar\pi_n,$$
where $\bar\pi_n$ is $F_{\tau_n}$-measurable. As we can see, the resulting $\pi_t$ is nonzero only if we have an occurrence at $t$. By decomposing $\bar\pi_n = \bar\varpi_n \bar p_n$, with $\sum_{n=0}^\infty \bar\varpi_n = 1$ and $E_\infty[\bar p_n]=1$, we can define the equivalent of all performance measures introduced in Section 3.1. We limit our presentation to Lorden's measure, since this is the case we are going to treat in detail. If we use the last equation for $\pi_t$ in (3.1), we obtain the following form for our performance measure $J(T)$:
$$J(T) = \frac{\sum_{n=0}^\infty \bar\varpi_n E_\infty\bigl[\bar p_n E_0[(T-\tau_n)^+\mid F_{\tau_n}]\bigr]}{\sum_{n=0}^\infty \bar\varpi_n E_\infty\bigl[\bar p_n 1_{\{T>\tau_n\}}\bigr]}. \tag{4.1}$$
Assuming no prior knowledge for $\{\bar\varpi_n\}$ and $\{\bar p_n\}$, we have to maximize $J(T)$ in (4.1) with respect to the two processes. This leads to the following extended Lorden measure:
$$J_{EL}(T) = \sup_{\{\bar\varpi_n\},\{\bar p_n\}} J(T) = \sup_n \operatorname{essup} E_{\tau_n}[(T-\tau_n)^+\mid F_{\tau_n}]. \tag{4.2}$$
The difference with the previous definition of $J_{EL}(T)$ in (3.10) is that the time instants $\tau_n$ are now s.t.
instead of deterministic times.

4.1. Detection of a change in the constant drift of a Brownian motion. Although it is possible to analyze the problem of detecting a change in the pdf of i.i.d. observations, we prefer to consider the continuous-time alternative of detecting a change in the constant drift of a BM. This is because the corresponding solution is more elegant, offering formulas for the optimum performance and therefore

allowing for direct comparison with the classical CUSUM test. Thus, let us assume that the observation process $\{\xi_t\}$ is a BM satisfying $\xi_t = \mu(t-\tau)^+ + w_t$, where $w_t$ is a standard Wiener process and $\mu$ a known constant drift. For the change-time $\tau$ we assume that it can be equal to any $\tau_n$ from the observable sequence of s.t. $\{\tau_n\}$. Finally, for the occurrence times $\{\tau_n\}$ we assume that they are Poisson distributed with a constant rate $\lambda$ and independent of the observation process $\{\xi_t\}$. We recall that the problem of detecting a change in the drift of a BM has been considered with the classical Lorden measure (where occurrences are not taken into account, therefore the change is assumed to happen at any time instant) by Shiryayev (1996) and Beibel (1996) and, under a more general framework, by Moustakides (2004).

If we denote by $u_t$ the log-likelihood ratio between the two probability measures, then $u_t = -0.5\mu^2 t + \mu\xi_t$. Let us consider the following process $\{m_t\}$:
$$m_0 = 0; \qquad m_t = m_0 \wedge \inf_{1\le n\le N_t} u_{\tau_n} = \Bigl(\inf_{1\le n\le N_t} u_{\tau_n}\Bigr)^-,$$
where $x^- = \min\{x,0\}$. Notice that $m_t$ starts from $0$ and becomes the running minimum of the process $\{u_t\}$, but updated only at the occurrence times. We can now define the extended CUSUM (ECUSUM) process as $y_t = u_t - m_t$ and the corresponding ECUSUM s.t. with threshold $\nu$ as
$$S_\nu = \inf_{0<t}\{t : y_t \ge \nu\}.$$
As opposed to the CUSUM process, which is always nonnegative, the ECUSUM process $y_t$ can take negative values as well. Figure 1 depicts an example of the paths of $\{u_t\}$ and $\{m_t\}$. The process $\{m_t\}$ is piecewise constant with right-continuous paths and can exhibit jumps at the occurrence instances $\{\tau_n\}$. The ECUSUM process $\{y_t\}$, being the difference of $\{u_t\}$ (which is continuous) and $\{m_t\}$, is also right continuous with continuous paths between occurrences. From Figure 1 we can also deduce that $\{y_t\}$ exhibits a jump at $\tau_n$ only if $y_{\tau_n-}<0$, in which case $y_{\tau_n}$ becomes $0$. This can be written as
$$y_{\tau_n} = (y_{\tau_n-})^+. \tag{4.3}$$
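A crude Euler discretization makes the ECUSUM dynamics concrete. The sketch below is an illustration with invented parameter values (not the paper's construction): it simulates $u_t$ under the alternative regime, updates $m_t$ only at Poisson occurrence times, and stops when $y_t=u_t-m_t$ reaches $\nu$. In accordance with the discussion above, $y_t$ may go negative between occurrences.

```python
import math, random

def simulate_ecusum(mu=1.0, lam=0.2, nu=4.0, dt=0.01, horizon=500.0, seed=3):
    """Euler-discretized ECUSUM under the alternative regime (drift mu from t=0):
    u_t = -0.5*mu^2*t + mu*xi_t, m_t = running minimum of u at Poisson
    occurrences (starting from 0), y_t = u_t - m_t; stop when y_t >= nu."""
    rng = random.Random(seed)
    u, m, t = 0.0, 0.0, 0.0
    ys = []
    while t < horizon:
        t += dt
        # xi increment under P_0: mu*dt drift plus a Wiener increment
        du = -0.5 * mu * mu * dt + mu * (mu * dt + rng.gauss(0, math.sqrt(dt)))
        u += du
        if rng.random() < lam * dt:   # a Poisson occurrence in (t - dt, t]
            m = min(m, u)             # m is updated only at occurrences
        y = u - m
        ys.append(y)
        if y >= nu:
            return t, ys
    return None, ys

stop_time, path = simulate_ecusum()
```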
For technical reasons, it is also necessary to introduce a version of ECUSUM which can start from any value $y_0=y$ (as compared to the regular version, which starts at $y_0=0$). For this we simply have to assume that $u_t = y - 0.5\mu^2 t + \mu\xi_t$, while $m_t, y_t$ and the s.t. are defined as before. To distinguish this new version of the s.t. from the regular one, let us denote it as $\tilde S_\nu$. It is then clear that $\tilde S_\nu = S_\nu$ for $y=0$. Since inter-occurrence times are i.i.d. and independent of the past, the process $\{y_{\tau_n}\}$ is Markov, and $\tilde S_\nu$ given that $y_0=y$ has the same statistics as $(\tilde S_\nu-\tau_n)^+$ given that $y_{\tau_n}=y$ and $\tilde S_\nu>\tau_n$.

FIG. 1. Sample paths of $u_t$, $m_t$ and stopping of ECUSUM.

4.2. Performance evaluation of the ECUSUM test. In this subsection we are going to obtain a formula for the expectation of $\tilde S_\nu$. We first present a lemma that states an important property of this quantity.

LEMMA 3. Let the occurrences be Poisson distributed with rate $\lambda$; then the average $E[\tilde S_\nu]$ is decreasing in $y_0=y$, and for every $y$ we have
$$E[\tilde S_\nu \mid y_0=y] \le \frac{1}{\lambda} + E[S_\nu]. \tag{4.4}$$

PROOF. The paths of $\{y_t\}$ are increasing in $y_0=y$; therefore $\tilde S_\nu$ is decreasing in $y$ and so is $E[\tilde S_\nu]$. Consequently, for $y\ge 0$ we have $E[\tilde S_\nu\mid y_0=y]\le E[\tilde S_\nu\mid y_0=0]=E[S_\nu]$. Assume now that $y<0$; then, since $\tilde S_\nu \le \tau_1 + (\tilde S_\nu-\tau_1)^+$, by taking expectations we can write
$$\begin{aligned}
E[\tilde S_\nu\mid y_0=y] &\le E[\tau_1] + E[(\tilde S_\nu-\tau_1)^+\mid y_0=y] \\
&= E[\tau_1] + E[(\tilde S_\nu-\tau_1)^+\mid y_0=y,\ \tilde S_\nu>\tau_1]\,P[\tilde S_\nu>\tau_1\mid y_0=y] \\
&\le E[\tau_1] + E[(\tilde S_\nu-\tau_1)^+\mid y_0=y,\ \tilde S_\nu>\tau_1] \\
&= E[\tau_1] + E\bigl[E[(\tilde S_\nu-\tau_1)^+\mid y_{\tau_1},\ \tilde S_\nu>\tau_1]\ \big|\ y_0=y\bigr] \\
&\le E[\tau_1] + \sup_{z\ge 0} E[(\tilde S_\nu-\tau_1)^+\mid y_{\tau_1}=z,\ \tilde S_\nu>\tau_1] \\
&= \frac{1}{\lambda} + \sup_{z\ge 0} E[\tilde S_\nu\mid y_0=z] = \frac{1}{\lambda} + E[\tilde S_\nu\mid y_0=0]
\end{aligned}$$

$$= \frac{1}{\lambda} + E[S_\nu],$$
where we have used the property that $(\tilde S_\nu-\tau_1)^+$, conditioned on the event that $\tilde S_\nu>\tau_1$ and $y_{\tau_1}=y$, has the same statistics as $\tilde S_\nu$ given $y_0=y$ and, furthermore, that at an occurrence the ECUSUM statistic is nonnegative. This concludes the proof. □

From Lemma 3 we deduce that $E[\tilde S_\nu\mid y_0=y]$ is decreasing and uniformly bounded in $y$. Let us now proceed with the computation of $E[\tilde S_\nu\mid y_0=y]$. We have the following theorem, which provides the desired formula.

THEOREM 1. Let $u_t = y + at + b w_t$, with $\{w_t\}$ a standard Wiener process with $w_0=0$ and $a, b \ne 0$. Define the ECUSUM s.t. $\tilde S_\nu$ as above, with the occurrences being Poisson distributed with rate $\lambda$; then for $y\le\nu$ the expectation of $\tilde S_\nu$ is given by the following expression:
$$E[\tilde S_\nu\mid y_0=y] = \begin{cases} \dfrac{1}{a}\Bigl[-y+\nu+A\bigl(e^{-2ya/b^2}-e^{-2\nu a/b^2}\bigr)\Bigr], & y\ge 0, \\[2mm] \dfrac{1}{a}\Bigl[\nu+A\bigl(1-e^{-2\nu a/b^2}\bigr)\Bigr] + \dfrac{1}{\lambda}\bigl[1-e^{ry}\bigr], & y<0, \end{cases} \tag{4.5}$$
where
$$r = \frac{-a+\sqrt{a^2+2\lambda b^2}}{b^2}, \qquad A = \frac{b^2}{2a}\Bigl(\frac{ar}{\lambda}-1\Bigr).$$

PROOF. Denote by $f(y)$ the function on the right-hand side of (4.5) which, as we can verify, is twice continuously differentiable, strictly decreasing in $y$ and uniformly bounded for $-\infty<y\le\nu$. Consider now the difference $f(y_t)-f(y_0)$; we can then write
$$f(y_t)-f(y_0) = \bigl[f(y_t)-f(y_{\tau_{N_t}})\bigr] + \sum_{n=1}^{N_t}\bigl[f(y_{\tau_n-})-f(y_{\tau_{n-1}})\bigr] + \sum_{n=1}^{N_t}\bigl[f(y_{\tau_n})-f(y_{\tau_n-})\bigr],$$
where we used the fact that $\{y_t\}$ is right continuous. In the time interval $[\tau_{n-1},\tau_n)$, the process $y_t$ has continuous paths and $m_t$ is constant; therefore, using Itô calculus, we can write
$$f(y_{\tau_n-})-f(y_{\tau_{n-1}}) = \int_{\tau_{n-1}}^{\tau_n}\bigl[f'(y_s)(a\,ds+b\,dw_s) + 0.5\,b^2 f''(y_s)\,ds\bigr].$$

If $t$ is not an occurrence, a similar expression holds for the time interval $[\tau_{N_t}, t]$. This suggests that
$$f(y_t)-f(y_{\tau_{N_t}}) + \sum_{n=1}^{N_t}\bigl[f(y_{\tau_n-})-f(y_{\tau_{n-1}})\bigr] = \int_0^t\bigl[af'(y_s)+0.5\,b^2 f''(y_s)\bigr]ds + \int_0^t b f'(y_s)\,dw_s.$$
The sum involving the jumps, using (4.3), can be written as
$$\sum_{n=1}^{N_t}\bigl[f(y_{\tau_n})-f(y_{\tau_n-})\bigr] = \sum_{n=1}^{N_t}\bigl[f((y_{\tau_n-})^+)-f(y_{\tau_n-})\bigr] = \int_0^t \bigl[f((y_{s-})^+)-f(y_{s-})\bigr]dN_s.$$
Combining the two expressions leads to
$$f(y_t)-f(y_0) = \int_0^t\bigl[af'(y_s)+0.5\,b^2 f''(y_s)\bigr]ds + \int_0^t bf'(y_s)\,dw_s + \int_0^t\bigl[f((y_{s-})^+)-f(y_{s-})\bigr]dN_s. \tag{4.6}$$
For any integer $n$, let $\tilde S_\nu^n = \tilde S_\nu\wedge n$. We know that for a process $\{\omega_t\}$ which is $\{F_t\}$-adapted and uniformly bounded, in the sense that $|\omega_t|\le c<\infty$, we have from Protter (2004) that $E[\int_0^{\tilde S_\nu\wedge n}\omega_s\,dN_s]=E[\int_0^{\tilde S_\nu\wedge n}\omega_s\lambda\,ds]$ and from Karatzas and Shreve (1988) that $E[\int_0^{\tilde S_\nu\wedge n}\omega_s\,dw_s]=0$. Replacing $t$ by the s.t. $\tilde S_\nu\wedge n$ in (4.6), taking expectations and using the fact that $f(y^+)-f(y)$ and $f'(y)$ are uniformly bounded for $y\in(-\infty,\nu]$, allows us to write
$$E\bigl[f(y_{\tilde S_\nu^n})\bigr]-f(y_0) = E\biggl[\int_0^{\tilde S_\nu^n}\bigl\{af'(y_t)+0.5\,b^2f''(y_t)+\lambda\bigl[f((y_t)^+)-f(y_t)\bigr]\bigr\}\,dt\biggr].$$
It is straightforward to verify that the function $f(y)$ is a solution of the differential equation
$$af'(y)+0.5\,b^2 f''(y)+\lambda\bigl[f(y^+)-f(y)\bigr] = -1, \qquad -\infty<y\le\nu. \tag{4.7}$$
This, if substituted in the previous expression, yields $f(y_0)-E[f(y_{\tilde S_\nu^n})]=E[\tilde S_\nu^n]$. Letting now $n\to\infty$, we have $\tilde S_\nu^n\to\tilde S_\nu$ monotonically. In the previous equality, using monotone convergence on the right-hand side and bounded convergence [since $f(y)$ is uniformly bounded] on the left, we obtain $f(y_0)-E[f(y_{\tilde S_\nu})]=E[\tilde S_\nu]$.

At the time of stopping, the process $\{y_t\}$ hits the threshold $\nu$ (see Figure 1); therefore, we have $y_{\tilde S_\nu}=\nu$, suggesting that $E[f(y_{\tilde S_\nu})]=f(\nu)$. We can now verify that $f(\nu)=0$, which yields $f(y_0)=E[\tilde S_\nu]$ and completes the proof. □

REMARK 1. One might wonder why (4.5) is the desired formula and not any other solution of the differential equation in (4.7) that satisfies the boundary condition $f(\nu)=0$. It turns out that among the solutions of (4.7) that are twice continuously differentiable (a property needed to apply Itô calculus) and satisfy the boundary condition $f(\nu)=0$, the formula in (4.5) is the unique solution which is uniformly bounded in $(-\infty,\nu]$ (a property imposed by Lemma 3).

By letting $\lambda\to\infty$ and setting $y=0$ in (4.5), we recover the average run length of the classical CUSUM test as obtained in Taylor (1975). If we denote by $g_\nu(y)$, $h_\nu(y)$ the average of $\tilde S_\nu$ under $P_0$ and $P_\infty$, respectively, then under $P_0$ we have $u_t = y-0.5\mu^2 t+\mu\xi_t = y+0.5\mu^2 t+\mu w_t$; therefore, by substituting $a=0.5\mu^2$, $b=\mu$ in (4.5), we can write
$$g_\nu(y) = E_0[\tilde S_\nu\mid y_0=y] = \begin{cases} \dfrac{2}{\mu^2}\bigl[-y+\nu+A_0(e^{-y}-e^{-\nu})\bigr], & y\ge 0,\\[2mm] \dfrac{2}{\mu^2}\bigl[\nu+A_0(1-e^{-\nu})\bigr] + \dfrac{1}{\lambda}\bigl[1-e^{r_0 y}\bigr], & y<0, \end{cases}$$
where
$$r_0 = -\tfrac12+\sqrt{\tfrac14+\tfrac{2\lambda}{\mu^2}}, \qquad A_0 = \frac{\mu^2}{2\lambda}r_0 - 1.$$
Similarly, substituting $a=-0.5\mu^2$, $b=\mu$ in (4.5), we obtain
$$h_\nu(y) = E_\infty[\tilde S_\nu\mid y_0=y] = \begin{cases} \dfrac{2}{\mu^2}\bigl[y-\nu+A_\infty(e^{\nu}-e^{y})\bigr], & y\ge 0,\\[2mm] \dfrac{2}{\mu^2}\bigl[-\nu+A_\infty(e^{\nu}-1)\bigr] + \dfrac{1}{\lambda}\bigl[1-e^{r_\infty y}\bigr], & y<0, \end{cases}$$
where
$$r_\infty = \tfrac12+\sqrt{\tfrac14+\tfrac{2\lambda}{\mu^2}}, \qquad A_\infty = \frac{\mu^2}{2\lambda}r_\infty + 1.$$
To compute the performance of the regular ECUSUM s.t. $S_\nu$, we must set $y=0$ in the previous formulas. It is then clear that $g_\nu(0)$ expresses the (worst) average detection delay and $h_\nu(0)$ the average period between false alarms for $S_\nu$. Specifically, after noticing that $r_0 r_\infty = 2\lambda/\mu^2$, we have
$$g_\nu(0) = E_0[S_\nu] = \frac{2}{\mu^2}\Bigl\{\bigl[\nu-1+e^{-\nu}\bigr] + \frac{1}{r_\infty}\bigl(1-e^{-\nu}\bigr)\Bigr\}, \tag{4.8}$$
$$h_\nu(0) = E_\infty[S_\nu] = \frac{2}{\mu^2}\Bigl\{\bigl[e^{\nu}-\nu-1\bigr] + \frac{1}{r_0}\bigl(e^{\nu}-1\bigr)\Bigr\}, \tag{4.9}$$
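The closed form (4.5) behind these expressions can be sanity-checked numerically: it should satisfy the differential equation (4.7) on both branches and the boundary condition $f(\nu)=0$. The sketch below (with arbitrary illustrative parameter values) verifies both with central finite differences:

```python
import math

def f(y, a, b, lam, nu):
    """The candidate expectation E[S_nu | y0 = y] from (4.5)."""
    r = (-a + math.sqrt(a * a + 2 * lam * b * b)) / (b * b)
    A = (b * b / (2 * a)) * (a * r / lam - 1.0)
    c = 2 * a / (b * b)
    if y >= 0:
        return (1 / a) * (-y + nu + A * (math.exp(-c * y) - math.exp(-c * nu)))
    return (1 / a) * (nu + A * (1 - math.exp(-c * nu))) + (1 / lam) * (1 - math.exp(r * y))

a, b, lam, nu = 0.5, 1.0, 0.3, 2.0
h = 1e-4

def ode_residual(y):
    # a f' + 0.5 b^2 f'' + lam [f(y^+) - f(y)] + 1, central differences; should be ~0
    f1 = (f(y + h, a, b, lam, nu) - f(y - h, a, b, lam, nu)) / (2 * h)
    f2 = (f(y + h, a, b, lam, nu) - 2 * f(y, a, b, lam, nu) + f(y - h, a, b, lam, nu)) / (h * h)
    jump = f(max(y, 0.0), a, b, lam, nu) - f(y, a, b, lam, nu)
    return a * f1 + 0.5 * b * b * f2 + lam * jump + 1.0

residuals = [ode_residual(y) for y in (-1.5, -0.5, 0.5, 1.5)]
boundary = f(nu, a, b, lam, nu)
```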

FIG. 2. Normalized average detection delay of ECUSUM as a function of the normalized average false alarm period, for different values of the parameter $\mu^2/\lambda$.

where the first term in both right-hand-side expressions corresponds to the performance of the classical CUSUM test (obtained by letting $\lambda\to\infty$). Figure 2 depicts the normalized average detection delay $\mu^2 g_\nu(0)/2$ as a function of the normalized average false alarm period $\mu^2 h_\nu(0)/2$, for different values of the ratio $\mu^2/\lambda$. We observe that, on average, ECUSUM detects the change faster than CUSUM. Of course this is not surprising, since ECUSUM has more information available than CUSUM (CUSUM does not observe the occurrences). We can also see that the performance difference between the two schemes, for a given value of $\mu^2/\lambda$, is uniformly bounded by a constant. Finally, we conclude that the gain obtained by using ECUSUM instead of CUSUM becomes significant only for large values of the parameter $\mu^2/\lambda$ or, equivalently, when the occurrences that can trigger the change are very infrequent.

5. Optimality of ECUSUM. Using the formula in (4.9) for the average period between false alarms, we can relate the threshold $\nu$ to the false alarm constraint parameter $\gamma$ through the equation
$$h_\nu(0) = \frac{2}{\mu^2}\Bigl\{\bigl[e^{\nu}-\nu-1\bigr] + \frac{1}{r_0}\bigl(e^{\nu}-1\bigr)\Bigr\} = \gamma.$$
The left-hand side of the last equality is increasing in $\nu$; for $\nu=0$ it is equal to $0$, and as $\nu\to\infty$ it tends to $\infty$. We can, therefore, conclude that for given $\gamma>0$ the last equation has a unique solution, which we call $\nu_\gamma$. Since $\nu_\gamma$ is the

solution to the previous equation, it is clear that

(5.1)  h_{ν*}(0) = γ.

Using ν* as threshold, we can now define the corresponding ECUSUM s.t. S_{ν*}. Our goal in the sequel is to demonstrate that this test is optimum in the extended Lorden sense. We recall that the occurrence times are Poisson distributed with a known constant rate λ. Observe, however, that λ enters only in the correspondence between the threshold ν* and the constraint γ, without affecting the ECUSUM test otherwise.

Consider the functions g_ν(y), h_ν(y) associated with S_ν. Both functions will play a key role in our proof of optimality. The next lemma presents an important property of each function which is an immediate consequence of Theorem 1.

LEMMA 4. If T is a s.t. and S_ν the regular ECUSUM s.t. with threshold ν, define T_ν = T ∧ S_ν; then

(5.2)  E_∞[h_ν(0) − h_ν(y_{T_ν})] = E_∞[T_ν],

(5.3)  E_{τ_n}[{g_ν(y_{τ_n}) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}} | F_{τ_n}] = E_{τ_n}[(T_ν − τ_n)^+ | F_{τ_n}].

With the next theorem we provide a suitable lower bound for the extended Lorden measure. First, we introduce a technical lemma.

LEMMA 5. Let T be a s.t. and define T_ν = T ∧ S_ν; then E_∞[e^{y_{T_ν}}] ≥ 1.

PROOF. Following similar steps as in the proof of Theorem 1, if f(y) is a twice continuously differentiable function with f(y^+) − f(y) and f′(y) uniformly bounded for y ≤ ν, we have

E_∞[f(y_{T_ν})] − f(y_0) = E_∞[∫_0^{T_ν} {0.5μ² f″(y_t) − 0.5μ² f′(y_t) + λ[f((y_t)^+) − f(y_t)]} dt].

For f(y) = e^{y}, and recalling that we treat the regular ECUSUM case with y_0 = 0, we immediately obtain

E_∞[e^{y_{T_ν}}] − 1 = E_∞[∫_0^{T_ν} λ(e^{y_s^+} − e^{y_s}) ds] = λ E_∞[∫_0^{T_ν} (1 − e^{y_s})^+ ds] ≥ 0.

This concludes the proof.
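The inequality of Lemma 5 can also be illustrated by simulation: run the statistic under P_∞ (drift −0.5μ² between occurrences, reset y → y^+ at Poisson occurrence times), stop at T ∧ S_ν, and average e^{y} at stopping. Below is a rough Euler-discretization sketch; all numerical choices (step size, truncation time T = t_max, parameter values) are illustrative assumptions, not from the paper.

```python
import math, random

random.seed(0)                                  # fixed seed for reproducibility
mu, lam, nu = 1.0, 1.0, 1.0                     # illustrative parameters
dt, t_max, n_paths = 0.02, 15.0, 500            # Euler step, truncation T, paths
n_steps = int(t_max / dt)

samples = []
for _ in range(n_paths):
    y = 0.0                                     # regular ECUSUM starts at y_0 = 0
    for _ in range(n_steps):
        # under P_inf the statistic drifts at -0.5*mu^2 between occurrences
        y += -0.5 * mu**2 * dt + mu * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if random.random() < lam * dt:          # Poisson occurrence: y -> y^+
            y = max(y, 0.0)
        if y >= nu:                             # threshold crossed: stop at S_nu
            y = nu                              # continuous paths hit nu exactly
            break
    samples.append(math.exp(y))                 # e^{y} evaluated at T and S_nu, whichever is first

est = sum(samples) / n_paths
print(est)  # Lemma 5 predicts a value >= 1; it also cannot exceed e^nu here
```

With these settings most paths cross the threshold before the truncation time, so the estimate sits well above the lower bound 1 (and, since y never exceeds ν at stopping, below e^{ν}).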

THEOREM 2. For any s.t. T let T_ν = T ∧ S_ν; then

(5.4)  J_EL(T) ≥ g_ν(0) − E_∞[g_ν(y_{T_ν}) e^{y_{T_ν}}] / E_∞[e^{y_{T_ν}}].

PROOF. The proof follows similar steps as in Theorem 2 of Moustakides (2004). Since T ≥ T_ν, it is clear that J_EL(T) ≥ J_EL(T_ν). Also from the definition of J_EL(·) in (4.2) we conclude that

(5.5)  J_EL(T) ≥ J_EL(T_ν) ≥ E_{τ_n}[(T_ν − τ_n)^+ | F_{τ_n}],  n = 0, 1, 2, ....

Using (5.3) from Lemma 4, the previous inequality for n ≥ 1 can be written as

J_EL(T) ≥ E_{τ_n}[(T_ν − τ_n)^+ | F_{τ_n}]
  = E_{τ_n}[{g_ν(y_{τ_n}) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}} | F_{τ_n}]
  = E_∞[e^{u_{T_ν} − u_{τ_n}} {g_ν(y_{τ_n}) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}} | F_{τ_n}].

Multiplying both sides by the nonnegative quantity (1 − e^{m_{τ_n} − m_{τ_{n−1}}}) 1_{{T_ν > τ_n}} and taking expectation with respect to P_∞ yields

(5.6)  J_EL(T) E_∞[(1 − e^{m_{τ_n} − m_{τ_{n−1}}}) 1_{{T_ν > τ_n}}]
  ≥ E_∞[e^{u_{T_ν} − u_{τ_n}} (1 − e^{m_{τ_n} − m_{τ_{n−1}}}) {g_ν(y_{τ_n}) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}}].

Now notice that 1 − e^{m_{τ_n} − m_{τ_{n−1}}} is different from 0 only when there is a jump in m_t at τ_n, in which case y_{τ_n} = 0 and m_{τ_n} = u_{τ_n}. This means that

E_∞[e^{u_{T_ν} − u_{τ_n}} (1 − e^{m_{τ_n} − m_{τ_{n−1}}}) {g_ν(y_{τ_n}) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}}]
  = E_∞[e^{u_{T_ν} − m_{τ_n}} (1 − e^{m_{τ_n} − m_{τ_{n−1}}}) {g_ν(0) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}}]
  = E_∞[e^{u_{T_ν}} (e^{−m_{τ_n}} − e^{−m_{τ_{n−1}}}) {g_ν(0) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}}].

Furthermore, since E_∞[e^{u_{T_ν} − u_{τ_n}} | F_{τ_n}] = 1, we can write

E_∞[(1 − e^{m_{τ_n} − m_{τ_{n−1}}}) 1_{{T_ν > τ_n}}]
  = E_∞[e^{u_{T_ν} − u_{τ_n}} (1 − e^{m_{τ_n} − m_{τ_{n−1}}}) 1_{{T_ν > τ_n}}]
  = E_∞[e^{u_{T_ν} − m_{τ_n}} (1 − e^{m_{τ_n} − m_{τ_{n−1}}}) 1_{{T_ν > τ_n}}]
  = E_∞[e^{u_{T_ν}} (e^{−m_{τ_n}} − e^{−m_{τ_{n−1}}}) 1_{{T_ν > τ_n}}].

Substituting the two equalities in (5.6) and summing over all n ≥ 1, we have

(5.7)  J_EL(T) E_∞[Σ_{n≥1} e^{u_{T_ν}} (e^{−m_{τ_n}} − e^{−m_{τ_{n−1}}}) 1_{{T_ν > τ_n}}]
  ≥ E_∞[Σ_{n≥1} e^{u_{T_ν}} (e^{−m_{τ_n}} − e^{−m_{τ_{n−1}}}) {g_ν(0) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}}].

In the second sum, interchanging summation and expectation yields

E_∞[Σ_{n≥1} e^{u_{T_ν}} (e^{−m_{τ_n}} − e^{−m_{τ_{n−1}}}) {g_ν(0) − g_ν(y_{T_ν})} 1_{{T_ν > τ_n}}]
  = E_∞[{g_ν(0) − g_ν(y_{T_ν})} e^{u_{T_ν}} Σ_{n=1}^{N_{T_ν}} (e^{−m_{τ_n}} − e^{−m_{τ_{n−1}}})]
  = E_∞[{g_ν(0) − g_ν(y_{T_ν})} e^{u_{T_ν}} (e^{−m_{T_ν}} − 1)].

Similarly, for the first sum we have

E_∞[Σ_{n≥1} e^{u_{T_ν}} (e^{−m_{τ_n}} − e^{−m_{τ_{n−1}}}) 1_{{T_ν > τ_n}}] = E_∞[e^{u_{T_ν}} (e^{−m_{T_ν}} − 1)].

Substituting the two expressions in (5.7), we obtain

(5.8)  J_EL(T) E_∞[e^{u_{T_ν}} (e^{−m_{T_ν}} − 1)] ≥ E_∞[{g_ν(0) − g_ν(y_{T_ν})} e^{u_{T_ν}} (e^{−m_{T_ν}} − 1)].

There is one last inequality from (5.5) we have not used so far, namely the one for n = 0. Recalling that τ_0 = 0, this inequality takes the form

(5.9)  J_EL(T) ≥ E_0[T_ν].

Using (5.3) from Lemma 4, we get

E_0[T_ν] = E_0[g_ν(0) − g_ν(y_{T_ν})] = E_∞[{g_ν(0) − g_ν(y_{T_ν})} e^{u_{T_ν}}].

Also, since E_∞[e^{u_{T_ν}}] = 1, (5.9) is equivalent to

J_EL(T) E_∞[e^{u_{T_ν}}] ≥ E_∞[{g_ν(0) − g_ν(y_{T_ν})} e^{u_{T_ν}}].

If this is added to (5.8), we obtain

J_EL(T) E_∞[e^{u_{T_ν} − m_{T_ν}}] ≥ E_∞[{g_ν(0) − g_ν(y_{T_ν})} e^{u_{T_ν} − m_{T_ν}}],

or

(5.10)  J_EL(T) E_∞[e^{y_{T_ν}}] ≥ g_ν(0) E_∞[e^{y_{T_ν}}] − E_∞[g_ν(y_{T_ν}) e^{y_{T_ν}}].

Since y_{T_ν} ≤ ν and thanks also to Lemma 5, we have e^{ν} ≥ E_∞[e^{y_{T_ν}}] ≥ 1. We can thus divide both sides of (5.10) by E_∞[e^{y_{T_ν}}] and obtain the desired expression.

We will base our proof of optimality of S_{ν*} on Theorem 2. Let us first introduce an additional technical lemma.

LEMMA 6. If T is a s.t. and T_ν = T ∧ S_ν, then the function ψ(ν) = E_∞[T_ν] is continuous and increasing in ν, with ψ(0) = 0 and ψ(∞) = E_∞[T].

PROOF. The proof is exactly similar to that of Lemma 3 in Moustakides (2004). Consider κ > ν; then

0 ≤ T_κ − T_ν ≤ S_κ − S_ν,

from which we obtain

0 ≤ ψ(κ) − ψ(ν) ≤ h_κ(0) − h_ν(0).

From (4.9) we have that the function h_ν(0) is continuous in ν. If we use this property in the previous relation, we deduce that ψ(ν) is also continuous. Finally, we can directly verify that the two limiting values ψ(0), ψ(∞) are correct.

An immediate consequence of the previous lemma is the fact that if E_∞[T] > γ, then we can find a threshold ν such that E_∞[T_ν] = γ. Since J_EL(T) ≥ J_EL(T_ν), this suggests that for the proof of optimality of S_{ν*} we can limit ourselves to s.t. that satisfy the false alarm constraint with equality.

THEOREM 3. If a s.t. T satisfies E_∞[T] = γ, then it possesses an extended Lorden measure J_EL(T) that is no less than g_{ν*}(0) = J_EL(S_{ν*}).

PROOF. From E_∞[T] = γ and thanks to Lemma 6, for every ɛ > 0 we can find a threshold ν_ɛ such that for T_{ν_ɛ} = T ∧ S_{ν_ɛ} we have

(5.11)  γ ≥ E_∞[T_{ν_ɛ}] ≥ γ − ɛ.

Using (5.2) from Lemma 4, we can write

E_∞[h_{ν*}(0) − h_{ν*}(y_{T_{ν_ɛ}})] = E_∞[T_{ν_ɛ}].

Recalling from (5.1) that h_{ν*}(0) = γ and using (5.11) in the previous equality, we obtain

(5.12)  ɛ ≥ E_∞[h_{ν*}(y_{T_{ν_ɛ}})].

Define the function p(y) = e^{y} g_{ν*}(y) − (r_0/r_∞) h_{ν*}(y) and consider the derivative p′(y). By direct substitution we can verify that e^{y} g′_{ν*}(y) = (r_0/r_∞) h′_{ν*}(y), from which we deduce that p′(y) = e^{y} g_{ν*}(y). Since g_{ν*}(y) is strictly decreasing in y and also satisfies g_{ν*}(ν*) = 0, this suggests that p′(y) has the same sign as ν* − y, or that p(y) has a global maximum at y = ν*. Because p(ν*) = 0, this means that p(y) ≤ 0.
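The identity behind p′(y) and the bound p(y) ≤ 0 can be checked numerically from the explicit expressions for g_ν and h_ν of Section 4. A short Python sketch, using central-difference derivatives on both sides of the origin; the parameter values are arbitrary illustrations:

```python
import math

mu, lam, nu = 1.3, 0.7, 2.0                     # arbitrary illustrative values
s = math.sqrt(1.0 + 8.0 * lam / mu**2)
r0, rinf = (s - 1.0) / 2.0, (s + 1.0) / 2.0     # note r0 * rinf = 2*lam/mu^2

def g(y):
    # E_0[S_nu | y_0 = y]: average detection delay started from y
    A0 = mu**2 * r0 / (2.0 * lam) - 1.0
    if y >= 0:
        return (2.0 / mu**2) * (nu - y + A0 * (math.exp(-y) - math.exp(-nu)))
    g0 = (2.0 / mu**2) * (nu + A0 * (1.0 - math.exp(-nu)))
    return g0 + (1.0 - math.exp(r0 * y)) / lam

def h(y):
    # E_inf[S_nu | y_0 = y]: average false alarm period started from y
    Ainf = mu**2 * rinf / (2.0 * lam) + 1.0
    if y >= 0:
        return (2.0 / mu**2) * (y - nu + Ainf * (math.exp(nu) - math.exp(y)))
    h0 = (2.0 / mu**2) * (-nu + Ainf * (math.exp(nu) - 1.0))
    return h0 + (1.0 - math.exp(rinf * y)) / lam

def p(y):
    # p(y) = e^y g(y) - (r0/rinf) h(y); maximum p(nu) = 0
    return math.exp(y) * g(y) - (r0 / rinf) * h(y)

d = 1e-6
for y in (-3.0, -1.0, 0.5, 1.5):                # test points on both sides of 0
    gp = (g(y + d) - g(y - d)) / (2.0 * d)      # central-difference derivatives
    hp = (h(y + d) - h(y - d)) / (2.0 * d)
    assert abs(math.exp(y) * gp - (r0 / rinf) * hp) < 1e-4
    assert p(y) < 0.0
print("identity and p(y) <= 0 hold at the tested points")
```

The test points deliberately avoid y = 0 and y = ν, where the piecewise expressions meet and a symmetric difference quotient would straddle the junction.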

E_∞[p(y_{T_{ν_ɛ}})] ≤ 0. Using this inequality and replacing p(y) by its definition, we obtain

0 ≥ E_∞[p(y_{T_{ν_ɛ}})] = E_∞[e^{y_{T_{ν_ɛ}}} g_{ν*}(y_{T_{ν_ɛ}}) − (r_0/r_∞) h_{ν*}(y_{T_{ν_ɛ}})]
  ≥ E_∞[e^{y_{T_{ν_ɛ}}} g_{ν*}(y_{T_{ν_ɛ}})] − (r_0/r_∞) ɛ,

where for the last inequality we used (5.12). This yields

(5.13)  (r_0/r_∞) ɛ ≥ E_∞[e^{y_{T_{ν_ɛ}}} g_{ν*}(y_{T_{ν_ɛ}})].

From Theorem 2, using (5.13) and Lemma 5, we can now write

J_EL(T) ≥ g_{ν*}(0) − E_∞[e^{y_{T_{ν_ɛ}}} g_{ν*}(y_{T_{ν_ɛ}})] / E_∞[e^{y_{T_{ν_ɛ}}}] ≥ g_{ν*}(0) − (r_0/r_∞) ɛ.

Since ɛ is arbitrary, this means that J_EL(T) ≥ g_{ν*}(0), thus establishing the optimality of ECUSUM.

REFERENCES

BEIBEL, M. (1996). A note on Ritov's Bayes approach to the minmax property of the CUSUM procedure. Ann. Statist.
BAYRAKTAR, E. and DAYANIK, S. (2006). Poisson disorder problem with exponential penalty for delay. Math. Oper. Res. 31 217-233.
BAYRAKTAR, E., DAYANIK, S. and KARATZAS, I. (2006). Adaptive Poisson disorder problem. Ann. Appl. Probab.
KARATZAS, I. (2003). A note on Bayesian detection of change-points with an expected miss criterion. Statist. Decisions
KARATZAS, I. and SHREVE, S. E. (1988). Brownian Motion and Stochastic Calculus. Springer, New York.
LORDEN, G. (1971). Procedures for reacting to a change in distribution. Ann. Math. Statist.
MEI, Y. (2006). Comment on "A note on optimal detection of a change in distribution" by Benjamin Yakir. Ann. Statist.
MOUSTAKIDES, G. V. (1986). Optimal stopping times for detecting changes in distributions. Ann. Statist.
MOUSTAKIDES, G. V. (2004). Optimality of the CUSUM procedure in continuous time. Ann. Statist.
PAGE, E. S. (1954). Continuous inspection schemes. Biometrika
PESKIR, G. and SHIRYAEV, A. N. (2002). Solving the Poisson disorder problem. In Advances in Finance and Stochastics (K. Sandmann and P. J. Schönbucher, eds.). Springer, Berlin.
POLLAK, M. (1985). Optimal detection of a change in distribution. Ann. Statist.
POOR, H. V. (1998). Quickest detection with exponential penalty for delay. Ann. Statist.
PROTTER, P. E. (2004). Stochastic Integration and Differential Equations, 2nd ed.
Springer, Berlin.

RITOV, Y. (1990). Decision theoretic optimality of the CUSUM procedure. Ann. Statist.
RODIONOV, S. R. and OVERLAND, J. E. (2005). Application of a sequential regime shift detection method to the Bering Sea ecosystem. ICES J. Marine Science
SHIRYAYEV, A. N. (1978). Optimal Stopping Rules. Springer, New York.
SHIRYAYEV, A. N. (1996). Minimax optimality of the method of cumulative sums (CUSUM) in the case of continuous time. Russ. Math. Surv.
TAYLOR, H. M. (1975). A stopped Brownian motion formula. Ann. Probab.

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING
UNIVERSITY OF PATRAS
26500 RIO
GREECE
E-MAIL: moustaki@ece.upatras.gr


More information

On Reflecting Brownian Motion with Drift

On Reflecting Brownian Motion with Drift Proc. Symp. Stoch. Syst. Osaka, 25), ISCIE Kyoto, 26, 1-5) On Reflecting Brownian Motion with Drift Goran Peskir This version: 12 June 26 First version: 1 September 25 Research Report No. 3, 25, Probability

More information

Data-Efficient Quickest Change Detection

Data-Efficient Quickest Change Detection Data-Efficient Quickest Change Detection Venu Veeravalli ECE Department & Coordinated Science Lab University of Illinois at Urbana-Champaign http://www.ifp.illinois.edu/~vvv (joint work with Taposh Banerjee)

More information

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift

Predicting the Time of the Ultimate Maximum for Brownian Motion with Drift Proc. Math. Control Theory Finance Lisbon 27, Springer, 28, 95-112 Research Report No. 4, 27, Probab. Statist. Group Manchester 16 pp Predicting the Time of the Ultimate Maximum for Brownian Motion with

More information

The Uniform Integrability of Martingales. On a Question by Alexander Cherny

The Uniform Integrability of Martingales. On a Question by Alexander Cherny The Uniform Integrability of Martingales. On a Question by Alexander Cherny Johannes Ruf Department of Mathematics University College London May 1, 2015 Abstract Let X be a progressively measurable, almost

More information

Refining the Central Limit Theorem Approximation via Extreme Value Theory

Refining the Central Limit Theorem Approximation via Extreme Value Theory Refining the Central Limit Theorem Approximation via Extreme Value Theory Ulrich K. Müller Economics Department Princeton University February 2018 Abstract We suggest approximating the distribution of

More information

Compound Poisson disorder problem

Compound Poisson disorder problem Compound Poisson disorder problem Savas Dayanik Princeton University, Department of Operations Research and Financial Engineering, and Bendheim Center for Finance, Princeton, NJ 844 email: sdayanik@princeton.edu

More information

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES STEFAN TAPPE Abstract. In a work of van Gaans (25a) stochastic integrals are regarded as L 2 -curves. In Filipović and Tappe (28) we have shown the connection

More information

Asymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½

Asymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½ University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 1998 Asymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½ Lawrence D. Brown University

More information

Common Knowledge and Sequential Team Problems

Common Knowledge and Sequential Team Problems Common Knowledge and Sequential Team Problems Authors: Ashutosh Nayyar and Demosthenis Teneketzis Computer Engineering Technical Report Number CENG-2018-02 Ming Hsieh Department of Electrical Engineering

More information

The Modified Keifer-Weiss Problem, Revisited

The Modified Keifer-Weiss Problem, Revisited The Modified Keifer-Weiss Problem, Revisited Bob Keener Department of Statistics University of Michigan IWSM, 2013 Bob Keener (University of Michigan) Keifer-Weiss Problem IWSM 2013 1 / 15 Outline 1 Symmetric

More information

Joint Detection and Estimation: Optimum Tests and Applications

Joint Detection and Estimation: Optimum Tests and Applications IEEE TRANSACTIONS ON INFORMATION THEORY (SUBMITTED) 1 Joint Detection and Estimation: Optimum Tests and Applications George V. Moustakides, Senior Member, IEEE, Guido H. Jajamovich, Student Member, IEEE,

More information

Temporal point processes: the conditional intensity function

Temporal point processes: the conditional intensity function Temporal point processes: the conditional intensity function Jakob Gulddahl Rasmussen December 21, 2009 Contents 1 Introduction 2 2 Evolutionary point processes 2 2.1 Evolutionarity..............................

More information

Least squares under convex constraint

Least squares under convex constraint Stanford University Questions Let Z be an n-dimensional standard Gaussian random vector. Let µ be a point in R n and let Y = Z + µ. We are interested in estimating µ from the data vector Y, under the assumption

More information

Generalized Neyman Pearson optimality of empirical likelihood for testing parameter hypotheses

Generalized Neyman Pearson optimality of empirical likelihood for testing parameter hypotheses Ann Inst Stat Math (2009) 61:773 787 DOI 10.1007/s10463-008-0172-6 Generalized Neyman Pearson optimality of empirical likelihood for testing parameter hypotheses Taisuke Otsu Received: 1 June 2007 / Revised:

More information

Optimal Stopping under Adverse Nonlinear Expectation and Related Games

Optimal Stopping under Adverse Nonlinear Expectation and Related Games Optimal Stopping under Adverse Nonlinear Expectation and Related Games Marcel Nutz Jianfeng Zhang First version: December 7, 2012. This version: July 21, 2014 Abstract We study the existence of optimal

More information

A Stochastic Paradox for Reflected Brownian Motion?

A Stochastic Paradox for Reflected Brownian Motion? Proceedings of the 9th International Symposium on Mathematical Theory of Networks and Systems MTNS 2 9 July, 2 udapest, Hungary A Stochastic Parado for Reflected rownian Motion? Erik I. Verriest Abstract

More information

Stochastic Chemical Kinetics

Stochastic Chemical Kinetics Stochastic Chemical Kinetics Joseph K Scott November 10, 2011 1 Introduction to Stochastic Chemical Kinetics Consider the reaction I + I D The conventional kinetic model for the concentration of I in a

More information

Module 3. Function of a Random Variable and its distribution

Module 3. Function of a Random Variable and its distribution Module 3 Function of a Random Variable and its distribution 1. Function of a Random Variable Let Ω, F, be a probability space and let be random variable defined on Ω, F,. Further let h: R R be a given

More information

Jónsson posets and unary Jónsson algebras

Jónsson posets and unary Jónsson algebras Jónsson posets and unary Jónsson algebras Keith A. Kearnes and Greg Oman Abstract. We show that if P is an infinite poset whose proper order ideals have cardinality strictly less than P, and κ is a cardinal

More information

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018 Lecture 1 Stochastic Optimization: Introduction January 8, 2018 Optimization Concerned with mininmization/maximization of mathematical functions Often subject to constraints Euler (1707-1783): Nothing

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Introduction. log p θ (y k y 1:k 1 ), k=1

Introduction. log p θ (y k y 1:k 1 ), k=1 ESAIM: PROCEEDINGS, September 2007, Vol.19, 115-120 Christophe Andrieu & Dan Crisan, Editors DOI: 10.1051/proc:071915 PARTICLE FILTER-BASED APPROXIMATE MAXIMUM LIKELIHOOD INFERENCE ASYMPTOTICS IN STATE-SPACE

More information

Dynamic Call Center Routing Policies Using Call Waiting and Agent Idle Times Online Supplement

Dynamic Call Center Routing Policies Using Call Waiting and Agent Idle Times Online Supplement Submitted to imanufacturing & Service Operations Management manuscript MSOM-11-370.R3 Dynamic Call Center Routing Policies Using Call Waiting and Agent Idle Times Online Supplement (Authors names blinded

More information

GENERALIZED ANNUITIES AND ASSURANCES, AND INTER-RELATIONSHIPS. BY LEIGH ROBERTS, M.Sc., ABSTRACT

GENERALIZED ANNUITIES AND ASSURANCES, AND INTER-RELATIONSHIPS. BY LEIGH ROBERTS, M.Sc., ABSTRACT GENERALIZED ANNUITIES AND ASSURANCES, AND THEIR INTER-RELATIONSHIPS BY LEIGH ROBERTS, M.Sc., A.I.A ABSTRACT By the definition of generalized assurances and annuities, the relation is shown to be the simplest

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

B. LAQUERRIÈRE AND N. PRIVAULT

B. LAQUERRIÈRE AND N. PRIVAULT DEVIAION INEQUALIIES FOR EXPONENIAL JUMP-DIFFUSION PROCESSES B. LAQUERRIÈRE AND N. PRIVAUL Abstract. In this note we obtain deviation inequalities for the law of exponential jump-diffusion processes at

More information