On Some Ergodic Impulse Control Problems with Constraint


Wayne State University — Mathematics Faculty Research Publications

On Some Ergodic Impulse Control Problems with Constraint

J. L. Menaldi, Wayne State University; Maurice Robin, Fondation Campus Paris-Saclay

Recommended Citation: J. L. Menaldi and M. Robin, On Some Ergodic Impulse Control Problems with Constraint, SIAM J. Control Optim., Vol. 56, No. 4 (2018).

SIAM J. CONTROL OPTIM. Vol. 56, No. 4 © 2018 Society for Industrial and Applied Mathematics

ON SOME ERGODIC IMPULSE CONTROL PROBLEMS WITH CONSTRAINT

J. L. MENALDI AND M. ROBIN

Abstract. This paper studies the impulse control of a general Markov process under the average (or ergodic) cost when the impulse instants are restricted to be the arrival times of an exogenous process; this restriction is referred to as a constraint. A detailed setting is described, a characterization of the optimal cost is obtained as a solution of an HJB equation, and an optimal impulse control is identified.

Key words. Markov-Feller processes, information constraints, impulse control, control by interventions, ergodic control

AMS subject classifications. Primary, 49J40; Secondary, 60J60, 60J75

DOI. 10.1137/17M

1. Introduction. Since the introduction of impulse control problems by Bensoussan and Lions in the seventies, many studies have been devoted to various aspects, both theoretical and applied, of this subject; see the references in Bensoussan and Lions [6], Bensoussan [2], Davis [10]. Several of these studies cover the case of the long-term average cost, or ergodic control; e.g., see Gatarek and Stettner [14] and the references therein. In these works, the impulse control is a sequence of stopping times and random variables acting on the state x_t of the system, and the stopping times can be arbitrary. In the present paper, we address the ergodic impulse control problem when the stopping times must satisfy a constraint, namely, the impulses are allowed only at the jump times of a given process y_t, these times representing the arrival of a signal. This type of constraint has been treated in [30] for the case of a discounted cost, using results on the optimal stopping problem with constraint of [29], which generalizes the results of Dupuis and Wang [11].
To the best of our knowledge, there are only a few references related to impulse control with constraint: Brémaud [7], Liang [24], Liang and Wei [25], and Wang [39]. A different kind of constraint is considered in Costa, Dufour, and Piunovskiy [8], where the constraints are written as infinite-horizon expected discounted costs. Nevertheless, we are not aware of any references in the case of ergodic impulse control with constraint. In many cases of optimal control, the ergodic control problem is treated via the asymptotic behavior of the discounted problem, often called the vanishing discount approach; e.g., see Bensoussan [3], Gatarek and Stettner [14]. We could use this method for the present problem in the situation where the set of admissible impulses Γ is independent of the state x; however, it seems more difficult to do the same with Γ(x). Here, we use a direct approach, based on the study of the ergodic HJB equation. As for the discounted cost, an auxiliary ergodic control problem in discrete time is the basis to solve the original problem. The main results concern the solution of the ergodic HJB equations and the existence of an optimal control.

Received by the editors September 14, 2017; accepted for publication (in revised form) March 5, 2018; published electronically July 19, 2018.

Department of Mathematics, Wayne State University, Detroit, MI 48202 (menaldi@wayne.edu). Fondation Campus Paris-Saclay, 91190 Saint-Aubin, France (maurice.robin@polytechnique.edu).

The content of the paper is as follows: section 2 includes the statement of the problem, notations, and assumptions; section 3 is devoted to the heuristic derivation of the HJB equations, which are solved in section 4; section 5 shows the existence of an optimal control and the characterization of the optimal ergodic cost; section 6 discusses a few extensions.

2. Statement of the problem.

2.1. The uncontrolled process. Let (Ω, F, F_t, (x_t, y_t), P_{xy}) be a homogeneous Markov process on Ω = D(R_+; E × R_+), where F_t is the universal completion of the canonical σ-algebras and E_{xy} denotes the expectation with respect to P_{xy}. The x_t component is the process to be controlled by impulses. The y_t component will define the constraint on the controls, namely, the controller may apply an impulse on x_t only at the jump times of y_t. These jump times represent the arrival times of a signal and, actually, y_t is the elapsed time since the last arrival of a signal. It is assumed that

(2.1) E is a compact metric space; x_t is a Feller process with semigroup Φ(t) and infinitesimal generator A_x, so that Φ(t) is a continuous semigroup on the space C(E) of real-valued continuous functions on E;

(2.2) y_t is a process with values in R_+ which jumps at times τ_1, τ_2, ..., τ_n, ..., and such that y_{τ_n} = 0, y_t = t − τ_n for τ_n ≤ t < τ_{n+1}, n ≥ 1, and, conditionally on x_t, the intervals between jumps T_n = τ_{n+1} − τ_n are independent identically distributed random variables with intensity λ(x, y); see [29, 30].

It is also assumed that

(2.3) λ(x, y) is a nonnegative, continuous, and bounded function defined on E × R_+, and there exist positive constants 0 < a_1 < a_2 such that a_1 ≤ E_x τ_1 ≤ a_2.

For x given, one could consider that the infinitesimal generator of y_t is

A_y g = ∂g/∂y + λ(x, y)[g(0) − g(y)]

for smooth functions g on R_+. We will assume that the weak infinitesimal generator of (x_t, y_t) in C_b(E × R_+) is

(2.4) A_{xy} g(x, y) = A_x g(x, y) + (∂g/∂y)(x, y) + λ(x, y)[g(x, 0) − g(x, y)].
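For illustration only (not part of the paper's development), the signal process of (2.2)–(2.3) can be simulated by thinning whenever λ(x, y) is bounded. In the sketch below, the two-state chain x_t, the intensity lam, and the bound K1 are assumed toy choices.

```python
import random

random.seed(0)

# Toy ingredients (assumptions): a two-state Markov chain x_t on {0, 1}
# switching at rate 1, and a bounded signal intensity lam(x, y) that
# increases once the elapsed time y exceeds 2.
def lam(x, y):
    return 0.5 + 0.5 * x + (1.0 if y > 2.0 else 0.0)  # 0.5 <= lam <= 2

K1 = 2.0  # global upper bound on lam, used for thinning

def simulate_signal_times(T, x0=0):
    """Arrival times tau_n of the signal on [0, T] by thinning:
    propose jumps at rate K1, accept one with probability lam(x, y)/K1."""
    t, x, last_arrival = 0.0, x0, 0.0
    next_switch = random.expovariate(1.0)  # next jump time of the chain x_t
    arrivals = []
    while True:
        t += random.expovariate(K1)            # candidate signal time
        if t > T:
            return arrivals
        while next_switch < t:                 # evolve x_t up to time t
            x = 1 - x
            next_switch += random.expovariate(1.0)
        y = t - last_arrival                   # y_t: time since last signal
        if random.random() < lam(x, y) / K1:   # accepted: y resets to 0
            arrivals.append(t)
            last_arrival = t

taus = simulate_signal_times(100.0)
gaps = [b - a for a, b in zip([0.0] + taus, taus)]
print(len(taus), sum(gaps) / len(gaps))
```

Under these choices, Remark 2.1(a) below gives a_1 = 1/k_1 = 0.5 and a_2 = y_0 + 1/k_0 = 2 + 1/1.5, so the empirical mean inter-arrival time should land between a_1 and a_2.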
In addition, we make the following assumption (2.6) for x_t (this assumption is sometimes called Doeblin's condition): if B(E) denotes the Borel

σ-algebra of E and P(x, U) = E_x 1_U(x_{τ_1}), U ∈ B(E), then

(2.6) there exists a positive measure m on E such that 0 < m(E) ≤ 1 and P(x, U) ≥ m(U) for every U ∈ B(E).

Remark 2.1. (a) We have

E_x τ_1 = E_x ∫_0^∞ t λ(x_t, t) exp(−∫_0^t λ(x_s, s) ds) dt,

so the condition E_x τ_1 ≤ a_2 is satisfied if, for instance, λ(x, y) ≥ k_0 > 0 for y ≥ y_0, x ∈ E; then a_2 = y_0 + 1/k_0. Also, if λ(x, y) ≤ k_1 < ∞ for every y ≥ 0 and x ∈ E, then E_x τ_1 ≥ a_1 = 1/k_1.

(b) Note that

(2.7) P(x, U) = E_x ∫_0^∞ λ(x_t, t) exp(−∫_0^t λ(x_s, s) ds) 1_U(x_t) dt.

With (2.7) and assuming the same property of λ as in (a) above, one can check that (2.6) is satisfied when the transition probability of x_t has a density with respect to a probability on E satisfying: for every ε > 0 there exists k(ε) such that

(2.8) p(x, t, x') ≥ k(ε) > 0 on E × [ε, ∞) × E.

This is the case, for instance, for periodic diffusion processes (see Bensoussan [3]) and for reflected diffusion processes with jumps (see Garroni and Menaldi [12, 13]), which is also valid for reflected diffusion processes without jumps.

2.2. Assumptions on costs and impulse values. It is assumed that there are a running cost f(x, y) and a cost of impulse c(x, ξ) satisfying

f: E × R_+ → R_+, bounded and continuous; c: E × E → [c_0, +∞[, c_0 > 0, bounded and continuous.

Moreover, for any x ∈ E, the possible impulses must be in Γ(x) = {ξ ∈ E : (x, ξ) ∈ Γ}, where Γ is a given analytic set in E × E, with the following properties:

(2.10) Γ(x) ≠ ∅ for all x ∈ E; for x ∈ E and ξ ∈ Γ(x), Γ(ξ) ⊂ Γ(x); for x ∈ E, ξ ∈ Γ(x), and ξ' ∈ Γ(ξ) ⊂ Γ(x), c(x, ξ) + c(ξ, ξ') ≥ c(x, ξ').

Finally, defining the operator M by

(2.11) Mg(x) = inf{c(x, ξ) + g(ξ) : ξ ∈ Γ(x)},

it is assumed that

(2.12) M maps C(E) into C(E), and there exists a measurable selector ξ̂(x) = ξ̂(x, g) realizing the infimum in Mg(x), for all g ∈ C(E).
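The operator M of (2.11) is easy to realize on a finite state space. The following sketch is illustrative only; the grid E, the sets Gamma(x), and the cost c are assumed toy data, chosen so that the subadditivity c(x, ξ) + c(ξ, ξ') ≥ c(x, ξ') of (2.10) holds.

```python
# Assumed toy data: E is a small grid, Gamma(x) the impulses allowed from x,
# and c a fixed-plus-proportional cost (which satisfies the subadditivity
# c(x, xi) + c(xi, xi2) >= c(x, xi2) required in (2.10)).
E = [0, 1, 2, 3, 4]

def Gamma(x):
    return [xi for xi in E if xi <= x]   # impulses may only push the state down

def c(x, xi):
    return 1.0 + 0.3 * (x - xi)          # fixed part c0 = 1, proportional part

def M(g):
    """Impulse operator (2.11): Mg(x) = min over Gamma(x) of c(x, xi) + g(xi),
    returned together with a minimizing selector."""
    values, selector = [], []
    for x in E:
        best = min(Gamma(x), key=lambda xi: c(x, xi) + g[xi])
        values.append(c(x, best) + g[best])
        selector.append(best)
    return values, selector

g = [0.0, 2.0, 1.0, 5.0, 4.0]
Mg, sel = M(g)
print([round(v, 6) for v in Mg])  # [1.0, 1.3, 1.6, 1.9, 2.2]
print(sel)                        # [0, 0, 0, 0, 0]
```

Here every state finds it cheapest to jump directly to 0, consistent with Remark 2.2(a): subadditivity makes one combined impulse no worse than two consecutive ones.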

Remark 2.2. (a) The last property in (2.10) implies that it is not optimal to make simultaneous impulses. (b) Equation (2.12) needs some regularity property of Γ(x); e.g., see Davis [10]. (c) It is possible, but not necessary, that x belongs to Γ(x); actually, even Γ(x) = E for some (or every) x is allowed. However, an impulse occurs when the system moves from a state x to another state ξ ≠ x, i.e., it suffices to avoid (or not to allow) impulses that move x to itself, since they have a higher cost.

2.3. The controlled process. We briefly describe the controlled process. For a detailed construction we refer to Bensoussan and Lions [6] (see also Davis [10], Lepeltier and Marchal [23], Robin [35], Stettner [37]). Let us consider Ω̄ = [D(R_+; E × R_+)]^∞, and define F_t^0 = F_t and F_t^{n+1} = F_t^n ⊗ F_t for n ≥ 0, where F_t is the universal completion of the canonical filtration as previously. An arbitrary impulse control ν (not necessarily admissible at this stage) is a sequence (θ_n, ξ_n), n ≥ 1, where θ_n is a stopping time of F_t^{n−1}, θ_n ≥ θ_{n−1}, and the impulse ξ_n is an F_{θ_n}^{n−1}-measurable random variable with values in E. The coordinate in Ω̄ has the form (x_t^0, y_t^0, x_t^1, y_t^1, ..., x_t^n, y_t^n, ...), and for any impulse control ν there exists a probability P_{xy}^ν on Ω̄ such that the evolution of the controlled process (x_t^ν, y_t^ν) is given by the coordinates (x_t^n, y_t^n) of Ω̄ when θ_n ≤ t < θ_{n+1}, n ≥ 0 (setting θ_0 = 0), i.e., (x_t^ν, y_t^ν) = (x_t^n, y_t^n) for θ_n ≤ t < θ_{n+1}. Note that clearly (x_t^ν, y_t^ν) is defined for any t ≥ 0, but (x_t^i, y_t^i) is only used for t ≥ θ_i, and (x^{i−1}_{θ_i}, y^{i−1}_{θ_i}) is the state at time θ_i just before the impulse (or jump) to (x^i_{θ_i}, y^i_{θ_i}) = (ξ_i, 0), as long as θ_i < ∞. For the sake of simplicity, we will not always indicate, in the following, the dependency of (x_t^ν, y_t^ν) with respect to ν.
A Markov impulse control ν is identified by a closed subset S of E × R_+ and a Borel measurable function (x, y) ↦ ξ(x, y) from S into C = (E × R_+) \ S, with the following meaning: intervene only when the process (x_t, y_t) is leaving the continuation region C and then, while in the stopping region S, apply an impulse ξ(x, y) moving the process back to the continuation region C; i.e., θ_{i+1} = inf{t > θ_i : (x_t^i, y_t^i) ∈ S}, with the convention that inf ∅ = ∞, and ξ_{i+1} = ξ(x^i_{θ_{i+1}}, y^i_{θ_{i+1}}) for any i ≥ 0, as long as θ_i < ∞.

Now, the admissible controls are defined as follows, recalling that the τ_n are the arrival times of the signal.

Definition 2.3. (i) A stopping time θ is called admissible if almost surely there exists η(ω) ≥ 1 such that θ(ω) = τ_{η(ω)}(ω) or, equivalently, if θ satisfies θ > 0 and y_θ = 0 a.s.
(ii) An impulse control ν = (θ_i, ξ_i, i ≥ 1) as above is called admissible if each θ_i is admissible (i.e., θ_i > 0 and y_{θ_i} = 0) and ξ_i ∈ Γ(x^{i−1}_{θ_i}). The set of admissible impulse controls is denoted by V.
(iii) If θ_1 = 0 is allowed, then ν is called zero-admissible. The set of zero-admissible impulse controls is denoted by V_0.
(iv) An admissible Markov impulse control corresponds to a stopping region S = S_0 × {0} with S_0 ⊂ E, and an impulse function satisfying ξ(x, 0) = ξ_0(x) ∈ Γ(x) for any x ∈ S_0; therefore, θ_i = τ^i_{η_i} and η_{i+1} = inf{k > η_i : x^i_{τ^i_k} ∈ S_0}, with τ^i_0 = θ_i and τ^i_k = inf{t > τ^i_{k−1} : y^i_t = 0}, for any k ≥ 1, i ≥ 1.

The discrete time impulse control problem has been considered in Bensoussan [4], Hernández-Lerma and Lasserre [16, 17], and Stettner [36]. As we will see later, it will be useful to consider an auxiliary problem in discrete time for the Markov chain X_n = x_{τ_n} with the filtration G = (G_n : n ≥ 0), G_0 = F_0 and G_n = F^{n−1}_{τ_n} for n ≥ 1. The impulses occur at the stopping times η_k with values in the set N = {0, 1, 2, ...}

and are related to the θ_k by θ_i = τ_{η_i}, i.e., η_i = inf{k ≥ 1 : θ_i = τ_k}, for admissible controls (and similarly for zero-admissible controls). Thus, we have the following.

Definition 2.4. If ν = (η_i, ξ_i, i ≥ 1) is a sequence of G-stopping times and random variables ξ_i, G_{η_i}-measurable, with ξ_i ∈ Γ(x_{τ_{η_i}}), η_i increasing and η_i → +∞ a.s., then ν is referred to as an admissible discrete time impulse control if η_1 ≥ 1. If η_1 = 0 is allowed, it is referred to as a zero-admissible discrete time impulse control.

One can now define the average cost to be minimized:

(2.13) J_T(x, y, ν) = E^ν_{xy} [ ∫_0^T f(x^ν_s, y^ν_s) ds + Σ_{i≥1} 1_{θ_i ≤ T} c(x^{i−1}_{θ_i}, ξ_i) ],

J(x, y, ν) = lim inf_{T→∞} (1/T) J_T(x, y, ν);

then the problem is to characterize

(2.14) μ(x, y) = inf{J(x, y, ν) : ν ∈ V}.

The auxiliary problem is concerned with

(2.15) μ_0(x, y) = inf{J_0(x, y, ν) : ν ∈ V_0}

with

(2.16) J_0(x, y, ν) = lim inf_{n→∞} (1 / E^ν_{xy} τ_n) J_{τ_n}(x, y, ν),

and J_{τ_n}(x, y, ν) as in (2.13) with T = τ_n.

Remark 2.5. Actually, as seen later, μ(x, y) = μ_0(x, y) is a constant.

3. Dynamic programming. To introduce the HJB equations corresponding to this problem, we consider the dynamic programming argument in a heuristic way. Define the finite horizon cost as

(3.1) J_T(t, x, y, ν; h) = E^ν_{xy} [ ∫_0^{T−t} f(x^ν_s, y^ν_s) ds + Σ_{i≥1} 1_{θ_i < T−t} c(x^{i−1}_{θ_i}, ξ_i) + h(x_{T−t}, y_{T−t}) ],

where h is a bounded terminal cost, and the corresponding optimal costs

(3.2) u_T(t, x, y) = inf{J_T(t, x, y, ν; h) : ν ∈ V}

and

(3.3) u^0_T(t, x, y) = inf{J_T(t, x, y, ν; h) : ν ∈ V_0}.

Note that, from the definitions, u_T(t, x, y) = u^0_T(t, x, y) if y > 0.

If θ is a stopping time, then

(3.4) J_T(t, x, y, ν; h) = E^ν_{xy} [ ∫_0^{θ∧(T−t)} f(x^ν_s, y^ν_s) ds + Σ_i 1_{θ_i < θ∧(T−t)} c(x^{i−1}_{θ_i}, ξ_i) ] + E^ν_{xy} [ ∫_{θ∧(T−t)}^{T−t} f(x^ν_s, y^ν_s) ds + Σ_i 1_{θ∧(T−t) ≤ θ_i < T−t} c(x^{i−1}_{θ_i}, ξ_i) + h(x_{T−t}, y_{T−t}) ].

Considering now u^0_T(t, x, y), assuming that one can apply the Markov property, and minimizing separately on [0, θ[ and [θ, T−t[, we deduce

(3.5) u^0_T(t, x, y) = inf_{ν∈V_0} E^ν_{xy} [ ∫_0^{θ∧(T−t)} f(x^ν_s, y^ν_s) ds + Σ_i 1_{θ_i < θ∧(T−t)} c(x^{i−1}_{θ_i}, ξ_i) + 1_{θ < T−t} u^0_T(θ, x_θ, y_θ) + 1_{θ ≥ T−t} h(x_{T−t}, y_{T−t}) ].

At time t, if y = 0, either one applies an impulse, i.e., θ_1 = 0, or θ_1 ≥ τ_1; therefore, with inf_{ξ∈Γ(x)} [c(x, ξ) + u^0_T(t, ξ, 0)] = Mu^0_T(t, ·, 0)(x),

(3.6) u^0_T(t, x, 0) = min{ Mu^0_T(t, ·, 0)(x), E_x [ ∫_0^{τ_1∧(T−t)} f(x_s, y_s) ds + u^0_T(τ_1∧(T−t), x_{τ_1∧(T−t)}, y_{τ_1∧(T−t)}) ] }.

If y > 0, no impulse is allowed before τ_1; therefore,

(3.7) u^0_T(t, x, y) = E_{xy} [ ∫_0^{τ_1∧(T−t)} f(x_s, y_s) ds + u^0_T(τ_1∧(T−t), x_{τ_1∧(T−t)}, y_{τ_1∧(T−t)}) ].

Let us now make the assumption

(3.8) there exist a bounded measurable function w_0(x, y) and a constant μ_0 such that, as T → ∞, ε(t, x, y, T) := u^0_T(t, x, y) − (T − t)μ_0 → w_0(x, y), locally uniformly in (x, y) for each fixed t,

which is similar to properties shown in Markov decision processes, e.g., Prieto-

Rumeau and Hernández-Lerma [33, Thm. 3.6]. Then, using (3.8) in (3.5), one obtains

(T − t)μ_0 + w_0(x, y) + ε = inf_{ν∈V_0} E^ν_{xy} [ ∫_0^{θ∧(T−t)} f(x^ν_s, y^ν_s) ds + Σ_i 1_{θ_i < θ∧(T−t)} c(x^{i−1}_{θ_i}, ξ_i) + 1_{θ < T−t} ((T − θ)μ_0 + w_0(x_θ, y_θ) + ε) + 1_{θ ≥ T−t} h(x_{T−t}, y_{T−t}) ],

and when T → ∞, we deduce, for any θ almost surely finite,

w_0(x, y) = inf_{ν∈V_0} E^ν_{xy} [ ∫_0^θ (f(x^ν_s, y^ν_s) − μ_0) ds + Σ_i 1_{θ_i < θ} c(x^{i−1}_{θ_i}, ξ_i) + 1_{θ < ∞} w_0(x_θ, y_θ) ].

Arguing as for (3.6), we get

w_0(x, 0) = min{ Mw_0(x, 0), E_x [ ∫_0^{τ_1} (f(x_s, y_s) − μ_0) ds + 1_{τ_1<∞} w_0(x_{τ_1}, 0) ] },

and the factor 1_{τ_1<∞} is not needed since the assumptions in section 2.1 imply E_x τ_1 < ∞. Finally, we obtain the HJB equation for (μ_0, w_0):

(3.9) w_0(x, 0) = min{ Mw_0(x, 0), E_x [ ∫_0^{τ_1} (f(x_s, y_s) − μ_0) ds + w_0(x_{τ_1}, 0) ] },

(3.10) w_0(x, y) = E_{xy} [ ∫_0^{τ_1} (f(x_s, y_s) − μ_0) ds + w_0(x_{τ_1}, 0) ].

Now, let us apply similar arguments to u_T. Assuming θ ≥ τ_1 in (3.4), one minimizes separately on [0, θ[ and [θ, T−t[. Either θ is an arrival time of the signal and an impulse may be applied, or it is not and no impulse is allowed. In any case, the possible actions on [θ, T−t[ are the same as for the impulse control in V_0. Therefore, we obtain

u_T(t, x, y) = inf_{ν∈V} E^ν_{xy} [ ∫_0^{θ∧(T−t)} f(x^ν_s, y^ν_s) ds + Σ_i 1_{θ_i < θ∧(T−t)} c(x^{i−1}_{θ_i}, ξ_i) + 1_{θ < T−t} u^0_T(θ, x_θ, y_θ) + 1_{θ ≥ T−t} h(x_{T−t}, y_{T−t}) ].

Taking θ = τ_1 and, since no impulse is allowed before τ_1, this gives

(3.12) u_T(t, x, y) = E_{xy} [ ∫_0^{τ_1∧(T−t)} f(x_s, y_s) ds + 1_{τ_1 < T−t} u^0_T(τ_1, x_{τ_1}, 0) + 1_{τ_1 ≥ T−t} h(x_{T−t}, y_{T−t}) ].

Remark 3.1. Since u_T(t, x, y) = u^0_T(t, x, y) for y > 0, it is sufficient to obtain u_T(t, x, 0) and u^0_T(t, x, 0) to get the values for y > 0.

Let us assume that, similarly to the case of w_0,

(3.13) there exists a bounded measurable function w(x, y) such that, for the same constant μ_0 as in (3.8), ε(t, x, y, T) := u_T(t, x, y) − (T − t)μ_0 → w(x, y) as T → ∞, locally uniformly in (x, y), for each fixed t.

Then the same arguments give

(3.14) w(x, y) = inf_{ν∈V} E^ν_{xy} [ ∫_0^θ (f(x^ν_s, y^ν_s) − μ_0) ds + Σ_i 1_{θ_i < θ} c(x^{i−1}_{θ_i}, ξ_i) + 1_{θ<∞} w_0(x_θ, y_θ) ],

and, since no control can take place before τ_1, one obtains

(3.15) w(x, y) = E_{xy} [ ∫_0^{τ_1} (f(x_s, y_s) − μ_0) ds + w_0(x_{τ_1}, 0) ].

4. Solutions of the HJB equations. It is clear from (3.15) that knowledge of (μ_0, w_0(x, 0)) will give w(x, y), and also w_0(x, y) for y > 0. Therefore the key step is to solve (3.9) for (μ_0, w_0(x, 0)). For this purpose, we consider a discrete time HJB equation equivalent to (3.9), as follows. Let us define

(4.1) l(x) = E_x ∫_0^{τ_1} f(x_s, y_s) ds.

Recall that under the assumptions of section 2.1, we have

(4.2) 0 < a_1 ≤ E_x τ_1 ≤ a_2

and, therefore,

(4.3) l(x) ≤ a_2 ‖f‖.

Moreover, l(x) is continuous, from the Feller property of x_t and the law of τ_1. From (2.5), define also the operator P on C(E) by

(4.4) Pg(x) = E_x g(x_{τ_1}) = E_x g(X_1),

where X_n is the Markov chain X_n = x_{τ_n}. The Feller property of x_t and the regularity of λ yield that

(4.5) P maps C(E) into itself,

and, from (2.12), it is also the case for M, as defined by (2.11). In this section, we denote

(4.6) τ(x) = E_x τ_1.

With the previous notation, (3.9) is written, with w_0(x) = w_0(x, 0), as

(4.7) w_0(x) = min{ inf_{ξ∈Γ(x)} [c(x, ξ) + w_0(ξ)], l(x) − μ_0 τ(x) + Pw_0(x) }.

This expresses the fact that either there is an immediate impulse, when w_0(x) = Mw_0(x), or the chain evolves with the kernel P and a running cost l(x) − μ_0 τ(x) is incurred. We will need the following lemma.

Lemma 4.1. Under the assumption (2.6), there exist a positive measure γ on E and a constant 0 < β < 1 such that

(4.8) P(x, B) ≥ τ(x) γ(B) for all B ∈ B(E), x ∈ E, and γ(E) > (1 − β)/τ(x).

Proof. Recall that the assumptions on λ imply 0 < a_1 ≤ τ(x) ≤ a_2. Thus, to satisfy the first inequality in (4.8) it is sufficient to take γ(B) = m(B)/a_2 for all B ∈ B(E), where m is the measure in (2.6). For the second inequality it is sufficient to take β ∈ ]0, 1[ such that m(E)/a_2 > (1 − β)/a_1 ≥ (1 − β)/τ(x) for all x ∈ E, i.e., 1 − β < m(E) a_1 / a_2.

Theorem 4.2. Under the assumptions of section 2.1, there exists a solution (μ_0, w_0) in R_+ × C(E) of (4.7) and, therefore, of (3.9).

Proof. For the sake of simplicity, in this proof, we drop the index 0. We first transform (4.7): The assumptions on c(x, ξ) imply that multiple simultaneous impulses are not optimal (see Remark 2.2), so we restrict the controls to those without multiple simultaneous impulses. Therefore, in (4.7), after an impulse ξ, the chain evolves without control until the next transition, i.e., w(ξ) = l(ξ) − μ τ(ξ) + Pw(ξ). This gives

(4.9) w(x) = inf_{ξ∈Γ(x)∪{x}} { l(ξ) + 1_{x≠ξ} c(x, ξ) − μ τ(ξ) + Pw(ξ) }.

Denote L(x, ξ) = l(ξ) + 1_{x≠ξ} c(x, ξ) and, for v ∈ B(E),

T(x, ξ; v) = L(x, ξ) + Pv(ξ) − τ(ξ) ∫_E v(z) γ(dz) and Rv(x) = inf_{ξ∈Γ(x)∪{x}} T(x, ξ; v).

Define also

P'(x, dz) = P(x, dz) − τ(x) γ(dz), x ∈ E,

which is a positive measure on E for each x in E, in view of Lemma 4.1. Denote P'(x, v) = ∫ v(z) P'(x, dz). Letting v_1, v_2 be two bounded measurable functions, we have

(4.10) T(x, ξ; v_1) − T(x, ξ; v_2) = P'(ξ, v_1 − v_2).

Moreover, from Lemma 4.1, we have P'(x, E) < β for all x ∈ E. Therefore from (4.10) we deduce that R is a contraction on B(E) and has a unique fixed point w = Rw. If we define μ = ∫_E w(z) γ(dz), then (μ, w) is a solution of (4.9) and (4.7). Moreover, since

Rv(x) = min{ l(x) + P'(x, v), inf_{ξ∈Γ(x)} [l(ξ) + c(x, ξ) + P'(ξ, v)] },

it is clear that Rv ∈ C(E) if v ∈ C(E) and, therefore, w ∈ C(E). In addition, since l(x) ≥ 0 and c(x, ξ) > 0, we deduce that the fixed point w ≥ 0, which also implies μ ≥ 0.

As a corollary, w(x, y) in (3.15) is well defined with w_0(x, 0) = w_0(x) from Theorem 4.2.

Remark 4.3. The assumption (4.8) is used in Kurano [21, 22] in the context of semi-Markov decision processes.

Remark 4.4. In the case where τ(x) is constant, which corresponds to an intensity λ(y) independent of x, (4.7) is the HJB equation of a standard discrete time impulse control problem, as studied in Stettner [36] for Γ(x) = Γ fixed.

5. Existence of an optimal control. We make the following additional assumptions:

(5.1) for the process (x_t, y_t) as defined in section 2.1, there exists a unique invariant measure ζ on E × R_+, and there exists a continuous function h(x, y) such that, for any stopping time τ with E_{xy} τ < ∞,

E_{xy} h(x_τ, y_τ) = h(x, y) − E_{xy} ∫_0^τ (f(x_t, y_t) − f̄) dt, where f̄ = ∫_{E×R_+} f(x, y) ζ(dx, dy).

Note that h plus a constant also satisfies this equation. For the auxiliary problem, we state Theorem 5.1 below.
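As a computational aside, the construction in the proof of Theorem 4.2 is effectively an algorithm: iterate the contraction R to its fixed point w_0 and set μ_0 = ∫ w_0 dγ. A minimal finite-state sketch, with all data (kernel, costs, τ) assumed as toy inputs rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy finite-state model (an assumption for illustration).
n = 6
P = rng.random((n, n)) + 0.5          # strictly positive kernel => Doeblin (2.6)
P /= P.sum(axis=1, keepdims=True)
tau = rng.uniform(1.0, 2.0, n)        # tau(x) = E_x tau_1, so a1 = 1, a2 = 2
l = rng.uniform(0.0, 3.0, n)          # l(x): expected running cost over one cycle
C = 1.0 + rng.random((n, n))          # impulse costs in (1, 2): since any two
                                      # costs sum to more than 2, subadditivity
                                      # c(x,xi)+c(xi,xi2) > c(x,xi2) holds
Gamma = [list(range(n)) for _ in range(n)]   # Gamma(x) = E (all impulses allowed)

m = P.min(axis=0)                     # P(x, {z}) >= m({z}) for all x
gamma = m / tau.max()                 # the measure of Lemma 4.1: gamma = m / a2

def R(v):
    """One application of the contraction R of Theorem 4.2, where
    P'(x, v) = Pv(x) - tau(x) * (integral of v against gamma)."""
    Pp = P @ v - tau * (gamma @ v)
    no_impulse = l + Pp
    impulse = np.array([min(l[xi] + C[x, xi] + Pp[xi] for xi in Gamma[x])
                        for x in range(n)])
    return np.minimum(no_impulse, impulse)

w = np.zeros(n)
for _ in range(500):                  # iterate to the unique fixed point w = Rw
    w = R(w)
mu = gamma @ w                        # mu_0 = integral of w against gamma

# Check the discrete HJB equation (4.7): w = min(Mw, l - mu*tau + Pw).
Mw = np.array([min(C[x, xi] + w[xi] for xi in Gamma[x]) for x in range(n)])
residual = np.max(np.abs(w - np.minimum(Mw, l - mu * tau + P @ w)))
print(float(residual))                # ~0 up to iteration tolerance
```

Because the costs C lie in (1, 2), the subadditivity of (2.10) holds automatically, so the fixed point of R also satisfies (4.7) exactly, which the residual check confirms.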

Theorem 5.1. Under the assumptions of section 2.1 and (5.1),

(5.2) μ_0 = inf{J_0(x, 0, ν) : ν ∈ V_0},

where μ_0 is given by Theorem 4.2 and J_0 is defined by (2.16). Moreover, there exists an optimal feedback control.

Proof. Let us first show that

(5.3) J_0(x, y, 0) = f̄,

where ν = 0 means no control, that is,

J_0(x, y, 0) = lim inf_{n→∞} (1/E_{xy} τ_n) E_{xy} ∫_0^{τ_n} f(x_t, y_t) dt.

Indeed, from (5.1), we have

(5.4) E_{xy} h(x_{τ_n}, y_{τ_n}) = h(x, y) − E_{xy} ∫_0^{τ_n} [f(x_t, y_t) − f̄] dt

(note that E τ_n < ∞ and y_{τ_n} = 0), which gives

(1/E_{xy} τ_n) E_{xy} ∫_0^{τ_n} f(x_t, y_t) dt = f̄ + (1/E_{xy} τ_n) E_{xy} [h(x, y) − h(x_{τ_n}, y_{τ_n})].

Taking the limit when n → ∞, since y_{τ_n} = 0 and h(x, 0) is bounded (h being continuous and E compact), (5.3) is obtained. As a consequence, one can restrict the set of controls to those such that

(5.5) J_0(x, y, ν) ≤ J_0(x, y, 0) = f̄.

Next let us show that

(5.6) μ_0 ≤ J_0(x, 0, ν) for all ν ∈ V_0.

Rewrite the HJB equation (4.7) as

w_0(x) = min{ Mw_0(x), L_0(x) + Pw_0(x) } with L_0(x) = l(x) − τ(x)μ_0.

From this equation, for any discrete impulse control (η_i, ξ_i : i ≥ 1), we deduce

w_0(x) ≤ E^ν_x [ Σ_{i=0}^{n−1} L_0(X_i) + Σ_j 1_{η_j ≤ n} c(X_{η_j}, ξ_j) + w_0(X_n) ],

where, here, X_i denotes the controlled discrete time process. Denoting by ν = (θ_i, ξ_i : i ≥ 1) the impulse control corresponding to (η_i, ξ_i : i ≥ 1), i.e., with θ_i = τ_{η_i}, we obtain

w_0(x) ≤ E^ν_x [ ∫_0^{τ_n} (f(x_t, y_t) − μ_0) dt + Σ_j 1_{θ_j ≤ τ_n} c(x^{j−1}_{θ_j}, ξ_j) + w_0(x_{τ_n}) ]

and, therefore,

μ_0 ≤ (1/E^ν_x τ_n) { E^ν_x [ ∫_0^{τ_n} f(x_t, y_t) dt + Σ_j 1_{θ_j ≤ τ_n} c(x^{j−1}_{θ_j}, ξ_j) + w_0(x_{τ_n}) ] − w_0(x) },

and since w_0 is bounded and E^ν_x τ_n → ∞, we obtain (5.6). Therefore, with (5.5),

μ_0 ≤ inf{J_0(x, 0, ν) : ν ∈ V_0} ≤ J_0(x, 0, 0) = f̄.

Consequently, if μ_0 = f̄, then

μ_0 = inf{J_0(x, 0, ν) : ν ∈ V_0} = J_0(x, 0, 0),

and "do nothing" (i.e., the control without any impulse) is optimal. Let us now consider the case μ_0 < f̄. Using (5.4) for n = 1, the HJB equation (3.9) for (μ_0, w_0) can be rewritten, dropping again the index 0 for simplicity and with ψ = Mw, as

w(x) − h(x) = min{ ψ(x) − h(x), (f̄ − μ)E_x τ_1 + E_x [w(x_{τ_1}) − h(x_{τ_1})] },

where, here, h(x) = h(x, 0). Moreover, since h is defined up to an additive constant and is bounded, one can assume h ≤ 0, and rewrite it as

(5.7) w̃(x) = min{ ψ̃(x), l̃(x) + P w̃(x) }

with w̃ = w − h, ψ̃ = ψ − h, and l̃(x) = (f̄ − μ)E_x τ_1. For the Markov chain X_n = x_{τ_n}, this (5.7) is the HJB equation of a stopping time problem as studied in Bensoussan [4, Chapter 7], with the conditions stated herein, namely, ψ̃ ≥ 0 and l̃(x) ≥ l̃_0 > 0. Therefore, we have

w̃(x) = inf_η E_x [ Σ_{j=0}^{η−1} l̃(X_j) + ψ̃(X_η) ],

where the infimum is taken over the G-stopping times η with values in N, and there is an optimal control η̂ given by

η̂ = inf{ n ≥ 0 : w̃(X_n) = ψ̃(X_n) }, with E_x η̂ < ∞.

Going back to w, the same result holds, with the same η̂, namely,

w(x) = inf_η E_x [ Σ_{j=0}^{η−1} l̄(X_j) + ψ(X_η) ],

with

l̄(x) = E_x ∫_0^{τ_1} [f(x_t, y_t) − μ] dt

and

w(x) = inf_η E_x [ Σ_{j=0}^{η−1} l̄(X_j) + Mw(X_η) ].
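On a finite state space, the reduced stopping problem (5.7) can be solved by monotone value iteration, which also produces the stopping set defining η̂. A sketch under assumed toy data; note that in the paper the obstacle ψ̃ = Mw̃ is coupled to the solution, whereas here it stands in as a fixed function:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy data: a positive kernel P on n states, a running cost
# l_t >= l0 > 0, and a fixed obstacle psi >= 0 standing in for psi_tilde.
n = 5
P = rng.random((n, n)) + 0.2
P /= P.sum(axis=1, keepdims=True)
l_t = rng.uniform(0.5, 1.0, n)
psi = rng.uniform(0.0, 4.0, n)

# Monotone value iteration for w = min(psi, l_t + P w), starting from 0;
# the iterates increase and stay below psi, hence converge.
w = np.zeros(n)
for _ in range(10_000):
    w_new = np.minimum(psi, l_t + P @ w)
    if np.max(np.abs(w_new - w)) < 1e-12:
        break
    w = w_new

stop = w >= psi - 1e-9   # stopping set; eta_hat = inf{n : X_n in this set}
print(np.round(w, 4), stop)
```

Since l_t is bounded below by a positive constant and w is capped by the bounded obstacle, the stopping set cannot be empty, mirroring E_x η̂ < ∞ above.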

By the standard results on optimal stopping time problems,

M_n = Σ_{j=0}^{n−1} l̄(X_j) + w(X_n) is a G_n-submartingale

and

M̂_n = Σ_{j=0}^{(η̂∧n)−1} l̄(X_j) + w(X_{η̂∧n}) is a G_n-martingale.

From this, for a zero-admissible discrete impulse control ν = (η_i, ξ_i : i ≥ 1), one can compute

w(x) ≤ E^ν_x [ Σ_{i=0}^{n−1} l̄(X_i) + Σ_i 1_{η_i ≤ n} c(X_{η_i}, ξ_i) + w(X_n) ],

which means

w(x) ≤ E^ν_x [ ∫_0^{τ_n} (f(x^ν_t, y^ν_t) − μ) dt + Σ_i 1_{θ_{η_i} ≤ τ_n} c(x^{i−1}_{θ_{η_i}}, ξ_i) + w(x_{τ_n}) ]

and, therefore, since w is bounded and E_x τ_n → ∞, we deduce

(5.8) μ ≤ J_0(x, 0, ν) for all ν ∈ V_0.

Then, defining ν̂ = (η̂_i, ξ̂_i : i ≥ 1) by

η̂_i = inf{ n ≥ η̂_{i−1} : w(X_n) = Mw(X_n) }, ξ̂_i = ξ̂(X_{η̂_i}), i ≥ 1,

where η̂_0 = 0 and ξ̂(x) is a Borel measurable selector realizing the infimum in Mw(x), we obtain the equality in (5.8), using the martingale property of M̂_n.

Remark 5.2. (1) When λ(x, y) = λ(y), y_t is independent of x_t and, under the assumptions of section 2.1, y_t has a unique invariant measure given by

ζ_1(B) = (1/E τ^0) E ∫_0^{τ^0} 1_B(y_t) dt, B ∈ B(R_+), with τ^0 = inf{t > 0 : y_t = 0};

e.g., see Davis [10]; actually, in this case, y_t is a simple example of a piecewise deterministic Markov process. Therefore, if x_t has a unique invariant probability ζ_2 on E, then the couple (x_t, y_t) has the invariant probability ζ = ζ_2 ⊗ ζ_1.

(2) If f(x, y) = f(x), then it is sufficient to assume that Poisson's equation for x_t alone, i.e., A_x h(x) = f̄ − f(x), has a continuous solution.

(3) In the general case (namely, λ(x, y) and f(x, y)), it seems necessary to have an explicit knowledge of x_t to verify assumption (5.1) directly; this assumption is clearly satisfied if there exists a continuous bounded solution of Poisson's equation A_{xy} h(x, y) = f̄ − f(x, y), but this assumption is too restrictive in our case because of y_t. This would not be the case if the signal y_t belonged to a bounded interval [0, b] instead of the whole R_+; however, some new difficulties arrive with the corresponding infinitesimal generator A_{xy}.
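For part (1) of Remark 5.2, ζ_1 is the stationary "age" distribution of the renewal process of signals; if T denotes an inter-arrival time, its density is P(T > y)/E[T]. A quick numerical check with an assumed toy intensity λ(y) = 0.2 + y, for which inter-arrival times can be sampled exactly by inverting the cumulative hazard:

```python
import math, random

random.seed(3)

# Assumed toy intensity depending only on the elapsed time y.
# The inter-arrival time T then has survival S(y) = exp(-(0.2 y + y^2/2)),
# and the invariant density of y_t is S(y)/E[T].
def S(y):
    return math.exp(-(0.2 * y + 0.5 * y * y))

dy = 1e-3
ET = sum(S(k * dy) for k in range(20_000)) * dy              # E[T] = integral of S
frac_exact = sum(S(k * dy) for k in range(1_000)) * dy / ET  # zeta_1([0, 1])

# Monte Carlo check of the long-run fraction of time with y_t <= 1.
# Inverting the cumulative hazard 0.2 T + T^2/2 = Exp(1) gives an exact sampler.
def draw_T():
    e = random.expovariate(1.0)
    return -0.2 + math.sqrt(0.04 + 2.0 * e)

Ts = [draw_T() for _ in range(200_000)]
frac_sim = sum(min(T, 1.0) for T in Ts) / sum(Ts)
print(round(frac_exact, 3), round(frac_sim, 3))
```

The occupation-measure formula of Remark 5.2(1) and the renewal simulation agree to Monte Carlo accuracy.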

Corollary 5.3. Under the assumptions of section 2.1 and (5.1),

(5.9) μ_0 = inf{J(x, y, ν) : ν ∈ V},

where μ_0 is given by Theorem 4.2 and J is defined by (2.13).

Proof. Recall the definition of w_0(x, y) in (3.10):

w_0(x, y) = E_{xy} [ ∫_0^{τ_1} (f(x_t, y_t) − μ_0) dt + w_0(x_{τ_1}, 0) ],

where w_0(x, 0) = w_0(x) is, with μ_0, the solution obtained in Theorem 4.2. If ν = (θ_i, ξ_i : i ≥ 1) is an admissible impulse control, then θ_1 ≥ τ_1. Therefore, θ_1 can be written as θ_1 = τ_1 + θ̃_1, where θ̃_1 is a stopping time with respect to F_{τ_1+t}, and similarly τ_2 = τ_1 + τ̃_2. From the properties of w_0, we have

w_0(x_{τ_1}, 0) ≤ E^ν [ ∫_{τ_1}^{τ_2} (f(x_t, y_t) − μ_0) dt + 1_{θ_1 ≤ τ_2} c(x_{θ_1}, ξ_1) + w_0(x_{τ_2}, 0) | F_{τ_1} ],

which gives

w_0(x, y) ≤ E^ν_{xy} [ ∫_0^{τ_1} (f(x_t, y_t) − μ_0) dt + ∫_{τ_1}^{τ_2} (f(x_t, y_t) − μ_0) dt + 1_{θ_1 ≤ τ_2} c(x_{θ_1}, ξ_1) + w_0(x_{τ_2}, 0) ]

or, equivalently,

w_0(x, y) ≤ E^ν_{xy} [ ∫_0^{τ_2} (f(x_t, y_t) − μ_0) dt + 1_{θ_1 ≤ τ_2} c(x_{θ_1}, ξ_1) + w_0(x_{τ_2}, 0) ].

Iterating this argument, we obtain

μ_0 ≤ J(x, y, ν) for all ν ∈ V.

Using the optimal control defined in Theorem 5.1, we get the equality for the control ν̂ translated by τ_1, i.e., with θ̂_i = τ_1 + τ̃_{η̂_i}, i ≥ 1.

Remark 5.4. As mentioned in Arapostathis et al. [1, p. 287], our definition, either (2.13) or (2.16), of the cost with lim inf gives a rather optimistic measure of performance; however, the inequality just before (5.8) shows that essentially nothing changes if lim inf is replaced by lim sup in the definition of the cost, either J(x, y, ν) or J_0(x, y, ν). Moreover, the minimization with either lim inf or lim sup yields the same value μ = μ_0.

Proposition 5.5. If (μ_0, w_0) is a solution of (3.9) provided by Theorem 4.2, then w(x, y), defined by (3.15), is the solution of the equation

(5.10) −A_{xy} w(x, y) + λ(x, y) [ w(x, 0) − Mw(x, 0) ]^+ = f(x, y) − μ_0.

Proof. The first step is to show the following lemma.

Lemma 5.6. Under the assumptions of section 2.1, we have

(5.11) w_0(x) = min{ w(x, 0), Mw(x, 0) }.

Proof of lemma. Note that

w(x, 0) = E_x [ ∫_0^{τ_1} (f(x_t, y_t) − μ_0) dt + w_0(x_{τ_1}) ] := Tw_0(x).

Therefore,

min{ Mw_0(x), w(x, 0) } = min{ Mw_0(x), Tw_0(x) } = w_0(x),

where the last equality comes from (3.9) for (μ_0, w_0). So the point is now to show that

(5.12) Mw_0(x) = Mw(·, 0)(x).

Observe also that, by definition,

(5.13) w_0(x) ≤ Tw_0(x) = w(x, 0)

and, therefore,

(5.14) Mw_0(x) ≤ Mw(·, 0)(x).

For a fixed x, let ξ̂ realize the infimum in Mw_0(x), i.e., Mw_0(x) = c(x, ξ̂) + w_0(ξ̂). Let us check that ξ̂ is also a minimizer for Mw(·, 0)(x). Indeed, the equality w_0(ξ̂) = min{Mw_0(ξ̂), w(ξ̂, 0)} yields

Mw_0(x) = c(x, ξ̂) + min{ w(ξ̂, 0), Mw_0(ξ̂) }.

Now, if Mw_0(ξ̂) < w(ξ̂, 0), then

Mw_0(x) = c(x, ξ̂) + Mw_0(ξ̂) = c(x, ξ̂) + c(ξ̂, ξ') + w_0(ξ'),

where ξ' realizes the infimum in Mw_0(ξ̂). However, if we assume that the strict inequality holds in assumption (2.10) on c, then one gets

Mw_0(x) > c(x, ξ') + w_0(ξ') ≥ Mw_0(x),

which is impossible. Therefore,

Mw_0(ξ̂) ≥ w(ξ̂, 0)

and

Mw_0(x) = c(x, ξ̂) + w(ξ̂, 0) ≥ Mw(·, 0)(x).

Coming back to the assumption (2.10) on c, let us replace c with c_ε(x, ξ) = 1_{x≠ξ} ε + c(x, ξ); then the strict inequality holds in assumption (2.10) for c_ε, and letting ε → 0, we also get Mw_0(x) ≥ Mw(·, 0)(x). Hence, this together with (5.14) gives (5.12) and, therefore, (5.11).

Continuing with the proof of Proposition 5.5: the equality (5.11), obtained in the above Lemma 5.6, used in (3.15), gives

(5.15) w(x, y) = E_{xy} [ ∫_0^{τ_1} (f(x_t, y_t) − μ_0) dt + min{ w(x_{τ_1}, 0), Mw(x_{τ_1}, 0) } ].

Note that y_t = y + t on [0, τ_1[ under P_{xy}.

Using the law of τ_1, one can write

w(x, y) = E_{xy} ∫_0^∞ λ(x_t, y+t) exp(−∫_0^t λ(x_r, y+r) dr) [ ∫_0^t (f(x_s, y+s) − μ_0) ds ] dt + E_{xy} ∫_0^∞ λ(x_t, y+t) exp(−∫_0^t λ(x_r, y+r) dr) min{ w(x_t, 0), Mw(x_t, 0) } dt.

After integrating by parts, the first term becomes

E_{xy} ∫_0^∞ exp(−∫_0^t λ(x_r, y+r) dr) (f(x_t, y+t) − μ_0) dt;

therefore,

w(x, y) = E_{xy} ∫_0^∞ exp(−∫_0^t λ(x_r, y+r) dr) [ f(x_t, y+t) − μ_0 + λ(x_t, y+t) min{ w(x_t, 0), Mw(x_t, 0) } ] dt.

Since

min{ w(x, 0), Mw(x, 0) } = w(x, 0) − [ w(x, 0) − Mw(x, 0) ]^+,

one gets

(5.16) w(x, y) = E_{xy} ∫_0^∞ exp(−∫_0^t λ(x_r, y+r) dr) φ(x_t, y+t) dt,

with

(5.17) φ(x, y) = f(x, y) − μ_0 + λ(x, y) w(x, 0) − λ(x, y) [ w(x, 0) − Mw(x, 0) ]^+.

It is clear that (x_t, y+t) is a homogeneous Markov process, and we can consider probabilities P̃_{xy} such that

Ẽ_{xy} g(x_t, y_t) = Φ(t)g(·, y + t)(x),

and write (5.16) as

(5.18) w(x, y) = Ẽ_{xy} ∫_0^∞ exp(−∫_0^t λ(x_r, y_r) dr) φ(x_t, y_t) dt.

By the Markov property, (5.18) gives

(5.19) w(x, y) = Ẽ_{xy} ∫_0^t exp(−∫_0^s λ(x_r, y_r) dr) φ(x_s, y_s) ds + Ẽ_{xy} [ exp(−∫_0^t λ(x_r, y_r) dr) w(x_t, y_t) ].

From (5.19) one can check that the process

(5.20) M_t = ∫_0^t exp(−∫_0^s λ(x_r, y_r) dr) φ(x_s, y_s) ds + exp(−∫_0^t λ(x_r, y_r) dr) w(x_t, y_t)

is a martingale for the probability P̃_{xy}. Then we use the following lemma, which is a slight modification of Lemma 3.3 in Bensoussan and Lions [5, p. 354], so we skip its proof.

Lemma 5.7. Let ψ_t, ζ_t, and v_t be bounded adapted processes. If

M_t = ∫_0^t ψ_s ds + ζ_t

is a martingale, then the process

ρ_t = ζ_t exp(−∫_0^t v_s ds) + ∫_0^t (ψ_s + v_s ζ_s) exp(−∫_0^s v_r dr) ds

is also a martingale.

Now, let us apply the previous lemma to

ζ_t = exp(−∫_0^t λ(x_r, y_r) dr) w(x_t, y_t), ψ_t = exp(−∫_0^t λ(x_r, y_r) dr) φ(x_t, y_t),

with v_t = α − λ(x_t, y_t), α > 0. From (5.20) we deduce that

ρ_t = e^{−αt} w(x_t, y_t) + ∫_0^t e^{−αs} [ φ(x_s, y_s) + (α − λ(x_s, y_s)) w(x_s, y_s) ] ds

is a martingale and, therefore,

w(x, y) = Ẽ_{xy} [ e^{−αt} w(x_t, y_t) + ∫_0^t e^{−αs} ( φ(x_s, y_s) + (α − λ(x_s, y_s)) w(x_s, y_s) ) ds ]

and, letting t → ∞,

(5.21) w(x, y) = Ẽ_{xy} ∫_0^∞ e^{−αt} [ f(x_t, y_t) − μ_0 + λ(x_t, y_t)(w(x_t, 0) − w(x_t, y_t)) + α w(x_t, y_t) − λ(x_t, y_t)[w(x_t, 0) − Mw(x_t, 0)]^+ ] dt,

with the definition of φ in (5.17). But since w is a bounded and continuous function, as well as f, the expression (5.21) gives the resolvent of the process (x_t, y+t), the generator of which is A_x + ∂/∂y;

therefore, we can write

(5.22) −A_x w(x, y) − (∂w/∂y)(x, y) + αw(x, y) = f(x, y) − μ_0 + αw(x, y) + λ(x, y) [ w(x, 0) − w(x, y) ] − λ(x, y) [ w(x, 0) − Mw(x, 0) ]^+.

Rearranging the terms, and recalling the expression (2.4) for A_{xy}, we deduce (5.10).

We can now state the following theorem.

Theorem 5.8. Under the assumptions of section 2.1 and (5.1), we have

(5.23) μ_0 = inf{J(x, y, ν) : ν ∈ V} = J(x, y, ν̂),

where μ_0 is given by Theorem 4.2 and J is defined by (2.13), and ν̂ is defined as in Corollary 5.3, i.e., ν̂ = (θ̂_i, ξ̂_i : i ≥ 1) is the admissible Markov impulse control corresponding to the stopping region S_0 = {x ∈ E : w_0(x) = Mw_0(x)} and impulse function ξ_0(x) = ξ̂(x), which is a Borel measurable optimal selector of Mw_0(x); see Definition 2.3 and assumption (2.12).

Proof. From (5.22), we have

(5.24) −A_{xy} w(x, y) + αw(x, y) = f(x, y) − μ_0 + αw(x, y) − λ(x, y) [ w(x, 0) − Mw(x, 0) ]^+,

which implies, in particular, that

M^α_T = ∫_0^T ( f(x_t, y_t) − μ_0 + αw(x_t, y_t) ) e^{−αt} dt + w(x_T, y_T) e^{−αT}

is a submartingale. Since w is bounded, one can let α → 0 in M^α_T to deduce that

M_T = ∫_0^T ( f(x_t, y_t) − μ_0 ) dt + w(x_T, y_T)

is a submartingale. For an arbitrary impulse control ν = (θ_i, ξ_i : i ≥ 1) in V, we have

w(x, y) ≤ E^ν_{xy} [ ∫_0^{θ_1∧T} ( f(x_t, y_t) − μ_0 ) dt + w(x_{θ_1∧T}, y_{θ_1∧T}) ],

i.e.,

w(x, y) ≤ E^ν_{xy} [ ∫_0^T ( f(x_t, y_t) − μ_0 ) dt − 1_{θ_1≤T} ∫_{θ_1}^T ( f(x_t, y_t) − μ_0 ) dt + 1_{θ_1≤T} w(x_{θ_1}, 0) + 1_{θ_1>T} w(x_T, y_T) ].

Moreover, note that, even if we do not have, in general, w ≤ Mw, we do have

w(x_{θ_1}, 0) ≤ c(x_{θ_1}, ξ_1) + w(ξ_1, 0)

at the times of the impulses.

So we have

w(x, y) ≤ E^ν_{xy} [ ∫_0^T ( f(x_t, y_t) − μ_0 ) dt + 1_{θ_1≤T} c(x_{θ_1}, ξ_1) + 1_{θ_1≤T} w(ξ_1, 0) − 1_{θ_1≤T} ∫_{θ_1}^T ( f(x_t, y_t) − μ_0 ) dt + 1_{θ_1>T} w(x_T, y_T) ],

and since, by the submartingale property,

E^ν_{xy} [ 1_{θ_1≤T} ( w(ξ_1, 0) − ∫_{θ_1}^T ( f(x_t, y_t) − μ_0 ) dt − w(x_T, y_T) ) ] ≤ 0,

we obtain

w(x, y) ≤ E^ν_{xy} [ ∫_0^T ( f(x_t, y_t) − μ_0 ) dt + 1_{θ_1≤T} c(x_{θ_1}, ξ_1) + w(x_T, y_T) ];

iterating this argument, we deduce

w(x, y) ≤ E^ν_{xy} [ ∫_0^T ( f(x_t, y_t) − μ_0 ) dt + Σ_{i≥1} 1_{θ_i≤T} c(x^{i−1}_{θ_i}, ξ_i) + w(x_T, y_T) ],

which implies that

μ_0 ≤ J(x, y, ν) for all ν ∈ V.

Next, using the control ν̂ as defined in Corollary 5.3, we also have μ_0 = J(x, y, ν̂), and equality (5.23) follows.

6. Extensions. For instance, we reconsider our assumptions on the space E and on the signal process.

6.1. Locally compact E. When E is locally compact, we take C(E) = C_b(E), and we replace (2.1) by the assumption Φ(t)C_0 ⊂ C_0, C_0 being the space of continuous functions vanishing at infinity; from Palczewski and Stettner [32, Corollary 2.2], then Φ(t)C_b ⊂ C_b and lim_{t→0} Φ(t)g(x) = g(x) uniformly on every compact subset of E. The rest of the assumptions of sections 2.1 and 2.2 remain the same. Then there is no difficulty in extending the result of section 4, but assumption (5.1) is no longer sufficient to extend the results of section 5. One possible additional assumption is to require that h(x, 0) be bounded. This is the case when the Poisson equation A_{xy} h = f̄ − f has a bounded solution (see Stettner [38] for conditions giving this property), but the extension of the results under more general assumptions would require further work. Note that in the particular case when λ is independent of x and f depends only on x, we can adapt the proof of Theorem 5.1 by using only h(x), defined as the solution of the discrete time Poisson equation (P − I)h = l̄ − l, which has a bounded solution under (2.6), with l̄ being the integral of l with respect to the invariant measure of P, which is in this case the same as the invariant measure of Φ(t).
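The discrete-time Poisson equation (P − I)h = l̄ − l mentioned above can be solved directly on a finite chain by appending the normalization π h = 0 (h being defined only up to a constant). A toy sketch with assumed kernel and costs:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy finite chain (assumed data): solve (P - I) h = lbar - l, where lbar
# is the integral of l against the invariant measure pi of P.
n = 5
P = rng.random((n, n)) + 0.3
P /= P.sum(axis=1, keepdims=True)

# Invariant measure pi: left eigenvector of P for the eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

l = rng.uniform(0.0, 2.0, n)
lbar = pi @ l

# Stack the normalization pi @ h = 0 under (I - P) h = l - lbar; the system
# is consistent because pi @ (l - lbar) = 0, so least squares solves it exactly.
A = np.vstack([np.eye(n) - P, pi])
b = np.concatenate([l - lbar, [0.0]])
h = np.linalg.lstsq(A, b, rcond=None)[0]

residual = np.max(np.abs((np.eye(n) - P) @ h - (l - lbar)))
print(float(residual))  # ~0: h solves the Poisson equation
```

On a finite chain a bounded solution always exists; the Doeblin-type condition (2.6) is what guarantees boundedness in the general state-space setting of the paper.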
Let us mention that assumption (2.6) is also relatively restrictive when $E$ is locally compact; actually, considering, for instance, a nondegenerate diffusion process in $\mathbb{R}^d$, one cannot assume (2.8). A less restrictive assumption is to replace (2.6) by

(6.1)  there exist a recurrent set $K$, a positive measure $m$, and $0 < \alpha < 1$ such that $P(x,B) \ge \alpha\, \mathbf{1}_K(x)\, m(B)$ for every $B \in \mathcal{B}(E)$, with $0 < m(K) \le 1$.
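Condition (6.1) is a Doeblin-type minorization. For a finite-state kernel it can be checked directly by taking $m$ proportional to the entrywise minimum of the rows indexed by $K$; the sketch below uses a hypothetical four-state kernel (not one appearing in the paper).

```python
import numpy as np

# Hypothetical 4-state transition matrix; states 0 and 1 play the
# role of the recurrent set K in (6.1).
P = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.3, 0.4, 0.1, 0.2],
              [0.1, 0.1, 0.7, 0.1],
              [0.2, 0.1, 0.1, 0.6]])
K = [0, 1]

# Entrywise minimum over the rows indexed by K gives a sub-probability
# measure nu with nu(B) <= P(x, B) for every x in K; normalizing it
# yields the probability measure m and the constant alpha = nu(E).
nu = P[K].min(axis=0)
alpha = nu.sum()
m = nu / alpha

# Check (6.1): P(x, B) >= alpha * 1_K(x) * m(B), B running over singletons.
for x in range(4):
    lower = alpha * m if x in K else np.zeros(4)
    assert np.all(P[x] >= lower - 1e-12)
assert 0.0 < alpha < 1.0 and 0.0 < m[K].sum() <= 1.0
```

Since every measure is determined by its values on singletons here, verifying the bound columnwise verifies it for every Borel set $B$.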

As shown in Höpfner and Löcherbach [18], if, for instance, $x_t$ is a diffusion process which is Harris recurrent, then, in particular, there exist $K$, $\alpha_0$, $m$ as in (6.1) such that

$k_1 \int_0^\infty e^{-k_1 t}\, \Phi(t) \mathbf{1}_B(x)\, dt \ge \alpha_0\, \mathbf{1}_K(x)\, m(B)$,

from which we deduce

$P(x,B) \ge \frac{k_0 \alpha_0}{k_1}\, \mathbf{1}_K(x)\, m(B)$;

this is (6.1) with $\alpha = k_0 \alpha_0 / k_1$. Note that, for example, nondegenerate diffusions for which there exists an adequate Lyapunov function are Harris recurrent; see Löcherbach [26] for details. Now, to obtain a solution of (4.9), one can adapt the results of Luque-Vásquez and Hernández-Lerma [27], which has an ergodicity assumption of the type (6.1). Then, the results of section 5 can be extended with (6.1), at least when $h(x,0)$ is bounded.

6.2. Infinite dimension. If $(x_t : t \ge 0)$ takes values in an infinite-dimensional space, then the prototype could be given by a stochastic partial differential equation, where $E$ is a Banach or Hilbert space (e.g., Menaldi and Sritharan [31]). Several techniques are available to treat optimal stopping and impulse control problems in this context (e.g., see [28], Priola [34], and the discussion in [29, section 5.1.2]). Moreover, see the book by Da Prato and Zabczyk [9] for some results on ergodicity in infinite dimensions. Actually, this includes a weak $C_b$-semigroup, but calculations are harder, even under suitable assumptions. However, further work is needed to fully analyze and to really include infinite dimensions.

6.3. Other signal processes. The signal admits several generalizations, e.g., a sticky signal, i.e., when the process $y_t$ may remain for a positive time at $0$, so that the controller has a positive continuous-time interval where impulses can be applied. Indeed, this can be regarded as signals on/off by simply assuming that the process $y_t$ takes values in $\mathbb{R}$, and only while $y_t = 0$ are impulses allowed or admissible.
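Such an on/off signal can be illustrated numerically. Assuming, as elsewhere in the paper, a signal intensity bounded by a constant $M$, the signal's jump times can be simulated by thinning a Poisson process of rate $M$: candidate points arrive at rate $M$ and each is accepted with probability $\lambda/M$. The sketch below uses a hypothetical two-state ("on"/"off") signal whose intensity depends on the elapsed time since the last jump; none of the numerical choices come from the paper.

```python
import random

M = 2.0  # upper bound on the signal intensity lambda

def lam(y1, y2):
    # Hypothetical intensity, bounded by M, depending on the current
    # state y1 and the elapsed time y2 since the last jump.
    return min(M, 0.5 + 0.5 * y2) if y1 == "on" else 1.0

def flip(y1):
    # Hypothetical jump kernel on {"on", "off"}: deterministic switch.
    return "off" if y1 == "on" else "on"

def simulate_signal(T, y1="off", seed=0):
    """Thinning: candidate points at rate M, accepted w.p. lam/M."""
    rng = random.Random(seed)
    t, jumps = 0.0, []
    while True:
        t += rng.expovariate(M)  # next candidate point
        if t > T:
            return jumps
        y2 = t - (jumps[-1][0] if jumps else 0.0)  # elapsed time
        if rng.random() < lam(y1, y2) / M:
            y1 = flip(y1)        # jump: switch state, clock resets
            jumps.append((t, y1))

jumps = simulate_signal(T=100.0)
assert all(0.0 < t <= 100.0 for t, _ in jumps)
```

Impulses would then be admissible only at the recorded jump times (or, in the sticky variant, throughout the intervals where the signal stays "on").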
Even more general is the case where the process giving the signals is a semi-Markov process, which covers both the independent and identically distributed case and the pure jump Markov processes, conditioned to the initial Markov process $x_t$. Assume that $(y^1_t : t \ge 0)$ is a semi-Markov process with values in a space $E_1$ (with the discrete topology and the Borel $\sigma$-algebra) and $(y_t = (y^1_t, y^2_t) : t \ge 0)$ is the appropriate Markov process, where $(y^2_t : t \ge 0)$ is the elapsed time since the last jump of $(y^1_t : t \ge 0)$; e.g., see Davis [10, Appendix], Gikhman and Skorokhod [15, section III.3], Jacod [19], and Robin [35], among others. If we are given $\lambda : E \times E_1 \times [0,\infty[ \to [0,\infty[$ satisfying

$\lambda(x, y^1, y^2) \le M$,  $(x, y^1, y^2) \mapsto \lambda(x, y^1, y^2)$ continuous,

and a transition probability $q(x, y^1, y^2, \Gamma)$ with

$(x, y^2) \mapsto \int_{E_1} q(x, y^1, y^2, dz)\, \varphi(z)$ continuous,

for every $\varphi$ bounded measurable on $E_1$, one can show that $(y_t : t \ge 0)$ can be constructed as a Markov process with infinitesimal generator, for a given $x$ in $E$,

$A^y_x f(y^1, y^2) = \frac{\partial f(y^1, y^2)}{\partial y^2} + \lambda(x, y^1, y^2) \Big[ \int_{E_1} q(x, y^1, y^2, dz)\, f(z, 0) - f(y^1, y^2) \Big]$,

and $A_{xy} = A_x + A^y_x$ is the infinitesimal generator of the Markov Feller process $(x_t, y_t)$. In this case, a given (proper or not) subset $D$ of the space $E_1$ determines whether or not impulses are allowed. Calculations are complicated, but perhaps not insurmountable. These types of models are particular cases of general hybrid control models, which are considered in more detail in a coming book by Jasso-Fuentes, Menaldi, and Robin [20].

REFERENCES

[1] A. Arapostathis, V. S. Borkar, E. Fernández-Gaucherand, M. K. Ghosh, and S. I. Marcus, Discrete-time controlled Markov processes with average cost criterion: A survey, SIAM J. Control Optim.
[2] A. Bensoussan, Stochastic Control by Functional Analysis Methods, North-Holland, Amsterdam.
[3] A. Bensoussan, Perturbation Methods in Optimal Control, Gauthier-Villars, Paris.
[4] A. Bensoussan, Dynamic Programming and Inventory Control, IOS Press, Amsterdam, 2011.
[5] A. Bensoussan and J.-L. Lions, Applications des inéquations variationnelles en contrôle stochastique, Dunod, Paris.
[6] A. Bensoussan and J.-L. Lions, Contrôle impulsionnel et inéquations quasi-variationnelles, Gauthier-Villars, Paris.
[7] P. Brémaud, Optimal thinning of a point process, SIAM J. Control Optim.
[8] O. L. V. Costa, F. Dufour, and A. B. Piunovskiy, Constrained and unconstrained optimal discounted control of piecewise deterministic Markov processes, SIAM J. Control Optim.
[9] G. Da Prato and J. Zabczyk, Ergodicity for Infinite-Dimensional Systems, Cambridge University Press, Cambridge.
[10] M. H. A. Davis, Markov Models and Optimization, Chapman & Hall, London.
[11] P. Dupuis and H. Wang, Optimal stopping with random intervention times, Adv. Appl. Probab., 34 (2002).
[12] M. Garroni and J. Menaldi, Green Functions for Second Order Parabolic Integro-Differential Problems, Longman Scientific & Technical, Harlow.
[13] M. G. Garroni and J. L.
Menaldi, Second Order Elliptic Integro-Differential Problems, Chapman & Hall/CRC, Boca Raton, FL, 2002.
[14] D. Gatarek and L. Stettner, On the compactness method in general ergodic impulsive control of Markov processes, Stoch. Stoch. Rep.
[15] I. Gikhman and A. Skorokhod, The Theory of Stochastic Processes. II, Springer, Berlin, 2004.
[16] O. Hernández-Lerma and J. Lasserre, Discrete-Time Markov Control Processes, Springer, New York.
[17] O. Hernández-Lerma and J. Lasserre, Further Topics on Discrete-Time Markov Control Processes, Springer, New York.
[18] R. Höpfner and E. Löcherbach, Limit Theorems for Null Recurrent Markov Processes, Mem. Amer. Math. Soc. 161, 2003.
[19] J. Jacod, Semi-groupes et mesures invariantes pour les processus semi-markoviens à espace d'état quelconque, Ann. Inst. H. Poincaré Sect. B.
[20] H. Jasso-Fuentes, J. Menaldi, and M. Robin, Hybrid Control for Markov-Feller Processes, to appear.
[21] M. Kurano, Semi-Markov decision processes and their applications in replacement models, J. Oper. Res. Soc. Japan.
[22] M. Kurano, Semi-Markov decision processes with a reachable state-subset, Optimization.
[23] J. P. Lepeltier and B. Marchal, Théorie générale du contrôle impulsionnel markovien, SIAM J. Control Optim.
[24] G. Liang, Stochastic control representations for penalized backward stochastic differential equations, SIAM J. Control Optim.
[25] G. Liang and W. Wei, Optimal switching at Poisson random intervention times, Discrete Contin. Dyn. Syst. Ser. B, to appear.
[26] E. Löcherbach, Ergodicity and Speed of Convergence to Equilibrium for Diffusion Processes,

[27] F. Luque-Vásquez and O. Hernández-Lerma, Semi-Markov control models with average costs, Appl. Math. (Warsaw).
[28] J. Menaldi, Stochastic Hybrid Optimal Control Models, in Stochastic Models, II (Guanajuato, 2000), Aportaciones Mat. Investig. 16, Sociedad Matemática Mexicana, México, 2001.
[29] J. L. Menaldi and M. Robin, On some optimal stopping problems with constraint, SIAM J. Control Optim.
[30] J. L. Menaldi and M. Robin, On some impulse control problems with constraint, SIAM J. Control Optim.
[31] J. Menaldi and S. Sritharan, Impulse control of stochastic Navier-Stokes equations, Nonlinear Anal., 52 (2003).
[32] J. Palczewski and L. Stettner, Finite horizon optimal stopping of time-discontinuous functionals with applications to impulse control with delay, SIAM J. Control Optim., 48 (2010).
[33] T. Prieto-Rumeau and O. Hernández-Lerma, Bias optimality for continuous-time controlled Markov chains, SIAM J. Control Optim., 45 (2006).
[34] E. Priola, On a class of Markov type semigroups in spaces of uniformly continuous and bounded functions, Studia Math.
[35] M. Robin, Contrôle impulsionnel des processus de Markov, Thèse d'état, Paris Dauphine University, Paris, 1978.
[36] L. Stettner, Discrete time adaptive impulsive control theory, Stochastic Process. Appl.
[37] L. Stettner, On ergodic impulsive control problems, Stochastics.
[38] L. Stettner, On the Poisson equation and optimal stopping of ergodic Markov processes, Stochastics.
[39] H. Wang, Some control problems with random intervention times, Adv. Appl. Probab., 33 (2001).


Recurrence and Ergodicity for A Class of Regime-Switching Jump Diffusions To appear in Appl. Math. Optim. Recurrence and Ergodicity for A Class of Regime-Switching Jump Diffusions Xiaoshan Chen, Zhen-Qing Chen, Ky Tran, George Yin Abstract This work develops asymptotic properties

More information

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974 LIMITS FOR QUEUES AS THE WAITING ROOM GROWS by Daniel P. Heyman Ward Whitt Bell Communications Research AT&T Bell Laboratories Red Bank, NJ 07701 Murray Hill, NJ 07974 May 11, 1988 ABSTRACT We study the

More information

HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS. Josef Teichmann

HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS. Josef Teichmann HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS Josef Teichmann Abstract. Some results of ergodic theory are generalized in the setting of Banach lattices, namely Hopf s maximal ergodic inequality and the

More information

Author(s) Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 41(1)

Author(s) Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 41(1) Title On the stability of contact Navier-Stokes equations with discont free b Authors Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 4 Issue 4-3 Date Text Version publisher URL

More information

ANALYSIS AND APPLICATION OF DIFFUSION EQUATIONS INVOLVING A NEW FRACTIONAL DERIVATIVE WITHOUT SINGULAR KERNEL

ANALYSIS AND APPLICATION OF DIFFUSION EQUATIONS INVOLVING A NEW FRACTIONAL DERIVATIVE WITHOUT SINGULAR KERNEL Electronic Journal of Differential Equations, Vol. 217 (217), No. 289, pp. 1 6. ISSN: 172-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ANALYSIS AND APPLICATION OF DIFFUSION EQUATIONS

More information

Chapter 1. Poisson processes. 1.1 Definitions

Chapter 1. Poisson processes. 1.1 Definitions Chapter 1 Poisson processes 1.1 Definitions Let (, F, P) be a probability space. A filtration is a collection of -fields F t contained in F such that F s F t whenever s

More information

Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems

Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems Jingnan Fan Andrzej Ruszczyński November 5, 2014; revised April 15, 2015 Abstract For controlled discrete-time

More information

Stochastic 2-D Navier-Stokes Equation

Stochastic 2-D Navier-Stokes Equation Wayne State University Mathematics Faculty Research Publications Mathematics 1-1-22 Stochastic 2-D Navier-Stokes Equation J. L. Menaldi Wayne State University, menaldi@wayne.edu S. S. Sritharan United

More information

The Azéma-Yor Embedding in Non-Singular Diffusions

The Azéma-Yor Embedding in Non-Singular Diffusions Stochastic Process. Appl. Vol. 96, No. 2, 2001, 305-312 Research Report No. 406, 1999, Dept. Theoret. Statist. Aarhus The Azéma-Yor Embedding in Non-Singular Diffusions J. L. Pedersen and G. Peskir Let

More information

Optimal Control of Partially Observable Piecewise Deterministic Markov Processes

Optimal Control of Partially Observable Piecewise Deterministic Markov Processes Optimal Control of Partially Observable Piecewise Deterministic Markov Processes Nicole Bäuerle based on a joint work with D. Lange Wien, April 2018 Outline Motivation PDMP with Partial Observation Controlled

More information

Convergence rate estimates for the gradient differential inclusion

Convergence rate estimates for the gradient differential inclusion Convergence rate estimates for the gradient differential inclusion Osman Güler November 23 Abstract Let f : H R { } be a proper, lower semi continuous, convex function in a Hilbert space H. The gradient

More information

Continuous-Time Markov Decision Processes. Discounted and Average Optimality Conditions. Xianping Guo Zhongshan University.

Continuous-Time Markov Decision Processes. Discounted and Average Optimality Conditions. Xianping Guo Zhongshan University. Continuous-Time Markov Decision Processes Discounted and Average Optimality Conditions Xianping Guo Zhongshan University. Email: mcsgxp@zsu.edu.cn Outline The control model The existing works Our conditions

More information

Compensating operator of diffusion process with variable diffusion in semi-markov space

Compensating operator of diffusion process with variable diffusion in semi-markov space Compensating operator of diffusion process with variable diffusion in semi-markov space Rosa W., Chabanyuk Y., Khimka U. T. Zakopane, XLV KZM 2016 September 9, 2016 diffusion process September with variable

More information

c 2004 Society for Industrial and Applied Mathematics

c 2004 Society for Industrial and Applied Mathematics SIAM J. COTROL OPTIM. Vol. 43, o. 4, pp. 1222 1233 c 2004 Society for Industrial and Applied Mathematics OZERO-SUM STOCHASTIC DIFFERETIAL GAMES WITH DISCOTIUOUS FEEDBACK PAOLA MAUCCI Abstract. The existence

More information

Entropy and Ergodic Theory Lecture 15: A first look at concentration

Entropy and Ergodic Theory Lecture 15: A first look at concentration Entropy and Ergodic Theory Lecture 15: A first look at concentration 1 Introduction to concentration Let X 1, X 2,... be i.i.d. R-valued RVs with common distribution µ, and suppose for simplicity that

More information

SUPPLEMENT TO PAPER CONVERGENCE OF ADAPTIVE AND INTERACTING MARKOV CHAIN MONTE CARLO ALGORITHMS

SUPPLEMENT TO PAPER CONVERGENCE OF ADAPTIVE AND INTERACTING MARKOV CHAIN MONTE CARLO ALGORITHMS Submitted to the Annals of Statistics SUPPLEMENT TO PAPER CONERGENCE OF ADAPTIE AND INTERACTING MARKO CHAIN MONTE CARLO ALGORITHMS By G Fort,, E Moulines and P Priouret LTCI, CNRS - TELECOM ParisTech,

More information

Threshold behavior and non-quasiconvergent solutions with localized initial data for bistable reaction-diffusion equations

Threshold behavior and non-quasiconvergent solutions with localized initial data for bistable reaction-diffusion equations Threshold behavior and non-quasiconvergent solutions with localized initial data for bistable reaction-diffusion equations P. Poláčik School of Mathematics, University of Minnesota Minneapolis, MN 55455

More information

INFINITE TIME HORIZON OPTIMAL CONTROL OF THE SEMILINEAR HEAT EQUATION

INFINITE TIME HORIZON OPTIMAL CONTROL OF THE SEMILINEAR HEAT EQUATION Nonlinear Funct. Anal. & Appl., Vol. 7, No. (22), pp. 69 83 INFINITE TIME HORIZON OPTIMAL CONTROL OF THE SEMILINEAR HEAT EQUATION Mihai Sîrbu Abstract. We consider here the infinite horizon control problem

More information

Controllability of linear PDEs (I): The wave equation

Controllability of linear PDEs (I): The wave equation Controllability of linear PDEs (I): The wave equation M. González-Burgos IMUS, Universidad de Sevilla Doc Course, Course 2, Sevilla, 2018 Contents 1 Introduction. Statement of the problem 2 Distributed

More information

BSDEs and PDEs with discontinuous coecients Applications to homogenization K. Bahlali, A. Elouain, E. Pardoux. Jena, March 2009

BSDEs and PDEs with discontinuous coecients Applications to homogenization K. Bahlali, A. Elouain, E. Pardoux. Jena, March 2009 BSDEs and PDEs with discontinuous coecients Applications to homogenization K. Bahlali, A. Elouain, E. Pardoux. Jena, 16-20 March 2009 1 1) L p viscosity solution to 2nd order semilinear parabolic PDEs

More information

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 2, pp. 401 412 (2013) http://campus.mst.edu/adsa Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

On the Bellman equation for control problems with exit times and unbounded cost functionals 1

On the Bellman equation for control problems with exit times and unbounded cost functionals 1 On the Bellman equation for control problems with exit times and unbounded cost functionals 1 Michael Malisoff Department of Mathematics, Hill Center-Busch Campus Rutgers University, 11 Frelinghuysen Road

More information

Some Properties of NSFDEs

Some Properties of NSFDEs Chenggui Yuan (Swansea University) Some Properties of NSFDEs 1 / 41 Some Properties of NSFDEs Chenggui Yuan Swansea University Chenggui Yuan (Swansea University) Some Properties of NSFDEs 2 / 41 Outline

More information

Stability of an abstract wave equation with delay and a Kelvin Voigt damping

Stability of an abstract wave equation with delay and a Kelvin Voigt damping Stability of an abstract wave equation with delay and a Kelvin Voigt damping University of Monastir/UPSAY/LMV-UVSQ Joint work with Serge Nicaise and Cristina Pignotti Outline 1 Problem The idea Stability

More information

An introduction to Mathematical Theory of Control

An introduction to Mathematical Theory of Control An introduction to Mathematical Theory of Control Vasile Staicu University of Aveiro UNICA, May 2018 Vasile Staicu (University of Aveiro) An introduction to Mathematical Theory of Control UNICA, May 2018

More information

Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces

Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces An. Şt. Univ. Ovidius Constanţa Vol. 19(1), 211, 331 346 Some unified algorithms for finding minimum norm fixed point of nonexpansive semigroups in Hilbert spaces Yonghong Yao, Yeong-Cheng Liou Abstract

More information

On nonexpansive and accretive operators in Banach spaces

On nonexpansive and accretive operators in Banach spaces Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 3437 3446 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa On nonexpansive and accretive

More information

Chapter 4. Inverse Function Theorem. 4.1 The Inverse Function Theorem

Chapter 4. Inverse Function Theorem. 4.1 The Inverse Function Theorem Chapter 4 Inverse Function Theorem d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d dd d d d d This chapter

More information