An Ergodic Control Problem for Constrained Diffusion Processes: Existence of Optimal Markov Control.


Amarjit Budhiraja
Department of Statistics
University of North Carolina at Chapel Hill
Chapel Hill, NC

April 5, 2001

Abstract

An ergodic control problem for a class of constrained diffusion processes is considered. The goal is the almost sure minimization of long term cost per unit time. The main result of the paper is that there exists an optimal Markov control for the considered problem. It is shown that, under the assumption of regularity of the Skorohod map and appropriate conditions on the drift coefficient, the class of controlled diffusion processes considered has strong, uniform in control, stability properties. The stability results on the class of controlled constrained diffusion processes considered in this work are non-trivial in that the domains are unbounded and the corresponding unconstrained diffusions are typically transient. These stability properties are key in obtaining appropriate tightness estimates. Once these estimates are available, the remaining work lies in identifying weak limits of a certain family of occupation measures. In this regard an extension of the Echeverria-Weiss-Kurtz characterization of invariant measures of Markov processes, to the case of the constrained-controlled processes considered in this paper, is proved. This characterization result is also crucially used in proving the compactness of the family of invariant measures of Markov processes corresponding to all possible Markov controls.

Keywords: Ergodic control, Optimal Markov control, Controlled reflected diffusions, Constrained processes, Control of queuing networks, Patchwork martingale problem, Controlled martingale problem, Echeverria's criterion.

1 Introduction

Constrained diffusion processes arise in a natural fashion in the heavy traffic analysis of queuing networks coming from problems in computer, communications and manufacturing systems. The problem of control of such queuing systems is of great current interest (cf. [24], [13], [17], [12], [25]). Excepting a few special cases, the control problem for queuing networks is quite difficult to analyze directly and thus one tries to find more tractable approximations. In that respect, diffusion approximations obtained via appropriate scaling limits become very attractive since in the limit many fine details are eliminated and usually the only parameters remaining are the means and variances of the various processes and the mean routing structure. Because of the simple structure, the limit problem is considerably easier to solve. Once the optimal solution to the control problem for the constrained diffusions is obtained, one can then approximate the properties and suggest good policies for the actual physical system. Thus the study of constrained controlled diffusion processes is one of the central objectives in the optimal control of queuing networks.

In this work we consider a control problem for a class of diffusion processes which are constrained to lie in a polyhedral cone $G$ with a vertex at the origin. The domain $G \subseteq \mathbb{R}^k$ is given as an intersection of $N$ half spaces $G_i$; $i = 1, \ldots, N$. Associated with each $G_i$ are two vectors: the first vector, denoted as $n_i$, represents the inward normal to $G_i$ while the second, denoted as $d_i$, gives the direction of constraint. Roughly speaking, the constrained version of a given unrestricted trajectory in $\mathbb{R}^k$ is obtained by pushing back the trajectory, whenever it is about to exit the domain, in a prespecified direction of constraint using the minimal force required to keep the trajectory within the domain. Precise definitions will be given in Section 2. The constraining mechanism is described via the Skorohod map, denoted as $\Gamma(\cdot)$, which takes an unrestricted trajectory $\psi(\cdot)$ and maps it to a trajectory $\phi(\cdot) = \Gamma(\psi)(\cdot)$ such that $\phi(t) \in G$ for all $t \in (0, \infty)$. Under appropriate conditions on $(d_i, n_i)_{i=1}^N$ it follows from the results in [7] that the Skorohod map is well defined and it enjoys a rather strong regularity property (see Theorem 2.3). The controlled constrained diffusion processes that we consider in this paper are obtained as solutions to the equation:

$$X(t) = \Gamma\left( X(0) + \int_0^{\cdot} b(X(s), u(s))\,ds + \int_0^{\cdot} \sigma(X(s))\,dW(s) \right)(t); \quad t \in [0, \infty), \qquad (1.1)$$

where $W(\cdot)$ is a standard Wiener process, $b : G \times U \to \mathbb{R}^k$ and $\sigma : G \to \mathbb{R}^{k \times k}$ are suitable coefficients, $U$ is a given control set and $u(\cdot)$ is a $U$ valued admissible control process. The control problem that we study is concerned with the ergodic cost criterion:

$$\limsup_{T \to \infty} \frac{1}{T} \int_0^T k(X(s), u(s))\,ds, \qquad (1.2)$$

where the limit above is taken almost surely and $k : G \times U \to \mathbb{R}$ is a suitable map. The two key objectives of the controller are: first, to choose a control, in a non-anticipative fashion, which minimizes the cost in (1.2) and second, to obtain a control which is easy to implement. As regards the second goal, one of the most desirable features of a good control is that the control depend only on the current value of the state and not on the whole history of the state and/or the control process. In other words, we are seeking controls $u(\cdot)$ such that there exists some measurable map $v : G \to U$ satisfying $u(t) = v(X(t))$, a.s., for all $t \in [0, \infty)$. Under such a control the solution to (1.1) becomes a Markov process and for this reason the map $v(\cdot)$ is referred to as a Markov control. The objective of this work is to show that, under appropriate conditions on the model (cf. Conditions 2.2, 2.4, 2.5, 3.1), there is a Markov control which minimizes the cost in (1.2).

The ergodic control problem for unconstrained diffusions is one of the classical problems in stochastic control. The problem has been studied extensively in [32, 35, 23, 4, 3, 33, 15, 22]. The approach taken in the present paper has been inspired by the techniques and results in [4]. For ergodic control results on constrained jump-diffusion processes in bounded domains and the expected long term cost per unit time criterion, we refer the reader to [2, 26, 24]. For the case of constrained diffusions in unbounded domains, we are not aware of any results which give the existence of an optimal Markov control under the ergodic cost criterion considered in this paper.

As is classical in an ergodic control problem of the above form (cf. [3], [33], [24], [22]), the problem of existence of optimal Markov controls under the cost criterion in (1.2) is closely related to certain stability properties of the solutions to (1.1). In a recent work [1], which dealt with the case of uncontrolled constrained diffusions, various stability properties of the solution to

$$\xi(t) = \Gamma\left( \xi(0) + \int_0^{\cdot} \beta(\xi(s))\,ds + \int_0^{\cdot} \sigma(\xi(s))\,dW(s) \right)(t); \quad t \in [0, \infty), \qquad (1.3)$$

where $\beta : \mathbb{R}^k \to \mathbb{R}^k$ is an appropriate drift vector, were obtained. In particular it was shown that if for all $x \in G$, $\beta(x)$ lies in a certain cone $\mathcal{C}$ (see (3.2)) and its distance from the boundary of the cone is uniformly bounded below by a positive constant, then the constrained diffusion is positive recurrent and has a unique invariant measure. The above results identify an important, non-trivial class of ergodic constrained diffusions in unbounded domains, in the sense that the corresponding unconstrained version of these processes would typically be transient. To see this one only needs to consider the case where $b(x, u) \equiv b$, where $b$ is some fixed vector in $\mathcal{C}$. For this latter case ($b(x, u) \equiv b$), the cone, in fact, provides a necessary and sufficient condition for positive recurrence of constrained diffusions, i.e. if $b \notin \mathcal{C}$ then the corresponding constrained diffusion is transient [5]. In the context of the constant drift case, this necessary and sufficient condition for positive recurrence, for constrained diffusions which correspond to single class networks, was first proved in [14].

The estimates used in the study of the stability properties of the uncontrolled system in (1.3) can be used for the controlled problem studied in this paper as well. Using these estimates, we will obtain rather strong (uniform in control) stability properties for the processes obtained as solutions to (1.1) (see Section 6). These stability results are then used in various tightness arguments in this paper. As another consequence of the results in [1] we have that, with appropriate assumptions on the drift vector $b(\cdot, \cdot)$, under any Markov control $v(\cdot)$ the solution to (1.1) is positive recurrent and has a unique invariant measure, denoted as $\eta_v$. We abbreviate this statement by saying that all Markov controls are stable. Using this fact and an ergodic theorem of Khasminskii [18] it will follow that under a Markov control $v(\cdot)$ the limit in (1.2) is almost surely equal to:

$$\int_G k(x, v(x))\,\eta_v(dx). \qquad (1.4)$$

The next key step in the program is to show that for any admissible control $u(\cdot)$ the limit in (1.2) can be expressed in the form (1.4) for some measurable $v : G \to U$ which may depend on $\omega$ (the parameter of randomness), the control $u(\cdot)$ and other data. This is done in Proposition 7.5. As an immediate consequence of this step it follows that

$$\limsup_{T \to \infty} \frac{1}{T} \int_0^T k(X(s), u(s))\,ds \ \ge\ \inf_v \int_G k(x, v(x))\,\eta_v(dx), \qquad (1.5)$$

a.s., where the infimum on the right side above is taken over all measurable maps $v : G \to U$. In order to prove the above step we need a characterization result for the invariant measures of solutions of (1.1) (with $u(\cdot) = v(X(\cdot))$ for some measurable $v(\cdot)$). The characterization result, which extends similar results due to Echeverria [10], Weiss [36] and Kurtz [21] to the case of controlled constrained diffusions considered in this paper, is proved in Section 5. The proof uses the ideas of patchwork and constrained martingale problems introduced by Kurtz [20, 21]. The key idea, as in [21], is to show that if (5.2) holds then there is a sequence of solutions to certain patchwork controlled martingale problems (see Section 5 for definitions) which converge to a stationary solution of (1.1). We then use the results on existence of optimal Markov controls (for unconstrained processes) obtained in [22] and the Feller properties of solutions of (1.1) (using a Markov control) to conclude that for this stationary solution $u(\cdot) = v(X(\cdot))$ for some $v : G \to U$. Once this characterization result is available we can identify the limit in (1.2) as an integral with respect to the invariant measure $\eta_v$ for some $v : G \to U$.

As a final step we need to show that the infimum on the right side of (1.5) is attained for some Markov control $v : G \to U$. This step (Proposition 7.4), combined with the ergodic theorem in [18], yields our main theorem: Theorem 3.4. The fact that the infimum is attained is a consequence of the fact that the family $\{\eta_v : v \text{ is a Markov control}\}$ is compact. The proof of the compactness of this family once more uses the characterization result for the invariant

measures studied in Section 5 and various tightness estimates which follow from the stability properties of our controlled diffusions. This is done in Lemma 7.3. Observe that since (1.5) holds almost surely, we can replace the left side of the expression by:

$$\operatorname{ess\,inf}\ \limsup_{T \to \infty} \frac{1}{T} \int_0^T k(X(s), u(s))\,ds.$$

Thus our main result says that there is a Markov control for which the cost, for almost every realization, is no worse than that for the (essentially) best possible realization corresponding to any other control. In this paper we do not address the problem of construction of the optimal Markov control or the approximation of optimal control for physical queuing systems using the optimal Markov control for the limit diffusion. These questions will be addressed in a future work.

The paper is organized as follows. In Section 2 we present the basic definitions and properties of the Skorohod map. We also introduce the main assumptions on our problem data which assure the regularity of the Skorohod map. We then define the class of constrained controlled diffusions considered in the work. The study of Markov controls forces us to consider the solution of (1.1) in a weak sense. This weak formulation is also introduced in the section. Section 3 introduces the ergodic cost problem that interests us in this work. We give our key condition on the drift vector which assures that all Markov controls are stable. Finally in this section we state our main result: Theorem 3.4. Section 4 is an assortment of some background results used in the proof of our main theorem. We state the ergodic theorem of Khasminskii which permits us to write the limit in (1.2) corresponding to a Markov control $v(\cdot)$ as the integral in (1.4). We also present a very useful result from [6] which enables us to control the growth and obtain the tightness of the reflection terms in our constrained diffusions. We then show that we can assume without loss of generality that all our control processes $u(\cdot)$ are adapted to the filtration generated by the state process and the reflection terms. This result is then used to show that certain conditional laws of the constrained diffusions in (1.1) are almost surely the same as the law of some other constrained diffusion of the same form as (1.1) but with possibly a different control process. This result is key to many of the uniform estimates and tightness results in Sections 6 and 7. The last two results, for the case of unconstrained diffusions, are proved in [3], Chapter I, and the proofs for the constrained case are similar. However, for the sake of completeness we provide the proofs of these results. In Section 5 we present our extension of the Echeverria-Weiss-Kurtz criterion for the invariant measures. We define the patchwork and constrained controlled martingale problems of Kurtz and study some of their basic properties. The main result of this section is Theorem 5.7. Section 6 is devoted to obtaining stability properties of our constrained diffusions. This section crucially uses some estimates derived in [1] (cf. Lemmas 6.1 and 6.2). As consequences of these results we prove strong, uniform in control,

tightness properties of the solutions of (1.1). Finally, in Section 7 we present the proof of Theorem 3.4. In the Appendix of this paper we provide an alternate proof of Lemma 4.3 of [1] which, unlike the proof in [1], does not appeal to the Markov property of $\xi(\cdot)$. This lemma is needed for the proof of Theorem 4.4 of [1] (see Remark 6.3) and thus in the proof of Lemma 6.2 of the present paper.

2 Skorohod Map and Controlled-Constrained Diffusions

Let $G \subseteq \mathbb{R}^k$ be a polyhedral cone in $\mathbb{R}^k$ with the vertex at the origin, given as the intersection of half spaces $G_i$, $i = 1, \ldots, N$. Each half space $G_i$ is associated with a unit vector $n_i$ via the relation $G_i = \{x \in \mathbb{R}^k : \langle x, n_i \rangle \ge 0\}$, where $\langle \cdot, \cdot \rangle$ denotes the usual inner product in $\mathbb{R}^k$. Denote the boundary of a set $B \subseteq \mathbb{R}^k$ by $\partial B$. We will denote the set $\{x \in \partial G : \langle x, n_i \rangle = 0\}$ by $F_i$. For $x \in \partial G$, define the set $n(x)$ of inward normals to $G$ at $x$ by

$$n(x) = \{r : |r| = 1,\ \langle r, x - y \rangle \le 0\ \text{for all } y \in G\}.$$

With each face $F_i$ we associate a unit vector $d_i$ such that $\langle d_i, n_i \rangle > 0$. This vector defines the direction of constraint associated with the face $F_i$. For $x \in \partial G$ define

$$d(x) = \left\{ d \in \mathbb{R}^k : d = \sum_{i \in \mathrm{In}(x)} \alpha_i d_i;\ \alpha_i \ge 0;\ |d| = 1 \right\},$$

where

$$\mathrm{In}(x) = \{ i \in \{1, 2, \ldots, N\} : \langle x, n_i \rangle = 0 \}.$$

We will denote the collection of all subsets of $\{1, \ldots, N\}$ by $\Lambda$. Also, for $\lambda \in \Lambda$ we will define $F_\lambda = \cap_{i \in \lambda} F_i$. As a convention we will take $F_\emptyset$ as $\partial G$. Let $D([0, \infty) : \mathbb{R}^k)$ denote the set of functions mapping $[0, \infty)$ to $\mathbb{R}^k$ that are right continuous and have limits from the left. We endow $D([0, \infty) : \mathbb{R}^k)$ with the usual Skorohod topology. Let

$$D_G([0, \infty) : \mathbb{R}^k) = \{\psi \in D([0, \infty) : \mathbb{R}^k) : \psi(0) \in G\}.$$

For $\eta \in D([0, \infty) : \mathbb{R}^k)$ let $|\eta|(T)$ denote the total variation of $\eta$ on $[0, T]$ with respect to the Euclidean norm on $\mathbb{R}^k$.

Definition 2.1 Let $\psi \in D_G([0, \infty) : \mathbb{R}^k)$ be given. Then $(\phi, \eta) \in D([0, \infty) : \mathbb{R}^k) \times D([0, \infty) : \mathbb{R}^k)$ solves the Skorohod problem (SP) for $\psi$ with respect to $G$ and $d$ if and only if $\phi(0) = \psi(0)$, and for all $t \in [0, \infty)$:

1. $\phi(t) = \psi(t) + \eta(t)$;
2. $\phi(t) \in G$;
3. $|\eta|(t) < \infty$;
4. $|\eta|(t) = \int_{[0,t]} I_{\{\phi(s) \in \partial G\}}\,d|\eta|(s)$;
5. There exists a (Borel) measurable $\gamma : [0, \infty) \to \mathbb{R}^k$ such that $\gamma(t) \in d(\phi(t))$, $d|\eta|$-almost everywhere, and

$$\eta(t) = \int_{[0,t]} \gamma(s)\,d|\eta|(s).$$

On the domain $D \subseteq D_G([0, \infty) : \mathbb{R}^k)$ on which there is a unique solution to the Skorohod problem we define the Skorohod map (SM) $\Gamma$ as $\Gamma(\psi) = \phi$, if $(\phi, \psi - \phi)$ is the unique solution of the Skorohod problem posed by $\psi$. We will make the following assumptions on the data defining the Skorohod problem above.

Condition 2.2 (a) There exists a compact, convex set $B \subseteq \mathbb{R}^k$ with $0 \in B$, such that if $v(z)$ denotes the set of inward normals to $B$ at $z \in \partial B$, then for $i = 1, 2, \ldots, N$,

$$z \in \partial B \text{ and } \langle z, n_i \rangle < 1 \text{ imply that } \langle v, d_i \rangle = 0 \text{ for all } v \in v(z).$$

(b) There exists a map $\pi : \mathbb{R}^k \to G$ such that if $y \in G$, then $\pi(y) = y$, and if $y \notin G$, then $\pi(y) \in \partial G$ and $y - \pi(y) = -\alpha\gamma$ for some $\alpha \ge 0$ and $\gamma \in d(\pi(y))$.

(c) For every $x \in \partial G$, there is an $n \in n(x)$ such that $\langle d, n \rangle > 0$ for all $d \in d(x)$.

The above assumptions can be verified for a rich class of problems arising from queuing networks. For example, in the seminal work [11], it was shown that the above properties hold for Skorohod problems associated with open, single class queuing networks (cf. [8]). Other classes of network examples for which the above properties hold are in [7], [8], [9], [28], [29]. Condition (c) above is equivalent to the assumption that the $N \times N$ matrix with the $(i,j)$-th entry $\langle d_i, n_j \rangle$ is completely-$\mathcal{S}$ (cf. [3], [7]). The following result is taken from [7].

Theorem 2.3 ([7]) Under Condition 2.2 the Skorohod map is well defined on all of $D_G([0, \infty) : \mathbb{R}^k)$, i.e., $D = D_G([0, \infty) : \mathbb{R}^k)$, and the SM is Lipschitz continuous in the following sense: there exists a $K < \infty$ such that for all $\phi_1, \phi_2 \in D_G([0, \infty) : \mathbb{R}^k)$,

$$\sup_{0 \le t < \infty} |\Gamma(\phi_1)(t) - \Gamma(\phi_2)(t)| < K \sup_{0 \le t < \infty} |\phi_1(t) - \phi_2(t)|. \qquad (2.1)$$
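As a concrete illustration of Definition 2.1 and Theorem 2.3 (not taken from the paper), the sketch below treats the simplest special case $G = [0, \infty)$, $N = 1$, $n_1 = d_1 = 1$, for which the Skorohod map has the explicit form $\Gamma(\psi)(t) = \psi(t) - \min(0, \inf_{s \le t}\psi(s))$ and the Lipschitz constant in (2.1) may be taken to be $2$. The function and variable names are ours.

```python
import numpy as np

# Illustration only (not from the paper): the one-dimensional Skorohod map on
# G = [0, infinity) with n_1 = d_1 = 1, where Gamma has the explicit form
#   Gamma(psi)(t) = psi(t) - min(0, inf_{s <= t} psi(s)).

def skorohod_map_1d(psi):
    """Apply the 1-D Skorohod map to a sampled path psi (numpy array)."""
    running_min = np.minimum.accumulate(psi)
    eta = -np.minimum(running_min, 0.0)   # constraining term, nondecreasing
    phi = psi + eta                        # constrained path, stays >= 0
    return phi, eta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dt, n = 1e-3, 10_000
    # an unconstrained path started inside G that wanders below zero
    psi = 1.0 + np.cumsum(-0.5 * dt + np.sqrt(dt) * rng.standard_normal(n))
    phi, eta = skorohod_map_1d(psi)
    assert phi.min() >= -1e-12                 # item 2 of Definition 2.1
    assert np.all(np.diff(eta) >= -1e-12)      # eta is nondecreasing
    # eta should increase only while phi sits on the boundary (item 4)
    increases = np.diff(eta) > 1e-12
    print("max of phi where eta increases:", phi[1:][increases].max())
```

Running the script, the printed value is numerically zero, reflecting that the constraining term acts only on the boundary, in agreement with item 4 of Definition 2.1.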

In the rest of the paper Condition 2.2 will always be taken to hold. We will also assume without loss of generality that $K \ge 1$. We now introduce the controlled constrained diffusion processes that will be studied in this paper. Throughout this paper we will assume the relaxed control framework, i.e. there is a compact metric space $S$ such that the control set is $U = \mathcal{P}(S)$ (the space of all probability measures on $S$ endowed with the weak convergence topology). All topological spaces in this paper will be endowed with their natural Borel $\sigma$-field. For a topological space $K$, we will denote its Borel $\sigma$-field by $\mathcal{B}(K)$. The space of all real, continuous and bounded functions defined on $K$ will be denoted as $C_b(K)$ and the space of all probability measures on $(K, \mathcal{B}(K))$ by $\mathcal{P}(K)$. The space $\mathcal{P}(K)$ will be endowed with the weak convergence topology. For $A \in \mathcal{B}(K)$, $I_A(\cdot)$ will denote the indicator function of the set $A$. Also, we will denote by $C_b^2(G)$ and $C_0^\infty(G)$ the space of real valued, bounded and twice continuously differentiable functions on $G$ and the space of real valued, infinitely differentiable, vanishing at infinity, functions on $G$, respectively. By a filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t))$ we will mean a probability space $(\Omega, \mathcal{F}, P)$ endowed with a filtration $(\mathcal{F}_t)_{t \ge 0}$ satisfying the usual hypothesis. A pair of stochastic processes $(u(\cdot), W(\cdot))$ defined on some filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t))$ is said to be an admissible pair if: (a) $W(\cdot)$ is an $\mathcal{F}_t$-standard Wiener process; (b) $u(\cdot)$ is a $U$ valued, measurable, $\{\mathcal{F}_t\}$ adapted process. We will consider controlled constrained diffusion processes of the form:

$$X(t) = \Gamma\left( X(0) + \int_0^{\cdot} b(X(s), u(s))\,ds + \int_0^{\cdot} \sigma(X(s))\,dW(s) \right)(t), \qquad (2.2)$$

where for $(x, u) \in G \times U$, $b(x, u) = \int_S b(x, \alpha)\,u(d\alpha)$, and the coefficients $\sigma : G \to \mathbb{R}^{k \times k}$ and $b : G \times S \to \mathbb{R}^k$ are maps satisfying the following conditions.

Condition 2.4 There exists $r \in (0, \infty)$ such that:
(i) $b$ is a continuous map and for all $x, y \in G$ and $\alpha \in S$, $|b(x, \alpha) - b(y, \alpha)| \le r|x - y|$.
(ii) For all $x \in G$ and $\alpha \in S$, $|b(x, \alpha)| \le r$.
(iii) For all $x, y \in G$, $|\sigma(x) - \sigma(y)| \le r|x - y|$.

(iv) For all $x \in G$, $|\sigma(x)| \le r$.

We will also assume the following nondegeneracy assumption on $\sigma$.

Condition 2.5 There exists $c \in (0, \infty)$ such that for all $x \in G$ and $\alpha \in \mathbb{R}^k$,

$$\alpha^T \sigma(x)\sigma^T(x)\,\alpha \ \ge\ c\,\alpha^T\alpha.$$

In the rest of the paper, in addition to Condition 2.2, Conditions 2.4 and 2.5 will also be assumed to hold. The following result on the unique strong solution for (2.2) follows on using the Lipschitz property of the Skorohod map and the usual fixed point arguments.

Theorem 2.6 Let $(u(\cdot), W(\cdot))$ be an admissible pair on some filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t))$. Then there exists an $\{\mathcal{F}_t\}$ adapted process $X(\cdot)$ with continuous sample paths satisfying (2.2) for all $t \ge 0$, a.s. Furthermore, if $X_1(\cdot)$ and $X_2(\cdot)$ are two such processes then $P(X_1(t) = X_2(t);\ t \in (0, \infty)) = 1$.

Remark 2.7 If $X(\cdot)$ solves (2.2) then (cf. Theorem 3.5.1 [24]) there exist continuous, increasing $\mathcal{F}_t$ adapted processes $\{Y_i(\cdot);\ 1 \le i \le N\}$ such that

$$X(t) = X(0) + \int_0^t b(X(s), u(s))\,ds + \int_0^t \sigma(X(s))\,dW(s) + \sum_{i=1}^N d_i Y_i(t), \qquad (2.3)$$

for all $t \ge 0$, a.s. Furthermore, $Y_i(0) = 0$ and for all $t > 0$,

$$\int_0^t I_{F_i}(X(s))\,dY_i(s) = Y_i(t), \quad \text{a.s.};\quad i = 1, \ldots, N.$$

From the point of view of applications it is important to consider Markov controls, namely the case where $u(\cdot) = v(X(\cdot))$ for some measurable map $v : G \to U$. However, in such a case (2.2) may not admit a strong solution. Thus we need to work with a weak solution of (2.2).

Definition 2.8 Let $v : G \to U$ be a measurable map. We say that the equation

$$X(t) = \Gamma\left( X(0) + \int_0^{\cdot} b(X(s), v(X(s)))\,ds + \int_0^{\cdot} \sigma(X(s))\,dW(s) \right)(t), \quad X(0) \sim \mu, \qquad (2.4)$$

admits a weak solution if there exists a filtered probability space $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$ on which is given a $\{\mathcal{F}_t\}$ Wiener process $W(\cdot)$ and an $\mathcal{F}_t$ adapted process $X(\cdot)$

with continuous paths such that $X(0)$ has the probability law $\mu$ and for all $t \ge 0$ the equality in (2.4) holds a.s. We say that (2.4) admits a unique weak solution if whenever there are two sets of such spaces and processes, denoted as $(\Omega^i, \mathcal{F}^i, P^i, \mathcal{F}_t^i)$, $(W^i(\cdot), X^i(\cdot))$; $i = 1, 2$, then the probability law of $X^1(\cdot)$ is the same as that of $X^2(\cdot)$. With an abuse of terminology we will also call the map $v$ above a Markov control.

Under the standing assumptions of this paper we have the following result. For the proof of this result we refer the reader to Theorem 4.2.2 of [24]. Although the proof there is for the case where $G$ is a compact set, exactly the same arguments hold for the case of unbounded state space. The key idea in the proof, as in the case of unconstrained diffusions, is to use Condition 2.5 and Girsanov's theorem to get rid of the drift and then use the strong Feller property of the new process and estimates on the Radon-Nikodym derivative to conclude the strong Feller property of the original process.

Theorem 2.9 There is a unique weak solution for (2.4). Denoting the law of the solution process $X(\cdot)$, when $X(0) = x$ a.s., by $P_x^v$, we have that $\{P_x^v\}_{x \in G}$ is a strongly Feller Markov family. Furthermore, the transition probability law $P(t, x, dy)$ of this Markov process is mutually absolutely continuous with respect to the Lebesgue measure on $G$ in the following uniform sense. Given $\delta > 0$ and $0 < t_0 < t_1 < \infty$ there exists an $\epsilon > 0$ such that for all $A \in \mathcal{B}(G)$ with $\lambda(A) \le \epsilon$, where $\lambda$ denotes the Lebesgue measure on $G$,

$$P(t, x, A) \le \delta, \quad x \in G,\ t \in [t_0, t_1].$$

Finally, for any $\epsilon > 0$, $0 < t_0 < t_1 < \infty$ and compact set $K \subseteq G$ there is a $\delta > 0$ such that for all $x \in K$, $t \in [t_0, t_1]$ and $A \in \mathcal{B}(G)$ with $\lambda(A \cap K) \ge \epsilon$ we have that $P(t, x, A) \ge \delta$.

3 The Ergodic Cost Problem

In this work we are interested in a control problem with an ergodic cost criterion. Namely, we are interested in minimizing, over the class of all admissible controls, the cost:

$$\limsup_{t \to \infty} \frac{1}{t} \int_0^t k(X(s), u(s))\,ds, \qquad (3.1)$$

where $X(\cdot)$ is given as a solution of (2.2) on some filtered probability space with an admissible pair $(u(\cdot), W(\cdot))$, the limit above is taken almost surely on the corresponding probability space and $k : G \times U \to \mathbb{R}$ is a map defined as follows. For $(x, u) \in G \times U$,

$$k(x, u) = \int_S k(x, \alpha)\,u(d\alpha),$$

where $k$ is in $C_b(G \times S)$.
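The toy computation below (not part of the paper) makes the cost (3.1) concrete in the one-dimensional special case $G = [0, \infty)$ with a constant drift $-\mu < 0$, constant $\sigma$ and running cost $k(x) = e^{-x}$: a crude Euler scheme with a projection step stands in for the Skorohod map, and the long-run time average is compared with the stationary expectation, anticipating (1.4). The Exponential$(2\mu/\sigma^2)$ invariant law used for the comparison is a classical fact about reflected Brownian motion with drift, not something established in this section; all names are ours.

```python
import numpy as np

# Illustration only (not from the paper): Euler scheme for a one-dimensional
# constrained diffusion on G = [0, infinity) with drift -mu and constant sigma,
# and the time average (3.1) for the running cost k(x) = exp(-x).  For this toy
# case the invariant law is Exponential(2*mu/sigma**2), so the long-run average
# can be checked against the stationary expectation, in the spirit of (1.4).

def simulate_time_average(mu=1.0, sigma=1.0, x0=2.0, dt=1e-3, T=500.0, seed=1):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, acc = x0, 0.0
    for _ in range(n):
        x += -mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        x = max(x, 0.0)          # discrete analogue of the constraining term
        acc += np.exp(-x) * dt   # running cost k(x) = exp(-x)
    return acc / T

if __name__ == "__main__":
    lam = 2.0 * 1.0 / 1.0**2                # 2*mu/sigma^2
    stationary_value = lam / (lam + 1.0)    # E[exp(-X)] under Exponential(lam)
    print("time average :", simulate_time_average())
    print("stationary   :", stationary_value)
```

For $\mu = \sigma = 1$ the stationary value is $2/3$, and the simulated time average settles near it as the horizon $T$ grows.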

We will call a Markov control $v$ a stable Markov control (SMC) if the corresponding controlled Markov process $\{P_x^v\}_{x \in G}$ is positive recurrent and has a unique invariant measure. We are interested in obtaining conditions under which there is an optimal SMC for the cost criterion (3.1). The following stability assumption on the underlying model will be assumed throughout this paper. Define

$$\mathcal{C} = \left\{ -\sum_{i=1}^N \alpha_i d_i : \alpha_i \ge 0;\ i \in \{1, \ldots, N\} \right\}. \qquad (3.2)$$

The cone $\mathcal{C}$ was used to characterize stability of a certain class of constrained diffusion processes in [5, 1]. Let $\delta \in (0, \infty)$ be fixed. Define the set

$$\mathcal{C}(\delta) = \{v \in \mathcal{C} : \operatorname{dist}(v, \partial\mathcal{C}) \ge \delta\}.$$

Our next assumption on the diffusion model, which also will be assumed throughout this paper, stipulates the permissible drifts in the underlying diffusion.

Condition 3.1 There exists a $\delta \in (0, \infty)$ such that for all $(x, u) \in G \times U$, $b(x, u) \in \mathcal{C}(\delta)$.

Under the assumptions made above the results of [1] show that all Markov controls are SMC; more precisely:

Theorem 3.2 The Markov family $\{P_x^v\}_{x \in G}$ of Theorem 2.9 is positive recurrent and admits a unique invariant measure, denoted as $\eta_v$.

Remark 3.3 In [1] the proof of positive recurrence assumes that the drift coefficient in the constrained diffusion process satisfies a Lipschitz condition; however, as is pointed out in Remark 4.6 of that paper, the same proof continues to hold with the assumptions on the coefficients made in this paper.

Now we are able to state the main result of this paper.

Theorem 3.4 There exists a Markov control $v(\cdot)$ such that if for some $\mu \in \mathcal{P}(G)$, $X(\cdot)$ is the corresponding process solving (2.4), on some filtered probability space, with the probability law of $X(0)$ being $\mu$, then:

$$\limsup_{T \to \infty} \frac{1}{T} \int_0^T k(X(s), v(X(s)))\,ds \ =\ \inf\ \operatorname{ess\,inf}\ \limsup_{T \to \infty} \frac{1}{T} \int_0^T k(X(s), u(s))\,ds, \qquad (3.3)$$

a.s., where the outside infimum on the right side above is taken over all controlled processes $X(\cdot)$ with an arbitrary initial distribution and solving (2.2) over some filtered probability space with some admissible pair $(W(\cdot), u(\cdot))$.

The proof of the above theorem will be given in Section 7.
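The following numerical check (not from the paper) illustrates the membership requirement $b(x, u) \in \mathcal{C}$ underlying Condition 3.1: a vector $b$ belongs to the cone (3.2) exactly when $D\alpha = -b$ has a nonnegative solution $\alpha$, where $D$ has the constraint directions $d_i$ as columns, which nonnegative least squares can test. The margin $\operatorname{dist}(b, \partial\mathcal{C}) \ge \delta$ demanded by Condition 3.1 is not checked here, and all names are ours.

```python
import numpy as np
from scipy.optimize import nnls

# Illustration only (not from the paper): a membership test for the cone
# C = { -sum_i alpha_i d_i : alpha_i >= 0 } of (3.2), used in Condition 3.1.
# b lies in C iff D @ alpha = -b is solvable with alpha >= 0, where D has the
# constraint directions d_i as columns.

def in_cone(b, D, tol=1e-9):
    """Return True if b = -D @ alpha for some alpha >= 0 (up to tolerance)."""
    alpha, residual = nnls(D, -np.asarray(b, dtype=float))
    return residual <= tol

if __name__ == "__main__":
    # Constraint directions for the two-dimensional nonnegative orthant with
    # normal reflection: d_1 = e_1, d_2 = e_2, so C is the closed third quadrant.
    D = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
    print(in_cone([-1.0, -0.5], D))   # True:  both coordinates nonpositive
    print(in_cone([-1.0, 0.5], D))    # False: drift pushes away from one face
```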

4 Some Background Results

In this section we collect some background results which will be used in the proof of Theorem 3.4. We begin with an ergodic theorem of Khasminskii [18], which is applicable to the Markov family $\{P_x^v\}$ considered in this paper because of Theorem 2.9. As a consequence of this result the limiting time averages on the left side of (3.3) can be replaced with expectations with respect to the measure $\eta_v$.

Lemma 4.1 (Khasminskii [18], Theorem 3.1) For a given $\mu \in \mathcal{P}(G)$ and a Markov control $v$, let $X(\cdot)$ be the process on some filtered probability space solving (2.4) with the distribution of $X(0)$ being $\mu$. Then for all $\eta_v$ integrable functions $g$ on $G$,

$$\frac{1}{T} \int_0^T g(X(s))\,ds \ \text{converges almost surely to}\ \int_G g(x)\,\eta_v(dx) \ \text{as } T \to \infty.$$

The following lemma has been proved in [6]; however, the domain $G$ there is different from our problem. Thus for the sake of completeness we sketch the proof below. This lemma will be used several times in this paper in controlling the reflection term $Y(\cdot)$ in our constrained diffusion processes.

Lemma 4.2 There exists a $g \in C_b^2(G)$ such that

$$\langle \nabla g(x), d_i \rangle \ge 1;\quad x \in F_i;\quad i \in \{1, \ldots, N\}. \qquad (4.1)$$

Proof: We begin by observing that the geometry of the space $G$ implies that there exists $C \in (1, \infty)$ such that for $x \in G$ and $\lambda \in \Lambda$,

$$d(x, F_\lambda) \le C \max_{i \in \lambda} \langle x, n_i \rangle.$$

Next, from Condition 2.2(c), it follows that for every $\lambda \in \Lambda$ there exist positive constants $\{c_i^\lambda\}_{i \in \lambda}$ such that $\eta^\lambda = \sum_{i \in \lambda} c_i^\lambda n_i$ satisfies $\langle \eta^\lambda, d_i \rangle > 0$; $i \in \lambda$. Define $c = \inf_{i \in \lambda;\ \lambda \in \Lambda} \langle \eta^\lambda, d_i \rangle$. Furthermore, as a convenient normalization, we take $\sum_{i \in \lambda} c_i^\lambda = \frac{1}{2}$. Next define constants $(\gamma_k, \beta_k;\ k = 0, 1, \ldots, N)$ inductively as follows:

$$\gamma_N = \frac{1}{2(C+1)};\qquad \beta_N = c\,\gamma_N,$$

and for $k = 1, \ldots, N$,

$$\gamma_{N-k} = \frac{\beta_{N-k+1}}{C};\qquad \beta_{N-k} = c\,\gamma_{N-k}.$$

Let $\phi$, $\psi$ be maps from $\mathbb{R}_+$ to $\mathbb{R}_+$ defined as follows: $\phi(x) = x$ for $x \in [0, \frac{1}{2}]$, $\phi(x) = 0$ for $x \ge 1$,

and $\psi(x) = 0$ for $x \in [0, \frac{1}{2}]$, $\psi(x) = 1$ for $x \ge 1$. Now define, for $\lambda \in \Lambda$, $f_\lambda : G \to \mathbb{R}$ as follows:

$$f_\lambda(x) = a_\lambda\, \phi\!\left(\frac{\langle \eta^\lambda, x \rangle}{\beta_{|\lambda|}}\right) \prod_{j \notin \lambda} \psi\!\left(\frac{\langle n_j, x \rangle}{\gamma_{|\lambda|}}\right),$$

where $|\lambda|$ denotes the cardinality of the set $\lambda$ and the $a_\lambda$ are suitable positive constants chosen inductively as follows. For all $\lambda$ with $|\lambda| = 1$ we choose $a_\lambda$ so that

$$a_\lambda \langle \eta^\lambda, d_i \rangle \ge \beta_1;\quad i \in \lambda.$$

Having chosen $a_\lambda$ for all $\lambda$ with $1 \le |\lambda| < k$, we choose $a_\lambda$, for a $\lambda \in \Lambda$ with $|\lambda| = k$, such that

$$a_\lambda \langle \eta^\lambda, d_i \rangle \ge \beta_k\,(M_{k-1} + 1);\quad i \in \lambda,$$

where

$$M_{k-1} = \sum_{j=1}^{k-1} \sum_{\lambda : |\lambda| = j} \left( \inf_{x \in F_i;\ i \in \lambda} \langle \nabla f_\lambda(x), d_i \rangle \right)^{-}.$$

Finally define $g : G \to \mathbb{R}$ as

$$g(x) = \sum_{\lambda \in \Lambda} f_\lambda(x).$$

It can be verified as in [6] that $g$ defined as above satisfies (4.1).

The following lemma essentially says that in considering admissible controls, we can without loss of generality restrict ourselves to controls that are adapted with respect to the filtration generated by $(X(\cdot), Y(\cdot))$.

Lemma 4.3 Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ be a filtered probability space on which is given an admissible pair $(u(\cdot), W(\cdot))$. Let $X(\cdot)$ be a solution to (2.2) with the corresponding boundary processes $\{Y_i(\cdot)\}_{i=1}^N$. Then there exists an enlargement $(\tilde\Omega, \tilde{\mathcal{F}}, \{\tilde{\mathcal{F}}_t\}, \tilde P)$ of the above probability space on which is given a $\{\tilde{\mathcal{F}}_t\}$ Wiener process $\tilde W(\cdot)$ and a $\mathcal{P}(S)$ valued measurable stochastic process $\tilde u(\cdot)$ such that for a.e. $t \in [0, \infty)$, $\tilde u(t)$ is $\mathcal{F}_t^{X,Y}$ measurable, where $\mathcal{F}_t^{X,Y}$ denotes the $\tilde P$ completion of $\sigma\{X(s); \{Y_i(s)\}_{i=1}^N;\ s \le t\}$, and $X(\cdot)$ solves

$$X(t) = X(0) + \int_0^t b(X(s), \tilde u(s))\,ds + \int_0^t \sigma(X(s))\,d\tilde W(s) + \sum_{i=1}^N d_i Y_i(t).$$

Proof: Let $\{f_i(\cdot)\}$ be a countable dense set in $C_b(S)$. Then there exists a $\mathcal{P}(S)$ valued measurable stochastic process $\tilde u(\cdot)$ (cf. Theorem 2.7.1 [16]) satisfying

$$\int_S f_i(\alpha)\,\tilde u(t)(d\alpha) = E\left( \int_S f_i(\alpha)\,u(t)(d\alpha)\ \Big|\ \mathcal{F}_t^{X,Y} \right), \quad \text{a.e. } t \in [0, \infty),\ \text{a.s.};\quad i = 1, 2, \ldots$$

Noting that

$$X(\cdot) - X(0) - \int_0^{\cdot} b(X(s), u(s))\,ds - \sum_{i=1}^N d_i Y_i(\cdot) = \int_0^{\cdot} \sigma(X(s))\,dW(s)$$

is an Itô process, we have from Theorem 4.2 of Wong [37] that there is an $\mathbb{R}^k$ valued measurable process $\hat\phi(\cdot)$ and a $k$-dimensional Brownian motion $\tilde W(\cdot)$ on some augmented space such that the following representation holds a.s. for all $t \ge 0$:

$$X(t) = X(0) + \int_0^t \hat\phi(s)\,ds + \int_0^t \sigma(X(s))\,d\tilde W(s) + \sum_{i=1}^N d_i Y_i(t),$$

with

$$\hat\phi(s, \omega) = E\left( b(X(s), u(s))\ \big|\ \mathcal{F}_s^{X,Y} \right);\quad \text{a.e. } s \in [0, \infty).$$

The result now follows on observing that for a.e. $s \in [0, \infty)$,

$$E\left( b(X(s), u(s))\ \big|\ \mathcal{F}_s^{X,Y} \right) = b(X(s), \tilde u(s)), \quad \text{a.s.}$$

The following lemma will be used in some conditioning arguments in the proofs of Lemma 6.7 and Proposition 7.5. For a Polish space $K$, denote by $C([0, \infty) : K)$ the space of continuous functions from $[0, \infty)$ to $K$, endowed with the topology of uniform convergence on compacts.

Lemma 4.4 Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ be a filtered probability space. Let $(u(\cdot), W(\cdot))$ be an admissible pair on this probability space. Let $X(\cdot)$ be given as a solution of (2.2) and let $\tau$ be an a.s. finite $\{\mathcal{F}_t\}$ stopping time. Denote the conditional distribution of $X(\tau + \cdot)$ given $\mathcal{F}_\tau$ by $\pi(\omega)(\cdot)$, i.e. for $A \in \mathcal{B}(C([0, \infty) : G))$ and a.e. $\omega$,

$$P(X(\tau + \cdot) \in A \mid \mathcal{F}_\tau)(\omega) = \pi(\omega)(A).$$

Then, for a.e. $\omega$, $\pi(\omega)$ equals the probability law of $X^\omega(\cdot)$, where $X^\omega(\cdot)$ solves an equation of the form (2.2) with $(u(\cdot), W(\cdot))$ replaced by some other admissible pair $(u^\omega(\cdot), W^\omega(\cdot))$ given on some filtered probability space $(\Omega^\omega, \mathcal{F}^\omega, \{\mathcal{F}_t^\omega\}, P^\omega)$, and $X^\omega(0) = X(\tau)(\omega)$.

Proof: Let $Y(\cdot) = (Y_1(\cdot), \ldots, Y_N(\cdot))$, where $\{Y_i(\cdot)\}_{i=1}^N$ is the boundary process for $X(\cdot)$. By virtue of Lemma 4.3 we can assume without loss of generality that $u(\cdot)$ is $\{\mathcal{F}_t^{X,Y}\}$ adapted. Thus there is a measurable map (cf. Theorem 2.7.2 [16])

$$f : [0, \infty) \times \Omega_1 \times \Omega_2 \to U,$$

where $\Omega_1 = C([0, \infty) : G)$ and $\Omega_2 = C([0, \infty) : [0, \infty)^N)$, such that for a.e. $t \in [0, \infty)$ and $[P]$ a.e. $\omega$,

$$u(t, \omega) = f(t, X^t(\cdot, \omega), Y^t(\cdot, \omega)),$$

where $X^t(s, \omega) = X(s, \omega)$ for $s \le t$, $X^t(s, \omega) = X(t, \omega)$ for $s \ge t$, and $Y^t(\cdot, \omega)$ is defined in a similar manner. Let $\Omega_3 = C([0, \infty) : \mathbb{R}^k)$. Endow $\Omega' = \Omega_1 \times \Omega_2 \times \Omega_3$ with the usual product Borel $\sigma$-field, denoted as $\mathcal{F}'$, and let $\{\mathcal{F}_t'\}_{t \ge 0}$ denote the canonical filtration. Consider the probability measure $P^\omega$ on $(\Omega', \mathcal{F}')$ which is the conditional probability law of $(X(\tau + \cdot), Y(\tau + \cdot), W(\tau + \cdot) - W(\tau))$ given $\mathcal{F}_\tau$. Complete the filtration $\{\mathcal{F}_t'\}$ with respect to $P^\omega$ and denote the completed filtration by the same symbol. Let $(X'(\cdot), Y'(\cdot), W'(\cdot))$ be the canonical coordinate process on $\Omega'$ and let $X^\omega(\cdot)$ be the process defined on the filtered probability space $(\Omega', \mathcal{F}', \{\mathcal{F}_t'\}, P^\omega)$ as a solution of:

$$X^\omega(t) = \Gamma\left( X^\omega(0) + \int_0^{\cdot} b(X^\omega(s), u^\omega(s))\,ds + \int_0^{\cdot} \sigma(X^\omega(s))\,dW'(s) \right)(t),\qquad X^\omega(0) = X(\tau(\omega), \omega),$$

where for $t \ge 0$ and $[P^\omega]$-a.e. $\omega'$ ($\omega'$ denotes a typical element of $\Omega'$),

$$u^\omega(t, \omega') = f\big(\tau(\omega) + t, \tilde X_\omega(\cdot, \omega'), \tilde Y_\omega(\cdot, \omega')\big),$$

and for $\omega' \in \Omega'$, the processes $\tilde X_\omega(\cdot)$ and $\tilde Y_\omega(\cdot)$ in the above control process are defined as:

$$\tilde X_\omega(t, \omega') = X(t, \omega),\ t \in [0, \tau(\omega)];\qquad \tilde X_\omega(t, \omega') = X'(t - \tau(\omega), \omega'),\ t \ge \tau(\omega),$$

and

$$\tilde Y_\omega(t, \omega') = Y(t, \omega),\ t \in [0, \tau(\omega)];\qquad \tilde Y_\omega(t, \omega') = Y'(t - \tau(\omega), \omega'),\ t \ge \tau(\omega).$$

Then it can be shown, following the proof of Theorem 6.1.2 of [34], that for $[P]$-a.e. $\omega$ the probability law of $X^\omega(\cdot)$ equals $\pi(\omega)$. This completes the proof of the lemma on identifying $(\Omega^\omega, \mathcal{F}^\omega, \{\mathcal{F}_t^\omega\}, W^\omega(\cdot))$ with $(\Omega', \mathcal{F}', \{\mathcal{F}_t'\}, W'(\cdot))$.

5 Characterization of the Invariant Measure

One of the key ingredients of the proof of Theorem 3.4 is an extension of the Echeverria-Weiss characterization of invariant measures (cf. [10], [36]) to the class of constrained controlled Markov processes considered in this paper. The proof of this characterization uses a clever idea presented in the proof of a similar characterization result for constrained (uncontrolled) Markov processes in Kurtz [21]. It also uses ideas from [22]. We begin with the following definitions. For $f \in C_b^2(G)$ let $Lf : G \times U \to \mathbb{R}$ be defined as:

$$(Lf)(x, u) = \frac{1}{2} \sum_{i,j=1}^k a_{ij}(x)\, \frac{\partial^2 f}{\partial x_i \partial x_j}(x) + \sum_{i=1}^k b_i(x, u)\, \frac{\partial f}{\partial x_i}(x);\quad (x, u) \in G \times U,$$

where $a_{ij}(x) = (\sigma(x)\sigma^T(x))_{ij}$. With an abuse of notation we will write, for $\alpha \in S$, $(Lf)(x, \delta_{\{\alpha\}})$ merely as $(Lf)(x, \alpha)$, where $\delta_{\{\alpha\}}$ denotes the probability measure concentrated at the point $\alpha$. Thus, with this notation, for $(x, u) \in G \times U$,

$$(Lf)(x, u) = \int_S (Lf)(x, \alpha)\,u(d\alpha).$$

For $i = 1, 2, \ldots, N$ and $f \in C_b^2(G)$ let $D_i f : G \to \mathbb{R}$ be defined as:

$$(D_i f)(x) = \langle d_i, \nabla f(x) \rangle, \quad x \in G.$$

Definition 5.1 (Constrained-Controlled Martingale Problem (CCMP)) For $\mu \in \mathcal{P}(G)$, a solution to the $(\mu, L, G, (D_i, F_i)_{i=1}^N)$ CCMP is a pair of $\{\mathcal{F}_t\}$ adapted processes $(Z(\cdot), \Phi(\cdot))$ on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ such that the following hold.

(i) $Z(\cdot)$ is a $G$ valued process with, almost surely, continuous trajectories.
(ii) $Z(0)$ has probability law $\mu$.
(iii) $\Phi(\cdot)$ is a $U$ valued, measurable and $\{\mathcal{F}_t\}$-adapted process.
(iv) There is an $\{\mathcal{F}_t\}$-adapted, $N$-dimensional boundary process $Y(\cdot) = (Y_1(\cdot), \ldots, Y_N(\cdot))$ such that for each $i \in \{1, 2, \ldots, N\}$, $P$ almost surely:
(a) $Y_i(0) = 0$;
(b) $Y_i(\cdot)$ is continuous and non-decreasing;
(c) for all $t \in (0, \infty)$, $\int_0^t I_{F_i}(Z(s))\,dY_i(s) = Y_i(t)$.
(v) For all $f \in C_0^\infty(G)$,

$$f(Z(t)) - \int_0^t \int_S (Lf)(Z(s), \alpha)\,\Phi(s)(d\alpha)\,ds - \sum_{i=1}^N \int_0^t (D_i f)(Z(s))\,dY_i(s)$$

is an $\mathcal{F}_t$ martingale.

The proof of the following result is standard and thus is omitted (cf. Theorem 4.5.2 [34]).

Theorem 5.2 Let $(Z(\cdot), \Phi(\cdot))$ be a solution of the $(\mu, L, G, (D_i, F_i)_{i=1}^N)$ CCMP on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$. Then there exists an enlargement $(\tilde\Omega, \tilde{\mathcal{F}}, \{\tilde{\mathcal{F}}_t\}, \tilde P)$ of the above space such that:

(i) there is a $\tilde{\mathcal{F}}_t$ Wiener process $W(\cdot)$ defined on the enlarged space,
(ii) the processes $(Z(\cdot), \Phi(\cdot))$ are measurable and $\tilde{\mathcal{F}}_t$ adapted,
(iii) for all $t \ge 0$, a.s.,

$$Z(t) = \Gamma\left( Z(0) + \int_0^{\cdot} \int_S b(Z(s), \alpha)\,\Phi(s)(d\alpha)\,ds + \int_0^{\cdot} \sigma(Z(s))\,dW(s) \right)(t). \qquad (5.1)$$

Conversely, if there is a pair of processes $(Z(\cdot), \Phi(\cdot))$ solving (5.1) on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ satisfying (i), (ii) and (iii) above, then the pair is a solution of the $(\mu, L, G, (D_i, F_i)_{i=1}^N)$ CCMP, where $\mu$ is the probability law of $Z(0)$.

A solution to the CCMP is closely related to the following Patchwork Controlled Martingale Problem (PCMP), introduced in the context of uncontrolled constrained processes by Kurtz in [20].

Definition 5.3 For $\mu \in \mathcal{P}(G)$, a solution to the $(\mu, L, G, (D_i, F_i)_{i=1}^N)$ PCMP is an $\{\mathcal{F}_t\}$ adapted vector stochastic process $(\xi(\cdot), \Lambda(\cdot), \lambda_0(\cdot), \ldots, \lambda_N(\cdot))$, with values in $G \times U \times [0, \infty)^{N+1}$, on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ such that the following hold.

(i) $\xi(\cdot)$ has continuous trajectories almost surely.
(ii) $\xi(0)$ has probability law $\mu$.
(iii) $\Lambda(\cdot)$ is a $U$ valued, measurable, $\{\mathcal{F}_t\}$ adapted process.
(iv) For all $i \in \{0, 1, 2, \ldots, N\}$, $P$ almost surely:
(a) $\lambda_i(0) = 0$;
(b) $\lambda_i(\cdot)$ is continuous and non-decreasing;
(c) for all $t \in [0, \infty)$, $\int_0^t I_{F_i}(\xi(s))\,d\lambda_i(s) = \lambda_i(t)$, where we define $F_0 = G$.
(v) For all $t \ge 0$, $\sum_{i=0}^N \lambda_i(t) = t$, a.s.
(vi) For all $f \in C_0^\infty(G)$,

$$f(\xi(t)) - \int_0^t \int_S (Lf)(\xi(s), \alpha)\,\Lambda(s)(d\alpha)\,d\lambda_0(s) - \sum_{i=1}^N \int_0^t (D_i f)(\xi(s))\,d\lambda_i(s)$$

is an $\mathcal{F}_t$ martingale.

The proof of the following result is similar to the proof of Lemma 3.1 of [6], except that instead of conditions (S.a) and (S.b) of [6] we use Condition 2.2(c).

Lemma 5.4 Suppose that $(\xi(\cdot), \Lambda(\cdot), \lambda_0(\cdot), \ldots, \lambda_N(\cdot))$ is a solution of the $(\mu, L, G, (D_i, F_i)_{i=1}^N)$ PCMP on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$. Then, almost surely, $\lambda_0(\cdot)$ is a strictly increasing process such that $\lambda_0(t) \to \infty$ as $t \to \infty$.

The following proposition establishes the connection between a solution of a CCMP and a solution of the PCMP. For the proof of the proposition we refer the reader to Theorem 3.4 of [6].

Proposition 5.5 Suppose that $(\xi(\cdot), \Lambda(\cdot), \lambda_0(\cdot), \ldots, \lambda_N(\cdot))$ is a solution of the $(\mu, L, G, (D_i, F_i)_{i=1}^N)$ PCMP on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$. Then, if for $t \ge 0$ we set

$$\tau(t) = \inf\{s \ge 0 : \lambda_0(s) > t\},\quad \mathcal{G}_t = \mathcal{F}_{\tau(t)},\quad Z(t) = \xi(\tau(t)),\quad \Phi(t) = \Lambda(\tau(t)),$$

and for $i = 1, \ldots, N$, $Y_i(t) = \lambda_i(\tau(t))$, then $(Z(\cdot), \Phi(\cdot))$ is a solution of the $(\mu, L, G, (D_i, F_i)_{i=1}^N)$ CCMP on $(\Omega, \mathcal{F}, \{\mathcal{G}_t\}, P)$ with the corresponding boundary processes $\{Y_i(\cdot)\}_{i=1}^N$.

The following lemma is the first step in the characterization of the invariant measure for the family $\{P_x^v\}$ of Theorem 2.9. For a measurable space $(\Omega, \mathcal{F})$ we denote by $M_F(\Omega)$ the space of all finite, possibly identically zero, measures on $(\Omega, \mathcal{F})$. Also, denoting by $\mathbf{0}$ the identically zero measure, we let $M(\Omega) = M_F(\Omega) \setminus \{\mathbf{0}\}$. For $\mu \in M(\Omega)$ we denote its normalized version by $\hat\mu$, i.e. $\hat\mu(\cdot) = \mu(\cdot)/\mu(\Omega)$. For a measurable map $v : G \to U$, $x \in G$ and $B \in \mathcal{B}(S)$ we will sometimes write $v(x)(B)$ as $v(x, B)$.

Lemma 5.6 Let $v : G \to U$ be a measurable map. Let $\eta_v$ be as in Theorem 3.2. Then there exist measures $\mu_i \in M_F(F_i)$ such that for all $f \in C_0^\infty(G)$:

$$\int_{G \times S} (Lf)(x, \alpha)\,\mu_0(dx, d\alpha) + \sum_{i=1}^N \int_{F_i} (D_i f)(x)\,\mu_i(dx) = 0, \qquad (5.2)$$

where $\mu_0 \in \mathcal{P}(G \times S)$ is given as

$$\mu_0(A \times B) = \int_A v(x, B)\,\eta_v(dx), \quad A \in \mathcal{B}(G),\ B \in \mathcal{B}(S).$$

Proof: Let $X(\cdot)$ be a solution of (2.4) with $X(0) \sim \eta_v$ on some filtered probability space. Then $X(\cdot)$ is a stationary process. From Remark 2.7 there exist continuous increasing adapted processes $Y_i(\cdot)$, $i = 1, \ldots, N$, such that (2.3) holds with $u(\cdot)$ replaced by $v(X(\cdot))$. Let $g \in C_b^2(G)$ be as in Lemma 4.2. Then via an application of Itô's formula we have that

$$g(X(t)) = g(X(0)) + \int_0^t (Lg)(X(s), v(X(s)))\,ds + \int_0^t \langle \nabla g(X(s)), \sigma(X(s))\,dW(s) \rangle + \sum_{i=1}^N \int_0^t (D_i g)(X(s))\,dY_i(s).$$

Taking expectations in the above equality, using the stationarity of $X(\cdot)$ and recalling the properties of the function $g(\cdot)$, we have that for all $t \ge 0$,

$$\sum_{i=1}^N E(Y_i(t)) \le \sum_{i=1}^N E\left( \int_0^t (D_i g)(X(s))\,dY_i(s) \right) \le E\int_0^t |Lg(X(s), v(X(s)))|\,ds \le Ct,$$

where

$$C = \sup_{x \in G,\, u \in U} |Lg(x, u)|. \qquad (5.3)$$

Thus if we define, for $A \in \mathcal{B}(F_i)$, $i = 1, \ldots, N$,

$$\mu_i(A) = E\left( \int_0^1 I_A(X(s))\,dY_i(s) \right),$$

then $\mu_i \in M_F(F_i)$, since

$$\mu_i(F_i) = E\left( \int_0^1 I_{F_i}(X(s))\,dY_i(s) \right) = E(Y_i(1)) \le C. \qquad (5.4)$$

Now let $f \in C_0^\infty(G)$ be arbitrary. Then another application of Itô's formula gives

$$f(X(1)) = f(X(0)) + \int_0^1 (Lf)(X(s), v(X(s)))\,ds + \int_0^1 \langle \nabla f(X(s)), \sigma(X(s))\,dW(s) \rangle + \sum_{i=1}^N \int_0^1 (D_i f)(X(s))\,dY_i(s).$$

Taking expectations and using the stationarity of $X(\cdot)$ we have that

$$\int_G (Lf)(x, v(x))\,\eta_v(dx) + \sum_{i=1}^N \int_{F_i} (D_i f)(x)\,\mu_i(dx) = 0.$$

The proof now follows on recalling the definition of $\mu_0$ and observing that for $(x, u) \in G \times U$,

$$(Lf)(x, u) = \int_S (Lf)(x, \alpha)\,u(d\alpha).$$

The following extension of the Echeverria-Weiss-Kurtz criterion (cf. [36], [21]) is an essential step in our proof of Theorem 3.4.

Theorem 5.7 Suppose that there exist measures $\mu_0 \in M(G \times S)$ and $\mu_i \in M_F(F_i)$, $i = 1, \ldots, N$, such that for all $f \in C_0^\infty(G)$ (5.2) holds. Decompose $\hat\mu_0$ as:

$$\hat\mu_0(dx, d\alpha) = v(x, d\alpha)\,\eta(dx), \qquad (5.5)$$

where $\eta \in \mathcal{P}(G)$ is given as $\eta(A) = \hat\mu_0(A \times S)$, $A \in \mathcal{B}(G)$, and $v(x, d\alpha)$ is the appropriate regular conditional distribution. Then there exists a solution $(Z(\cdot), \Phi(\cdot))$ to the CCMP $(\eta, L, G, (D_i, F_i)_{i=1}^N)$ on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ such that

(i) $Z(\cdot)$ is a stationary process with the invariant measure $\eta$;
(ii) $\Phi(s)(\cdot) = v(Z(s), \cdot)$ for all $s \in [0, \infty)$, a.s.;
(iii) $Z(\cdot)$ is a positive recurrent, strongly Feller Markov process with transition probability family $\{P_x^v\}_{x \in G}$ and $\eta = \eta_v$ is its unique invariant measure (cf. Theorem 3.2).

Proof: Assume without loss of generality that $S \cap \{1, \ldots, N\} = \emptyset$. Define a new control set $\tilde S = S \cup \{1, \ldots, N\}$. Let $\tilde d(\cdot, \cdot)$ be a distance on it defined as follows. For $x, y \in \tilde S$:

$$\tilde d(x, y) = d(x, y) \ \text{if } x \in S \text{ and } y \in S;\qquad \tilde d(x, y) = 0 \ \text{if } x = y;\qquad \tilde d(x, y) = 1 \ \text{otherwise},$$

where $d(\cdot, \cdot)$ is the given metric on $S$. Clearly $(\tilde S, \tilde d(\cdot, \cdot))$ is a compact metric space. For $n \in \mathbb{N}$ (where $\mathbb{N}$ is the set of all positive integers), define the linear operator $C_n : C_0^\infty(G) \to C_b(G \times \tilde S)$ as follows. For $f \in C_0^\infty(G)$ and $(x, \tilde\alpha) \in G \times \tilde S$,

$$(C_n f)(x, \tilde\alpha) = (Lf)(x, \tilde\alpha) \ \text{if } \tilde\alpha \in S;\qquad (C_n f)(x, \tilde\alpha) = n(D_i f)(x) \ \text{if } \tilde\alpha = i \in \{1, \ldots, N\}.$$

Define $\nu_n \in \mathcal{P}(G \times \tilde S)$ as follows. For $h \in C_b(G \times \tilde S)$,

$$\int_{G \times \tilde S} h(x, \tilde\alpha)\,\nu_n(dx, d\tilde\alpha) = \frac{1}{K_n}\left( \int_{G \times S} h(x, \alpha)\,\mu_0(dx, d\alpha) + \frac{1}{n} \sum_{i=1}^N \int_{F_i} h(x, i)\,\mu_i(dx) \right), \qquad (5.6)$$

where $K_n = \mu_0(G \times S) + \frac{1}{n} \sum_{i=1}^N \mu_i(F_i)$.

From the assumption that (5.2) holds it follows now that for all $f \in C_0^\infty(G)$,

$$\int_{G \times \tilde S} (C_n f)(x, \tilde\alpha)\,\nu_n(dx, d\tilde\alpha) = 0. \qquad (5.7)$$

Disintegrate $\nu_n$ as follows. For $A \in \mathcal{B}(G)$ and $B \in \mathcal{B}(\tilde S)$,

$$\nu_n(A \times B) = \int_A \tilde v_n(x, B \cap S)\,\eta_n(dx) + \sum_{i=1}^N \int_A \tilde v_{i,n}(x)\,\delta_{\{i\}}(B)\,\eta_n(dx),$$

where for $B \in \mathcal{B}(S)$ the maps $\tilde v_n(\cdot, B)$ and $\tilde v_{i,n}(\cdot)$ are measurable; for all $x \in G$, $\tilde v_n(x, \cdot) \in M_F(S)$, $\tilde v_{i,n}(x) \ge 0$, $i = 1, \ldots, N$, $\tilde v_n(x, S) + \sum_{i=1}^N \tilde v_{i,n}(x) = 1$, and $\eta_n \in \mathcal{P}(G)$ is given as follows. For $A \in \mathcal{B}(G)$,

$$\eta_n(A) = \frac{1}{K_n}\left( \mu_0(A \times S) + \frac{1}{n} \sum_{i=1}^N \mu_i(A \cap F_i) \right). \qquad (5.8)$$

Also define $\nu \in \mathcal{P}(G \times S)$ as the normalization of $\mu_0$, i.e. $\nu = \hat\mu_0$. Recall that, from (5.5), $\nu(dx, d\alpha) = v(x, d\alpha)\,\eta(dx)$. For fixed $x \in G$ define a probability measure $v_n(x, d\tilde\alpha)$ on $\tilde S$ as follows. For $A \in \mathcal{B}(\tilde S)$,

$$v_n(x, A) = v(x, A \cap S) \ \text{if } x \in G^\circ;\qquad v_n(x, A) = \tilde v_n(x, A \cap S) + \sum_{i=1}^N \tilde v_{i,n}(x)\,\delta_{\{i\}}(A) \ \text{otherwise}. \qquad (5.9)$$

It is easy to check that for all $h \in C_b(G \times \tilde S)$,

$$\int_{G \times \tilde S} h(x, \tilde\alpha)\,\nu_n(dx, d\tilde\alpha) = \int_G \int_{\tilde S} h(x, \tilde\alpha)\,v_n(x, d\tilde\alpha)\,\eta_n(dx). \qquad (5.10)$$

Using (5.7), (5.10) and Theorem 2.4 of [22] we now have that there exists a filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t))$ (for the sake of simplicity we suppress the dependence of the filtered probability space on $n$ in our notation) on which is given an adapted $G$ valued stationary process $X_n(\cdot)$ with continuous paths such that the probability law of $X_n(0)$ is $\eta_n$ and for all $f \in C_0^\infty(G)$,

$$f(X_n(t)) - \int_0^t \left( \int_{\tilde S} (C_n f)(X_n(s), \tilde\alpha)\,v_n(X_n(s), d\tilde\alpha) \right) ds \qquad (5.11)$$

is an $\mathcal{F}_t$ martingale.

For $x \in G$ let $v_n^0(x, d\alpha) \in M_F(S)$ be defined as follows. For $A \in \mathcal{B}(S)$,

$$v_n^0(x, A) = v(x, A) \ \text{if } x \in G^\circ;\qquad v_n^0(x, A) = \tilde v_n(x, A) \ \text{if } x \in \partial G. \qquad (5.12)$$

Also, define for $i \in \{1, \ldots, N\}$, $v_{i,n}^0 : G \to [0, 1]$ as:

$$v_{i,n}^0(x) = \tilde v_{i,n}(x)\,I_{\partial G}(x). \qquad (5.13)$$

Note that for all $x \in G$,

$$v_n(x, A) = v_n^0(x, A) \ \text{if } A \in \mathcal{B}(S);\qquad v_n(x, A) = v_{i,n}^0(x)\,I_{\{i\}}(A) \ \text{if } A \subseteq \{1, \ldots, N\}. \qquad (5.14)$$

Rewriting (5.11) using (5.14) we have that for all $f \in C_0^\infty(G)$,

$$f(X_n(t)) - \int_0^t \left( \int_S (Lf)(X_n(s), \alpha)\,v_n^0(X_n(s), d\alpha) \right) ds - n \sum_{i=1}^N \int_0^t (D_i f)(X_n(s))\,v_{i,n}^0(X_n(s))\,ds$$

is an $\mathcal{F}_t$ martingale. Now define for $t \in [0, \infty)$, $i \in \{1, \ldots, N\}$,

$$\lambda_0^n(t) = \int_0^t v_n^0(X_n(s), S)\,ds,\qquad \lambda_i^n(t) = \int_0^t v_{i,n}^0(X_n(s))\,ds.$$

Also, for $x \in G$ define $\Lambda_n(x, \cdot) \in \mathcal{P}(S)$ as follows. For $A \in \mathcal{B}(S)$,

$$\Lambda_n(x, A) = \frac{v_n^0(x, A)}{v_n^0(x, S)} \ \text{if } v_n^0(x, S) \ne 0;\qquad \Lambda_n(x, A) = \pi_0(A) \ \text{otherwise}, \qquad (5.15)$$

where $\pi_0$ is an arbitrary probability measure on $S$. Then in this new notation we have that for all $f \in C_0^\infty(G)$,

$$f(X_n(t)) - \int_0^t \left( \int_S (Lf)(X_n(s), \alpha)\,\Lambda_n(X_n(s), d\alpha) \right) d\lambda_0^n(s) - n \sum_{i=1}^N \int_0^t (D_i f)(X_n(s))\,d\lambda_i^n(s)$$

is an $\mathcal{F}_t$ martingale. Clearly, for all $t \ge 0$, $\sum_{i=0}^N \lambda_i^n(t) = t$, a.s. Furthermore, for $i = 1, \ldots, N$,

$$\int_0^t I_{F_i}(X_n(s))\,d\lambda_i^n(s) = \lambda_i^n(t);\quad t \ge 0. \qquad (5.16)$$

To prove (5.16), note that it suffices to show that for all $t \ge 0$,

$$E\big(v_{i,n}^0(X_n(t))\big) = E\big(I_{F_i}(X_n(t))\,v_{i,n}^0(X_n(t))\big).$$

Also,

$$E\big(v_{i,n}^0(X_n(t))\big) = \int_G v_{i,n}^0(x)\,\eta_n(dx) = \nu_n(G \times \{i\}) = \nu_n(F_i \times \{i\}) = E\big(I_{F_i}(X_n(t))\,v_{i,n}^0(X_n(t))\big),$$

where the next to last equality follows from (5.6). This proves (5.16). Thus it follows that $(X_n(\cdot), \Lambda_n(X_n(\cdot)), \lambda_0^n(\cdot), \ldots, \lambda_N^n(\cdot))$ solves the $(\eta_n, L, G, (nD_i, F_i)_{i=1}^N)$ PCMP on $(\Omega, \mathcal{F}, P, (\mathcal{F}_t))$. From Lemma 5.4 it follows that $\lambda_0^n$ is a.s. strictly increasing. Now define $\tau_n : [0, \infty) \to [0, \infty)$ as:

$$\tau_n(t) = \inf\{s \ge 0 : \lambda_0^n(s) > t\};\quad t \in [0, \infty).$$

Also, for $t \ge 0$ let $\mathcal{G}_t^n = \mathcal{F}_{\tau_n(t)}$, $Z_n(t) = X_n(\tau_n(t))$, $\Phi_n(t) = \Lambda_n(Z_n(t))$ and, for $i = 1, \ldots, N$, $Y_{i,n}(t) = \lambda_i^n(\tau_n(t))$. Then from Proposition 5.5, $(Z_n(\cdot), \Phi_n(\cdot))$ solves the CCMP on $(\Omega, \mathcal{F}, P, (\mathcal{G}_t^n))$ with the corresponding boundary processes $\{Y_{i,n}(\cdot)\}_{i=1}^N$. Next note that since $\lambda_0^n(0) = 0$ and, for $0 \le s \le t < \infty$,

$$\lambda_0^n(t) - \lambda_0^n(s) = \int_s^t v_n^0(X_n(r), S)\,dr \le t - s, \quad \text{a.s.},$$

we have that the family $\{\lambda_0^n(\cdot)\}$ is tight in $C([0, \infty) ; [0, \infty))$. Next observe that from Theorem 5.2 there exists an enlargement $(\tilde\Omega, \tilde{\mathcal{F}}, \{\tilde{\mathcal{F}}_t\}, \tilde P)$ of the above space such that there is a $\tilde{\mathcal{F}}_t$ Wiener process $W(\cdot)$ defined on the enlarged space and

$$Z_n(t) = \Gamma\left( Z_n(0) + \int_0^{\cdot} \int_S b(Z_n(s), \alpha)\,\Phi_n(s)(d\alpha)\,ds + \int_0^{\cdot} \sigma(Z_n(s))\,dW(s) \right)(t), \qquad (5.17)$$

where the dependence of the Wiener process and the space on $n$ is again suppressed in the notation. Since the probability law of $Z_n(0)$ is the same as that of $X_n(0)$, i.e. $\eta_n$, and from (5.8) $\eta_n(A) \to \eta(A)$ as $n \to \infty$ for all $A \in \mathcal{B}(G)$, we

have that the family $\{Z_n(0)\}$ is tight. Furthermore, using the Lipschitz property of the Skorohod map, we have that for $0 \le s \le t < \infty$,

$$|Z_n(t) - Z_n(s)| \le Kr|t - s| + K\left| \int_s^t \sigma(Z_n(q))\,dW(q) \right|.$$

Recalling that $\sigma(\cdot)$ is bounded, we have as a result of the above observations that the family $\{Z_n(\cdot)\}$ is tight in $C([0, \infty) : G)$. Let $(Z(\cdot), \lambda_0(\cdot))$ be a weak limit point of the sequence $(Z_n(\cdot), \lambda_0^n(\cdot))$ and re-label the convergent subsequence as $(Z_n(\cdot), \lambda_0^n(\cdot))$. Observing that $\lambda_0^n(t) \le t$, a.s., for all $t \ge 0$ and $n \in \mathbb{N}$, and

$$E(\lambda_0^n(t)) = t\,\frac{\mu_0(G \times S)}{K_n} \to t \quad \text{as } n \to \infty,$$

we have that $\lambda_0(t) = t$ for all $t \ge 0$, a.s. Next observe that from the weak convergence of $Z_n(\cdot)$ and $\lambda_0^n(\cdot)$ we have that, as $n \to \infty$, $X_n(\cdot) = Z_n(\lambda_0^n(\cdot))$ converges weakly to $Z(\lambda_0(\cdot)) \equiv Z(\cdot)$. Since for each $n \in \mathbb{N}$, $X_n(\cdot)$ is stationary, we must have that the limit $Z(\cdot)$ is a stationary process too. Also, since the law of $Z(0)$ is $\eta$, we have that the stationary distribution is $\eta$. Next note that, from (5.12) and (5.15), for all $x \in G^\circ$, $\Lambda_n(x, d\alpha) = v(x, d\alpha)$. Also, from Theorem 4.2.1 [24] we have that for all $n \in \mathbb{N}$,

$$E\left( \int_0^\infty I_{\partial G}(Z_n(s))\,ds \right) = 0.$$

From these two observations, (5.17) and the Lipschitz property of the Skorohod map it follows that

$$Z_n(t) = \Gamma\left( Z_n(0) + \int_0^{\cdot} \int_S b(Z_n(s), \alpha)\,v(Z_n(s), d\alpha)\,ds + \int_0^{\cdot} \sigma(Z_n(s))\,dW(s) \right)(t),$$

for all $t \ge 0$, a.s. The Feller property of the family $\{P_x^v\}$ (see Theorem 2.9) now gives that $Z_n(\cdot)$ converges weakly to the solution $\tilde Z(\cdot)$ of

$$\tilde Z(t) = \Gamma\left( \tilde Z(0) + \int_0^{\cdot} \int_S b(\tilde Z(s), \alpha)\,v(\tilde Z(s), d\alpha)\,ds + \int_0^{\cdot} \sigma(\tilde Z(s))\,dW(s) \right)(t).$$

Since $Z_n(\cdot)$ also converges weakly to $Z(\cdot)$, we must have that $Z(\cdot)$ and $\tilde Z(\cdot)$ have the same distribution; in particular, $Z(\cdot)$ is a stationary Markov process with the stationary distribution $\eta$. This proves (i) and (ii) of the theorem. Finally, part (iii) follows from Theorem 3.2.
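The following numerical check (not part of the paper) may help fix ideas about the identity (5.2) in the simplest uncontrolled one-dimensional case $G = [0, \infty)$, $d_1 = 1$, with constant drift $-\mu$ and constant $\sigma$, so that the control plays no role. For this case the invariant law $\eta_v$ is Exponential$(2\mu/\sigma^2)$ and the boundary measure $\mu_1$ is $\mu$ times the Dirac mass at $0$; both identifications are classical facts about reflected Brownian motion with drift, used here without proof, and the test function and all names are ours.

```python
import numpy as np
from scipy.integrate import quad

# Illustration only (not from the paper): check the stationarity identity (5.2)
# for 1-D reflected Brownian motion with drift -mu on G = [0, infinity):
#   integral (0.5*sigma^2 f'' - mu f') d eta_v  +  mu * f'(0)  =  0,
# where eta_v is Exponential(lam) with lam = 2*mu/sigma**2 and mu_1 = mu * delta_0.

mu, sigma = 1.3, 0.7
lam = 2.0 * mu / sigma**2

def f(x):       # a smooth test function vanishing at infinity
    return np.exp(-x) * np.sin(x)

def fp(x):      # f'
    return np.exp(-x) * (np.cos(x) - np.sin(x))

def fpp(x):     # f''
    return -2.0 * np.exp(-x) * np.cos(x)

def generator_term(x):
    # (Lf)(x) integrated against the exponential invariant density
    return (0.5 * sigma**2 * fpp(x) - mu * fp(x)) * lam * np.exp(-lam * x)

interior, _ = quad(generator_term, 0.0, np.inf)
boundary = mu * fp(0.0)
print("interior + boundary =", interior + boundary)   # numerically zero
```

The printed sum is zero up to quadrature error, matching the balance between the interior term and the boundary term in (5.2).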

6 Stability Properties of the Constrained Controlled Diffusions

We would now like to obtain some stability properties of the class of processes obtained as a solution of an equation of the form (2.2). We will begin with the following definitions. Let $\beta : [0, \infty) \to \mathbb{R}^k$ be a measurable map such that

$$\int_0^t |\beta(s)|\,ds < \infty \quad \text{for all } t \in [0, \infty). \qquad (6.1)$$

Let $x \in G$. Define the trajectory $z : [0, \infty) \to \mathbb{R}^k$ as

$$z(t) = \Gamma\left( x + \int_0^{\cdot} \beta(s)\,ds \right)(t);\quad t \in [0, \infty). \qquad (6.2)$$

For $x \in G$ let $\mathcal{A}(x)$ be the collection of all absolutely continuous functions $z : [0, \infty) \to \mathbb{R}^k$ defined via (6.2) for some $\beta : [0, \infty) \to \mathcal{C}(\delta)$ which satisfies (6.1). For a fixed $x \in G$, we now define the hitting time to the origin function as follows:

$$T(x) = \sup_{z \in \mathcal{A}(x)} \inf\{t \in [0, \infty) : z(t) = 0\}. \qquad (6.3)$$

The following properties of the function $T(\cdot)$ have been proved in [1].

Lemma 6.1 (Lemma 3.3 [1]) There exist constants $c, C_1 \in (0, \infty)$, depending only on $K$ and $\delta$, such that the following holds.

(i) For all $x, y \in G$, $|T(x) - T(y)| \le C_1 |x - y|$.
(ii) For all $x \in G$, $T(x) \ge c|x|$.

(For instance, when $k = N = 1$, $G = [0, \infty)$ and $d_1 = n_1 = 1$, one has $\mathcal{C}(\delta) = (-\infty, -\delta]$ and $T(x) = x/\delta$.)

For $x \in G$ let $X_x(\cdot)$ be the solution of (2.2) with $X_x(0) = x$ on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ on which is given an admissible pair $(u(\cdot), W(\cdot))$. For a compact set $B \subseteq G$, define

$$\tau_B(x) = \inf\{t > 0 : X_x(t) \in B\}.$$

Also, for $a \in (0, \infty)$, define $B_a = \{y \in G : T(y) \le a\}$. Let $\alpha \in (0, \infty)$ be chosen so that the constant $\theta$, determined by $\alpha$, $k$, $C_1$, $K$ and $r$ as in the proof of Theorem 4.4 of [1], satisfies $\theta < 0$. The proof of the following lemma is contained in the proof of Theorem 4.4 of [1].

Lemma 6.2 (cf. Theorem 4.4 [1]) For all $x \in G$ and $t \in (0, \infty)$,

$$\sup P(\tau_B(x) > t) \le \frac{e^{\alpha T(x)}}{e^{(\alpha - \theta)}}\,e^{\theta t},$$

where the supremum on the left side above is taken over all possible solutions $X(\cdot)$ of (2.2) with $X(0) = x$, given on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ with some admissible pair $(u(\cdot), W(\cdot))$.

Remark 6.3 Theorem 4.4 of [1] is stated for uncontrolled constrained diffusion processes; however, the result (and most of the proof) holds in the generality considered here. The only place where the proof in [1] needs to be modified is as follows. The proof of Theorem 4.4 of [1] relies on Lemma 4.3 of the same paper. However, the proof of Lemma 4.3, presented in [1], at one place uses the Markov property of $X_x(\cdot)$. Thus we provide, in the appendix of this work, an alternate proof of this lemma which does not appeal to the Markov property and holds for the class of processes $X_x(\cdot)$ considered here.

Define

$$r_1 = c, \qquad (6.4)$$

where $c$ is as in Lemma 6.1. For $X_x(\cdot)$ as above, let

$$\tau_1(x) = \inf\{t > 0 : |X_x(t)| = r_1\}.$$

From Lemma 6.1(ii) it follows that for all $x \in G$,

$$\tau_1(x) \le \tau_B(x), \quad \text{a.s.} \qquad (6.5)$$

Proposition 6.4 Let $r_2 \in (r_1, \infty)$ be fixed. Then there exist $\kappa, \vartheta \in (0, \infty)$ such that for all $t \ge 0$,

$$\sup_{x \in G : |x| = r_2}\ \sup_{u(\cdot), W(\cdot)} P(\tau_1(x) > t) < \kappa e^{-\vartheta t},$$

where the inside supremum on the left side is taken over all possible solutions $X(\cdot)$ of (2.2) with $X(0) = x$, given on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ with some admissible pair $(u(\cdot), W(\cdot))$.

Proof: Observe that, for all $x \in G$ with $|x| = r_2$,

$$P(\tau_1(x) > t) \le P(\tau_B(x) > t) \le \frac{1}{e^{(\alpha - \theta)}}\,e^{\alpha T(x)}\,e^{\theta t} \le \frac{e^{\alpha C_1 r_2}}{e^{(\alpha - \theta)}}\,e^{\theta t},$$

where the first inequality follows from (6.5), the second from Lemma 6.2, while the third one follows from Lemma 6.1(i). This proves the lemma.

Lemma 6.5 Let $r_1$ be as defined in (6.4) and let $r_2 \in (r_1, \infty)$ be arbitrary. For $x \in G$ let $X^{x,(u,W)}(\cdot)$ denote the solution of (2.2), with $X^{x,(u,W)}(0) = x$, given on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$ and with an admissible pair $(u(\cdot), W(\cdot))$. Let

$$\tau(x) = \inf\{t \ge 0 : |X^{x,(u,W)}(t)| = r_1 \ \text{and}\ |X^{x,(u,W)}(s)| = r_2 \ \text{for some } s \in [0, t]\},$$

where we have suppressed the dependence of $\tau(x)$ on $(u(\cdot), W(\cdot))$ in the notation. Then there exists a $\delta' \in (0, \infty)$ such that

$$\inf_{x \in G : |x| = r_1}\ \inf_{u(\cdot), W(\cdot)} E(\tau(x)) > \delta', \qquad (6.6)$$

and there exist $\kappa', \vartheta' \in (0, \infty)$ such that for all $t \in [0, \infty)$,

$$\sup_{x \in G : |x| = r_1}\ \sup_{u(\cdot), W(\cdot)} P(\tau(x) > t) < \kappa' e^{-\vartheta' t}. \qquad (6.7)$$

Proof: For notational simplicity we denote $X^{x,(u,W)}(\cdot)$ by $X_x(\cdot)$. We first prove (6.7). Given $X_x(\cdot)$ as in the statement of the lemma, define

$$\tau^0(x) = \inf\{t \ge 0 : |X_x(t)| = r_2\}.$$

In view of Proposition 6.4 and Lemma 4.4 it suffices to show that there exist $\kappa'', \vartheta'' \in (0, \infty)$ such that for all $t \ge 0$,

$$\sup_{x \in G : |x| = r_1}\ \sup_{u(\cdot), W(\cdot)} P(\tau^0(x) > t) < \kappa'' e^{-\vartheta'' t}.$$

This will follow if we show that for all $k \in \mathbb{N}$,

$$\sup_{u(\cdot), W(\cdot)}\ \sup_{x \in G : |x| = r_1} P(\tau^0(x) > k) < e^{-\vartheta'' k}. \qquad (6.8)$$

Noting that

$$P(\tau^0(x) > k) = E\left( E\left( I_{[\tau^0(x) > k]} \mid \mathcal{F}_{k-1} \right) I_{[\tau^0(x) > k-1]} \right),$$

we have from Lemma 4.4 that in order to show (6.8) it suffices to show that there exists an $\epsilon \in (0, 1)$ such that

$$\sup_{x \in G : |x| \le r_2}\ \sup_{u(\cdot), W(\cdot)} P(\tau^0(x) > 1) < \epsilon. \qquad (6.9)$$

We will prove this by the method of contradiction. Suppose that (6.9) does not hold for any $\epsilon \in (0, 1)$. Then there exist $\{x_n, u_n(\cdot), W_n(\cdot), X_n(\cdot), Y^n(\cdot)\}_{n \ge 1}$

such that for each $n \in \mathbb{N}$: $x_n \in \{x \in G : |x| \le r_2\}$, $(u_n(\cdot), W_n(\cdot))$ is an admissible pair on some filtered probability space, $X_n(\cdot)$ and $Y^n(\cdot) = (Y_1^n(\cdot), \ldots, Y_N^n(\cdot))$ are obtained as a solution of (2.2) with $(u(\cdot), W(\cdot))$ replaced by $(u_n(\cdot), W_n(\cdot))$, $X_n(0) = x_n$, and

$$\lim_{n \to \infty} P(\tau^{0,n} > 1) = 1,$$

where $\tau^{0,n} = \inf\{t \ge 0 : |X_n(t)| = r_2\}$. Let $\{f_i\}$ be a countable dense set in the unit ball of $C(S)$. Define, for $t \ge 0$ and $j \ge 1$,

$$\beta_j^n(t) = \int_S f_j(\alpha)\,u_n(t, d\alpha).$$

Let $B$ denote the closed unit ball of $L^\infty[0, \infty)$ endowed with the metric $d(\cdot, \cdot)$ defined as follows. For $x, y \in B$,

$$d(x, y) = \sum_{M=1}^\infty \sum_{j=1}^\infty \frac{|\langle x - y, e_j^M \rangle_M|}{2^M\,2^j},$$

where for each $M \in \mathbb{N}$, $\{e_j^M(\cdot)\}_{j=1}^\infty$ is a CONS in $L^2[0, M]$ and $\langle \cdot, \cdot \rangle_M$ denotes the usual inner product in $L^2[0, M]$. Clearly $(B, d(\cdot, \cdot))$ is a compact metric space. Let $E$ denote the countable product of $B$, endowed with the product topology. Then $E$ is a compact Polish space and $\beta^n(\cdot) = (\beta_1^n(\cdot), \ldots)$ is an $E$ valued random variable. Recalling that $X_n(0) = x_n$ and $|x_n| \le r_2$, we see that the family $\{X_n(0)\}$ is tight. Furthermore, using the Lipschitz property of the Skorohod map and Condition 2.4(ii), we see that there exists $\tilde C < \infty$ such that for all $0 \le s \le t < \infty$,

$$|X_n(t) - X_n(s)| \le \tilde C\left[ |t - s| + \left| \int_s^t \sigma(X_n(q))\,dW_n(q) \right| \right].$$

Using the boundedness of $\sigma(\cdot)$ we now have that $\{X_n(\cdot)\}$ is tight in $C([0, \infty) : G)$. Next, choosing $g \in C_b^2(G)$ as in Lemma 4.2, we see that for $0 \le s \le t < \infty$,

$$\sum_{i=1}^N \big(Y_i^n(t) - Y_i^n(s)\big) \le \sum_{i=1}^N \int_s^t (D_i g)(X_n(q))\,dY_i^n(q) \le |g(X_n(t)) - g(X_n(s))| + \int_s^t |Lg(X_n(q), u_n(q))|\,dq + \left|\int_s^t \langle \nabla g(X_n(q)), \sigma(X_n(q))\,dW_n(q)\rangle\right| \le |g(X_n(t)) - g(X_n(s))| + C(t-s) + \left|\int_s^t \langle \nabla g(X_n(q)), \sigma(X_n(q))\,dW_n(q)\rangle\right|,$$

where $C$ is as defined in (5.3). Combining this with the fact that $Y^n(0) = 0$ we have that $\{Y^n(\cdot)\}$ is tight in $C([0, \infty) : [0, \infty)^N)$. Thus $(\beta^n(\cdot), X_n(\cdot), Y^n(\cdot))$ is a tight family of random variables with values in $\mathcal{E} = E \times C([0, \infty) : G) \times C([0, \infty) : [0, \infty)^N)$. Pick a weakly convergent subsequence of the above sequence and re-label it as the original sequence. By going to the Skorohod representation space $(\Omega^*, \mathcal{F}^*, P^*)$, however keeping the same notation for random variables for convenience, we have that there exists an $\mathcal{E}$ valued random element $(\alpha(\cdot), X(\cdot), Y(\cdot))$ such that $(\beta^n(\cdot), X_n(\cdot), Y^n(\cdot))$ converges almost surely to $(\alpha(\cdot), X(\cdot), Y(\cdot))$ as $n \to \infty$. Next note that for $f \in C_0^\infty(G)$ and $0 \le t_1 \le t_2 < \infty$,

$$E\left( \left( f(X_n(t_2)) - f(X_n(t_1)) - \sum_{i=1}^N \int_{t_1}^{t_2} (D_i f)(X_n(s))\,dY_i^n(s) - \int_{t_1}^{t_2} \left( \int_S (Lf)(X_n(s), \alpha)\,u_n(s, d\alpha) \right) ds \right) \psi\big(X_n(s_1), Y^n(s_1), \ldots, X_n(s_m), Y^n(s_m)\big) \right) = 0, \qquad (6.10)$$

where $m \in \mathbb{N}$, $0 \le s_1 \le \cdots \le s_m \le t_1$ and $\psi$ is an arbitrary continuous and bounded function defined on the obvious domain. From Lemma 2.4 of [6] we have that

$$\int_{t_1}^{t_2} (D_i f)(X_n(s))\,dY_i^n(s) \to \int_{t_1}^{t_2} (D_i f)(X(s))\,dY_i(s)$$

almost surely as $n \to \infty$. Also, from the Lipschitz property of the coefficients $b(\cdot)$ and $\sigma$ (cf. Condition 2.4(i), (iii)) and Lemma II.1.3 of [3], we have that

$$\int_{t_1}^{t_2} \left( \int_S (Lf)(X_n(s), \alpha)\,u_n(s, d\alpha) \right) ds \to \int_{t_1}^{t_2} \left( \int_S (Lf)(X(s), \alpha)\,u(s, d\alpha) \right) ds$$

almost surely as $n \to \infty$, where $u(\cdot)$ is a $U$ valued measurable process satisfying

$$\int_S f_i(\alpha)\,u(t, d\alpha) = \alpha_i(t)$$

for all $i \in \mathbb{N}$. Thus, taking the limit as $n \to \infty$ in (6.10), we have that

$$E\left( \left( f(X(t_2)) - f(X(t_1)) - \int_{t_1}^{t_2} \left( \int_S (Lf)(X(s), \alpha)\,u(s, d\alpha) \right) ds - \sum_{i=1}^N \int_{t_1}^{t_2} (D_i f)(X(s))\,dY_i(s) \right) \psi\big(X(s_1), Y(s_1), \ldots, X(s_m), Y(s_m)\big) \right) = 0, \qquad (6.11)$$


More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Lecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M.

Lecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M. Lecture 10 1 Ergodic decomposition of invariant measures Let T : (Ω, F) (Ω, F) be measurable, and let M denote the space of T -invariant probability measures on (Ω, F). Then M is a convex set, although

More information

Riesz Representation Theorems

Riesz Representation Theorems Chapter 6 Riesz Representation Theorems 6.1 Dual Spaces Definition 6.1.1. Let V and W be vector spaces over R. We let L(V, W ) = {T : V W T is linear}. The space L(V, R) is denoted by V and elements of

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Notions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy

Notions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy Banach Spaces These notes provide an introduction to Banach spaces, which are complete normed vector spaces. For the purposes of these notes, all vector spaces are assumed to be over the real numbers.

More information

Introduction to Random Diffusions

Introduction to Random Diffusions Introduction to Random Diffusions The main reason to study random diffusions is that this class of processes combines two key features of modern probability theory. On the one hand they are semi-martingales

More information

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2

More information

Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems

Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems Process-Based Risk Measures for Observable and Partially Observable Discrete-Time Controlled Systems Jingnan Fan Andrzej Ruszczyński November 5, 2014; revised April 15, 2015 Abstract For controlled discrete-time

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued

Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

Martingale Problems. Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi

Martingale Problems. Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi s Abhay G. Bhatt Theoretical Statistics and Mathematics Unit Indian Statistical Institute, Delhi Lectures on Probability and Stochastic Processes III Indian Statistical Institute, Kolkata 20 24 November

More information

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015)

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015) NEW ZEALAND JOURNAL OF MATHEMATICS Volume 46 (2016), 53-64 AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE Mona Nabiei (Received 23 June, 2015) Abstract. This study first defines a new metric with

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Weak solutions of mean-field stochastic differential equations

Weak solutions of mean-field stochastic differential equations Weak solutions of mean-field stochastic differential equations Juan Li School of Mathematics and Statistics, Shandong University (Weihai), Weihai 26429, China. Email: juanli@sdu.edu.cn Based on joint works

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Long Time Stability and Control Problems for Stochastic Networks in Heavy Traffic

Long Time Stability and Control Problems for Stochastic Networks in Heavy Traffic Long Time Stability and Control Problems for Stochastic Networks in Heavy Traffic Chihoon Lee A dissertation submitted to the faculty of the University of North Carolina at Chapel Hill in partial fulfillment

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

1 The Observability Canonical Form

1 The Observability Canonical Form NONLINEAR OBSERVERS AND SEPARATION PRINCIPLE 1 The Observability Canonical Form In this Chapter we discuss the design of observers for nonlinear systems modelled by equations of the form ẋ = f(x, u) (1)

More information

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1.

From now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1. Chapter 1 Metric spaces 1.1 Metric and convergence We will begin with some basic concepts. Definition 1.1. (Metric space) Metric space is a set X, with a metric satisfying: 1. d(x, y) 0, d(x, y) = 0 x

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

Basic Definitions: Indexed Collections and Random Functions

Basic Definitions: Indexed Collections and Random Functions Chapter 1 Basic Definitions: Indexed Collections and Random Functions Section 1.1 introduces stochastic processes as indexed collections of random variables. Section 1.2 builds the necessary machinery

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

4th Preparation Sheet - Solutions

4th Preparation Sheet - Solutions Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space

More information

The Heine-Borel and Arzela-Ascoli Theorems

The Heine-Borel and Arzela-Ascoli Theorems The Heine-Borel and Arzela-Ascoli Theorems David Jekel October 29, 2016 This paper explains two important results about compactness, the Heine- Borel theorem and the Arzela-Ascoli theorem. We prove them

More information

PCA sets and convexity

PCA sets and convexity F U N D A M E N T A MATHEMATICAE 163 (2000) PCA sets and convexity by Robert K a u f m a n (Urbana, IL) Abstract. Three sets occurring in functional analysis are shown to be of class PCA (also called Σ

More information

Krzysztof Burdzy Robert Ho lyst Peter March

Krzysztof Burdzy Robert Ho lyst Peter March A FLEMING-VIOT PARTICLE REPRESENTATION OF THE DIRICHLET LAPLACIAN Krzysztof Burdzy Robert Ho lyst Peter March Abstract: We consider a model with a large number N of particles which move according to independent

More information

SDE Coefficients. March 4, 2008

SDE Coefficients. March 4, 2008 SDE Coefficients March 4, 2008 The following is a summary of GARD sections 3.3 and 6., mainly as an overview of the two main approaches to creating a SDE model. Stochastic Differential Equations (SDE)

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations.

Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Wenqing Hu. 1 (Joint work with Michael Salins 2, Konstantinos Spiliopoulos 3.) 1. Department of Mathematics

More information

1 Topology Definition of a topology Basis (Base) of a topology The subspace topology & the product topology on X Y 3

1 Topology Definition of a topology Basis (Base) of a topology The subspace topology & the product topology on X Y 3 Index Page 1 Topology 2 1.1 Definition of a topology 2 1.2 Basis (Base) of a topology 2 1.3 The subspace topology & the product topology on X Y 3 1.4 Basic topology concepts: limit points, closed sets,

More information

+ 2x sin x. f(b i ) f(a i ) < ɛ. i=1. i=1

+ 2x sin x. f(b i ) f(a i ) < ɛ. i=1. i=1 Appendix To understand weak derivatives and distributional derivatives in the simplest context of functions of a single variable, we describe without proof some results from real analysis (see [7] and

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Integration on Measure Spaces

Integration on Measure Spaces Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of

More information

Non-linear wave equations. Hans Ringström. Department of Mathematics, KTH, Stockholm, Sweden

Non-linear wave equations. Hans Ringström. Department of Mathematics, KTH, Stockholm, Sweden Non-linear wave equations Hans Ringström Department of Mathematics, KTH, 144 Stockholm, Sweden Contents Chapter 1. Introduction 5 Chapter 2. Local existence and uniqueness for ODE:s 9 1. Background material

More information

Convergence of Markov Processes. Amanda Turner University of Cambridge

Convergence of Markov Processes. Amanda Turner University of Cambridge Convergence of Markov Processes Amanda Turner University of Cambridge 1 Contents 1 Introduction 2 2 The Space D E [, 3 2.1 The Skorohod Topology................................ 3 3 Convergence of Probability

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Random Times and Their Properties

Random Times and Their Properties Chapter 6 Random Times and Their Properties Section 6.1 recalls the definition of a filtration (a growing collection of σ-fields) and of stopping times (basically, measurable random times). Section 6.2

More information

CHAPTER I THE RIESZ REPRESENTATION THEOREM

CHAPTER I THE RIESZ REPRESENTATION THEOREM CHAPTER I THE RIESZ REPRESENTATION THEOREM We begin our study by identifying certain special kinds of linear functionals on certain special vector spaces of functions. We describe these linear functionals

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

WHY SATURATED PROBABILITY SPACES ARE NECESSARY

WHY SATURATED PROBABILITY SPACES ARE NECESSARY WHY SATURATED PROBABILITY SPACES ARE NECESSARY H. JEROME KEISLER AND YENENG SUN Abstract. An atomless probability space (Ω, A, P ) is said to have the saturation property for a probability measure µ on

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence

Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and Operations

More information

CHAPTER 6. Differentiation

CHAPTER 6. Differentiation CHPTER 6 Differentiation The generalization from elementary calculus of differentiation in measure theory is less obvious than that of integration, and the methods of treating it are somewhat involved.

More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

Notes on Measure Theory and Markov Processes

Notes on Measure Theory and Markov Processes Notes on Measure Theory and Markov Processes Diego Daruich March 28, 2014 1 Preliminaries 1.1 Motivation The objective of these notes will be to develop tools from measure theory and probability to allow

More information

Class Notes for MATH 255.

Class Notes for MATH 255. Class Notes for MATH 255. by S. W. Drury Copyright c 2006, by S. W. Drury. Contents 0 LimSup and LimInf Metric Spaces and Analysis in Several Variables 6. Metric Spaces........................... 6.2 Normed

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

Feller Processes and Semigroups

Feller Processes and Semigroups Stat25B: Probability Theory (Spring 23) Lecture: 27 Feller Processes and Semigroups Lecturer: Rui Dong Scribe: Rui Dong ruidong@stat.berkeley.edu For convenience, we can have a look at the list of materials

More information

Positive recurrence of reflecting Brownian motion in three dimensions

Positive recurrence of reflecting Brownian motion in three dimensions Positive recurrence of reflecting Brownian motion in three dimensions Maury Bramson 1, J. G. Dai 2 and J. M. Harrison 3 First Draft: November 19, 2008, revision: May 23, 2009 Abstract Consider a semimartingale

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

On Stopping Times and Impulse Control with Constraint

On Stopping Times and Impulse Control with Constraint On Stopping Times and Impulse Control with Constraint Jose Luis Menaldi Based on joint papers with M. Robin (216, 217) Department of Mathematics Wayne State University Detroit, Michigan 4822, USA (e-mail:

More information

Stable Lévy motion with values in the Skorokhod space: construction and approximation

Stable Lévy motion with values in the Skorokhod space: construction and approximation Stable Lévy motion with values in the Skorokhod space: construction and approximation arxiv:1809.02103v1 [math.pr] 6 Sep 2018 Raluca M. Balan Becem Saidani September 5, 2018 Abstract In this article, we

More information

Invariant measures for iterated function systems

Invariant measures for iterated function systems ANNALES POLONICI MATHEMATICI LXXV.1(2000) Invariant measures for iterated function systems by Tomasz Szarek (Katowice and Rzeszów) Abstract. A new criterion for the existence of an invariant distribution

More information

Commutative Banach algebras 79

Commutative Banach algebras 79 8. Commutative Banach algebras In this chapter, we analyze commutative Banach algebras in greater detail. So we always assume that xy = yx for all x, y A here. Definition 8.1. Let A be a (commutative)

More information

1.2 Fundamental Theorems of Functional Analysis

1.2 Fundamental Theorems of Functional Analysis 1.2 Fundamental Theorems of Functional Analysis 15 Indeed, h = ω ψ ωdx is continuous compactly supported with R hdx = 0 R and thus it has a unique compactly supported primitive. Hence fφ dx = f(ω ψ ωdy)dx

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

THEOREMS, ETC., FOR MATH 516

THEOREMS, ETC., FOR MATH 516 THEOREMS, ETC., FOR MATH 516 Results labeled Theorem Ea.b.c (or Proposition Ea.b.c, etc.) refer to Theorem c from section a.b of Evans book (Partial Differential Equations). Proposition 1 (=Proposition

More information