Ergodic Control of a Singularly Perturbed Markov Process in Discrete Time with General State and Compact Action Spaces


Appl Math Optim 38: 261–281 (1998)
© 1998 Springer-Verlag New York Inc.

Ergodic Control of a Singularly Perturbed Markov Process in Discrete Time with General State and Compact Action Spaces

T. R. Bielecki¹ and L. Stettner²

¹ Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, IL 60607, USA
² Institute of Mathematics, Polish Academy of Sciences, Warszawa, Poland

Communicated by D. Ocone

Abstract. Ergodic control of singularly perturbed Markov chains with general state and compact action spaces is considered. A new method is given for the characterization of the limit of invariant measures of the perturbed chains as the perturbation parameter goes to zero. It is also demonstrated that the limit control principle is satisfied under natural ergodicity assumptions about the controlled Markov chains. These assumptions allow for the presence of transient states, a situation that has not been considered before in the literature on control of singularly perturbed Markov processes with long-run-average cost functionals.

Key Words. Markov process, Invariant measure, Singular perturbation, Ergodic control, Limit control principle.

AMS Classification. Primary 60J05, 47A55, Secondary 90C.

1. Introduction

Numerous dynamic optimization problems are distinguished by the presence of so-called strong and weak interactions characterizing the dynamics of the problems (see,

Present address: Department of Mathematics, Northeastern Illinois University, Chicago, IL 60625, USA.

e.g., [DQ], [PG], and [G]). In discrete time many such problems are nicely modeled with the help of nearly decomposable, controlled Markov chains [BF], [PG], [KT1], [KT2]. The near decomposability of a Markov process is typically expressed in terms of an appropriately constructed infinitesimal generator of the process. The construction takes the form of a perturbation of the infinitesimal generator of some other Markov process [BS1], [BS2]. In the case of Markov processes with discrete time this translates directly into a perturbation of the corresponding transition kernel.

From another perspective, a large number of dynamic optimization problems are characterized by inadequate knowledge of their parameters, most commonly the parameters of the problem's dynamics. It is often convenient to capture this inadequacy by incorporating a perturbation of the parameters into the problem description. Again, in the case of controlled, discrete time Markov processes, the perturbation takes the form of a perturbation of the transition kernel.

In both of the above situations the perturbation would typically be small in applications. Smallness of the perturbation is usually encoded in the form of a single parameter, a positive real number close to zero. It is therefore a legitimate question of practical importance to ask how the optimization problem behaves as the perturbation parameter goes to zero. The study of this question has been conducted by many authors in recent years ([B], [K], [BF], [ABF], [BS1], [BS2], and [PG], among others). The problem of the asymptotic behavior of a controlled, perturbed Markov process in discrete time when the perturbation goes to zero is particularly interesting and challenging when one considers the long-run-average cost criterion and the perturbation is singular [BF], [BS1], [BS2].
Singularity of the perturbation in a discrete time framework basically means that the perturbation reduces the number of ergodic classes of the unperturbed problem. In this paper we extend the results of [BF] to the case of discrete time, singularly perturbed, controlled Markov processes with general state and compact action spaces. In particular, we do not impose any restrictions on the measurable structure of the state space. Moreover, we allow for the presence of transient classes for both the unperturbed and the perturbed chains. In our opinion this is an important step forward in the asymptotic analysis of singularly perturbed Markov processes.

The proof of our main result, Theorem 2.1, is based on the approach taken in the book by Korolyuk and Turbin (see Section 6.6 of [KT1]). Our contribution is that we extend Korolyuk and Turbin's results in two directions: we consider a Markov chain on a general, measurable state space, and we allow for the presence of transient states.

The paper is organized as follows. In Section 2 we formulate a model for a singularly perturbed ergodic Markov chain, provide two motivating examples of applications of our model, and state Theorem 2.1, which provides an asymptotic expansion of the chain's invariant measure. In Section 3 we introduce a model for a controlled singularly perturbed Markov chain and motivate it by referring to controlled versions of the two examples introduced in Section 2. Then we apply Theorem 2.1 in order to verify the validity of the so-called limit control principle for the singularly perturbed, controlled Markov chain considered in that section. We also provide a result on approximation of the limit control problem by an appropriately discretized one. Section 4 contains some final remarks and suggestions for future research. In the Appendix we provide a proof of Theorem 2.1.

Throughout the paper we use the following notation: (E, 𝓔) is a measurable space. For any transition function Q(x, A) on (E, 𝓔) such that, for A ∈ 𝓔, x → Q(x, A) is measurable and, for x ∈ E, Q(x, ·) is countably additive, we let ‖Q(x, ·)‖_var denote the variation norm of Q(x, ·) and define

    ‖Q‖ := sup_{x∈E} ‖Q(x, ·)‖_var.

For any f ∈ b𝓔 (the bounded, measurable, real-valued functions on (E, 𝓔)) we define

    Qf(x) = Q(x, f) := ∫_E f(y) Q(x, dy), x ∈ E.

The identity kernel I(x, A) is defined by

    I(x, A) := 1 if x ∈ A, and I(x, A) := 0 if x ∉ A.

2. Asymptotic Expansion of Invariant Measures for a Singularly Perturbed Doeblin Process

Let x = (x_n)_{n≥0} be a Markov chain on (E, 𝓔) with probability transition kernel P(·, ·). We impose the following assumption on x:

Assumption (D). The transition kernel P(·, ·) satisfies the Doeblin condition (D) (see p. 192 of [D]). That is, there are a finite measure ϕ, an ε > 0, and a positive integer n such that, for any set A ∈ 𝓔, if ϕ(A) < ε then sup_{x∈E} P^n(x, A) ≤ 1 − ε; or, equivalently, the operator P, considered as an operator on the space of bounded Borel functions on E, is quasi-compact [N, Section V-3].

The following consequences of (D) hold:

C1. There exists a finite number, say r, of disjoint invariant sets E_i, and invariant probability measures π_i with π_i(E_i) = 1, for i = 1, 2, ..., r.

C2. Let E_0 := E \ ⋃_{i=1}^r E_i; then

    sup_{x∈E} E_x{T_{⋃_{i=1}^r E_i}} < ∞,

where T_A := inf{n ≥ 0 : x_n ∈ A} for A ∈ 𝓔.

C3. Let, for x ∈ E and A ∈ 𝓔,

    Π(x, A) := Σ_{j=1}^r ρ(x, E_j) π_j(A), where ρ(x, E_j) := P_x{∃ n : x_n ∈ E_j}.
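On a finite state space the variation-norm conventions above are easy to make concrete. The following minimal sketch (an illustrative 3-point space, all numbers hypothetical) computes ‖Q‖ = sup_x ‖Q(x, ·)‖_var for the kernel difference B = P − I that appears throughout the paper:

```python
def var_norm(mu):
    """Total-variation norm of a signed measure given as a vector of masses."""
    return sum(abs(m) for m in mu)

def kernel_norm(Q):
    """||Q|| = sup over x of ||Q(x, .)||_var, i.e. the largest row norm."""
    return max(var_norm(row) for row in Q)

# Difference B = P - I of a probability kernel and the identity on a 3-point space.
P = [[0.5, 0.5, 0.0], [0.1, 0.8, 0.1], [0.0, 0.3, 0.7]]
B = [[P[i][j] - (1.0 if i == j else 0.0) for j in range(3)] for i in range(3)]
print(kernel_norm(B))  # 1.0: row x contributes 2 * (1 - P(x, {x}))
```

Note that ‖B‖ ≤ 2 always holds for B = P − I with P stochastic, which is what makes the constant a = max{‖B‖, 1} in Theorem 2.1 finite.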

Then there exists N_0 > 0 such that for all n ≥ N_0 the following operator is well defined:

    Δ_n(x, ·) := [I(x, ·) + Π(x, ·) − n^{−1} Σ_{i=0}^{n−1} P^i(x, ·) + n^{−1}(I − P)(x, ·)]^{−1}, for x ∈ E.

We also have ‖Δ_n‖ < ∞ for n ≥ N_0. Now let

    Q_n(x, A) := I(x, A) + n^{−1} Σ_{s=1}^{n−1} Σ_{i=0}^{s−1} (P^i − Π)(x, A)

for n ≥ 1, x ∈ E, A ∈ 𝓔. Fix n_0 ≥ N_0, and define the quasi-potential R_0 := Δ_{n_0} Q_{n_0}. Then it is easy to see that

    R_0(I − P + Π) = I,

and therefore, since R_0 Π = Π, we have a very important identity:

    R_0 P = R_0 + Π − I.

Note that if P is aperiodic then one can choose N_0 = n_0 = 1 in the above, and R_0 coincides with the 0-potential of P.

We now introduce a perturbed chain x^ε = (x_n^ε)_{n≥0}. For that, we define the probability transition kernel P^ε(·, ·) for x^ε by

    P^ε(·, ·) := P(·, ·) + εB(·, ·), where B(·, ·) := P̃(·, ·) − I(·, ·),

P̃(·, ·) is a probability transition kernel on (E, 𝓔), and 0 < ε < ε_0 for some ε_0 > 0.

The following two examples illustrate the applicability of the above model.

Example 2.1. This is a discrete time version of the continuous time situation considered in Section 5.9 of [SZ], where a flexible manufacturing system is studied with machine states admitting strong and weak interactions. Capacity dynamics of a manufacturing machine are modeled by a Markov chain with transition operator P^ε(·, ·) := P(·, ·) + εB(·, ·). In this formula P(·, ·) corresponds to (I + Q^(2)) of [SZ] and models the strong (or fast) interactions between the states of the machine, while B(·, ·) corresponds to Q^(1) of [SZ] and models the weak (or slow) interactions. The discrete time counterpart of the ergodicity assumptions about Q^(2) made in [SZ] implies that the Doeblin condition is satisfied for P(·, ·).

Example 2.2. This example is motivated by Altman and Gaitsgory [AG].
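For a finite, aperiodic chain one can take n_0 = 1 above, and the quasi-potential reduces to the fundamental kernel (I − P + Π)^{−1}; the identity R_0P = R_0 + Π − I then becomes a finite matrix computation. A small numerical sketch (a 2-state chain chosen for illustration only; this reading of R_0 is an assumption made for the example):

```python
# 2-state aperiodic chain with invariant law pi = (2/3, 1/3); Pi is the
# projector whose rows both equal pi.  Take R0 = (I - P + Pi)^(-1) and check
# the identity R0 P = R0 + Pi - I together with R0 Pi = Pi.
P  = [[0.9, 0.1], [0.2, 0.8]]
pi = [2.0 / 3.0, 1.0 / 3.0]
Pi = [pi[:], pi[:]]
I2 = [[1.0, 0.0], [0.0, 1.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B, s=1.0):
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

M = add(add(I2, P, -1.0), Pi)            # I - P + Pi
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
R0 = [[ M[1][1] / det, -M[0][1] / det],  # explicit 2x2 inverse of M
      [-M[1][0] / det,  M[0][0] / det]]

lhs = mul(R0, P)                          # R0 P
rhs = add(add(R0, Pi), I2, -1.0)          # R0 + Pi - I
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)  # numerically zero (up to rounding): R0 P = R0 + Pi - I
```

The same computation shows R_0 Π = Π here, which is exactly the property used to pass from R_0(I − P + Π) = I to the identity.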

In queueing networks modeling, the relationship between the number of customers X_{n+1} at time n + 1 and the number of customers X_n at time n can be conveniently expressed as X_{n+1} = f(X_n, Y_n^ε), where (Y_n^ε)_{n≥0} is a Markov chain of random perturbations whose average time between transitions is of order ε (fast time transitions), corresponding to fast changes in routing and flow control, compared with the ordinary time scale n = 0, 1, 2, ... of the network itself. Such a model gives rise to the singular perturbation form of the transition operator for the Markov chain Z_n^ε = (X_n, Y_n^ε), where appropriate ergodicity assumptions hold.

To illustrate the point we consider a very simple situation in which Y_n^ε lives on a finite state space. (In general, the process (Y_n^ε)_{n≥0} should be considered on a general measurable state space.) Let a, b be two nonnegative integers. Also, let X_n ∈ {a, b}, Y_n^ε ∈ {0, 1},

    f(a, 0) = f(b, 0) = b, f(a, 1) = f(b, 1) = a, Prob(Y_{n+1}^ε = i | Y_n^ε = j) = 1/2, i, j ∈ {0, 1}.

Then the transition matrix for (Z_n^ε)_{n≥0} takes the perturbed form P(·, ·) + εB(·, ·) described above (recall that (Y_n^ε)_{n≥0} is the fast chain), where the states of Z_n^ε are ordered as (a, 0), (a, 1), (b, 0), (b, 1).

A crucial role in the asymptotic analysis of x^ε will be played by P̄, where

    P̄(·, ·) := ΠP̃Π(·, ·).

Clearly, P̄ is a probability transition kernel on (E, 𝓔), and we have, for x ∈ E_i,

    P̄(x, E_j) = ∫_E ∫_E Π(y, E_j) P̃(z, dy) Π(x, dz)
              = ∫_E ∫_{E_j} Π(y, E_j) P̃(z, dy) π_i(dz) + ∫_E ∫_{E_0} Π(y, E_j) P̃(z, dy) π_i(dz)
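The limiting behaviour that Theorem 2.1 below makes precise can be illustrated on a generic finite-state model in the spirit of these examples (a hypothetical 4-state chain, not the matrix of Example 2.2): under the fast kernel P there are two ergodic classes, the perturbation ε(P̃ − I) links them, and the invariant measure of P^ε approaches the mixture Σ_i µ(i)π_i as ε → 0.

```python
# Two ergodic classes under the fast kernel P: {0,1} with pi_1 = (1/2, 1/2)
# and {2,3} with pi_2 = (3/7, 4/7); Pt is the slow kernel of the perturbation.
P  = [[0.5, 0.5, 0.0, 0.0], [0.5, 0.5, 0.0, 0.0],
      [0.0, 0.0, 0.2, 0.8], [0.0, 0.0, 0.6, 0.4]]
Pt = [[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 1.0, 0.0],
      [1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]]

def stationary(M, iters=20000):
    """Invariant law of a 4x4 stochastic matrix by power iteration."""
    pi = [0.25] * 4
    for _ in range(iters):
        pi = [sum(pi[i] * M[i][j] for i in range(4)) for j in range(4)]
    return pi

def perturbed(eps):
    """P_eps = P + eps * (Pt - I)."""
    return [[P[i][j] + eps * (Pt[i][j] - (1.0 if i == j else 0.0))
             for j in range(4)] for i in range(4)]

# Aggregated 2-class chain here: p(1,2) = 1, p(2,1) = 3/7, so mu = (3/10, 7/10),
# and the limit measure is gamma_0 = (mu(1) pi_1, mu(2) pi_2).
gamma0 = [0.3 * 0.5, 0.3 * 0.5, 0.7 * 3.0 / 7.0, 0.7 * 4.0 / 7.0]

def dist(eps):
    pe = stationary(perturbed(eps))
    return max(abs(pe[k] - gamma0[k]) for k in range(4))

print(dist(0.02), dist(0.002))  # the distance shrinks roughly linearly in eps
```

The roughly linear decay of the distance in ε is what the expansion π^ε = Σ_m ε^m γ_m predicts: the leading correction is of order ε.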

for x ∈ E_i,

    P̄(x, E_j) = ∫_E ( P̃(z, E_j) + ∫_{E_0} ρ(y, E_j) P̃(z, dy) ) π_i(dz) =: p(i, j),    (2.1)

for x ∈ E_0,

    P̄(x, E_j) = Σ_{i=1}^r [ ∫_E ( P̃(z, E_j) + ∫_{E_0} ρ(y, E_j) P̃(z, dy) ) π_i(dz) ] ρ(x, E_i) =: p(x, j),    (2.2)

    P̄(x, E_0) = ∫_E ∫_E Π(y, E_0) P̃(z, dy) Π(x, dz) = 0 =: p(x, 0),    (2.3)

with i, j = 1, 2, ..., r. Let Ê := {1, 2, ..., r}, let Ê_e := Ê ∪ E_0, and let p(·, ·) be as defined in (2.1)–(2.3).

Definition 2.1. (a) A Markov chain x̄ = (x̄_n)_{n≥0} on Ê, whose transition probability matrix is p = [p(i, j)]_{i,j∈Ê}, is called aggregated in a strict sense. (b) A Markov chain x̄^e = (x̄_n^e)_{n≥0} on (Ê_e, 𝓔_e), whose probability transition kernel is p(·, ·), is called aggregated in an extended sense.

Clearly, E_0 is a transient class for x̄^e, and x̄^e restricted to Ê coincides with x̄. In what follows we shall need the following assumption:

Assumption (I). There exists a unique invariant probability measure µ for x̄; that is, µ is the unique row vector such that (i) µ(i) ≥ 0, i = 1, 2, ..., r, Σ_{i=1}^r µ(i) = 1, and (ii) Σ_{i=1}^r µ(i) p(i, j) = µ(j), j = 1, 2, ..., r.

It is not difficult to see that the Doeblin condition is satisfied for any finite state space Markov chain. We may thus conclude that the quasi-potential matrix r_0 is well defined for x̄ by (compare with the discussion in C3)

    r_0 := ρ_{n_0}^{−1} q_{n_0},
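The aggregation formulas (2.1)–(2.3) are straightforward to evaluate on a finite state space. The following sketch (an illustrative 5-state chain with two ergodic classes and one transient state; all numbers hypothetical, not from the paper) computes the absorption probabilities ρ(x, E_i) and the aggregated matrix p(i, j), and checks that p is stochastic:

```python
# Illustrative chain: classes E1 = {0,1}, E2 = {2,3}, transient E0 = {4}.
# pi1, pi2 are the invariant measures of the fast kernel P on each class.
pi1 = {0: 0.5, 1: 0.5}
pi2 = {2: 3.0 / 7.0, 3: 4.0 / 7.0}

# Slow kernel Pt (the probability kernel behind the perturbation B = Pt - I),
# stored sparsely as row -> {target: probability}.
Pt = {0: {4: 1.0}, 1: {2: 1.0}, 2: {0: 1.0}, 3: {3: 1.0}, 4: {4: 1.0}}

# Absorption probabilities from the transient state 4 under the fast kernel P:
# P(4, .) puts mass 0.5 on E1, 0.3 on E2, 0.2 back on state 4, so
# rho(4, E1) solves rho = 0.5 + 0.2 * rho, and similarly for E2.
rho = {1: 0.5 / 0.8, 2: 0.3 / 0.8}   # rho(4, E1) = 0.625, rho(4, E2) = 0.375

E = {1: {0, 1}, 2: {2, 3}}

def hits(row, cls):
    """Pt(x, E_cls) + sum over transient y of rho(y, E_cls) * Pt(x, {y})."""
    direct = sum(prob for y, prob in row.items() if y in E[cls])
    via_E0 = sum(rho[cls] * prob for y, prob in row.items() if y == 4)
    return direct + via_E0

# Formula (2.1): p(i, j) = sum_x pi_i(x) [Pt(x, E_j) + int_E0 rho(., E_j) dPt].
p = {(i, j): sum(pi_i[x] * hits(Pt[x], j) for x in pi_i)
     for i, pi_i in ((1, pi1), (2, pi2)) for j in (1, 2)}
print(p)  # each row (p[i,1], p[i,2]) sums to 1: (2.1) yields a stochastic matrix
```

The design point is that mass which the slow kernel sends into the transient set E_0 is redistributed over the classes via ρ, which is exactly the role of the inner ∫_{E_0} term in (2.1) and (2.2).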

where n_0 ≥ 1 is sufficiently large, and ρ_{n_0}, q_{n_0} are the matrices given by

    ρ_{n_0}(i, j) = δ(i, j) + µ(j) − n_0^{−1} Σ_{n=0}^{n_0−1} p^{(n)}(i, j) + n_0^{−1} (δ(i, j) − p(i, j)),

    q_{n_0}(i, j) = δ(i, j) + n_0^{−1} Σ_{s=0}^{n_0−1} Σ_{k=0}^{s−1} (p^{(k)}(i, j) − µ(j)),    i, j = 1, ..., r,

where ρ_{n_0}^{(−1)}(i, j) is the (i, j)th entry of the inverse ρ_{n_0}^{−1} of ρ_{n_0}, and where p^{(n)}(i, j) is the (i, j)th entry of p^n. We have that

    r_0(I_r − p) = I_r − µ̄,    (2.4)

where I_r is the r × r identity matrix, and µ̄ is the r × r matrix each of whose r rows equals (µ(1), µ(2), ..., µ(r)):

    µ̄ := [ µ(1) µ(2) ··· µ(r) ; µ(1) µ(2) ··· µ(r) ; ... ; µ(1) µ(2) ··· µ(r) ].

Before we state Theorem 2.1 below, we need to introduce some more notation. For A ∈ 𝓔, i = 1, 2, ..., r, and m = 0, 1, 2, ..., we let

    β_m^i(A) := ∫_{E_i} B(R_0 B)^m (x, A) π_i(dx)

and, additionally, for j = 1, 2, ..., r, we let

    β_m(i, j) := β_m^i(E_j) + ∫_{E_0} ρ(y, E_j) β_m^i(dy).

Now we inductively define matrices

    β^{(1)} := β_1,
    β^{(2)} := β_2 + β^{(1)} r_0 β_1,
    β^{(m)} := β_m + β^{(1)} r_0 β_{m−1} + ··· + β^{(m−2)} r_0 β_2 + β^{(m−1)} r_0 β_1,

and row vectors

    µ^{(0)} := µ,    µ^{(m)} := µ^{(0)} β^{(m)} r_0,    m ≥ 1.

Finally, we let, for A ∈ 𝓔 and m ≥ 1,

    γ_0(A) := Σ_{i=1}^r µ^{(0)}(i) π_i(A),

    γ_m(A) := Σ_{i=1}^r [ µ^{(0)}(i) π_i (BR_0)^m + µ^{(1)}(i) π_i (BR_0)^{m−1} + ··· + µ^{(m)}(i) π_i ](A).

The following theorem is inspired by Theorem 6.8 in [KT1].

Theorem 2.1. Let Assumptions (D) and (I) be satisfied. Assume also that there exists a unique invariant measure π^ε for P^ε. Then, for ε < min{ε_0, (3a²bc)^{−1}}, where a = max{‖B‖, 1}, b = max{‖R_0‖, 1}, c = max{‖r_0‖, 1}, we have

    ‖γ_m‖_var ≤ (3a²bc)^m,    m ≥ 1,

and

    π^ε(·) = Σ_{m=0}^∞ ε^m γ_m(·).    (2.5)

Proof. See the Appendix.

3. Ergodic Control of Singularly Perturbed Markov Chains

In this section we assume that the transition kernels P and P̃ depend on a control parameter a ∈ U, where U is a compact metric space. We denote this dependence by P^a and P̃^a, respectively. Let 𝓐 := B(E, U), the space of measurable functions from E to U. For u ∈ 𝓐, with a slight abuse of notation, we denote by P^u and P̃^u the controlled kernels such that, for each x ∈ E,

    P^u(x, ·) = P^{u(x)}(x, ·)    and    P̃^u(x, ·) = P̃^{u(x)}(x, ·).

We suppose for the time being that the dependence of P^a and P̃^a on a ∈ U is such that for every u ∈ 𝓐 the controlled kernels P^u and P̃^u are in fact probability transition kernels. In particular this is true under assumption (A6), introduced later in the text. The functions u ∈ 𝓐 are called stationary Markovian controls. We consider such controls only. This does not reduce generality since, in view of the

ergodic assumptions made below, optimal nonanticipating controls can be found among stationary Markovian controls (see [DY] for example). We impose the following assumptions on {P^u, u ∈ 𝓐}:

(A1) There exists a finite sequence E_1, ..., E_r of disjoint subsets of E that do not depend on u and are invariant for each P^u; that is, for x ∈ E_i and a ∈ U, P^a(x, E_i) = 1, for i = 1, 2, ..., r.

(A2) Let E_0 := E \ ⋃_{i=1}^r E_i. Then

    sup_{x∈E_0} sup_{a∈U} P^a(x, E_0) < 1.

(A3) There exist N < ∞ and 0 < ρ < 1 such that for every u ∈ 𝓐 and i = 1, 2, ..., r there are n ≤ N and disjoint sets C_1^u(i), ..., C_n^u(i) such that
(i) ⋃_{j=1}^n C_j^u(i) = E_i,
(ii) P^u(x, C_j^u(i)) = 1 for x ∈ C_{j−1}^u(i) and j = 2, 3, ..., n, and P^u(x, C_1^u(i)) = 1 for x ∈ C_n^u(i),
(iii) sup_{j=1,2,...,n} sup_{x,y∈C_j^u(i)} sup_{Γ∈𝓔} |(P^u)^n(x, Γ) − (P^u)^n(y, Γ)| < ρ.

Remark 3.1. (a) Assumption (A2) implies that

    sup_{x∈E} E_x{T^u} < ∞,

where T^u = min{ j ≥ 0 : x_j^u ∈ ⋃_{i=1}^r E_i } and (x_j^u)_{j≥0} is a controlled Markov chain corresponding to P^u.

(b) Assumption (A3) implies that the process (x_{nk}^u)_{k≥0}, with x_0^u ∈ C_j^u(i), is uniformly ergodic on C_j^u(i); that is, there exists a probability measure π_j^u(i) such that

    sup_{x∈C_j^u(i)} ‖(P^u)^{nk}(x, ·) − π_j^u(i)(·)‖_var ≤ 2(1 − ρ)^{k−1}.

Consequently, for the quasi-potential R_0^u we have ‖R_0^u‖ ≤ M(ρ) < ∞.

In addition, we assume the following about P̃^u:

(A4) inf_{x∈⋃_{i=1}^r E_i} inf_{a∈U} P̄^a(x, E_j) > 0, j = 1, 2, ..., r.

(A5) There exists ε_0 such that, for every ε < ε_0, there is a unique invariant measure π_u^ε for P^{u,ε} := P^u + ε(P̃^u − I), and, for x ∈ E,

    lim_{n→∞} n^{−1} Σ_{i=0}^{n−1} (P^{u,ε})^i(x, f) = ∫_E f(y) π_u^ε(dy)

for every f ∈ b𝓔.

Remark 3.2. (a) Assumption (A1) indicates that the ergodic structure of the underlying controlled Markov chain is invariant with respect to the control parameter. There are examples of controlled systems in which this assumption is not satisfied. Nevertheless, there are also important examples of controlled systems for which this assumption is valid (see points (b) and (c) below). The other assumptions, (A2)–(A5), are consistent with (A1).

(b) In Section 5.8 of their book, Sethi and Zhang [SZ] consider machine capacities dependent on the production rate. In the context of our Example 2.1 this would correspond to considering operators P and P̃ depending on a control parameter u, as is done in this section. Note that, in view of the interpretation of the perturbed chain in terms of the presence of strong and weak interactions among the capacity states, assumption (A1) about the uniform (with respect to u) ergodic decomposition of the state space of the underlying Markov chain is quite natural.

(c) In their paper, Altman and Gaitsgory [AG] consider a situation where the perturbing random process is in fact a controlled Markov chain. In the context of our Example 2.2 this would mean that the chain Y_n^ε is a controlled Markov chain. Note that in this case assumption (A1) would automatically be satisfied.

(d) Time discretized versions of the control problems considered in Bensoussan and Blankenship [BB] satisfy assumption (A1).

In the rest of the paper we use (x_n^{u,ε}) to denote a Markov chain corresponding to P^{u,ε}, for u ∈ 𝓐. Moreover we let, for x ∈ E and u ∈ 𝓐,

    J^ε(u, x) := limsup_{n→∞} (1/n) E_x { Σ_{i=0}^{n−1} c(x_i^{u,ε}, u(x_i^{u,ε})) },

where c: E × U → R is a bounded, measurable function which is continuous in a ∈ U, uniformly in x ∈ E. By (A5) we have that

    J^ε(u, x) = J^ε(u) := ∫_E c(z, u(z)) π_u^ε(dz)

for all u ∈ 𝓐 and x ∈ E.
We are interested in the following optimization problem:

    inf_{u∈𝓐} J^ε(u).

Let, for u ∈ 𝓐,

    J(u) := Σ_{i=1}^r ∫_{E_i} c(x, u(x)) π_{u,i}(dx) µ_u(i),

where π_{u,i}, i = 1, ..., r, are the invariant measures corresponding to P^u, and µ_u is the unique invariant measure for the controlled, aggregated Markov chain (x̄_n^u), which exists due to (A4). The following corollary to Theorem 2.1 and Remark 3.1 is a version of what is known in the literature as the limit control principle (see [BF] and [BS1]), corresponding to the case considered in this paper.

Corollary 3.1. Assume (A1)–(A5). Then for every δ > 0 there exists ε_δ > 0 such that, for all ε < ε_δ and u ∈ 𝓐,

    |J^ε(u) − J(u)| < δ.

Proof. By Theorem 2.1 we have, for all u ∈ 𝓐,

    |J^ε(u) − J(u)| ≤ Σ_{i=1}^∞ | ∫_E c(x, u(x)) γ_{u,i}(dx) | ε^i.

From Remark 3.1 it follows that

    ‖γ_{u,i}‖_var ≤ K^i,    i = 1, 2, ...,

for some constant K > 0.

By Corollary 3.1 the minimization problem for J^ε can be approximated by the minimization problem for J. In the remaining part of the paper we simplify the latter by approximating it using an appropriate discretization of the action space 𝓐. Let 𝓐_1 = B(E_0, U) and 𝓐_2 = B(⋃_{i=1}^r E_i, U) be the spaces of measurable functions from E_0 to U and from ⋃_{i=1}^r E_i to U, respectively. If u ∈ 𝓐 is such that

    u(x) = u_1(x) for x ∈ E_0,    u(x) = u_2(x) for x ∈ ⋃_{i=1}^r E_i,

for some u_i ∈ 𝓐_i, i = 1, 2, then we write u = u_1 ⊕ u_2.
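Corollary 3.1 can be sanity-checked numerically on a toy finite model (an illustrative 4-state chain with the control frozen into the kernels and a hypothetical cost vector; a sketch of the phenomenon, not the paper's general setting):

```python
def stationary(M, iters=40000):
    """Invariant law of a 4x4 stochastic matrix by power iteration."""
    pi = [0.25] * 4
    for _ in range(iters):
        pi = [sum(pi[i] * M[i][j] for i in range(4)) for j in range(4)]
    return pi

# Fast kernel P (ergodic classes {0,1} and {2,3}) and slow kernel Pt.
P  = [[0.5, 0.5, 0.0, 0.0], [0.5, 0.5, 0.0, 0.0],
      [0.0, 0.0, 0.2, 0.8], [0.0, 0.0, 0.6, 0.4]]
Pt = [[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 1.0, 0.0],
      [1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
c  = [1.0, 3.0, 0.0, 2.0]   # hypothetical running cost c(x, u(x))

eps = 0.001
Pe = [[P[i][j] + eps * (Pt[i][j] - (1.0 if i == j else 0.0)) for j in range(4)]
      for i in range(4)]
J_eps = sum(prob * ci for prob, ci in zip(stationary(Pe), c))

# Limit value: pi_1 = (1/2, 1/2), pi_2 = (3/7, 4/7), and the aggregated chain
# (p(1,2) = 1, p(2,1) = 3/7) has invariant law mu = (3/10, 7/10).
J = 0.3 * (0.5 * 1.0 + 0.5 * 3.0) + 0.7 * (3.0 / 7.0 * 0.0 + 4.0 / 7.0 * 2.0)
print(J, abs(J_eps - J))  # J = 1.4; the gap |J_eps - J| is of order eps
```

The gap between the perturbed value J^ε(u) and the limit value J(u) is of order ε here, matching the geometric bound on the γ_{u,i} used in the proof.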

Note that

    inf_{u∈𝓐} J(u) = inf_{u_1∈𝓐_1} inf_{u_2∈𝓐_2} J(u_1 ⊕ u_2),

and that the ergodic assumptions about the aggregated process (x̄_n^u) imply that ∫_{E_i} c(x, u(x)) π_{u,i}(dx) depends on u_2 alone. Also, if P̃^u does not depend on u, then µ_u(i) depends on u_2 only through the invariant measures π_{u,i}. The above observations can be effectively used in approximations of the minimization problem for J(u).

Let d be the metric on U, and let (a_n) be a dense sequence in U. Since U is compact, for every m ≥ 1 there is n(m) such that U_m := {a_1, ..., a_{n(m)}} is a 1/m-net in U. For any u ∈ 𝓐 and x ∈ E let

    u_m(x) := min{ d(u(x), a_k) : a_k ∈ U_m },
    a_m^u(x) := min{ a_k ∈ U_m : u_m(x) = d(u(x), a_k) }.

Define a discretization operator S_m: 𝓐 → 𝓐 by S_m u(·) = a_m^u(·). Clearly, we have

    sup_{x∈E} d(u(x), S_m u(x)) ≤ 1/m

for all u ∈ 𝓐. Before we state our final result, Proposition 3.1, we formulate the following technical assumption:

(A6) Both P^a(x, ·) and P̃^a(x, ·) are continuous in a, in variation norm, uniformly in x ∈ E. That is to say, for every η > 0 there is δ > 0 such that, for a, a′ ∈ U with d(a, a′) < δ,

    sup_{x∈E} ‖Q^a(x, ·) − Q^{a′}(x, ·)‖_var < η,

where Q stands for P or P̃.

Proposition 3.1. Assume (A1)–(A6). Then, for each δ > 0, there exists m_δ such that, for all m ≥ m_δ,

    |J(u) − J(S_m u)| < δ.

Proof. In the proof of Proposition 1 in [S] it has been demonstrated that, for all u ∈ 𝓐,
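The discretization operator S_m is easy to realize concretely. The sketch below takes U = [0, 1] with the usual metric and a uniform grid as the 1/m-net (both illustrative choices; the paper works with an abstract dense sequence (a_n) in a compact metric U) and checks the bound sup_x d(u(x), S_m u(x)) ≤ 1/m:

```python
def net(m):
    """A finite 1/m-net in U = [0, 1]: the grid {0, 1/m, 2/m, ..., 1}."""
    return [k / m for k in range(m + 1)]

def S_m(u, m):
    """Discretized control: x -> a closest net point to u(x)."""
    U_m = net(m)
    return lambda x: min(U_m, key=lambda a: abs(u(x) - a))

u = lambda x: x * x          # some measurable control E -> U, here E = [0, 1]
m = 10
u_m = S_m(u, m)
gap = max(abs(u(k / 100.0) - u_m(k / 100.0)) for k in range(101))
print(gap <= 1.0 / m)        # True: a grid of step 1/m is even within 1/(2m)
```

Together with the variation-norm continuity in (A6), this uniform 1/m bound is what drives all the convergence statements in the proof of Proposition 3.1.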

i = 1, ..., r, and n ≥ 1,

    ‖π_{u,i} − π_{S_m u,i}‖_{var_i} ≤ sup_{x∈E_i} ‖P^u(x, ·) − P^{S_m u}(x, ·)‖_var · K_{1,n}(i) K_{2,n}(i),

where ‖·‖_{var_i} denotes the variation norm on E_i, and

    K_{1,n}(i) := sup_{x∈E_i} ‖ ( I + π_{u,i}(·) − n^{−1} Σ_{k=0}^{n−1} (P^u)^k(x, ·) + n^{−1}(I − P^u)(x, ·) )^{−1} ‖_{var_i},

    K_{2,n}(i) := sup_{x∈E_i} ‖ I + n^{−1} Σ_{l=1}^{n−1} Σ_{k=1}^{l} ( (P^u)^k(x, ·) − π_{u,i}(·) ) ‖_{var_i}.

By (A3), for sufficiently large n we have K_{1,n}(i) ≤ K_1 < ∞ and K_{2,n}(i) ≤ K_2 < ∞ for some K_1 and K_2. Therefore we may conclude that

    sup_{i=1,2,...,r} ‖π_{u,i} − π_{S_m u,i}‖_{var_i} ≤ K_1 K_2 ‖P^u − P^{S_m u}‖.    (3.1)

Similarly, for the aggregated chain (x̄_n^u) we have

    sup_{i=1,2,...,r} |µ_u(i) − µ_{S_m u}(i)| ≤ K̄_1 K̄_2 sup_{i,j=1,...,r} |p_u(i, j) − p_{S_m u}(i, j)|,    (3.2)

for some positive and finite constants K̄_1, K̄_2. Next, we observe that

    |J(u) − J(S_m u)| = | Σ_{i=1}^r ∫_{E_i} c(x, u(x)) π_{u,i}(dx) µ_u(i) − Σ_{i=1}^r ∫_{E_i} c(x, S_m u(x)) π_{S_m u,i}(dx) µ_{S_m u}(i) |
    ≤ | Σ_{i=1}^r ∫_{E_i} c(x, u(x)) (π_{u,i} − π_{S_m u,i})(dx) µ_u(i) |

    + | Σ_{i=1}^r ∫_{E_i} c(x, u(x)) π_{S_m u,i}(dx) (µ_u(i) − µ_{S_m u}(i)) | + sup_{x∈E} |c(x, S_m u(x)) − c(x, u(x))|.    (3.3)

Therefore, in view of (3.1)–(3.3) and assumption (A6), in order to complete the proof of Proposition 3.1 it remains to show that

    sup_{i,j=1,2,...,r} |p_u(i, j) − p_{S_m u}(i, j)| → 0    (3.4)

as m → ∞. By the definition of p_u(i, j), taking into account (3.1) and

    sup_{x∈E} ‖P̃^{u(x)}(x, ·) − P̃^{S_m u(x)}(x, ·)‖_var → 0 as m → ∞,

we only need to show that, for i = 1, 2, ..., r,

    sup_{x∈E_0} |ρ_u(x, E_i) − ρ_{S_m u}(x, E_i)| → 0    (3.5)

as m → ∞. Since by Remark 3.1

    sup_{x∈E_0} P_x^u{T^u > N} ≤ sup_{x∈E_0} E_x{T^u} / N → 0

as N → ∞, in order to demonstrate (3.5) we need to show that, for k = 1, 2, ...,

    sup_{x∈E_0} sup_{i=1,2,...,r} | P_x^u{x_0^u ∈ E_0, ..., x_{k−1}^u ∈ E_0, x_k^u ∈ E_i} − P_x^{S_m u}{x_0^{S_m u} ∈ E_0, ..., x_{k−1}^{S_m u} ∈ E_0, x_k^{S_m u} ∈ E_i} | → 0    (3.6)

as m → ∞. We prove (3.6) by induction. For k = 1, (3.6) holds by (A6). Assume (3.6) is true for k. Then, for k + 1, we have, for x ∈ E_0, u ∈ 𝓐, and j = 1, ..., r,

    | P_x^u{x_0^u ∈ E_0, ..., x_k^u ∈ E_0, x_{k+1}^u ∈ E_j} − P_x^{S_m u}{x_0^{S_m u} ∈ E_0, ..., x_k^{S_m u} ∈ E_0, x_{k+1}^{S_m u} ∈ E_j} |
    = | χ_{E_0}(x) ∫_{E_0} P^u(x, dx_1) ··· ∫_{E_j} P^u(x_k, dx_{k+1}) − χ_{E_0}(x) ∫_{E_0} P^{S_m u}(x, dx_1) ··· ∫_{E_j} P^{S_m u}(x_k, dx_{k+1}) |
    ≤ | χ_{E_0}(x) ( ∫_{E_0} P^u(x, dx_1) ∫_{E_0} P^u(x_1, dx_2) ··· ∫_{E_j} P^u(x_k, dx_{k+1}) − ∫_{E_0} P^u(x, dx_1) ∫_{E_0} P^{S_m u}(x_1, dx_2) ··· ∫_{E_j} P^{S_m u}(x_k, dx_{k+1}) ) |

    + | χ_{E_0}(x) ∫_{E_0} ( P^u(x, dx_1) − P^{S_m u}(x, dx_1) ) ∫_{E_0} P^{S_m u}(x_1, dx_2) ··· ∫_{E_j} P^{S_m u}(x_k, dx_{k+1}) |
    ≤ sup_{x_1∈E_0} χ_{E_0}(x) { | P_{x_1}^u{x_0^u ∈ E_0, ..., x_{k−1}^u ∈ E_0, x_k^u ∈ E_j} − P_{x_1}^{S_m u}{x_0^{S_m u} ∈ E_0, ..., x_{k−1}^{S_m u} ∈ E_0, x_k^{S_m u} ∈ E_j} | + ‖P^u − P^{S_m u}‖ },

which goes to 0 as m → ∞ in view of the induction hypothesis and (A6). The proof of Proposition 3.1 is complete.

Remark 3.3. In this paper we have not analyzed the structure of the optimal/suboptimal controls; that is, we have not studied the question of an asymptotic expansion for these controls. It is conceivable that optimal/suboptimal controls can be split into fast and slow components, and that approximations to these components can be obtained by combining Proposition 3.1 with the results of [ABF], among others. These issues are very important for applications and are under investigation.

4. Conclusion

In this paper we have considered a situation where the perturbed Markov chain has only one invariant measure. One step of aggregation was enough to deal with the asymptotic expansion of this invariant measure. Construction of an asymptotic expansion becomes much more difficult if we allow for a singular perturbation that does not reduce the number of ergodic classes of the original chain to just one. We will consider situations of this kind in a forthcoming work.

Acknowledgment. We are grateful to the two anonymous referees for reading all versions of the manuscript very carefully and significantly contributing to its improvement.

Appendix. Proof of Theorem 2.1

Step 1. We have

    ‖β_m^i‖_var ≤ ‖R_0‖^m ‖B‖^{m+1},    ‖β_m‖ ≤ 2 ‖R_0‖^m ‖B‖^{m+1}

for i = 1, 2, ..., r and m ≥ 0. Therefore

    ‖β^{(1)}‖ ≤ 2 ‖R_0‖ ‖B‖² ≤ 2ba²

and

    ‖β^{(2)}‖ ≤ 2 ‖R_0‖² ‖B‖³ + ‖β^{(1)}‖ ‖r_0‖ ‖β_1‖ ≤ 2b²a³ + 2ba² · c · 2ba² ≤ 6b²a⁴c.

We shall show by induction that

    ‖β^{(m)}‖ ≤ 2a²b(3a²bc)^{m−1}    (A.1)

for m ≥ 1. For m = 1, 2, (A.1) has already been verified. Suppose (A.1) holds for k ≤ m. Then, for m + 1, we have

    ‖β^{(m+1)}‖ ≤ ‖β_{m+1}‖ + Σ_{i=1}^m ‖β^{(i)}‖ ‖r_0‖ ‖β_{m+1−i}‖
               ≤ 2b^{m+1}a^{m+2} + Σ_{i=1}^m 2a²b(3a²bc)^{i−1} · c · 2b^{m+1−i}a^{m+2−i}
               ≤ a^{2(m+1)} b^{m+1} c^m [ 2 + 4 Σ_{i=1}^m 3^{i−1} ]
               = 2a²b(3a²bc)^m.

Step 2. From Step 1 it follows that

    ‖µ^{(m)}‖_var ≤ 2 · 3^{m−1} (a²bc)^m

for m = 1, 2, .... Therefore, since ‖γ_0‖_var = 1, we have, for m = 1, 2, ...,

    ‖γ_m‖_var ≤ ‖µ^{(0)}‖_var ‖B‖^m ‖R_0‖^m + ‖µ^{(1)}‖_var ‖B‖^{m−1} ‖R_0‖^{m−1} + ··· + ‖µ^{(m)}‖_var
             ≤ a^{2m} b^m c^m [ 1 + 2 Σ_{i=1}^m 3^{i−1} ] = (3a²bc)^m.

Step 3. By Step 2, for all ε < min(ε_0, (3a²bc)^{−1}) the series

    Σ_{i=0}^m ε^i γ_i

is convergent in variation norm as m → ∞. Therefore we can define a countably additive set function

    η^ε(·) := Σ_{i=0}^∞ γ_i(·) ε^i.

Note that B(x, E) = 0 implies γ_m(E) = 0 for m ≥ 1. Therefore, since γ_0(E) = 1, we have that η^ε(E) = 1. We demonstrate in Step 4 below that if

    η^ε P^ε = η^ε,    (A.2)

then, since η^ε(E) = 1, we have

    η^ε = π^ε.    (A.3)

Step 4. Assume (A.2) is satisfied. By the Jordan decomposition we have η^ε = η^ε_+ − η^ε_−, and, for A ∈ 𝓔,

    η^ε_+ P^ε(A) − η^ε_+(A) = η^ε_−(E\A) − η^ε_− P^ε(E\A).    (A.4)

Since η^ε_+ P^ε(E) = η^ε_+(E) and η^ε_− P^ε(E) = η^ε_−(E), (A.4) implies

    η^ε_+ P^ε = η^ε_+,    η^ε_− P^ε = η^ε_−.    (A.5)

Therefore η^ε_+ and η^ε_− are invariant measures for P^ε. By the uniqueness of the invariant probability measure for P^ε we conclude that

    η^ε_+ / η^ε_+(E) = π^ε    and, if η^ε_−(E) > 0,    η^ε_− / η^ε_−(E) = π^ε.

Thus

    η^ε = (η^ε_+(E) − η^ε_−(E)) π^ε = η^ε(E) π^ε = π^ε.

Step 5. It remains to demonstrate that (A.2) is satisfied. This will take some time. We start by observing that (A.2) is equivalent to

    Σ_{i=0}^∞ ε^i γ_i (P + εB) = Σ_{i=0}^∞ ε^i γ_i,    (A.6)

and thus to

    Σ_{i=1}^∞ (γ_i P + γ_{i−1} B − γ_i) ε^i + γ_0 P − γ_0 = 0.    (A.7)

Now, since γ_0 P = γ_0, (A.7) will be verified once we show that

    γ_i P + γ_{i−1} B − γ_i = 0,    i = 1, 2, ....    (A.8)

Applying the formulas for γ_{i−1} and γ_i we get

    γ_i P + γ_{i−1} B − γ_i
    = Σ_{j=1}^r [ µ^{(0)}(j) π_j (BR_0)^i P + µ^{(1)}(j) π_j (BR_0)^{i−1} P + ··· + µ^{(i)}(j) π_j P
      + µ^{(0)}(j) π_j (BR_0)^{i−1} B + µ^{(1)}(j) π_j (BR_0)^{i−2} B + ··· + µ^{(i−1)}(j) π_j B
      − µ^{(0)}(j) π_j (BR_0)^i − µ^{(1)}(j) π_j (BR_0)^{i−1} − ··· − µ^{(i)}(j) π_j ]
    = Σ_{j=1}^r { [ µ^{(0)}(j) π_j (BR_0)^{i−1} + µ^{(1)}(j) π_j (BR_0)^{i−2} + ··· + µ^{(i−1)}(j) π_j ] B(R_0 P + I − R_0) + µ^{(i)}(j) π_j (P − I) }.    (A.9)

Note that

    π_j (P − I) = 0,    j = 1, ..., r,    (A.10)

and, by the identity established in Section 2, we get, for f ∈ b𝓔,

    (R_0 P + I − R_0) f = Π f.    (A.11)

From (A.9)–(A.11) it follows that we will have demonstrated (A.8) once we have verified that

    Σ_{j=1}^r [ µ^{(0)}(j) π_j (BR_0)^{i−1} + µ^{(1)}(j) π_j (BR_0)^{i−2} + ··· + µ^{(i−1)}(j) π_j ] BΠ = 0    (A.12)

for i = 1, 2, .... However, (A.12) means that, for all A ∈ 𝓔 and for i = 1, 2, ..., we need to have

    0 = ∫_E Π(y, A) Σ_{j=1}^r [ µ^{(0)}(j) π_j B(R_0 B)^{i−1}(dy) + µ^{(1)}(j) π_j B(R_0 B)^{i−2}(dy) + ··· + µ^{(i−1)}(j) π_j B(dy) ]
    = Σ_{k=1}^r ∫_E ρ(y, E_k) π_k(A) Σ_{j=1}^r [ µ^{(0)}(j) π_j B(R_0 B)^{i−1}(dy) + ··· + µ^{(i−1)}(j) π_j B(dy) ]
    = Σ_{k=1}^r π_k(A) Σ_{j=1}^r { [ µ^{(0)}(j) π_j B(R_0 B)^{i−1}(E_k) + µ^{(1)}(j) π_j B(R_0 B)^{i−2}(E_k) + ··· + µ^{(i−1)}(j) π_j B(E_k) ]

      + ∫_{E_0} ρ(y, E_k) [ µ^{(0)}(j) π_j B(R_0 B)^{i−1}(dy) + ··· + µ^{(i−1)}(j) π_j B(dy) ] }
    = Σ_{k=1}^r π_k(A) Σ_{j=1}^r [ µ^{(0)}(j) ( β_{i−1}^j(E_k) + ∫_{E_0} ρ(y, E_k) β_{i−1}^j(dy) )
      + µ^{(1)}(j) ( β_{i−2}^j(E_k) + ∫_{E_0} ρ(y, E_k) β_{i−2}^j(dy) ) + ··· + µ^{(i−1)}(j) ( β_0^j(E_k) + ∫_{E_0} ρ(y, E_k) β_0^j(dy) ) ]
    = Σ_{k=1}^r π_k(A) Σ_{j=1}^r [ µ^{(0)}(j) β_{i−1}(j, k) + µ^{(1)}(j) β_{i−2}(j, k) + ··· + µ^{(i−1)}(j) β_0(j, k) ].    (A.13)

Now, using the definitions of the µ^{(m)}, we have, for i = 1, 2, ... and k = 1, ..., r,

    Σ_{j=1}^r [ µ^{(0)}(j) β_{i−1}(j, k) + µ^{(1)}(j) β_{i−2}(j, k) + ··· + µ^{(i−1)}(j) β_0(j, k) ]
    = Σ_{j=1}^r [ µ^{(0)}(j) β_{i−1}(j, k) + µ^{(0)} β^{(1)} r_0(j) β_{i−2}(j, k) + ··· + µ^{(0)} β^{(i−1)} r_0(j) β_0(j, k) ].    (A.14)

Observe that since

    r_0(I_r − p) = I_r − µ̄    and    β_0 = p − I_r,

we have

    I_r = −r_0 β_0 + µ̄.

Therefore we may continue (A.14) as

    Σ_{j=1}^r [ µ^{(0)}(j) β_{i−1}(j, k) + µ^{(0)} β^{(1)} r_0(j) β_{i−2}(j, k) + ··· + µ^{(0)} β^{(i−1)} r_0(j) β_0(j, k) ]
    = Σ_{j=1}^r Σ_{l=1}^r [ µ^{(0)}(j) β_{i−1}(j, l) ( −r_0 β_0(l, k) + µ(k) )
      + µ^{(0)} β^{(1)} r_0(j) β_{i−2}(j, l) ( −r_0 β_0(l, k) + µ(k) )
      + ··· + µ^{(0)} β^{(i−2)} r_0(j) β_1(j, l) ( −r_0 β_0(l, k) + µ(k) ) ]
      + Σ_{j=1}^r µ^{(0)} β^{(i−1)} r_0(j) β_0(j, k).    (A.15)

Now notice that, for m ≥ 0 and j = 1, ..., r,

    Σ_{l=1}^r β_m(j, l) = ∫_{E_j} B(R_0 B)^m (x, ⋃_{s=1}^r E_s) π_j(dx) + ∫_{E_0} ρ(y, ⋃_{s=1}^r E_s) β_m^j(dy)
    = ∫_{E_j} B(R_0 B)^m (x, ⋃_{s=1}^r E_s) π_j(dx) + β_m^j(E_0)
    = ∫_{E_j} B(R_0 B)^m (x, E) π_j(dx) = 0.    (A.16)

Finally, (A.15) and (A.16) imply that, for k = 1, ..., r, we have

    Σ_{j=1}^r [ µ^{(0)}(j) β_{i−1}(j, k) + µ^{(0)} β^{(1)} r_0(j) β_{i−2}(j, k) + ··· + µ^{(0)} β^{(i−1)} r_0(j) β_0(j, k) ]
    = µ^{(0)} [ −β_{i−1} − β^{(1)} r_0 β_{i−2} − ··· − β^{(i−2)} r_0 β_1 + β^{(i−1)} ] r_0 β_0 (k) = 0,    (A.17)

where the last equality holds by the definition of β^{(i−1)}. We see now that (A.17) implies (A.13). The proof of the theorem is complete.

References

[ABF] M. Abbad, T. R. Bielecki, and J. Filar, Algorithms for singularly perturbed limiting average control problems, IEEE Transactions on Automatic Control, vol. 37, 1992, pp.
[AG] E. Altman and V. Gaitsgory, Control of a hybrid stochastic system, Systems & Control Letters, vol. 20, 1993, pp.
[B] A. Bensoussan, Perturbation Methods in Optimal Control, Wiley, New York.
[BB] A. Bensoussan and G. L. Blankenship, Singular perturbations in stochastic control, in Singular Perturbations and Asymptotic Analysis in Control Systems, P. Kokotovic, A. Bensoussan, and G. Blankenship, eds., Springer-Verlag, Berlin, 1986, pp.
[BF] T. R. Bielecki and J. Filar, Singularly perturbed Markov control problem: limiting average cost, Annals of Operations Research, vol. 28, 1991, pp.

[BS1] T. R. Bielecki and L. Stettner, On some problems arising in asymptotic analysis of Markov processes with singularly perturbed generators, Stochastic Analysis and Its Applications, vol. 6, 1988, pp.
[BS2] T. R. Bielecki and L. Stettner, On ergodic control problems for singularly perturbed Markov processes, Applied Mathematics & Optimization, vol. 20, 1989, pp.
[D] J. L. Doob, Stochastic Processes, Wiley, New York.
[DQ] F. Delebecque and J. P. Quadrat, Optimal control of Markov chains admitting strong and weak interactions, Automatica, vol. 17, 1981, pp.
[DY] E. B. Dynkin and A. A. Yushkevich, Controlled Markov Processes, Springer-Verlag, Berlin.
[G] V. Gaitsgory, Control of Systems with Slow and Fast Motions: Averaging and Asymptotic Analysis, Nauka, Moscow, 1991 (in Russian).
[K] H. J. Kushner, Weak Convergence Methods and Singularly Perturbed Stochastic Control and Filtering Problems, Birkhäuser, Basel.
[KT1] W. S. Korolyuk and A. F. Turbin, Semi-Markov Processes and Their Applications, Naukova Dumka, Kiev, 1976 (in Russian).
[KT2] W. S. Korolyuk and A. F. Turbin, Mathematical Foundations of Phase Enlargement of Composite Systems, Naukova Dumka, Kiev, 1978 (in Russian).
[N] J. Neveu, Bases mathématiques du calcul des probabilités, Masson, Paris.
[PG] A. Pervozvansky and V. Gaitsgory, Theory of Suboptimal Decisions, Kluwer, Dordrecht.
[S] L. Stettner, On nearly self-optimizing strategies for a discrete-time uniformly ergodic adaptive model, Applied Mathematics & Optimization, vol. 27, 1993, pp.
[SZ] S. P. Sethi and Q. Zhang, Hierarchical Decision Making in Stochastic Manufacturing Systems, Birkhäuser, Basel.

Accepted 3 December 1996


More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Lecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M.

Lecture 10. Theorem 1.1 [Ergodicity and extremality] A probability measure µ on (Ω, F) is ergodic for T if and only if it is an extremal point in M. Lecture 10 1 Ergodic decomposition of invariant measures Let T : (Ω, F) (Ω, F) be measurable, and let M denote the space of T -invariant probability measures on (Ω, F). Then M is a convex set, although

More information

Continuous-Time Markov Decision Processes. Discounted and Average Optimality Conditions. Xianping Guo Zhongshan University.

Continuous-Time Markov Decision Processes. Discounted and Average Optimality Conditions. Xianping Guo Zhongshan University. Continuous-Time Markov Decision Processes Discounted and Average Optimality Conditions Xianping Guo Zhongshan University. Email: mcsgxp@zsu.edu.cn Outline The control model The existing works Our conditions

More information

Key words. Feedback shift registers, Markov chains, stochastic matrices, rapid mixing

Key words. Feedback shift registers, Markov chains, stochastic matrices, rapid mixing MIXING PROPERTIES OF TRIANGULAR FEEDBACK SHIFT REGISTERS BERND SCHOMBURG Abstract. The purpose of this note is to show that Markov chains induced by non-singular triangular feedback shift registers and

More information

Introduction to Dynamical Systems

Introduction to Dynamical Systems Introduction to Dynamical Systems France-Kosovo Undergraduate Research School of Mathematics March 2017 This introduction to dynamical systems was a course given at the march 2017 edition of the France

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Prime numbers and Gaussian random walks

Prime numbers and Gaussian random walks Prime numbers and Gaussian random walks K. Bruce Erickson Department of Mathematics University of Washington Seattle, WA 9895-4350 March 24, 205 Introduction Consider a symmetric aperiodic random walk

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

ANALYSIS OF THE LAW OF THE ITERATED LOGARITHM FOR THE IDLE TIME OF A CUSTOMER IN MULTIPHASE QUEUES

ANALYSIS OF THE LAW OF THE ITERATED LOGARITHM FOR THE IDLE TIME OF A CUSTOMER IN MULTIPHASE QUEUES International Journal of Pure and Applied Mathematics Volume 66 No. 2 2011, 183-190 ANALYSIS OF THE LAW OF THE ITERATED LOGARITHM FOR THE IDLE TIME OF A CUSTOMER IN MULTIPHASE QUEUES Saulius Minkevičius

More information

Queuing system evolution in phase merging scheme

Queuing system evolution in phase merging scheme BULTINUL ACADMII D ŞTIINŢ A RPUBLICII MOLDOVA. MATMATICA Number 3(58), 2008, Pages 83 88 ISSN 1024 7696 Queuing system evolution in phase merging scheme V. Korolyk, Gh. Mishkoy, A. Mamonova, Iu. Griza

More information

Total Expected Discounted Reward MDPs: Existence of Optimal Policies

Total Expected Discounted Reward MDPs: Existence of Optimal Policies Total Expected Discounted Reward MDPs: Existence of Optimal Policies Eugene A. Feinberg Department of Applied Mathematics and Statistics State University of New York at Stony Brook Stony Brook, NY 11794-3600

More information

FIXED POINT THEOREMS AND CHARACTERIZATIONS OF METRIC COMPLETENESS. Tomonari Suzuki Wataru Takahashi. 1. Introduction

FIXED POINT THEOREMS AND CHARACTERIZATIONS OF METRIC COMPLETENESS. Tomonari Suzuki Wataru Takahashi. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 8, 1996, 371 382 FIXED POINT THEOREMS AND CHARACTERIZATIONS OF METRIC COMPLETENESS Tomonari Suzuki Wataru Takahashi

More information

SEPARABILITY AND COMPLETENESS FOR THE WASSERSTEIN DISTANCE

SEPARABILITY AND COMPLETENESS FOR THE WASSERSTEIN DISTANCE SEPARABILITY AND COMPLETENESS FOR THE WASSERSTEIN DISTANCE FRANÇOIS BOLLEY Abstract. In this note we prove in an elementary way that the Wasserstein distances, which play a basic role in optimal transportation

More information

Dual Space of L 1. C = {E P(I) : one of E or I \ E is countable}.

Dual Space of L 1. C = {E P(I) : one of E or I \ E is countable}. Dual Space of L 1 Note. The main theorem of this note is on page 5. The secondary theorem, describing the dual of L (µ) is on page 8. Let (X, M, µ) be a measure space. We consider the subsets of X which

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

Integrable operator representations of R 2 q and SL q (2, R)

Integrable operator representations of R 2 q and SL q (2, R) Seminar Sophus Lie 2 (1992) 17 114 Integrable operator representations of R 2 q and SL q (2, R) Konrad Schmüdgen 1. Introduction Unbounded self-adjoint or normal operators in Hilbert space which satisfy

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

ALMOST PERIODIC SOLUTIONS OF NONLINEAR DISCRETE VOLTERRA EQUATIONS WITH UNBOUNDED DELAY. 1. Almost periodic sequences and difference equations

ALMOST PERIODIC SOLUTIONS OF NONLINEAR DISCRETE VOLTERRA EQUATIONS WITH UNBOUNDED DELAY. 1. Almost periodic sequences and difference equations Trends in Mathematics - New Series Information Center for Mathematical Sciences Volume 10, Number 2, 2008, pages 27 32 2008 International Workshop on Dynamical Systems and Related Topics c 2008 ICMS in

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

arxiv: v2 [math.nt] 25 Oct 2018

arxiv: v2 [math.nt] 25 Oct 2018 arxiv:80.0248v2 [math.t] 25 Oct 208 Convergence rate for Rényi-type continued fraction expansions Gabriela Ileana Sebe Politehnica University of Bucharest, Faculty of Applied Sciences, Splaiul Independentei

More information

Rafał Kapica, Janusz Morawiec CONTINUOUS SOLUTIONS OF ITERATIVE EQUATIONS OF INFINITE ORDER

Rafał Kapica, Janusz Morawiec CONTINUOUS SOLUTIONS OF ITERATIVE EQUATIONS OF INFINITE ORDER Opuscula Mathematica Vol. 29 No. 2 2009 Rafał Kapica, Janusz Morawiec CONTINUOUS SOLUTIONS OF ITERATIVE EQUATIONS OF INFINITE ORDER Abstract. Given a probability space (, A, P ) and a complete separable

More information

Compensating operator of diffusion process with variable diffusion in semi-markov space

Compensating operator of diffusion process with variable diffusion in semi-markov space Compensating operator of diffusion process with variable diffusion in semi-markov space Rosa W., Chabanyuk Y., Khimka U. T. Zakopane, XLV KZM 2016 September 9, 2016 diffusion process September with variable

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

On asymptotic behavior of a finite Markov chain

On asymptotic behavior of a finite Markov chain 1 On asymptotic behavior of a finite Markov chain Alina Nicolae Department of Mathematical Analysis Probability. University Transilvania of Braşov. Romania. Keywords: convergence, weak ergodicity, strong

More information

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.

More information

A generalization of Dobrushin coefficient

A generalization of Dobrushin coefficient A generalization of Dobrushin coefficient Ü µ ŒÆ.êÆ ÆÆ 202.5 . Introduction and main results We generalize the well-known Dobrushin coefficient δ in total variation to weighted total variation δ V, which

More information

Operations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads

Operations Research Letters. Instability of FIFO in a simple queueing system with arbitrarily low loads Operations Research Letters 37 (2009) 312 316 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Instability of FIFO in a simple queueing

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Markov Chain Monte Carlo

Markov Chain Monte Carlo Chapter 5 Markov Chain Monte Carlo MCMC is a kind of improvement of the Monte Carlo method By sampling from a Markov chain whose stationary distribution is the desired sampling distributuion, it is possible

More information

Linear Dependence of Stationary Distributions in Ergodic Markov Decision Processes

Linear Dependence of Stationary Distributions in Ergodic Markov Decision Processes Linear ependence of Stationary istributions in Ergodic Markov ecision Processes Ronald Ortner epartment Mathematik und Informationstechnologie, Montanuniversität Leoben Abstract In ergodic MPs we consider

More information

A note on the σ-algebra of cylinder sets and all that

A note on the σ-algebra of cylinder sets and all that A note on the σ-algebra of cylinder sets and all that José Luis Silva CCM, Univ. da Madeira, P-9000 Funchal Madeira BiBoS, Univ. of Bielefeld, Germany (luis@dragoeiro.uma.pt) September 1999 Abstract In

More information

On Uniform Spaces with Invariant Nonstandard Hulls

On Uniform Spaces with Invariant Nonstandard Hulls 1 13 ISSN 1759-9008 1 On Uniform Spaces with Invariant Nonstandard Hulls NADER VAKIL Abstract: Let X, Γ be a uniform space with its uniformity generated by a set of pseudo-metrics Γ. Let the symbol " denote

More information

MATH 202B - Problem Set 5

MATH 202B - Problem Set 5 MATH 202B - Problem Set 5 Walid Krichene (23265217) March 6, 2013 (5.1) Show that there exists a continuous function F : [0, 1] R which is monotonic on no interval of positive length. proof We know there

More information

Asymptotic stability of an evolutionary nonlinear Boltzmann-type equation

Asymptotic stability of an evolutionary nonlinear Boltzmann-type equation Acta Polytechnica Hungarica Vol. 14, No. 5, 217 Asymptotic stability of an evolutionary nonlinear Boltzmann-type equation Roksana Brodnicka, Henryk Gacki Institute of Mathematics, University of Silesia

More information

Geometric ρ-mixing property of the interarrival times of a stationary Markovian Arrival Process

Geometric ρ-mixing property of the interarrival times of a stationary Markovian Arrival Process Author manuscript, published in "Journal of Applied Probability 50, 2 (2013) 598-601" Geometric ρ-mixing property of the interarrival times of a stationary Markovian Arrival Process L. Hervé and J. Ledoux

More information

Limiting distribution for subcritical controlled branching processes with random control function

Limiting distribution for subcritical controlled branching processes with random control function Available online at www.sciencedirect.com Statistics & Probability Letters 67 (2004 277 284 Limiting distribution for subcritical controlled branching processes with random control function M. Gonzalez,

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

Reminder Notes for the Course on Measures on Topological Spaces

Reminder Notes for the Course on Measures on Topological Spaces Reminder Notes for the Course on Measures on Topological Spaces T. C. Dorlas Dublin Institute for Advanced Studies School of Theoretical Physics 10 Burlington Road, Dublin 4, Ireland. Email: dorlas@stp.dias.ie

More information

CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS

CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS Int. J. Appl. Math. Comput. Sci., 2002, Vol.2, No.2, 73 80 CONTROLLABILITY OF NONLINEAR DISCRETE SYSTEMS JERZY KLAMKA Institute of Automatic Control, Silesian University of Technology ul. Akademicka 6,

More information

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1)

n [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1) 1.4. CONSTRUCTION OF LEBESGUE-STIELTJES MEASURES In this section we shall put to use the Carathéodory-Hahn theory, in order to construct measures with certain desirable properties first on the real line

More information

ON LEFT-INVARIANT BOREL MEASURES ON THE

ON LEFT-INVARIANT BOREL MEASURES ON THE Georgian International Journal of Science... Volume 3, Number 3, pp. 1?? ISSN 1939-5825 c 2010 Nova Science Publishers, Inc. ON LEFT-INVARIANT BOREL MEASURES ON THE PRODUCT OF LOCALLY COMPACT HAUSDORFF

More information

SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES

SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES RUTH J. WILLIAMS October 2, 2017 Department of Mathematics, University of California, San Diego, 9500 Gilman Drive,

More information

Newton, Fermat, and Exactly Realizable Sequences

Newton, Fermat, and Exactly Realizable Sequences 1 2 3 47 6 23 11 Journal of Integer Sequences, Vol. 8 (2005), Article 05.1.2 Newton, Fermat, and Exactly Realizable Sequences Bau-Sen Du Institute of Mathematics Academia Sinica Taipei 115 TAIWAN mabsdu@sinica.edu.tw

More information

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989),

1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer 11(2) (1989), Real Analysis 2, Math 651, Spring 2005 April 26, 2005 1 Real Analysis 2, Math 651, Spring 2005 Krzysztof Chris Ciesielski 1/12/05: sec 3.1 and my article: How good is the Lebesgue measure?, Math. Intelligencer

More information

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015)

AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE. Mona Nabiei (Received 23 June, 2015) NEW ZEALAND JOURNAL OF MATHEMATICS Volume 46 (2016), 53-64 AN EFFECTIVE METRIC ON C(H, K) WITH NORMAL STRUCTURE Mona Nabiei (Received 23 June, 2015) Abstract. This study first defines a new metric with

More information

Censoring Technique in Studying Block-Structured Markov Chains

Censoring Technique in Studying Block-Structured Markov Chains Censoring Technique in Studying Block-Structured Markov Chains Yiqiang Q. Zhao 1 Abstract: Markov chains with block-structured transition matrices find many applications in various areas. Such Markov chains

More information

QUASI-METRIC AND METRIC SPACES

QUASI-METRIC AND METRIC SPACES CONFORMAL GEOMETRY AND DYNAMICS An Electronic Journal of the American Mathematical Society Volume 10, Pages 355 360 (December 26, 2006) S 1088-4173(06)055-X QUASI-METRIC AND METRIC SPACES VIKTOR SCHROEDER

More information

SUPPLEMENT TO PAPER CONVERGENCE OF ADAPTIVE AND INTERACTING MARKOV CHAIN MONTE CARLO ALGORITHMS

SUPPLEMENT TO PAPER CONVERGENCE OF ADAPTIVE AND INTERACTING MARKOV CHAIN MONTE CARLO ALGORITHMS Submitted to the Annals of Statistics SUPPLEMENT TO PAPER CONERGENCE OF ADAPTIE AND INTERACTING MARKO CHAIN MONTE CARLO ALGORITHMS By G Fort,, E Moulines and P Priouret LTCI, CNRS - TELECOM ParisTech,

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS (2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University

More information

FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES. Yılmaz Yılmaz

FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES. Yılmaz Yılmaz Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 33, 2009, 335 353 FUNCTION BASES FOR TOPOLOGICAL VECTOR SPACES Yılmaz Yılmaz Abstract. Our main interest in this

More information

CHARACTERIZATION OF CARATHÉODORY FUNCTIONS. Andrzej Nowak. 1. Preliminaries

CHARACTERIZATION OF CARATHÉODORY FUNCTIONS. Andrzej Nowak. 1. Preliminaries Annales Mathematicae Silesianae 27 (2013), 93 98 Prace Naukowe Uniwersytetu Śląskiego nr 3106, Katowice CHARACTERIZATION OF CARATHÉODORY FUNCTIONS Andrzej Nowak Abstract. We study Carathéodory functions

More information

Compression on the digital unit sphere

Compression on the digital unit sphere 16th Conference on Applied Mathematics, Univ. of Central Oklahoma, Electronic Journal of Differential Equations, Conf. 07, 001, pp. 1 4. ISSN: 107-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu

More information

STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL

STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL First published in Journal of Applied Probability 43(1) c 2006 Applied Probability Trust STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL LASSE LESKELÄ, Helsinki

More information

THERE has been a growing interest in using switching diffusion

THERE has been a growing interest in using switching diffusion 1706 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 5, MAY 2007 Two-Time-Scale Approximation for Wonham Filters Qing Zhang, Senior Member, IEEE, G. George Yin, Fellow, IEEE, Johb B. Moore, Life

More information

EULER MARUYAMA APPROXIMATION FOR SDES WITH JUMPS AND NON-LIPSCHITZ COEFFICIENTS

EULER MARUYAMA APPROXIMATION FOR SDES WITH JUMPS AND NON-LIPSCHITZ COEFFICIENTS Qiao, H. Osaka J. Math. 51 (14), 47 66 EULER MARUYAMA APPROXIMATION FOR SDES WITH JUMPS AND NON-LIPSCHITZ COEFFICIENTS HUIJIE QIAO (Received May 6, 11, revised May 1, 1) Abstract In this paper we show

More information

MET Workshop: Exercises

MET Workshop: Exercises MET Workshop: Exercises Alex Blumenthal and Anthony Quas May 7, 206 Notation. R d is endowed with the standard inner product (, ) and Euclidean norm. M d d (R) denotes the space of n n real matrices. When

More information

1 The Stokes System. ρ + (ρv) = ρ g(x), and the conservation of momentum has the form. ρ v (λ 1 + µ 1 ) ( v) µ 1 v + p = ρ f(x) in Ω.

1 The Stokes System. ρ + (ρv) = ρ g(x), and the conservation of momentum has the form. ρ v (λ 1 + µ 1 ) ( v) µ 1 v + p = ρ f(x) in Ω. 1 The Stokes System The motion of a (possibly compressible) homogeneous fluid is described by its density ρ(x, t), pressure p(x, t) and velocity v(x, t). Assume that the fluid is barotropic, i.e., the

More information

INSTITUTE OF MATHEMATICS THE CZECH ACADEMY OF SCIENCES. Note on the fast decay property of steady Navier-Stokes flows in the whole space

INSTITUTE OF MATHEMATICS THE CZECH ACADEMY OF SCIENCES. Note on the fast decay property of steady Navier-Stokes flows in the whole space INSTITUTE OF MATHEMATICS THE CZECH ACADEMY OF SCIENCES Note on the fast decay property of stea Navier-Stokes flows in the whole space Tomoyuki Nakatsuka Preprint No. 15-017 PRAHA 017 Note on the fast

More information

Czechoslovak Mathematical Journal

Czechoslovak Mathematical Journal Czechoslovak Mathematical Journal Oktay Duman; Cihan Orhan µ-statistically convergent function sequences Czechoslovak Mathematical Journal, Vol. 54 (2004), No. 2, 413 422 Persistent URL: http://dml.cz/dmlcz/127899

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Measures and Measure Spaces

Measures and Measure Spaces Chapter 2 Measures and Measure Spaces In summarizing the flaws of the Riemann integral we can focus on two main points: 1) Many nice functions are not Riemann integrable. 2) The Riemann integral does not

More information

The Lusin Theorem and Horizontal Graphs in the Heisenberg Group

The Lusin Theorem and Horizontal Graphs in the Heisenberg Group Analysis and Geometry in Metric Spaces Research Article DOI: 10.2478/agms-2013-0008 AGMS 2013 295-301 The Lusin Theorem and Horizontal Graphs in the Heisenberg Group Abstract In this paper we prove that

More information

Boundedly complete weak-cauchy basic sequences in Banach spaces with the PCP

Boundedly complete weak-cauchy basic sequences in Banach spaces with the PCP Journal of Functional Analysis 253 (2007) 772 781 www.elsevier.com/locate/jfa Note Boundedly complete weak-cauchy basic sequences in Banach spaces with the PCP Haskell Rosenthal Department of Mathematics,

More information

Nonparametric inference for ergodic, stationary time series.

Nonparametric inference for ergodic, stationary time series. G. Morvai, S. Yakowitz, and L. Györfi: Nonparametric inference for ergodic, stationary time series. Ann. Statist. 24 (1996), no. 1, 370 379. Abstract The setting is a stationary, ergodic time series. The

More information

x log x, which is strictly convex, and use Jensen s Inequality:

x log x, which is strictly convex, and use Jensen s Inequality: 2. Information measures: mutual information 2.1 Divergence: main inequality Theorem 2.1 (Information Inequality). D(P Q) 0 ; D(P Q) = 0 iff P = Q Proof. Let ϕ(x) x log x, which is strictly convex, and

More information

Positive entries of stable matrices

Positive entries of stable matrices Positive entries of stable matrices Shmuel Friedland Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago Chicago, Illinois 60607-7045, USA Daniel Hershkowitz,

More information

Positive and null recurrent-branching Process

Positive and null recurrent-branching Process December 15, 2011 In last discussion we studied the transience and recurrence of Markov chains There are 2 other closely related issues about Markov chains that we address Is there an invariant distribution?

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information
