STEADY STATE AND SCALING LIMIT FOR A TRAFFIC CONGESTION MODEL


ILIE GRIGORESCU AND MIN KANG

Abstract. In a general model used in internet congestion control, the time dependent data flow vector x(t) > 0 undergoes a biased random walk on two distinct scales. The amount of data of each component x_i(t) goes up to x_i(t) + 1 with probability p on a unit scale or down to γ x_i(t) with probability 1 − p on a logarithmic scale, where p depends on the joint state of the system. We investigate the long time behavior, the mean field limit, and the one particle case. A scaling limit is proved, in the form of a continuum model with jump rate α(x). The ergodic properties are established via the local Doeblin condition for the case when the rate α is bounded above and away from zero, and an explicit formula for the invariant measure is provided when α is constant.

1. Introduction

In a general model used in internet congestion control [2, 3], related to classical autoregressive models ([8], Chapter 2), the time dependent data flow x(t) undergoes a biased random walk with unit steps in one direction (x moves to x + 1) and on a logarithmic scale in the other (x moves to γx, where 0 < γ < 1). More precisely, assume (Ω, Σ, P) is a probability space and {F_t}_{t≥0} is a filtration. Let ζ_i : [0, ∞) × (0, ∞)^n → [0, 1] be continuous functions, for each index i, 1 ≤ i ≤ n. In addition, we consider a family of n independent Poisson processes {π_i(t)}_{1≤i≤n} with rate λ > 0, adapted to the same filtration. For any starting point x_0 with components x_{0i}, 1 ≤ i ≤ n, n ∈ Z_+, let x(t) denote the pure jump Markov process on (0, ∞)^n with components (x_1(t), x_2(t), ..., x_n(t)), constructed as follows. A Poisson clock π_i(t) is attached to each particle x_i(t), 1 ≤ i ≤ n. When the Poisson clock π_i, associated to the particle x_i(t), rings at time τ, then x_i moves to x_i(τ−) + 1 with probability 1 − ζ_i(τ, x(τ−)) and to γ x_i(τ−) with probability ζ_i(τ, x(τ−)). In this standard construction, there are no simultaneous jumps.
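The construction above is easy to experiment with numerically: each particle carries an independent exponential clock, and at each ring it either steps up by one or contracts to γ times its position. The sketch below is ours, not part of the paper; the function name and the signature of the state-dependent probability ζ are illustrative assumptions.

```python
import random

def simulate_gamma_process(x0, zeta, lam=1.0, gamma=0.5, t_max=10.0, seed=0):
    """Sketch of the n-particle gamma-process via its Poisson coupling.

    x0   : initial positions (x_1(0), ..., x_n(0))
    zeta : hypothetical callback zeta(t, x, i) in [0, 1], the probability
           that particle i jumps back to gamma * x_i when its clock rings
    Returns the state x(t_max).
    """
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    t = 0.0
    while True:
        # the superposition of n independent rate-lam clocks rings at rate n*lam
        t += rng.expovariate(n * lam)
        if t > t_max:
            return x
        i = rng.randrange(n)              # ringing clock, uniform among particles
        if rng.random() < zeta(t, x, i):
            x[i] = gamma * x[i]           # contraction on the logarithmic scale
        else:
            x[i] += 1.0                   # unit step forward
```

With ζ ≡ 0 each component is simply its initial value plus a Poisson process, the domination used for the exponential bound (1.1) in the text.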
Date: October 23.
Mathematics Subject Classification. Primary: 60K35; Secondary: 60J05, 35K15.

The process x(t) can be regarded as a generalization of a continuous time linear state space model LSS(F, G) ([8], page 9), with driving matrices F and G depending on the trajectory of the process.

The paper is organized as follows. The multidimensional process and the canonical coupling with a driving family of Poisson processes, together with a general result on the existence of invariant probability measures (Proposition 1), are presented in the current section. The rest of the paper is divided into two parts. First, Sections 2 and 3 discuss the mean field limit of the multidimensional process in the case when the jump probabilities ζ depend on the average of the particles at a given time. The components decouple as the size of the system n → ∞, reducing the study of all relevant questions to the one particle process case. Existence and uniqueness of the solution to a hybrid linear difference-differential equation (2.2) is proven, and an explicit formula for the invariant measure when ζ is constant is provided. The second part of the paper looks at a scaling limit (proven in Section 4), leading to a process x(t) in the continuum. This moves along the trajectory of a deterministic solution of an ODE, and is interrupted by jumps to γx(t) at random time intervals, as in the discrete version. Section 5 deals with the ergodic properties of the process, via the local Doeblin condition, and again gives an explicit formula for the invariant measure when the jump rate is constant.

1.1. Martingale problem and a class of test functions. Due to the natural bound x_i(t) ≤ x_{0i} + π_i(t), for all 1 ≤ i ≤ n and t ≥ 0,

(1.1) E[exp{θ x_i(t)}] ≤ exp{θ x_{0i} + λt(e^θ − 1)}, θ > 0.

Equivalently, the process {x(t)} can be seen as the solution to a martingale problem. Denote R_i x = (x_1, x_2, ..., x_i + 1, ..., x_n) and L_i x = (x_1, x_2, ..., γx_i, ..., x_n).
For a test function f ∈ C_b((0, ∞)^n), let M_f(t) be defined by

(1.2) M_f(t) = f(x(t)) − f(x(0)) − λ ∫_0^t Σ_{i=1}^n [ (1 − ζ_i(s, x(s−)))( f(R_i x(s−)) − f(x(s−)) ) + ζ_i(s, x(s−))( f(L_i x(s−)) − f(x(s−)) ) ] ds.

Then, M_f(t) is a martingale with quadratic variation

(1.3) ⟨M_f⟩(t) = λ ∫_0^t Σ_{i=1}^n [ (1 − ζ_i(s, x(s−)))( f(R_i x(s−)) − f(x(s−)) )^2
(1.4) + ζ_i(s, x(s−))( f(L_i x(s−)) − f(x(s−)) )^2 ] ds.

Definition 1. The solution to the martingale problem given by (1.2) will be called a γ-process. The average of the positions x_i of the particles will be denoted by x̄. In the special case when ζ_i(t, x) = p(t, x̄) for all particles 1 ≤ i ≤ n, where p is a continuous function p : [0, ∞) × (0, ∞) → [0, 1], {x(t)}_{t≥0} shall be called a mean field γ-process.

Definition 2. Given θ > 0, let C^{1,2}_θ([0, ∞) × (0, ∞)) be the set of functions φ in C^{1,2}([0, ∞) × (0, ∞)) with derivatives ∂_t^a ∂_x^b φ, 0 ≤ a ≤ 1, 0 ≤ b ≤ 2, uniformly bounded by an exponential e^{θx}. Precisely, there exists a positive constant K_φ such that

(1.5) sup_{0≤a≤1, 0≤b≤2} sup_{(t,x) ∈ [0,∞)×(0,∞)} |∂_t^a ∂_x^b φ(t, x)| e^{−θx} = K_φ < ∞.

1.2. Existence of invariant measures. A Poisson process escapes towards infinity as t → ∞. However, a lower bound on the rate ζ_i of going back x → γx ensures that the process has at least one invariant probability measure.

Proposition 1. Assume that there exists p_0 > 0 such that ζ_i ≥ p_0, for all 1 ≤ i ≤ n. Then the process defined by (1.2)-(1.3) has at least one invariant probability measure.

Proof. Our goal is to prove the tightness of the family of probability measures on (0, ∞)^n

(1.6) ν_t(dx) = t^{−1} ∫_0^t P_{x_0}(s, dx) ds, t > 0,

where P_{x_0}(s, dx) = P(x(s) ∈ dx | x(0) = x_0). Without loss of generality we shall take λ = 1 and also show the result only for n = 1. In this case, we drop the subscripts i. First, we shall give an estimate on the first moment of the process. Namely, if M_m(t) = E[x(t)^m], we prove for m = 1

(1.7) ∫_0^t M_1(s) ds ≤ (x_0 + (1 − p_0)t) / (p_0(1 − γ)).

The first observation is that the moments are finite, due to the bound provided by the Poisson processes dominating x(t). Second, the moments are continuous functions in time, an immediate consequence of the differential formula (1.2). We apply the expected value in (1.2) on a time interval t_0 ≤ s ≤ t in the one-particle case, to obtain

M_m(t) = M_m(t_0) + ∫_{t_0}^t E[ (1 − ζ(s, x(s−)))( (x(s−) + 1)^m − x(s−)^m ) − (1 − γ^m) ζ(s, x(s−)) x(s−)^m ] ds
≤ M_m(t_0) + ∫_{t_0}^t [ (1 − p_0) Σ_{j=0}^{m−1} C(m, j) M_j(s) − (1 − γ^m) p_0 M_m(s) ] ds,

for any m ≥ 1, where C(m, j) denotes the binomial coefficient. Particularizing for m = 1, t_0 = 0,

M_1(t) ≤ x_0 + (1 − p_0)t − (1 − γ) p_0 ∫_0^t M_1(s) ds,

which can be re-written with the integral term on the left hand side,

∫_0^t M_1(s) ds ≤ (x_0 + (1 − p_0)t) / ((1 − γ)p_0) ≤ C(x_0) t, for t ≥ 1,

where C(x_0) depends only on x_0. Let M > 0 be a large number. Then, for ν_t defined in (1.6),

(1.8) ν_t((M, ∞)) = (1/t) ∫_0^t P_{x_0}(x(s) > M) ds ≤ (1/t) ∫_0^t (E[x(s)]/M) ds ≤ C(x_0)/M,

showing that

(1.9) lim_{M→∞} lim sup_{t→∞} ν_t((M, ∞)) ≤ lim_{M→∞} C(x_0)/M = 0,

which proves our claim. Any limit point of {ν_t(dx)}_{t>0} is an invariant measure.

2. The one particle process

We want to investigate the dynamics of the process governed by (1.2)-(1.3) in the special case n = 1. In view of Definition 1, without loss of generality, we denote the jump probabilities by p(t, x). Denote by A_t the operator

(2.1) A_t φ(t, x) = (1 − p(t, x))(φ(t, x + 1) − φ(t, x)) + p(t, x)(φ(t, γx) − φ(t, x)),

applied to φ ∈ C^{1,2}_θ([0, ∞) × (0, ∞)). For a probability measure µ_0(dx) on (0, ∞), we say that the time indexed measures µ(t, dx) are a weak solution to the evolution equation ∂_t µ = A*_t µ with initial condition µ(0, dx) = µ_0(dx), where the star indicates the formal adjoint of A_t in the space variable, if

(2.2) ⟨φ(t, x), µ(t, dx)⟩ − ⟨φ(0, x), µ_0(dx)⟩ = ∫_0^t ⟨∂_s φ(s, x) + A_s φ(s, x), µ(s, dx)⟩ ds,

with the notation ⟨φ, µ⟩ designating the space variable integral of the test function φ against µ. When the time dependence is emphasized, we shall write ⟨φ(t), µ(t)⟩.

2.1. The forward equation. The next proposition looks at the case when p(t, x) depends only on time. We write p(t, x) = p(t) for simplicity. On one hand, this is a special case of the one particle process, and Proposition 2 applies to the case when p(t, x) is constant. On the other hand, the interest in this setting comes from the fluid limit of Section 3. By a law of large numbers effect, the average x̄(t) approaches a deterministic limit as in (3.4). In that case, p(t, x̄(t)) = p(t) becomes a function of t only. Uniqueness for the PDE (2.2) is essential for closing the argument of the fluid limit in Theorem 2.

Proposition 2. The solution in the weak sense to equation (2.2), corresponding to the forward equation of the one particle γ-process with jump rates ζ(t, x) = p(t), exists and is unique.

Proof. Existence. It is evident that the transition probability µ(t, dx) = P_{µ_0}(x(t) ∈ dx) of the process defined by (1.2)-(1.3) for n = 1 satisfies the desired equation, which is exactly the forward equation of the process.

Uniqueness. The forward equation in integral form reads

(2.3) ⟨φ(t, x), µ(t)⟩ − ⟨φ(0, x), µ(0)⟩ = ∫_0^t ⟨∂_s φ(s, x) + (1 − p(s))(φ(s, Rx) − φ(s, x)) + p(s)(φ(s, Lx) − φ(s, x)), µ(s)⟩ ds,

and should be valid for test functions φ(t, x) ∈ C^{1,2}_θ([0, ∞) × (0, ∞)), in particular φ(x) = e^{θ′x}, θ′ < θ, implying that solutions have a nontrivial moment generating function (existing on an interval including the origin). Next, let φ(t, x) = x^m, for integers m ≥ 0. We can see that {⟨x^m, µ(t)⟩}_{m≥0} are defined recursively by a system of affine ODEs with uniqueness and global existence of solutions (see [5]).
Two solutions will have equal moments for any fixed t ≥ 0, and hence equal moment generating functions, so the measures are the same.
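For constant jump probability p (and λ = 1), setting the time derivatives in the moment system of the proof above to zero gives a closed recursion for the stationary moments. A minimal numerical sketch, with a function name of our choosing:

```python
from math import comb

def stationary_moments(p, gamma, m_max):
    """Stationary moments M_m = E[x^m] of the one particle gamma-process with
    constant jump probability p, obtained from d/dt E[x(t)^m] = 0:
        0 = (1 - p) * sum_{j < m} C(m, j) * M_j + p * (gamma**m - 1) * M_m.
    """
    M = [1.0]                                  # M_0 = 1 (probability measure)
    for m in range(1, m_max + 1):
        s = sum(comb(m, j) * M[j] for j in range(m))
        M.append((1 - p) * s / (p * (1 - gamma ** m)))
    return M
```

For p = γ = 1/2 this gives M_1 = 2 and M_2 = 20/3, matching the mean (1 − p)/(p(1 − γ)) of the explicit invariant distribution described in Section 2.2.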

2.2. The invariant measure. The next result describes the invariant measure when the jump rate p is constant.

Theorem 1. In the case when the jump probabilities are constant, with ζ(t, x) = p > 0 for all (t, x), the invariant measure µ(dx) is unique and has characteristic function

(2.4) E_µ[e^{iξX}] = Π_{n=0}^∞ ( 1 − ((1 − p)/p)(e^{iγ^n ξ} − 1) )^{−1}, ξ ∈ R.

Alternatively, let {W_n}_{n≥0} be i.i.d. geometric random variables with parameter p, that is P(W_n = k) = (1 − p)^k p, k ≥ 0. Then, the probability measure µ(dx) on (0, ∞) defined in (2.4) is the distribution of the random variable

(2.5) X = Σ_{n=0}^∞ γ^n W_n.

Remark. Under general conditions on the function p(t, x), the solution to (3.4) has a limit as t → ∞. The particles approach a steady state equal to the equilibrium distribution of the process with constant p = lim_{t→∞} p(t, z(t)), whenever the limit exists.

Proof. The existence and uniqueness of the invariant measure µ(dx) is proven in Propositions 1 and 2. The functions x → e^{iξx} are bounded and continuous. The series (2.5) is convergent in distribution (also almost surely) as long as γ < 1, so the distribution of X is well defined. We also notice that the infinite product in (2.4) is convergent due to the inequality |e^{iξ} − 1| = 2|sin(ξ/2)| ≤ |ξ|. Then, it is immediate to verify that the characteristic function (2.4) satisfies the equation

(2.6) ∫_{(0,∞)} A φ(x) µ(dx) = 0,

with A the operator defined at (2.1) for constant p(t, x) = p.

3. The fluid limit for the mean-field model

Throughout this section we assume that p(t, x) is differentiable with bounded derivatives in both variables. We start the investigation by considering the empirical measure of the n particle γ-process

(3.1) µ_n(t, dx) = (1/n) Σ_{i=1}^n δ_{x_i(t)}(dx),

with initial distribution

(3.2) µ_n(0, dx) = (1/n) Σ_{i=1}^n δ_{x_i(0)}(dx).

Assumption (A1). There exists θ > 0 such that

(3.3) lim sup_{n→∞} E[⟨e^{θx}, µ_n(0, dx)⟩] < ∞.

Assumption (A2). The initial distribution is said to have an initial deterministic profile µ_0(dx) ∈ M_1((0, ∞)) if lim_{n→∞} µ_n(0, dx) = µ_0(dx) in probability, in the sense of the weak convergence of finite measures.

Assumption (A3). We choose a finite collection of particles {x^n_j(·)}, 1 ≤ j ≤ l, with l a positive integer fixed for all n. Since the limit is considered as n → ∞, the condition n ≥ l is trivial. We assume that for all 1 ≤ j ≤ l, the initial point x^n_j(0) has a deterministic limit x_{0j}.

Theorem 2. Under (A1) and (A2), the average process x̄_n(·) is tight in D([0, ∞), (0, ∞)) and any limit point x̄(·) is the solution of the ordinary differential equation

(3.4) dy/dt = (1 − p(t, y)) − (1 − γ) p(t, y) y, y(0) = y_0.

Additionally, the empirical measure process (3.1) is tight in the Skorohod space of time-indexed measure-valued paths D([0, ∞), M_1((0, ∞))). Any limit point is the unique weak solution in the sense of (2.2) to the equation

(3.5) ⟨φ(t), µ(t)⟩ − ⟨φ(0), µ(0)⟩ − ∫_0^t ⟨∂_s φ(s) + (1 − p(s, x̄(s)))(φ(s, Rx) − φ(s, x)) + p(s, x̄(s))(φ(s, Lx) − φ(s, x)), µ(s)⟩ ds = 0,

where φ ∈ C^{1,2}_θ([0, ∞) × (0, ∞)).

Proof. The average process. We recall the coupling between x(t) and a family of Poisson processes, as in the discussion from Subsection 1.1. We arrange the particles, including the Poisson points, in a set of n pairs (x_i(t), π_i(t)), 1 ≤ i ≤ n. The Poisson processes are the clocks that trigger the jumps of the particles x_i, and they only move forward. We then have the inequality x_i(t) ≤ x_i(0) + π_i(t) and (1.1) for all i, and

(3.6) x̄_n(t) ≤ x̄(0) + (1/n) Σ_{i=1}^n π_i(t).

The differential formulas (1.2)-(1.3) are valid for f(x) ∈ C_b((0, ∞)^n). Here we are interested in f(x) = n^{−1} Σ_{i=1}^n φ(x_i), with φ ∈ C^{1,2}_θ([0, ∞) × (0, ∞)), for any θ > 0. It is easy to see that we can extend (1.2)-(1.3) to this class of test functions, and implicitly to polynomials, due to the exponential bounds (1.1).

Let T > 0 be fixed but arbitrary. Let φ(x) = x. At time t = 0, the average process x̄_n(0) is tight by (3.3). Then, (1.2)-(1.3) show that for 0 ≤ t_1 ≤ t_2 ≤ T,

E[ sup_{t_1 ≤ s ≤ t ≤ t_2} |x̄_n(t) − x̄_n(s)|^2 ] ≤ C(t_2 − t_1),

with C = C(T) independent of n, showing that {x̄_n(·)}_{n≥1} is tight in D([0, T], (0, ∞)), and any limit point x̄(·) is continuous. Moreover, the tight family of processes {x̄_n(·)}_{n≥1} satisfies, due to Doob's maximal inequality,

(3.7) lim_{n→∞} E[ sup_{t≤T} | x̄_n(t) − x̄_n(0) − ∫_0^t ( (1 − p(s, x̄_n(s−))) − (1 − γ) p(s, x̄_n(s−)) x̄_n(s−) ) ds |^2 ] = 0.

Assumptions (A2) and (A1) imply that x̄_n(0) converges in distribution to the deterministic point y_0 = ∫ x µ_0(dx). Since p is continuous and bounded, and using once more the fact that the expected values of polynomials φ(x) have uniform bounds over n and t ≤ T, we have shown that any limit point x̄(·) solves (3.4). We note that (3.4) has unique local solutions, and since it has an affine bound, it also has global solutions [5].

The fluid limit. The proof of (3.5) follows the same steps as the proof for the average process. One has to prove tightness, which is a consequence of Doob's maximal inequality applied to the martingale (1.2). The tightness is established on the Skorohod space D([0, ∞), M_1((0, ∞))) of time indexed, probability measure-valued paths, continuous to the right and with limits to the left. Moreover, any limit point of the tight family of measure valued processes (indexed by n) satisfies equation (3.5) modulo an error term of order 1/n.
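The limiting equation (3.4) is straightforward to integrate numerically; the sketch below uses a classical fourth-order Runge-Kutta step (the function name and step size are our choices, not the paper's):

```python
def solve_average_ode(p, y0, gamma, t_max, dt=1e-3):
    """Integrate (3.4): dy/dt = (1 - p(t, y)) - (1 - gamma) * p(t, y) * y,
    y(0) = y0, by the classical Runge-Kutta method."""
    def f(t, y):
        q = p(t, y)
        return (1.0 - q) - (1.0 - gamma) * q * y

    steps = int(round(t_max / dt))
    t, y = 0.0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt * k1 / 2)
        k3 = f(t + dt / 2, y + dt * k2 / 2)
        k4 = f(t + dt, y + dt * k3)
        y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return y
```

For constant p = γ = 1/2 the equation reads dy/dt = 1/2 − y/4, whose equilibrium y = 2 agrees with the stationary mean of the one particle process.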
To close the argument, we only need the uniqueness of the solution of the PDE (2.2), proven in Proposition 2. The details of the proof are standard in any hydrodynamic limit [6], and appear in a similar context in [4]. In addition, the proof of Theorem 4 outlines the main steps of essentially the same argument.

Theorem 3. Under assumptions (A1), (A2), (A3), each particle x^n_j(·) is tight and its limit point is equal in law to the one particle γ-process defined by equation (1.2) with n = 1 and a space independent p(t, x) = p(t, x̄(t)), where x̄(·) is the solution of (3.4). Moreover, the joint system of l tagged particles converges to a collection of independent one particle processes starting at x_{0j}, 1 ≤ j ≤ l.

Proof. Under (A1), (A2), the average process x̄_n(·) converges to the solution of the ODE (3.4). It is easy to see that under (A3), which takes care of the initial point, each individual particle is tight. In the Skorohod space D([0, T], (0, ∞)^l), consider the joint l-dimensional limit point {x_j(·)}_{1≤j≤l} of the tagged particle collection x^n_j(·), 1 ≤ j ≤ l, obtained as n → ∞. Taking into account the continuity of the coefficients p(s, ·), we see that the l dimensional process {x_j(·)}_{1≤j≤l} solves the martingale problem (1.2)-(1.3) with n = l, i = j, ζ_j(s, x) = p(s, x̄(s)) and x(0) = (x_{01}, x_{02}, ..., x_{0l}). Naturally, in this setting the l components are independent, since no coefficient of the infinitesimal generator depends on more than one component.

4. Scaling limit

We consider the mean field γ-process with scaling given by n = N, a time speed up t → Nt given by λ = N, and the shrinking of the forward jump size to N^{−1}. In addition, p_N(t, x) = N^{−1} α(t, x), with α ∈ C^{0,1}_b([0, ∞) × (0, ∞)), the space of functions with bounded continuous derivatives up to the multiindex (0, 1). Finally, the backward jump size obeys the same rule with γ ∈ (0, 1). The scaled process considered is x^N(t) = x(Nt). Denote the empirical measure by µ^N(t, dx) = N^{−1} Σ_{i=1}^N δ_{x_i(Nt)}(dx). We recall that for φ ∈ C^{1,2}_θ([0, ∞) × (0, ∞)) we use the shorthand ⟨φ, µ⟩ for the integral of φ against the measure µ.

4.1. Initial profile. We shall assume that there exists θ > 0 such that

(4.1) lim sup_{N→∞} E[⟨e^{θx}, µ^N(0, dx)⟩] < +∞.

Assume that there exists a deterministic measure µ_0(dx) ∈ M_1((0, ∞)), having all moments finite, such that, for any ɛ > 0 and any φ ∈ C_b((0, ∞)), we have the limit

(4.2) lim_{N→∞} P( |⟨φ(x), µ^N(0, dx)⟩ − ⟨φ(x), µ_0(dx)⟩| > ɛ ) = 0.

The measure µ_0(dx) is called the initial profile of the particle system. Notice that, formally, one can write x̄^N(0) = ⟨x, µ^N(0, dx)⟩. Due to the condition (4.1) we are allowed to use polynomial functions as test functions in the definition of weak convergence, even though (0, ∞) is unbounded. Then lim_{N→∞} ⟨x, µ^N(0, dx)⟩ = ⟨x, µ_0(dx)⟩, which implies that z_0 = ∫ x µ_0(dx).

4.2. The ODE satisfied by the average. For a given α(t, x) and an initial value z_0, construct the solution z(t) to the ODE

(4.3) z′(t) = 1 − (1 − γ) α(t, z(t)) z(t), z(0) = z_0.

Remark. Let λ = 1, α constant and γ = 1/2. In this case z(t) = (2/α)(1 − (1 − αz_0/2) e^{−αt/2}) remains bounded for all t ≥ 0.

4.3. The PDE. Starting with the solution z(t) of equation (4.3), we define the operator B_t on the space of test functions φ ∈ C^{1,2}_θ([0, ∞) × (0, ∞)) as follows

(4.4) B_t φ(t, x) = ∂_x φ(t, x) + α(t, z(t))(φ(t, γx) − φ(t, x)).

For any t, B*_t denotes the formal adjoint of B_t in the space variable. We shall say that time indexed measures µ(t, dx) satisfy the equation

(4.5) ∂_t µ = B*_t µ, µ(0, dx) = µ_0(dx),

with initial condition µ_0(dx) in the weak sense, if µ(0, dx) = µ_0(dx) and, for any test function φ ∈ C^{1,2}_θ([0, ∞) × (0, ∞)),

(4.6) ⟨φ(t), µ(t)⟩ − ⟨φ(0), µ(0)⟩ − ∫_0^t ⟨∂_s φ(s) + B_s φ(s), µ(s)⟩ ds = 0.

A solution µ(t, dx) to (4.6) can be obtained probabilistically as the transition probabilities of the inhomogeneous Markov process solving the martingale problem associated to (4.4). Since the construction can be done from a scaled pure jump random walk, there is only a question about the uniqueness of the solution, which is settled in Step 4 of the proof of Theorem 4.
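For constant α and γ = 1/2, equation (4.3) reads z′ = 1 − (α/2)z, and the closed-form solution quoted in the Remark above can be cross-checked against a direct numerical integration; both function names below are ours:

```python
from math import exp

def z_explicit(t, alpha, z0):
    """Closed-form solution of (4.3) for constant alpha and gamma = 1/2:
       z'(t) = 1 - (alpha / 2) * z(t), z(0) = z0."""
    return (2.0 / alpha) * (1.0 - (1.0 - alpha * z0 / 2.0) * exp(-alpha * t / 2.0))

def z_euler(t, alpha, z0, dt=1e-4):
    """Explicit Euler integration of the same ODE, for comparison."""
    z = z0
    for _ in range(int(round(t / dt))):
        z += dt * (1.0 - 0.5 * alpha * z)
    return z
```

The two agree to the Euler error O(dt), and z(t) stays bounded by the equilibrium value 2/α.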

Theorem 4. For any initial profile, the average x̄^N(·) converges in probability to a deterministic continuous function z(t) solving the equation (4.3). In addition, the measure-valued processes µ^N(·, dx) converge weakly in probability to the unique measure-valued path µ(·, dx) ∈ C([0, ∞), M_1((0, ∞))) solving the equation (4.5).

Proof. Without loss of generality, we shall prove the theorem on a time interval [0, T], where T is fixed but arbitrary.

Step 1. We shall prove that the moment bounds (4.1) hold for µ^N(t, dx), for any t ≥ 0. In fact, we prove

(4.7) lim sup_{N→∞} E[ sup_{s≤t} ⟨e^{θx}, µ^N(s, dx)⟩ ] < +∞.

Also,

(4.8) P( sup_{s≤t} ⟨x^m, µ^N(s, dx) − µ^N(0, dx)⟩ > λt ) ≤ e^{−C(t)N}.

This inequality is based on the martingale inequality applied to a process π^N(t) defined by coupling. At time t = 0, both processes start from the same points x_i(0) = π_i(0). Whenever the clock associated to, say, particle i rings, the particle π_i simply jumps forward by N^{−1}. Naturally, the π_i(t) − x_i(0) are independent Poisson processes, and all bounds are easy to calculate. The uniform integrability from (4.7) implies that {µ^N(t, dx)}_{N>0} is a tight family of measures for any t ≥ 0.

Step 2. We prove the limit for the average process. First, assume φ(x) = x. Denote ⟨φ, µ(t)⟩ = z(t) in this case. Then equation (4.6) becomes an ordinary differential equation of the form (4.3), where we assume that the initial profile has asymptotic average z_0.

Step 3. Let φ ∈ C^{1,2}_θ([0, ∞) × (0, ∞)). For each N, the differential formula corresponding to ⟨φ(t), µ^N(t)⟩ can be obtained directly from (1.2), applied to the function f(t, x) = N^{−1} Σ_i φ_N(t, x_i), with the substitution t → Nt, using the scaled test function φ_N(t, x) = φ(N^{−1}t, x). The differential formulas (1.2)-(1.3) give

(4.9) M^N_φ(t) = ⟨φ(t), µ^N(t)⟩ − ⟨φ(0), µ^N(0)⟩
(4.10) − ∫_0^t N^{−1} Σ_{i=1}^N { ∂_s φ(s, x^N_i(s−))
(4.11) + N(1 − p_N(s, x̄^N(s−)))( φ(s, x^N_i(s−) + 1/N) − φ(s, x^N_i(s−)) )
(4.12) + N p_N(s, x̄^N(s−))( φ(s, γ x^N_i(s−)) − φ(s, x^N_i(s−)) ) } ds.

(1) The integrands on the right hand side of (4.10)-(4.12) have uniformly bounded moments due to the estimates (4.7), which proves that ⟨φ(·), µ^N(·)⟩, indexed by N, are tight processes on D([0, T], M_1((0, ∞))). Moreover, any limit point on the Skorohod space is, in fact, continuous in time; more precisely, it belongs to C([0, T], M_1((0, ∞))).

(2) Modulo error terms of order N^{−1}, line (4.11) is equal to N^{−1} Σ_{i=1}^N ∂_x φ(s, x^N_i(s−)). Here we use that α ∈ C^{0,1}_b([0, ∞) × (0, ∞)) and the Taylor formula with the remainder in integral form.

(3) If φ_γ(t, x) = φ(t, γx) and ɛ > 0, we have

(4.13) lim_{N→∞} P( sup_{t≤T} | ⟨φ(t), µ^N(t)⟩ − ⟨φ(0), µ^N(0)⟩
(4.14) − ∫_0^t ⟨∂_s φ(s) + ∂_x φ(s) + α(s, x̄^N(s−))(φ_γ − φ), µ^N(s−)⟩ ds | > ɛ ) = 0,

as a consequence of the martingale inequality and the fact that the quadratic variation (1.3) of M^N_φ(T) is of order O(N^{−1}).

(4) Any limit point µ(·, dx) of the tight sequence of empirical measures {µ^N(·, dx)} is continuous in time and satisfies equation (4.5) in the weak sense. We only have to prove that the equation has a unique deterministic solution, completed in the next step.

Step 4. To prove the uniqueness of the solution to (4.5), we define µ_m(t) = ⟨x^m, µ(t, dx)⟩, m ≥ 0, and see that (4.6) with φ(t, x) = x^m gives the recurrence

(4.15) (d/dt) µ_m(t) = m µ_{m−1}(t) + α(t, z(t))(γ^m − 1) µ_m(t), µ_0(t) = 1.

These affine ODEs have unique global solutions since α is bounded (a general result when the equation has an affine bound [5]). Once again, (4.7) implies the existence of the moment generating function of the measures µ(t, dx) obtained as limit points. The equality of moments shows uniqueness.

Remark. An alternative proof for the uniqueness of the weak solution to (4.5) can be carried out as follows. We solve the forward equation corresponding to the operator (4.4) by constructing the Markov process y(t) and calculating explicitly its transition kernel g(s, x; t, dx′). Next, we show that g is absolutely continuous, with Radon-Nikodym derivative a smooth function vanishing at infinity, denoted by g as well. Finally we take φ(·, ·) = g(·, ·; t, x′) in (4.6) and obtain uniqueness.

5. The one particle process for the scaled model

The one particle case arises naturally. If we isolate a single particle, tagged by the label i = 1 without loss of generality, equations (1.2)-(1.3) applied to f(t, x) = φ(t, x_1), in the same manner as in (4.9), show that {x^N_1(t)}_{N>0} is tight and any limit point is a process solving the martingale problem associated to (4.4).

5.1. Irreducibility and recurrence. It is intuitive that if α(x) approaches zero, the time before a jump backwards becomes very large and the particle escapes, at constant speed, to infinity. The extreme case α(x) ≡ 0 is when the particle performs a uniform forward motion. Evidently, there is no equilibrium probability measure. The other enlightening case is when α(x) is constant, when Theorem 6 gives the explicit form of the invariant measure. However, if α is only bounded away from zero, we have the next theorem.

Theorem 5. The time homogeneous process with generator (4.4), with α bounded away from zero, has a unique invariant measure, absolutely continuous with respect to the Lebesgue measure λ(dx).

Remark 1. The invariant measure may not be finite. However, a proof along the same lines as Proposition 1 shows that the invariant measure is a probability measure. Alternatively, Proposition 3 implies the same when 0 < γ < 1/2.

Remark 2.
Theorem 5 is not true when α(x) converges to zero as x → ∞, an example being α(x) = (1 + x)^{−1}. Taking γ = 1/2, equation (4.3) gives z′ = 1 − z/(2(1 + z)), hence 2z − 2 ln(2 + z) = t + C, with C = 2z_0 − 2 ln(2 + z_0). We note that z′ > 1/2, hence z(t) increases to infinity. Notice that as t → ∞ (at equilibrium, if it exists), z(t) → ∞ and α(z(t)) → 0. The steady state equation (4.5) then reduces to µ′ = 0, so the solution is the Lebesgue measure (not a probability distribution).
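The escape in Remark 2 is already visible at the level of the deterministic equation (4.3): with α(x) = (1 + x)^{−1} and γ = 1/2 the drift z′ stays above 1/2, so z grows without bound. A minimal Euler check (our function, not the paper's):

```python
def z_final(t_max=1000.0, dt=0.01, z0=1.0, gamma=0.5):
    """Euler integration of (4.3) with the decaying rate alpha(x) = 1/(1 + x):
       z' = 1 - (1 - gamma) * z / (1 + z), which exceeds 1/2 for gamma = 1/2."""
    z = z0
    for _ in range(int(round(t_max / dt))):
        z += dt * (1.0 - (1.0 - gamma) * z / (1.0 + z))
    return z
```

Since 1/2 < z′ < 1, the value z_final() must land between roughly t_max/2 and t_max, far from any bounded equilibrium.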

Proof. Let l(dx) be the Lebesgue measure on (0, ∞) and A ∈ B((0, ∞)) with l(A) > 0. Write G(x, A) = E_x[ ∫_0^∞ 1_A(x(t)) dt ] for the Green function associated to the process, with x ∈ (0, ∞). We shall use Theorem 10.0.1 in [8] to prove the theorem. We have to show that (1) P_x(τ_A < ∞) > 0, i.e. the process is l-irreducible, and (2) G(x, A) = ∞, for any x ∈ (0, ∞) and any A ∈ B((0, ∞)) with l(A) > 0.

Without loss of generality, we assume that A is an open interval (a, b). Let 0 < a′ < a and denote τ = inf{t ≥ 0 : x(t) < a′} and by τ_A the first hitting time of A. Denote by α_0 > 0 a lower bound of α, such that α(x) ≥ α_0 > 0, and pick an arbitrary ɛ > 0. Let χ_1, χ_2, ... be the i.i.d. holding times of the Poisson process driving x(t), w_1, w_2, ... the actual holding times of the process and τ_0 = 0, τ_1, τ_2, ... the actual jump times of x(t). More precisely, for j ≥ 1,

(5.1) τ_j = inf{ t > τ_{j−1} : χ_j < ∫_{τ_{j−1}}^t α(x(s)) ds }, w_j = τ_j − τ_{j−1}.

Part (1). We first show that P_x(τ < ∞) = 1 for all x > 0. By construction, (5.1) implies that w_j ≤ α_0^{−1} χ_j with probability one. Right after exactly the n-th jump, a particle that started at x will be at

(5.2) x(τ_n) = γ^n x + Σ_{k=0}^{n−1} γ^{k+1} w_{n−k} ≤ γ^n x + α_0^{−1} Σ_{k=0}^{n−1} γ^{k+1} χ_{n−k}.

It is straightforward to see that, unless x < a′, the time τ coincides with one of the jump times τ_n, n ≥ 1; in other words, the process enters (0, a′) exactly after a jump. A coupling argument based on a process with constant rate α = α_0, driven by the holding times {χ_j}_{j≥1}, together with (5.2), shows that if the process with constant rate reaches (0, a′), then {x(t)}_{t≥0} reaches it even earlier. Proposition 4 (not dependent on the results in this section) concludes the argument.

Part (2). Using once again (5.2), the position at t = τ_n, after n consecutive holding times of length less than ɛ, is

x(τ_n) = γ^n x + Σ_{k=0}^{n−1} γ^{k+1} w_{n−k} ≤ γ^n x + ɛ α_0^{−1} (1 − γ)^{−1}.

We choose n and ɛ such that γ^n x < a′/2 and ɛ < (a′/2) α_0 (1 − γ). Noticing that τ_n ≥ τ, we have

P_x(τ_A < ∞) ≥ P_x( χ_1 ≤ ɛ, χ_2 ≤ ɛ, ..., χ_n ≤ ɛ, w_{n+1} > a ) ≥ (1 − e^{−ɛ})^n e^{−α_1 a} > 0,

where α_1 < ∞ denotes an upper bound for the rate α.

Part (3). Starting with τ_1 = τ defined in Part (1), for i = 1, 2, ... we set

(5.3) σ_i = inf{ t > τ_i : x(t) ∈ A }, τ_{i+1} = inf{ t > σ_i : x(t) ∈ (0, a′) }.

In view of the results from Part (1), the event that all τ_i − τ_{i−1} < ∞, i ≥ 1, has probability one. We need to calculate

(5.4) G(x, A) = E_x[ ∫_0^∞ 1_A(x(t)) dt ] ≥ E_x[ Σ_{i=1}^∞ ∫_{τ_i}^{τ_{i+1}} 1_A(x(t)) dt ].

Applying the strong Markov property to (5.4), it is enough to show that

(5.5) inf_{0 < y ≤ a′} E_y[ ∫_0^{τ_1} 1_A(x(t)) dt ] > 0.

For χ an exponential random variable with mean value one, independent of the process, define the first jump time of the process w = inf{ t > 0 : χ < ∫_0^t α(x(s)) ds }. The rates α(x) are bounded above by α_1 < ∞ and α_1 w ≥ χ. Then

E_y[ ∫_0^{τ_1} 1_A(x(t)) dt ] ≥ ((b − a)/2) P_y( w + y > (a + b)/2 ) ≥ ((b − a)/2) P_y( w > (a + b)/2 ) ≥ ((b − a)/2) P( χ > α_1 (a + b)/2 ) = ((b − a)/2) exp{−α_1 (a + b)/2} > 0.

5.2. Ergodic properties. Let τ_F = inf{ t > 0 : x(t) ∈ F } be the first entry time in the open set F. Both the following definition, which is a variant of the local Doeblin condition, and the next theorem, due to Orey, are adapted from [1] and [7].

Definition 3. (Local Doeblin condition) A subset F of the state space S of the Markov process will be said to be Doeblin attractive if (i) P_x(τ_F < ∞) = 1 for any x ∈ S, and (ii) there exist t_0 > 0, a probability measure ν_0(dx) concentrated on F and a constant c ∈ (0, 1) such that, for all x ∈ F and all Borel sets B of S, we have P_x(x(t_0) ∈ B) ≥ c ν_0(B).

Theorem (Orey). If a Markov process has a Doeblin attractive set, then the process has a unique invariant probability measure µ and, for every x in the state space, we have the limit lim_{t→∞} ||P_x(x(t) ∈ ·) − µ(·)|| = 0, where ||·|| is the total variation norm of a finite measure.

Proposition 3. If γ ∈ (0, 1/2), then any set (0, a_0), a_0 > 0, is a Doeblin attractive subset of the state space S = (0, ∞).

Remark. The condition γ < 1/2 can be relaxed by replacing the condition that exactly one jump has happened (as in the proof of the proposition) with the condition that exactly n > 1 jumps have happened, for an appropriate integer n.

Proof. We shall use the same notations for the holding times and jump times as in the proof of Theorem 5. In that proof we show that condition (i) in Definition 3 is satisfied for any set (0, a_0). We proceed to the proof of (ii). Pick a point x ∈ (0, a_0). Denote A(y) = ∫_0^y α(x′) dx′ and notice that A is differentiable with continuous, strictly positive derivative, so A is invertible. Then

(5.6) A(w_1 + x) − A(x) = χ_1,
(5.7) A(γ(x + w_1) + w_2) − A(γ(x + w_1)) = χ_2.

Whenever γ < 1/2, we can fix t_0 such that t_0/a_0 ∈ (γ(1 − γ)^{−1}, (1 − γ)γ^{−1}) and consider y_1 ≤ y_2, two numbers in [γa_0 + γt_0, t_0]. The condition on t_0 makes sure (i) that t_0 ≥ γa_0 + γt_0, (ii) that whenever y ≥ γa_0 + γt_0, the lower bound for w_1 in (5.10) is nonnegative, and (iii) that this interval has a nontrivial intersection with (0, a_0).

If exactly one jump has been observed up to time t_0, then x(t_0) = γ(x + w_1) + (t_0 − w_1) and

(5.8) P_x( y_1 < x(t_0) ≤ y_2 ) ≥ P_x( y_1 < x(t_0) ≤ y_2, w_1 ≤ t_0 < w_1 + w_2 )
(5.9) = P_x( y_1 < γ(x + w_1) + (t_0 − w_1) ≤ y_2, w_1 ≤ t_0 < w_1 + w_2 ).

Due to the conditions on t_0, y_1, y_2, the event whose probability we calculate in (5.9) is equal to

(5.10) { (1 − γ)^{−1}(γx + t_0 − y_2) ≤ w_1 < (1 − γ)^{−1}(γx + t_0 − y_1), t_0 − w_1 < w_2 }.

From (5.7) we see that t_0 − w_1 < w_2 is the same as A(γ(x + w_1) + t_0 − w_1) − A(γ(x + w_1)) < χ_2. Equation (5.6) shows that w_1 = A^{−1}(A(x) + χ_1) − x is a function of χ_1, proving that w_1 and χ_2 are independent. Let

ρ(w) = α(x + w) exp{ −( A(w + x) − A(x) ) } ≥ α_0 exp{−α_1 w}

be the density of the random variable w_1. If y′_i = (1 − γ)^{−1}(γx + t_0 − y_i), i = 1, 2, then the probability from (5.9) is equal to

(5.11) ∫_{y′_2}^{y′_1} exp{ −( A(γ(x + w) + t_0 − w) − A(γ(x + w)) ) } ρ(w) dw ≥ (1 − γ)^{−1} α_0 exp{−α_1 t_0} (y_2 − y_1).

We have shown (ii) from Definition 3, with ν_0(dy) equal to the uniform probability measure on [γa_0 + γt_0, t_0] and with any constant c ∈ (0, 1) such that c ≤ α_0 (1 − γ)^{−1} (t_0(1 − γ) − γa_0) e^{−α_1 t_0}, after incorporating the normalization constant of the uniform measure in the lower bound.

5.3. The case α constant. In the present discussion we assume a constant α(t, x) = α > 0 and consider the process with pre-generator (4.4).

Proposition 4. For any a′ > 0 and any x > 0, let τ = inf{ t > 0 : x(t) < a′ } and u_θ(x) = E_x[e^{−θτ}], θ ≥ 0. If û(β), β ≥ 0, is the Laplace transform of u_θ(x) in the variable x > 0, then

(5.12) û(β)(θ + 1 − β) = γ^{−1} û(γ^{−1}β) + θβ^{−1}(1 − e^{−βa′}) − 1,

which leads to

(5.13) û(β) = γ Σ_{n=0}^∞ [ θβ^{−1}γ^n (1 − e^{−γ^{−n}βa′}) − 1 ] Π_{k=0}^n [ γ(1 + θ − γ^{−k}β) ]^{−1},

whenever β ≠ γ^n(1 + θ), n ≥ 0. Moreover, u_0(x) = P_x(τ < ∞) = 1.

Proof. For any θ ≥ 0, let

(5.14) u_θ(x) = E_x[e^{−θτ}], with u_θ(x) = 1 for x ≤ a′.

We are interested mainly in P_x(τ < ∞) = u_0(x), having to show that u_0(x) ≡ 1. Let χ be the first exponential holding time, of intensity one (without loss of generality). Then, if x < a′, we have u_θ(x) = 1, and if x ≥ a′ we can derive the relation

E_x[e^{−θτ}] = ∫_0^∞ E_x[e^{−θτ} | χ = s] e^{−s} ds = ∫_0^∞ E_{γ(x+s)}[e^{−θ(s+τ)}] e^{−s} ds,

which reduces to

(5.15) u_θ(x) = ∫_0^∞ u_θ(γ(x + s)) e^{−(θ+1)s} ds, x ≥ a′.

After the substitution y = γ(x + s),

(5.16) u_θ(x) e^{−(θ+1)x} = γ^{−1} ∫_{γx}^∞ u_θ(y) e^{−γ^{−1}(θ+1)y} dy, x ≥ a′.

Due to the integral equation (5.16) and the fact that u_θ(x) is bounded, we derive that it is continuous, and then that it is also differentiable. After differentiating both sides of (5.16) and a simplification by the factor e^{−(θ+1)x}, we have

(5.17) u′_θ(x) = (θ + 1) u_θ(x) − u_θ(γx), x ≥ a′.

Naturally, this is (B − θ)u_θ = 0, with B = B_t in the time homogeneous case and with α ≡ 1 in (4.4). Keeping in mind that (5.17) is true only for x ≥ a′, we calculate the Laplace transform û_θ(β), β ≥ 0, of u_θ(x) (in the space variable). It satisfies

(5.18) −1 + β û(β) = (θ + 1) [ û(β) − β^{−1}(1 − e^{−βa′}) ] − [ γ^{−1} û(γ^{−1}β) − β^{−1}(1 − e^{−βa′}) ],

which gives (5.12). Solving the recurrence relation for β → γ^{−k}β, 0 ≤ k ≤ n − 1, and performing the necessary cancellations, we obtain (5.13). We want to prove that the solution is unique. The Laplace transform of the difference v_θ(x) of two possible solutions of (5.15) satisfies

(5.19) γ^{−1} v̂(γ^{−1}β) = (1 + θ − β) v̂(β), v̂(∞) = 0.

Iterating once again the relation (5.19) for β → γ^{−k}β, 0 ≤ k ≤ n − 1, we can see that

(5.20) v̂(β) = Π_{k=0}^{n−1} [ γ(1 + θ − γ^{−k}β) ]^{−1} v̂(γ^{−n}β), β > 1 + θ.

Letting n → ∞, we have v̂ = 0. When θ = 0, the constant solution equal to 1 solves (5.15)-(5.18). By uniqueness, we have proved that P_x(τ < ∞) = 1.
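For constant rate α, the positions right after each jump satisfy the recursion Y_n = γ(Y_{n−1} + W_n) with i.i.d. exponential holding times W_n (compare (5.2)). The Monte Carlo sketch below, with illustrative function name and parameters, checks that the long-run post-jump mean approaches the geometric series value γ/(α(1 − γ)):

```python
import random

def post_jump_positions(alpha, gamma, n, y0=0.0, seed=1):
    """Positions Y_1, ..., Y_n right after each jump of the continuum process
    with constant jump rate alpha: drift at speed one, jump x -> gamma * x.
    Recursion: Y_k = gamma * (Y_{k-1} + W_k), with W_k ~ Exp(alpha)."""
    rng = random.Random(seed)
    y, out = y0, []
    for _ in range(n):
        y = gamma * (y + rng.expovariate(alpha))
        out.append(y)
    return out
```

With α = 1 and γ = 1/2 the limiting mean is 1, in agreement with Σ_{k≥1} γ^k E[W_k].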

5.4. The invariant measure. If the scaled process solving the martingale problem associated to the operator (4.4) with $\alpha(t, x) = \alpha(x)$ is ergodic, then $\bar x^N(t)$ approaches a deterministic value as $t \to \infty$,
$$(5.21)\qquad \lim_{t\to\infty} \bar x^N(t) = \langle x, \mu^N_{eq}(dx)\rangle,$$
and then we look at the scaled limit
$$(5.22)\qquad \lim_{N\to\infty} \langle x, \mu^N_{eq}(dx)\rangle = z_{eq}.$$
Then the steady state equation derived from (4.5) is
$$(5.23)\qquad \langle \phi' + \alpha(\phi(\gamma x) - \phi(x)),\, \mu\rangle = 0, \qquad \alpha = \alpha(z_{eq}), \quad \mu(dx) \in M_1((0, \infty)),$$
where $\phi \in C^{1,2}_\theta([0, \infty) \times (0, \infty))$. An alternative way to identify the steady state is by setting $Bw(x) = 2w(2x) - \alpha^{-1}w'(x)$ (here in the case $\gamma = 1/2$); then a steady state is a fixed point of the operator $B$.

A heuristic approach, which offers insight into the way the equilibrium is approached by the process, is to look at the position of the Markov process right after each jump. A particle starting at $y$ drifts with constant velocity one in the positive direction. When an exponential clock with intensity $\alpha$ rings, it jumps to a position equal to $\gamma$ times its current position. Let $(W_n)$ be a sequence of i.i.d. exponentials with intensity $\alpha$, and let $Y_n$ be the position at the $n$th jump. Then
$$Y_1 = \gamma(y + W_1), \quad Y_2 = \gamma(Y_1 + W_2), \;\ldots,\; Y_n = \gamma(Y_{n-1} + W_n),$$
yielding $Y_n = \gamma^n y + \sum_{j=1}^n \gamma^{n+1-j} W_j$. We calculate the limiting distribution of $Y_n$. The moment generating function is
$$(5.24)\qquad E\big[e^{\xi Y_n}\big] = e^{\xi\gamma^n y}\,\prod_{j=1}^{n}\Big(1 - \frac{\xi}{\alpha}\gamma^{n+1-j}\Big)^{-1},$$
with limit as $n \to \infty$ equal to
$$(5.25)\qquad E\big[e^{\xi Y_\infty}\big] = \Big[\prod_{j=1}^{\infty}\Big(1 - \frac{\xi\gamma^j}{\alpha}\Big)\Big]^{-1}.$$
One can see from (5.25) that $Y_\infty \sim \sum_{k=1}^{\infty} \gamma^k \widetilde W_k$, where the $\widetilde W_k$ are i.i.d. exponential with parameter $\alpha$ and $\sim$ indicates equivalence in probability law.

Theorem 6. The steady state of the process, and also the stationary solution of the equation (4.5) for constant $\alpha(t, x) = \alpha > 0$, is the measure with moment generating function (5.25).
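The geometric-sum representation behind (5.25) is easy to probe numerically: iterating $Y_n = \gamma(Y_{n-1} + W_n)$ should reproduce the moments of $\sum_{k\ge 1}\gamma^k \widetilde W_k$, namely mean $\gamma/(\alpha(1-\gamma))$ and variance $\gamma^2/((1-\gamma^2)\alpha^2)$. The sketch below uses the illustrative values $\gamma = 1/2$, $\alpha = 1$ (not taken from the paper), and a starting point far from equilibrium.

```python
import random

def post_jump_position(y0, n_jumps, alpha=1.0, gamma=0.5, rng=random):
    """Iterate Y_n = gamma * (Y_{n-1} + W_n) with W_n i.i.d. Exp(alpha)."""
    y = y0
    for _ in range(n_jumps):
        y = gamma * (y + rng.expovariate(alpha))
    return y

random.seed(1)
alpha, gamma = 1.0, 0.5
# After 60 jumps the contribution gamma^60 * y0 of the start is negligible,
# so Y_60 is effectively a sample from the limiting law Y_infinity.
samples = [post_jump_position(5.0, 60, alpha, gamma) for _ in range(20000)]

mean_est = sum(samples) / len(samples)
var_est = sum((s - mean_est) ** 2 for s in samples) / len(samples)
mean_exact = gamma / (alpha * (1.0 - gamma))            # sum_{k>=1} gamma^k / alpha
var_exact = gamma ** 2 / ((1.0 - gamma ** 2) * alpha ** 2)
```

For these parameters the exact mean is $1$ and the exact variance is $1/3$; the Monte Carlo estimates agree to within sampling error, illustrating how the post-jump chain forgets its starting point at geometric speed.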

Proof. Once the uniqueness of the invariant measure is proven (Theorem 5), the proof reduces to a verification of $B^*\mu = 0$, the equation (4.5) when $\alpha$ is constant and the solution is stationary. Using the moment generating function, it is enough to check (4.4) for exponential test functions. The details are identical to the proof of Theorem 1.

References

[1] Duflo, M. Random iterative models. Applications of Mathematics (New York), 34. Springer-Verlag, Berlin.
[2] Eun, D. Y. On the limitation of fluid-based approach for internet congestion control. IEEE International Conference on Computer Communications and Networks (ICCCN), San Diego, CA, Oct. 2005.
[3] Eun, D. Y. Fluid approximation of a Markov chain for TCP/AQM with many flows. Preprint.
[4] Grigorescu, I.; Kang, M. (2004) Hydrodynamic limit for a Fleming-Viot type system. Stochastic Process. Appl. 110, no. 1.
[5] Hale, J. K. Ordinary Differential Equations. Wiley-Interscience.
[6] Kipnis, C.; Landim, C. (1999) Scaling Limits of Interacting Particle Systems. Springer-Verlag, New York.
[7] Lacroix, J. Chaînes de Markov et processus de Poisson.
[8] Meyn, S. P.; Tweedie, R. L. Markov Chains and Stochastic Stability. Communications and Control Engineering Series. Springer-Verlag London, Ltd., London.

Department of Mathematics, University of Miami, Coral Gables, FL
E-mail address: igrigore@math.miami.edu

Department of Mathematics, North Carolina State University, Raleigh, NC
E-mail address: kang@math.ncsu.edu


More information

McGill University Department of Mathematics and Statistics. Ph.D. preliminary examination, PART A. PURE AND APPLIED MATHEMATICS Paper BETA

McGill University Department of Mathematics and Statistics. Ph.D. preliminary examination, PART A. PURE AND APPLIED MATHEMATICS Paper BETA McGill University Department of Mathematics and Statistics Ph.D. preliminary examination, PART A PURE AND APPLIED MATHEMATICS Paper BETA 17 August, 2018 1:00 p.m. - 5:00 p.m. INSTRUCTIONS: (i) This paper

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

LAW OF LARGE NUMBERS FOR THE SIRS EPIDEMIC

LAW OF LARGE NUMBERS FOR THE SIRS EPIDEMIC LAW OF LARGE NUMBERS FOR THE SIRS EPIDEMIC R. G. DOLGOARSHINNYKH Abstract. We establish law of large numbers for SIRS stochastic epidemic processes: as the population size increases the paths of SIRS epidemic

More information

On an uniqueness theorem for characteristic functions

On an uniqueness theorem for characteristic functions ISSN 392-53 Nonlinear Analysis: Modelling and Control, 207, Vol. 22, No. 3, 42 420 https://doi.org/0.5388/na.207.3.9 On an uniqueness theorem for characteristic functions Saulius Norvidas Institute of

More information

MATH 220: MIDTERM OCTOBER 29, 2015

MATH 220: MIDTERM OCTOBER 29, 2015 MATH 22: MIDTERM OCTOBER 29, 25 This is a closed book, closed notes, no electronic devices exam. There are 5 problems. Solve Problems -3 and one of Problems 4 and 5. Write your solutions to problems and

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

Random Times and Their Properties

Random Times and Their Properties Chapter 6 Random Times and Their Properties Section 6.1 recalls the definition of a filtration (a growing collection of σ-fields) and of stopping times (basically, measurable random times). Section 6.2

More information

Latent voter model on random regular graphs

Latent voter model on random regular graphs Latent voter model on random regular graphs Shirshendu Chatterjee Cornell University (visiting Duke U.) Work in progress with Rick Durrett April 25, 2011 Outline Definition of voter model and duality with

More information