Mean-field dual of cooperative reproduction


1 The mean-field dual of systems with cooperative reproduction. Joint work with Tibor Mach (Prague) and A. Sturm (Göttingen). Friday, July 6th, 2018.

2 Poisson construction of Markov processes. Let $(X_t)_{t\ge 0}$ be a continuous-time Markov process with finite state space $S$, semigroup $(P_t)_{t\ge 0}$ of transition probabilities $P_t(x,y)$, and generator $G$, i.e.,
$$P_t = e^{tG} = \sum_{n=0}^\infty \frac{1}{n!}\,t^n G^n.$$
We view $P_t$ and $G$ as matrices and use the notation
$$\mu P_t(y) := \sum_{x\in S}\mu(x)P_t(x,y), \qquad P_t f(x) := \sum_{y\in S}P_t(x,y)f(y).$$
A Markov generator satisfies $G(x,y)\ge 0$ ($x\ne y$) and $\sum_y G(x,y)=0$. We call $G(x,y)$ ($x\ne y$) the rate of jumps from $x$ to $y$.

3 Poisson construction of Markov processes. Let $\mathcal M$ be a collection of maps $m:S\to S$ and let $(r_m)_{m\in\mathcal M}$ be nonnegative constants. Then
$$Gf(x) := \sum_{m\in\mathcal M} r_m\big\{f(m(x)) - f(x)\big\} \qquad (1)$$
defines a Markov generator. Conversely, every Markov generator can (not uniquely) be written in the form (1). We call (1) a random mapping representation. Let $\omega$ be a Poisson point set on $\mathcal M\times\mathbb R$ with intensity
$$\mu(\{m\}\times A) = r_m\,\ell(A) \qquad (A\in\mathcal B(\mathbb R)),$$
where $\mathcal B(\mathbb R)$ is the Borel $\sigma$-field on $\mathbb R$ and $\ell$ denotes Lebesgue measure. If $\sum_{m\in\mathcal M} r_m<\infty$, then we may order the elements of
$$\omega_{s,t} := \omega\cap\big(\mathcal M\times(s,t]\big) = \{(m_1,t_1),\dots,(m_n,t_n)\}$$
such that $t_1<\cdots<t_n$.

4 Poisson construction of Markov processes. Define random maps $\mathbf X_{s,t}:S\to S$ ($s\le t$) by $\mathbf X_{s,t} := m_n\circ\cdots\circ m_1$.
Theorem (Poisson construction of Markov processes). Define maps $(\mathbf X_{s,t})_{s\le t}$ as above in terms of a Poisson point set $\omega$. Let $X_0$ be an $S$-valued random variable, independent of $\omega$. Then $X_t := \mathbf X_{0,t}(X_0)$ ($t\ge 0$) is a Markov process with generator $G$.
Remark. The sample paths of $X$ are right-continuous. We get left-continuous paths by defining $\mathbf X_{s,t}$ in terms of $\omega_{s,t} := \omega\cap\big(\mathcal M\times(s,t)\big)$.
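The construction above is easy to simulate. Below is a minimal Python sketch (not from the slides; all names are illustrative) that samples the Poisson events on $\mathcal M\times(0,t]$ and returns the composed random map $\mathbf X_{0,t}$ for a finite collection of maps with summable rates.

```python
import random

def sample_poisson_flow(maps, rates, t, rng):
    """Sample omega on M x (0,t] and return the composed random map X_{0,t}.

    maps  : list of functions m : S -> S
    rates : list of nonnegative rates r_m (same length as maps)"""
    total = sum(rates)
    events = []
    s = 0.0
    while True:
        s += rng.expovariate(total)                    # next Poisson event time
        if s > t:
            break
        m = rng.choices(maps, weights=rates, k=1)[0]   # map m with prob. r_m / total
        events.append((s, m))

    def X_0t(x):
        for _, m in events:                            # apply the maps in time order
            x = m(x)
        return x

    return X_0t

# toy example on S = {0,1}: a "set to 1" map at rate 2 and a "set to 0" map at rate 1
rng = random.Random(1)
X = sample_poisson_flow([lambda x: 1, lambda x: 0], [2.0, 1.0], t=5.0, rng=rng)
print(X(0))                                            # X_t = X_{0,t}(X_0) with X_0 = 0
```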

5 Poisson construction of Markov processes. The random maps $(\mathbf X_{s,t})_{s\le t}$ form a stochastic flow:
$$\mathbf X_{s,s} = 1 \quad\text{and}\quad \mathbf X_{t,u}\circ\mathbf X_{s,t} = \mathbf X_{s,u},$$
with independent increments, in the sense that $\mathbf X_{t_0,t_1},\dots,\mathbf X_{t_{n-1},t_n}$ are independent for each $t_0<\cdots<t_n$. Different stochastic flows can define the same Markov process, since there may be many different ways of writing down a random mapping representation
$$Gf(x) = \sum_{m\in\mathcal M} r_m\big\{f(m(x)) - f(x)\big\}$$
for a given generator $G$. Compare: different SDEs can define the same diffusion process.

6 Cooperative branching. Let $\Lambda$ be a finite set and let $S := \{0,1\}^\Lambda$ be the space of all functions $\Lambda\ni i\mapsto x(i)\in\{0,1\}$. For each $i,j,k\in\Lambda$, we define a cooperative branching map $\mathrm{coop}_{ijk}:S\to S$ by
$$\mathrm{coop}_{ijk}x(l) := \begin{cases} \big(x(i)\wedge x(j)\big)\vee x(k) & \text{if } l=k,\\ x(l) & \text{otherwise,}\end{cases}$$
and for each $i\in\Lambda$, a death map $\mathrm{death}_i:S\to S$ by
$$\mathrm{death}_i x(k) := \begin{cases} 0 & \text{if } k=i,\\ x(k) & \text{otherwise.}\end{cases}$$
On $\{0,\dots,n-1\}$ (with periodic boundary conditions) we define
$$Gf(x) := \sum_i\big\{f(\mathrm{death}_i(x))-f(x)\big\} + \alpha\sum_i\big\{f(\mathrm{coop}_{i-1,i,i+1}(x))-f(x)\big\} + \alpha\sum_i\big\{f(\mathrm{coop}_{i+1,i,i-1}(x))-f(x)\big\}.$$
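As an illustration, here is a hedged Python sketch of the maps $\mathrm{coop}_{ijk}$ and $\mathrm{death}_i$, together with the map/rate collection entering the generator above (configurations are tuples indexed by $0,\dots,n-1$; the helper names are ours, not from the slides).

```python
def coop(i, j, k):
    """coop_{ijk}: site k becomes x(k) OR (x(i) AND x(j)); all other sites unchanged."""
    def m(x):
        y = list(x)
        y[k] = x[k] | (x[i] & x[j])
        return tuple(y)
    return m

def death(i):
    """death_i: site i is set to 0; all other sites unchanged."""
    def m(x):
        y = list(x)
        y[i] = 0
        return tuple(y)
    return m

def cooperative_branching_maps(n, alpha):
    """Maps and rates of the generator on {0,...,n-1} with periodic boundary conditions."""
    maps, rates = [], []
    for i in range(n):
        maps.append(death(i))
        rates.append(1.0)
        maps.append(coop((i - 1) % n, i, (i + 1) % n))
        rates.append(alpha)
        maps.append(coop((i + 1) % n, i, (i - 1) % n))
        rates.append(alpha)
    return maps, rates
```

Fed into a flow sampler such as the sketch after slide 4, these maps and rates simulate the lattice process.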

7 A graphical representation. [Figure: space-time graphical representation; we denote $\mathrm{coop}_{ijk}$ and $\mathrm{death}_i$ by suitable symbols.]

8 A graphical representation. [Figure: the map $x\mapsto\mathbf X_{0,t}(x)$ read off from the graphical representation; $X_t = \mathbf X_{0,t}(X_0)$.]

9 Pathwise duality. Def. Two maps $m:S\to S$ and $\hat m:T\to T$ are dual with duality function $\psi$ iff
$$\psi\big(m(x),y\big) = \psi\big(x,\hat m(y)\big) \qquad (x\in S,\ y\in T).$$
Let $(\mathbf X_{s,t})_{s\le t}$ be a stochastic flow defined in terms of a Poisson point set $\omega$ whose elements are pairs $(m,t)$ with $m\in\mathcal M$. Fix some duality function $\psi$ and assume that each $m\in\mathcal M$ has a dual $\hat m$ w.r.t. $\psi$. Then we can define a dual flow $(\hat{\mathbf X}_{t,s})_{t\ge s}$ by
$$\mathbf X_{s,t} := m_n\circ\cdots\circ m_1 \quad\text{and}\quad \hat{\mathbf X}_{t,s} := \hat m_1\circ\cdots\circ\hat m_n,$$
with $\omega_{s,t} := \omega\cap\big(\mathcal M\times(s,t]\big) = \{(m_1,t_1),\dots,(m_n,t_n)\}$ and $t_1<\cdots<t_n$.

10 Pathwise duality. Let $X_0$ and $Y_0$ be independent of $\omega$. For fixed $s\in\mathbb R$, setting $X_t := \mathbf X_{s,s+t}(X_0)$ and $Y_t := \hat{\mathbf X}_{s,s-t}(Y_0)$ ($t\ge 0$) defines Markov processes with generators
$$Gf(x) = \sum_{m\in\mathcal M} r_m\big\{f(m(x))-f(x)\big\}, \qquad \hat Gf(y) = \sum_{m\in\mathcal M} r_m\big\{f(\hat m(y))-f(y)\big\}.$$
Then $X$ and $Y$ are pathwise dual in the sense that
$$\psi\big(\mathbf X_{s,t}(x),y\big) = \psi\big(x,\hat{\mathbf X}_{t,s}(y)\big).$$
Slight complication: $(Y_t)_{t\ge 0}$ has left-continuous sample paths.

11 Pathwise duality. Trivial example: set $T := \mathcal P(S)$, the power set of $S$, and $\psi(x,A) := 1_{\{x\in A\}}$. Then each map $m:S\to S$ is dual w.r.t. $\psi$ to the inverse image map $m^{-1}:\mathcal P(S)\to\mathcal P(S)$:
$$\psi\big(m(x),A\big) = 1_{\{m(x)\in A\}} = 1_{\{x\in m^{-1}(A)\}} = \psi\big(x,m^{-1}(A)\big).$$
Nontrivial examples are associated with invariant subspaces of $\mathcal P(S)$.

12 Monotone systems duality. Let $S$ be equipped with a partial order. A set $A\subset S$ is increasing if $x\in A$ and $x\le y$ imply $y\in A$. A map $m:S\to S$ is monotone if $x\le y$ implies $m(x)\le m(y)$, or equivalently, if $A$ increasing implies $m^{-1}(A)$ increasing. We can encode increasing subsets by their sets of minimal elements. This leads to the duality function
$$\psi(x,Y) := 1_{\{x\ge y\text{ for some }y\in Y\}}.$$
Note: the maps $\mathrm{coop}_{ijk}$ and $\mathrm{death}_i$ are monotone.
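The monotonicity claim can be checked by brute force on a small $\Lambda$, assuming the coordinatewise partial order on $\{0,1\}^\Lambda$. A self-contained illustrative sketch (the map definitions are repeated here in compact form):

```python
from itertools import product

def coop(i, j, k):
    return lambda x: tuple((x[l] | (x[i] & x[j])) if l == k else x[l] for l in range(len(x)))

def death(i):
    return lambda x: tuple(0 if l == i else x[l] for l in range(len(x)))

def is_monotone(m, n):
    """Check x <= y implies m(x) <= m(y) for all pairs in {0,1}^n (coordinatewise order)."""
    states = list(product((0, 1), repeat=n))
    return all(all(a <= b for a, b in zip(m(x), m(y)))
               for x in states for y in states
               if all(a <= b for a, b in zip(x, y)))

n = 4
print(is_monotone(coop(0, 1, 2), n), is_monotone(death(3), n))   # True True
```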

13-16 Monotone systems duality. [Figures: step-by-step construction of the dual process in the space-time graphical representation.]

17 Monotone systems duality. [Figure: the dual process $Y_t$ in the graphical representation.] It satisfies
$$P\big[X_t\ge y\text{ for some }y\in Y_0\big] = P\big[X_0\ge y\text{ for some }y\in Y_t\big].$$

18 The mean-field limit. Let $S$ be a Polish space, let $\mathcal G$ be a collection of measurable maps $g:S^k\to S$, where $k=k_g\ge 0$ may depend on $g$, and let $(r_g)_{g\in\mathcal G}$ be nonnegative constants. We will be interested in Markov processes with state space $S^N$ that are constructed in the following way. With rate $r_g$, we sample a collection $(i_1,\dots,i_{k_g})$ of elements of $\{1,\dots,N\}$, all different. We calculate $g\big(x(i_1),\dots,x(i_{k_g})\big)$. At some site $j$, we replace the value $x(j)$ by $g\big(x(i_1),\dots,x(i_{k_g})\big)$. The site $j$ may or may not be an element of $\{i_1,\dots,i_{k_g}\}$, depending on the process we wish to construct.

19 The mean-field limit. In particular, we are interested in $S=\{0,1\}$ and the maps $\mathrm{cob}:S^3\to S$ and $\mathrm{dth}:S^0\to S$ defined as
$$\mathrm{cob}\big(x(1),x(2),x(3)\big) := \big(x(1)\wedge x(2)\big)\vee x(3), \qquad \mathrm{dth}(\,) := 0.$$
Here we view $S^0$ as a set with a single element, the empty sequence $(\,)$.

20 The mean-field limit. Let $[N] := \{1,\dots,N\}$ and
$$[N]_k := \big\{\mathbf i=(i_1,\dots,i_k)\in[N]^k : i_m\ne i_n\ \forall\, n\ne m\big\},$$
which has $N_k := N(N-1)\cdots(N-k+1)$ elements. For $g:S^k\to S$, $k\le N$, $\mathbf i\in[N]_k$, and $j\in[N]$, define $g^{\mathbf i,j}:S^N\to S^N$ by
$$g^{\mathbf i,j}x(j') := \begin{cases} g\big(x(i_1),\dots,x(i_k)\big) & \text{if } j'=j,\\ x(j') & \text{otherwise.}\end{cases}$$

21 The mean-field limit. We are interested in the Markov process with state space $\{0,1\}^N$ and generator
$$Gf(x) := \frac{\alpha N}{N_3}\sum_{\mathbf i\in[N]_3}\big\{f(\mathrm{cob}^{\mathbf i,i_3}(x))-f(x)\big\} + \sum_{i\in[N]}\big\{f(\mathrm{dth}^{(\,),i}(x))-f(x)\big\}.$$
Note that $\mathrm{cob}^{\mathbf i,i_3} = \mathrm{coop}_{i_1,i_2,i_3}$, since we replace the value $x(i_3)$ by $\big(x(i_1)\wedge x(i_2)\big)\vee x(i_3)$.

22 The mean-field limit. In general, we are interested in generators of the form
$$Gf(x) = N\sum_{g\in\mathcal G}\frac{r_g}{N_{k_g}}\sum_{\mathbf i\in[N]_{k_g}}\bigg[\sum_{m=1}^{k_g}p_m\big\{f(g^{\mathbf i,i_m}(x))-f(x)\big\} + \Big(1-\sum_{m=1}^{k_g}p_m\Big)\frac1N\sum_{j\in[N]}\big\{f(g^{\mathbf i,j}(x))-f(x)\big\}\bigg].$$
In words: with rate $Nr_g$, we select a random $k_g$-tuple $\mathbf i=(i_1,\dots,i_{k_g})$ and calculate $g\big(x(i_1),\dots,x(i_{k_g})\big)$. With probability $p_m$, we write the outcome in place of $x(i_m)$. With the remaining probability $1-\sum_{m=1}^{k_g}p_m$, we write the outcome at a uniformly chosen site $j\in[N]$.
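The verbal description above translates directly into a jump-chain simulation. The sketch below is our own illustration (not the authors' code): it performs one transition of the mean-field model; for cooperative branching with deaths we take $p=(0,0,1)$ for cob, matching slide 21, and $k=0$ for dth.

```python
import random

def mean_field_step(x, gens, rng):
    """One jump of the mean-field chain on S^N.  gens is a list of tuples
    (g, k, r, p) with g : S^k -> S, k = k_g, r = r_g and p = (p_1,...,p_k)."""
    N = len(x)
    rates = [N * r for (_, _, r, _) in gens]
    g, k, _, p = rng.choices(gens, weights=rates, k=1)[0]
    i = rng.sample(range(N), k)                  # distinct sites i_1,...,i_k
    out = g(*(x[a] for a in i))
    target = rng.randrange(N)                    # default: uniform site j
    u, acc = rng.random(), 0.0
    for m, pm in enumerate(p):
        acc += pm
        if u < acc:
            target = i[m]                        # write at i_m with probability p_m
            break
    y = list(x)
    y[target] = out
    return tuple(y)

# cooperative branching with deaths (slide 21): write cob(...) at i_3, dth at a uniform site
cob = lambda x1, x2, x3: (x1 & x2) | x3
dth = lambda: 0
alpha = 4.5
gens = [(cob, 3, alpha, (0.0, 0.0, 1.0)), (dth, 0, 1.0, ())]

rng = random.Random(0)
N = 100
x = tuple(rng.randint(0, 1) for _ in range(N))
for _ in range(2000):
    x = mean_field_step(x, gens, rng)
print(sum(x) / N)                                # empirical density of 1s
```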

23 The mean-field limit. Let $(X^N_t)_{t\ge 0}$ be the resulting Markov process and let
$$\mu^N_t := \mu\big[X^N_t\big] \quad (t\ge 0), \qquad\text{where}\quad \mu[x] := \frac1N\sum_{i\in[N]}\delta_{x(i)},$$
denote the associated empirical process. In the mean-field limit $N\to\infty$, we expect $(\mu^N_t)_{t\ge 0}$ to be close to the solution of an ODE.

24 The mean-field ODE. Let $\mathcal M_1(S)$ denote the space of all probability measures on $S$, equipped with the topology of weak convergence and the associated Borel $\sigma$-algebra. For each measurable map $g:S^k\to S$, we define a measurable map $\check g:\mathcal M_1(S)^k\to\mathcal M_1(S)$ by
$$\check g(\mu_1,\dots,\mu_k) := P\big[g(X_1,\dots,X_k)\in\cdot\,\big], \quad\text{where } X_1,\dots,X_k \text{ are independent with } P[X_i\in\cdot\,]=\mu_i.$$
We also define $T_g:\mathcal M_1(S)\to\mathcal M_1(S)$ by $T_g(\mu) := \check g(\mu,\dots,\mu)$. Note that $T_g$ is in general nonlinear, unless $k=1$.
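For finite $S$ the maps $\check g$ and $T_g$ can be computed exactly by enumeration. A small illustrative sketch, with a probability measure represented as a dict and one fixed coordinate convention for cob:

```python
from itertools import product

def g_check(g, mus):
    """The map ǧ: law of g(X_1,...,X_k) with X_i independent, X_i distributed as mus[i];
    measures are dicts {state: probability} on a finite S."""
    out = {}
    for combo in product(*(mu.items() for mu in mus)):
        xs = tuple(x for x, _ in combo)
        w = 1.0
        for _, p in combo:
            w *= p
        y = g(*xs)
        out[y] = out.get(y, 0.0) + w
    return out

def T(g, k, mu):
    """T_g(mu) := ǧ(mu,...,mu) with k copies of mu."""
    return g_check(g, [mu] * k)

# example: cob on S = {0,1}; the intensity of T_cob(mu) is z + (1-z)*z^2 when mu({1}) = z
cob = lambda x1, x2, x3: (x1 & x2) | x3
z = 0.3
mu = {0: 1.0 - z, 1: z}
print(T(cob, 3, mu)[1], z + (1.0 - z) * z * z)   # the two numbers agree up to rounding
```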

25 The mean-field ODE. Theorem [Mach, Sturm & S. 18] Assume that $\sum_{g\in\mathcal G} r_g k_g<\infty$. Then, for each initial state $\mu_0\in\mathcal M_1(S)$, the mean-field ODE
$$\partial_t\mu_t = \sum_{g\in\mathcal G} r_g\big\{T_g(\mu_t)-\mu_t\big\} \qquad (t\ge 0) \qquad (2)$$
has a unique solution. Here, writing $\langle\mu,\phi\rangle := \int\phi\,\mathrm d\mu$, we interpret (2) in a weak sense: for each bounded measurable $\phi:S\to\mathbb R$, the function $t\mapsto\langle\mu_t,\phi\rangle$ is continuously differentiable and
$$\partial_t\langle\mu_t,\phi\rangle = \sum_{g\in\mathcal G} r_g\big\{\langle T_g(\mu_t),\phi\rangle - \langle\mu_t,\phi\rangle\big\} \qquad (t\ge 0).$$

26 The mean-field ODE. Theorem [Mach, Sturm & S. 18] Assume that $\sum_{g\in\mathcal G} r_g k_g<\infty$ and let $S$ be finite. Let $(\mu^N_t)_{t\ge 0}$ be empirical processes such that $\mu^N_0\to\mu_0$ for some $\mu_0\in\mathcal M_1(S)$. Then
$$P\Big[\sup_{0\le t\le T}\big\|\mu^N_t-\mu_t\big\|\ge\varepsilon\Big] \xrightarrow[N\to\infty]{} 0 \qquad \forall\,\varepsilon>0,\ T<\infty,$$
where $(\mu_t)_{t\ge 0}$ solves the mean-field ODE (2).
Note: something similar should hold for infinite (even uncountable) $S$.
Note: the probabilities $p_m$ drop out of the limiting ODE.

27 The mean-field ODE. Recall that in our example $S=\{0,1\}$, $\mathcal G=\{\mathrm{cob},\mathrm{dth}\}$, $r_{\mathrm{cob}}=\alpha$, and $r_{\mathrm{dth}}=1$. A probability measure $\mu$ on $\{0,1\}$ is uniquely determined by $\mu(\{1\})$. Setting $X_t := \mu_t(\{1\})$, we find the mean-field ODE
$$\partial_t X_t = \alpha X_t^2(1-X_t) - X_t =: F_\alpha(X_t).$$
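A minimal sketch (our illustration) integrating this one-dimensional ODE by forward Euler; as the next slides show, for $\alpha>4$ the long-time limit depends on which side of the unstable middle fixed point the initial density lies.

```python
def F(alpha, x):
    """Right-hand side F_alpha(x) = alpha*x^2*(1-x) - x of the mean-field ODE."""
    return alpha * x * x * (1.0 - x) - x

def integrate(alpha, x0, t_max, dt=1e-3):
    x, t = x0, 0.0
    while t < t_max:
        x += dt * F(alpha, x)
        t += dt
    return x

alpha = 4.5                        # here the fixed points are 0, 1/3 and 2/3
print(integrate(alpha, 0.30, 50))  # starts below 1/3: converges to 0
print(integrate(alpha, 0.40, 50))  # starts above 1/3: converges to 2/3
```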

28 Cooperative branching. [Plot of $F_\alpha(x)$ for $\alpha=3$.] For $\alpha<4$, the equation $\partial_t X_t = F_\alpha(X_t)$ has a single, stable fixed point $x=0$.

29 Cooperative branching. [Plot of $F_\alpha(x)$ for $\alpha=4$.] For $\alpha=4$, a second fixed point appears at $x=0.5$.

30 Cooperative branching. [Plot of $F_\alpha(x)$ for a value $\alpha>4$.] For $\alpha>4$, there are two stable fixed points and one unstable fixed point, which separates the domains of attraction of the other two.

31 Cooperative branching. [Figure: fixed points $z_{\mathrm{low}}$, $z_{\mathrm{mid}}$, $z_{\mathrm{upp}}$ of $\partial_t X_t = F_\alpha(X_t)$ as a function of $\alpha$.]

32 A recursive tree representation. We can construct the process $(X^N_t)_{t\ge 0}$ from a stochastic flow $(\mathbf X^N_{s,t})_{s\le t}$ which is defined in terms of a Poisson point set $\omega$. We aim to find a similar stochastic representation for solutions $(\mu_t)_{t\ge 0}$ of the mean-field ODE (2). In the limit $N\to\infty$, the state $X^N_t(j)$ of a randomly chosen site $j\in[N]$ is approximately distributed according to $\mu_t$. We trace this site back in time until the first time its state is changed by the application of a map $g^{\mathbf i,j}$. From this moment on, we trace back the sites $i_1,\dots,i_{k_g}$, and so on...

33 A recursive tree representation. [Figure: tracing back a site yields a tree of coop and death events; $X_t(j)$ has law $\mu_t$.] This leads to a representation of $(\mu_t)_{t\ge 0}$ in terms of a marked branching process.

34 A recursive tree representation. Let $\mathbb T$ be the set of all finite words $\mathbf i = i_1\cdots i_n$ ($n\ge 0$) made up from the alphabet $\mathbb N_+=\{1,2,\dots\}$. We view $\mathbb T$ as a tree with root $\varnothing$, the word of length zero, in which each individual $\mathbf i$ has infinitely many potential offspring $\mathbf i1,\mathbf i2,\dots$ Let $r := \sum_{g\in\mathcal G} r_g$ and let $(\gamma_{\mathbf i})_{\mathbf i\in\mathbb T}$ be i.i.d. with law $P[\gamma_{\mathbf i}=g] = r^{-1}r_g$ ($g\in\mathcal G$). We inductively define a random subtree $\mathcal T\subset\mathbb T$ which contains the root and satisfies $\mathbf ij\in\mathcal T$ iff $\mathbf i\in\mathcal T$ and $j\le k_{\mathbf i}$, where $k_{\mathbf i} := k_{\gamma_{\mathbf i}}$ is the integer such that $\gamma_{\mathbf i}:S^{k_{\mathbf i}}\to S$. Then $\mathcal T$ is the family tree of a branching process with maps $(\gamma_{\mathbf i})_{\mathbf i\in\mathcal T}$ attached to its vertices, such that the individual $\mathbf i$ has $k_{\mathbf i}$ offspring.

35 A recursive tree representation. For any subtree $U\subset\mathcal T$ that contains the root, we write $\partial U := \{\mathbf ij\in\mathcal T\setminus U : \mathbf i\in U\}$. Let $(\sigma_{\mathbf i})_{\mathbf i\in\mathbb T}$ be i.i.d. exponentially distributed random variables with mean $r^{-1}$, independent of $(\gamma_{\mathbf i})_{\mathbf i\in\mathbb T}$. We interpret $\sigma_{\mathbf i}$ as the lifetime of $\mathbf i$ and let
$$\tau_{i_1\cdots i_n} := \sigma_\varnothing + \sigma_{i_1} + \cdots + \sigma_{i_1\cdots i_{n-1}} \quad\text{and}\quad \tau^\dagger_{\mathbf i} := \tau_{\mathbf i} + \sigma_{\mathbf i}$$
denote its birth and death time. Then $\mathcal T_t := \{\mathbf i\in\mathcal T : \tau^\dagger_{\mathbf i}\le t\}$ and $\partial\mathcal T_t$ denote the sets of individuals that have died before time $t$ resp. are alive at time $t$. In particular, $(\partial\mathcal T_t)_{t\ge 0}$ is a branching process in which each individual $\mathbf i$, at rate $r_g$, is replaced by offspring $\mathbf i1,\dots,\mathbf ik_g$, for each $g\in\mathcal G$.

36 A recursive tree representation. [Figure: the recursive tree with maps coop and death attached to its internal vertices; the root value $X_\varnothing$ has law $\mu_t$, and the values $X_{32}, X_{21}, X_{23}, X_{131}, X_{132}, X_{133}$ at the individuals alive at time $t$ are i.i.d. with law $\mu_0$.]

37 A recursive tree representation. We let $\mathcal F_t := \sigma\big((\partial\mathcal T_s)_{0\le s\le t},\,(\gamma_{\mathbf i})_{\mathbf i\in\mathcal T_t}\big)$ denote the filtration generated by the branching process $(\partial\mathcal T_t)_{t\ge 0}$ as well as the maps attached to the particles that have died by time $t$.

38 A recursive tree representation. Given a finite subtree $U\subset\mathcal T$ that contains the root, we define a map $G_U:S^{\partial U}\to S$ by
$$G_U\big((x_{\mathbf i})_{\mathbf i\in\partial U}\big) := x_\varnothing, \quad\text{where } (x_{\mathbf i})_{\mathbf i\in U\cup\partial U} \text{ satisfy } x_{\mathbf i} = \gamma_{\mathbf i}(x_{\mathbf i1},\dots,x_{\mathbf ik_{\mathbf i}}) \ (\mathbf i\in U).$$
In particular, we set $G_t := G_{\mathcal T_t}$.
Theorem [Mach, Sturm & S. 18] Assume that $\sum_{g\in\mathcal G} r_g k_g<\infty$. Then the solution of the mean-field ODE (2) is given by
$$\mu_t = E\big[T_{G_t}(\mu_0)\big] \qquad (t\ge 0),$$
i.e., $\mu_t = P[X_\varnothing\in\cdot\,]$, where $(X_{\mathbf i})_{\mathbf i\in\mathcal T_t\cup\partial\mathcal T_t}$ satisfy
(i) conditional on $\mathcal F_t$, the $(X_{\mathbf i})_{\mathbf i\in\partial\mathcal T_t}$ are i.i.d. with law $\mu_0$;
(ii) $X_{\mathbf i} = \gamma_{\mathbf i}(X_{\mathbf i1},\dots,X_{\mathbf ik_{\mathbf i}})$ ($\mathbf i\in\mathcal T_t$).
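The theorem suggests a simple Monte Carlo scheme: grow the branching tree up to time $t$, attach i.i.d. $\mu_0$-values to the individuals alive at time $t$, and evaluate the root recursively. Below is a hedged Python sketch for cooperative branching with deaths (our illustration; the tree grows quickly with $t$, so $t$ is kept moderate, and the coordinate convention used for cob does not affect the law of the root).

```python
import random

def root_value(alpha, t, z0, rng):
    """Sample the root value of the recursive tree run over the horizon t,
    with i.i.d. Bernoulli(z0) values (law mu_0) at the individuals alive at t."""
    r = alpha + 1.0                               # total rate r = r_cob + r_dth

    def value(remaining):
        sigma = rng.expovariate(r)                # lifetime, mean 1/r
        if sigma > remaining:                     # alive at the horizon
            return 1 if rng.random() < z0 else 0
        if rng.random() < 1.0 / r:                # map dth, chosen with prob. r_dth/r
            return 0
        x1 = value(remaining - sigma)             # map cob: three offspring
        x2 = value(remaining - sigma)
        x3 = value(remaining - sigma)
        return (x1 & x2) | x3

    return value(t)

def mu_t_estimate(alpha, t, z0, samples, seed=0):
    rng = random.Random(seed)
    return sum(root_value(alpha, t, z0, rng) for _ in range(samples)) / samples

# compare with the Euler solution of dX/dt = alpha*X^2*(1-X) - X at the same time t
print(mu_t_estimate(alpha=4.5, t=0.8, z0=0.5, samples=4000))
```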

39 A recursive tree representation. In the special case that $k_g=1$ for each $g\in\mathcal G$, the mean-field ODE (2) is just the backward equation of a continuous-time Markov chain in which each map $g\in\mathcal G$ is applied with Poisson rate $r_g$. We can think of the collection of random variables $(\gamma_{\mathbf i},\sigma_{\mathbf i})_{\mathbf i\in\mathcal T}$ as a generalization of the Poisson construction of a continuous-time Markov chain, where time now has a tree-like structure.

40 Unique ergodicity. Lemma. Assume that
$$P\big[\exists\, t<\infty \text{ s.t. } G_t \text{ is constant}\big] = 1.$$
Then the mean-field ODE (2) has a unique fixed point $\nu$, and solutions started in an arbitrary initial law $\mu_0$ satisfy $\|\mu_t-\nu\|\to 0$ as $t\to\infty$, where $\|\cdot\|$ denotes the total variation norm.

41 Unique ergodicity. For our process with cooperative branching and deaths, say that $\mathcal S\subset\mathcal T$ is a good subtree if $\varnothing\in\mathcal S$ and
(i) $\gamma_{\mathbf i}\ne\mathrm{dth}$ for all $\mathbf i\in\mathcal S$;
(ii) for all $\mathbf i\in\mathcal S$, the set $\{\mathbf i1,\mathbf i2,\mathbf i3\}\cap\mathcal S$ is either $\{\mathbf i1\}$ or $\{\mathbf i2,\mathbf i3\}$.
Lemma. The following events are a.s. equal:
(i) $\mathcal T$ contains no good subtree;
(ii) $G_t$ is constant for some $t<\infty$.
Moreover, $P[\mathcal T$ contains a good subtree$]>0$ iff $\alpha\ge 4$.

42 Good subtrees. [Figure: the recursive tree from slide 36, with coop and death maps attached; $X_\varnothing$ has law $\mu_t$ and the leaf values are i.i.d. with law $\mu_0$.]

43 Good subtrees. [Figure: the same tree, now showing only the values $X_{32}, X_{21}, X_{23}, X_{131}, X_{132}, X_{133}$ (i.i.d. with law $\mu_0$) and the root value $X_\varnothing$ with law $\mu_t$.]

44 Good subtrees. [Figure: a good subtree inside the recursive tree.]

45 Good subtrees. [Figure: comparison with the dual process $Y_t$ in the graphical representation.]

46 A Recursive Distributional Equation. Fixed points $\nu$ of the mean-field ODE (2) solve the Recursive Distributional Equation (RDE)
$$\mu = r^{-1}\sum_{g\in\mathcal G} r_g\,T_g(\mu). \qquad (3)$$
For each solution $\nu$ of the RDE, it is possible to define a collection of random variables $(\gamma_{\mathbf i},X_{\mathbf i})_{\mathbf i\in\mathbb T}$ such that:
(i) $(\gamma_{\mathbf i})_{\mathbf i\in\mathbb T}$ is an i.i.d. collection of $\mathcal G$-valued random variables with law $P[\gamma_{\mathbf i}=g]=r^{-1}r_g$ ($g\in\mathcal G$);
(ii) for each finite subtree $U\subset\mathcal T$ that contains the root, the $(X_{\mathbf i})_{\mathbf i\in\partial U}$ are i.i.d. with common law $\nu$ and independent of $(\gamma_{\mathbf i})_{\mathbf i\in U}$;
(iii) $X_{\mathbf i} = \gamma_{\mathbf i}(X_{\mathbf i1},\dots,X_{\mathbf ik_{\mathbf i}})$ ($\mathbf i\in\mathcal T$).
Following Aldous and Bandyopadhyay (2005), we call such a collection of random variables a Recursive Tree Process (RTP).

47 Recursive Tree Processes. Let $X_\varnothing$ have law $\mu$. Then the RDE (3) says
$$X_\varnothing \overset{\mathrm d}{=} \gamma\big(X_1,\dots,X_{k_\gamma}\big), \quad\text{with } X_1,X_2,\dots \text{ i.i.d. with law }\mu, \text{ independent of } \gamma \text{ with } P[\gamma=g]=r^{-1}r_g.$$
We can think of fixed points $\nu$ of the mean-field ODE (2) as a generalization of the invariant law of a (continuous-time) Markov chain. A Recursive Tree Process (RTP) is then a generalization of a stationary (continuous-time) Markov chain. If we add independent exponentially distributed lifetimes $(\sigma_{\mathbf i})_{\mathbf i\in\mathbb T}$ as before, then for each $t\ge 0$: conditional on the $\sigma$-field $\mathcal F_t$ of events measurable before time $t$, the $(X_{\mathbf i})_{\mathbf i\in\partial\mathcal T_t}$ are i.i.d. with common law $\nu$.
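For cooperative branching with deaths, the RDE reduces to a one-dimensional fixed-point equation for the intensity $z=\mu(\{1\})$, namely $z = (\alpha+1)^{-1}\,\alpha\,\big(z+(1-z)z^2\big)$, whose solutions are the fixed points of $F_\alpha$. A quick numerical check (illustrative code, not from the slides):

```python
import math

def rde_rhs(alpha, z):
    """Right-hand side of the RDE (3) for the intensity z = mu({1}):
    r^{-1} * (r_cob * T_cob(mu)({1}) + r_dth * T_dth(mu)({1}))."""
    t_cob = z + (1.0 - z) * z * z      # intensity of T_cob(mu)
    t_dth = 0.0                        # T_dth(mu) = delta_0
    return (alpha * t_cob + 1.0 * t_dth) / (alpha + 1.0)

alpha = 4.5
z_mid = 0.5 - math.sqrt(0.25 - 1.0 / alpha)   # = 1/3 for alpha = 9/2
z_upp = 0.5 + math.sqrt(0.25 - 1.0 / alpha)   # = 2/3 for alpha = 9/2
for z in (0.0, z_mid, z_upp):
    print(z, rde_rhs(alpha, z))                # each pair agrees: z solves the RDE
```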

48 Endogeny. Let $(\gamma_{\mathbf i},X_{\mathbf i})_{\mathbf i\in\mathbb T}$ be the Recursive Tree Process (RTP) corresponding to a solution $\nu$ of the Recursive Distributional Equation (RDE) (3). Following Aldous and Bandyopadhyay (2005), we say that the RTP corresponding to $\nu$ is endogenous if $X_\varnothing$ is measurable w.r.t. the $\sigma$-field generated by the maps $(\gamma_{\mathbf i})_{\mathbf i\in\mathbb T}$. In [AB05], it is proved that endogeny is equivalent to bivariate uniqueness.

49 The n-variate ODE. Recall that the mean-field ODE (2) describes the Markov process $(X^N_t)_{t\ge 0}$ on the complete graph in the mean-field limit $N\to\infty$. The Markov process $(X^N_t)_{t\ge 0}$ is defined in terms of a stochastic flow $(\mathbf X^N_{s,t})_{s\le t}$. The stochastic flow $(\mathbf X^N_{s,t})_{s\le t}$ contains more information than the Markov process $(X^N_t)_{t\ge 0}$ alone; in particular, the stochastic flow provides a natural way of coupling processes with different initial states. We would like to understand this coupling in the mean-field limit $N\to\infty$.

50 The n-variate ODE. For each measurable map $g:S^k\to S$ and $n\ge 1$, we define an $n$-variate map $g^{(n)}:(S^n)^k\to S^n$ by
$$g^{(n)}(x^1,\dots,x^n) := \big(g(x^1),\dots,g(x^n)\big) \qquad (x^1,\dots,x^n\in S^k),$$
where we identify $(S^n)^k$ with $(S^k)^n$. Let $\mathcal G$ and $(r_g)_{g\in\mathcal G}$ be as before. We will be interested in the n-variate ODE
$$\partial_t\mu^{(n)}_t = \sum_{g\in\mathcal G} r_g\big\{T_{g^{(n)}}(\mu^{(n)}_t)-\mu^{(n)}_t\big\} \qquad (t\ge 0),$$
which describes the mean-field limit of $n$ coupled Markov processes that are constructed from the same stochastic flow but have different initial states.

51 The n-variate ODE. Let $\mathcal M_1^{\mathrm{sym}}(S^n)$ be the space of all probability measures on $S^n$ that are symmetric under permutations of the coordinates. For any $\mu\in\mathcal M_1(S)$, let $\mathcal M_1^{\mathrm{sym}}(S^n)_\mu$ be the set of all symmetric $\mu^{(n)}$ whose one-dimensional marginals are given by $\mu$. Let
$$S^n_{\mathrm{diag}} := \big\{(x_1,\dots,x_n)\in S^n : x_1=\cdots=x_n\big\}.$$
Observations. Let $(\mu^{(n)}_t)_{t\ge 0}$ solve the n-variate ODE. Then:
- Its $m$-dimensional marginals solve the m-variate ODE.
- $\mu^{(n)}_0\in\mathcal M_1^{\mathrm{sym}}(S^n)$ implies $\mu^{(n)}_t\in\mathcal M_1^{\mathrm{sym}}(S^n)$ ($t\ge 0$).
- If $\nu$ solves the RDE (3), then $\mu^{(n)}_0\in\mathcal M_1^{\mathrm{sym}}(S^n)_\nu$ implies $\mu^{(n)}_t\in\mathcal M_1^{\mathrm{sym}}(S^n)_\nu$ ($t\ge 0$).
- If $\mu^{(n)}_0$ is concentrated on $S^n_{\mathrm{diag}}$, then so is $\mu^{(n)}_t$ ($t\ge 0$).

52 The n-variate ODE. In particular, these observations show that if $\nu^{(n)}$ solves the n-variate RDE, then its marginals must solve the RDE (3). Conversely, if $\nu$ solves the RDE (3) and $X$ is a random variable with law $\nu$, then $\overline\nu^{(n)} := P\big[(X,\dots,X)\in\cdot\,\big]$ solves the n-variate RDE. Question: are all fixed points of the n-variate RDE of this form?

53 The bivariate ODE for cooperative branching. [Figure (as on slide 31): fixed points $z_{\mathrm{low}}$, $z_{\mathrm{mid}}$, $z_{\mathrm{upp}}$ of $\partial_t X_t = F_\alpha(X_t)$ as a function of $\alpha$.]

54 The bivariate ODE for cooperative branching. For our system with cooperative branching and deaths, set
$$z_{\mathrm{low}} := 0, \qquad z_{\mathrm{mid}} := \tfrac12-\sqrt{\tfrac14-\tfrac1\alpha}, \qquad z_{\mathrm{upp}} := \tfrac12+\sqrt{\tfrac14-\tfrac1\alpha},$$
where $z_{\mathrm{mid}}$ and $z_{\mathrm{upp}}$ are only defined for $\alpha\ge 4$ and satisfy $z_{\mathrm{mid}}=z_{\mathrm{upp}}$ for $\alpha=4$ and $z_{\mathrm{mid}}<z_{\mathrm{upp}}$ for $\alpha>4$. Let $\nu_{\mathrm{low}},\nu_{\mathrm{mid}},\nu_{\mathrm{upp}}$ be the probability measures on $\{0,1\}$ with these intensities. Then $\nu_{\mathrm{low}},\nu_{\mathrm{mid}},\nu_{\mathrm{upp}}$ constitute all fixed points of the mean-field ODE (2).

55 The bivariate ODE for cooperative branching. Proposition. The measures $\overline\nu^{(2)}_{\mathrm{low}},\overline\nu^{(2)}_{\mathrm{mid}},\overline\nu^{(2)}_{\mathrm{upp}}$, defined as $\overline\nu^{(2)}_{\mathrm{low}} := P[(X,X)\in\cdot\,]$ with $P[X\in\cdot\,]=\nu_{\mathrm{low}}$ etc., are fixed points of the bivariate ODE. In addition, for $\alpha>4$, there exists one more fixed point $\underline\nu^{(2)}_{\mathrm{mid}}\in\mathcal M_1^{\mathrm{sym}}(\{0,1\}^2)$ that has marginals $\nu_{\mathrm{mid}}$ but differs from $\overline\nu^{(2)}_{\mathrm{mid}}$. Any solution $(\mu^{(2)}_t)_{t\ge 0}$ of the bivariate ODE with $\mu^{(2)}_0\in\mathcal M_1^{\mathrm{sym}}(\{0,1\}^2)_{\nu_{\mathrm{mid}}}$ and $\mu^{(2)}_0\ne\overline\nu^{(2)}_{\mathrm{mid}}$ satisfies $\mu^{(2)}_t\to\underline\nu^{(2)}_{\mathrm{mid}}$ as $t\to\infty$. Unlike $\overline\nu^{(2)}_{\mathrm{mid}}$, the fixed point $\underline\nu^{(2)}_{\mathrm{mid}}$ is not concentrated on $S^2_{\mathrm{diag}}$.
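The Proposition can be explored numerically: for a symmetric law $\mu^{(2)}$ on $\{0,1\}^2$, the map $T_{g^{(2)}}$ can be computed exactly by enumeration, and the bivariate ODE integrated by forward Euler. The sketch below is our own illustration (the printed values are not claimed to be exact limits): it starts from a symmetric law with marginals $\nu_{\mathrm{mid}}$ slightly off the diagonal and shows the off-diagonal mass growing.

```python
from itertools import product

cob = lambda x1, x2, x3: (x1 & x2) | x3          # one fixed coordinate convention

def T_cob2(mu2):
    """T_{cob^(2)}(mu2): law of (cob(X_1,X_2,X_3), cob(Y_1,Y_2,Y_3)) for
    (X_i,Y_i) i.i.d. with law mu2, a dict on {0,1}^2; computed by enumeration."""
    out = {s: 0.0 for s in product((0, 1), repeat=2)}
    for triple in product(mu2.items(), repeat=3):
        w = triple[0][1] * triple[1][1] * triple[2][1]
        xs = [pair[0] for pair, _ in triple]
        ys = [pair[1] for pair, _ in triple]
        out[(cob(*xs), cob(*ys))] += w
    return out

def step(mu2, alpha, dt):
    """One forward-Euler step of the bivariate ODE; T_{dth^(2)}(mu2) = delta_{(0,0)}."""
    tc = T_cob2(mu2)
    return {s: p + dt * (alpha * (tc[s] - p) + ((1.0 if s == (0, 0) else 0.0) - p))
            for s, p in mu2.items()}

alpha, z, eps = 4.5, 1.0 / 3.0, 0.01             # z = z_mid for alpha = 9/2
mu2 = {(0, 0): 1 - z - eps, (0, 1): eps, (1, 0): eps, (1, 1): z - eps}
for n in range(1, 20001):
    mu2 = step(mu2, alpha, dt=0.002)
    if n % 5000 == 0:
        print(n * 0.002, mu2[(0, 1)] + mu2[(1, 0)])   # off-diagonal mass grows
```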

56 The bivariate ODE for cooperative branching. Interpretation: for large $N$, let $x,x'\in\{0,1\}^N$ be initial states such that
$$\frac1N\sum_{i=1}^N x(i) = z_{\mathrm{mid}} = \frac1N\sum_{i=1}^N x'(i) \quad\text{and}\quad \frac1N\sum_{i=1}^N 1_{\{x(i)\ne x'(i)\}}>0,$$
but arbitrarily small. Then
$$\frac1N\sum_{i=1}^N 1_{\{\mathbf X^N_{0,t}(x)(i)\ne\mathbf X^N_{0,t}(x')(i)\}}$$
converges in probability, as $N\to\infty$ and then $t\to\infty$, to $\underline\nu^{(2)}_{\mathrm{mid}}\big(\{(0,1),(1,0)\}\big)>0$. In particular, the evolution under the stochastic flow is unstable in the sense that small differences in the initial states are amplified, provided the initial density is $z_{\mathrm{mid}}$.

57 Endogeny. Recall that a Recursive Tree Process (RTP) $(\gamma_{\mathbf i},X_{\mathbf i})_{\mathbf i\in\mathbb T}$ is endogenous if $X_\varnothing$ is measurable w.r.t. the $\sigma$-field generated by $(\gamma_{\mathbf i})_{\mathbf i\in\mathbb T}$.
Theorem [AB 05, MSS 18] For any solution $\nu$ of the RDE (3), the following statements are equivalent:
(i) the RTP corresponding to $\nu$ is endogenous;
(ii) the measure $\overline\nu^{(2)}$ is the only solution of the bivariate RDE in the space $\mathcal M_1^{\mathrm{sym}}(S^2)_\nu$.
In our example of a system with cooperative branching and deaths, the RTPs corresponding to $\nu_{\mathrm{low}}$ and $\nu_{\mathrm{upp}}$ are endogenous, but for $\alpha>4$, the RTP corresponding to $\nu_{\mathrm{mid}}$ is not endogenous.

58 A higher-level ODE. Recall that for each measurable map $g:S^k\to S$, we have defined a measurable map $\check g:\mathcal M_1(S)^k\to\mathcal M_1(S)$ by
$$\check g(\mu_1,\dots,\mu_k) := P\big[g(X_1,\dots,X_k)\in\cdot\,\big], \quad\text{where } X_1,\dots,X_k \text{ are independent with } P[X_i\in\cdot\,]=\mu_i.$$
In particular, $T_g:\mathcal M_1(S)\to\mathcal M_1(S)$ is defined by $T_g(\mu) := \check g(\mu,\dots,\mu)$, and the mean-field ODE reads
$$\partial_t\mu_t = \sum_{g\in\mathcal G} r_g\big\{T_g(\mu_t)-\mu_t\big\} \qquad (t\ge 0).$$
We will be interested in the higher-level ODE
$$\partial_t\rho_t = \sum_{g\in\mathcal G} r_g\big\{T_{\check g}(\rho_t)-\rho_t\big\} \qquad (t\ge 0), \qquad\text{with }\rho_t\in\mathcal M_1\big(\mathcal M_1(S)\big).$$

59 A higher-level ODE. Let $\xi$ be an $\mathcal M_1(S)$-valued random variable, i.e., a random probability measure on $S$, and let $\rho\in\mathcal M_1(\mathcal M_1(S))$ denote its law. Conditional on $\xi$, let $X_1,\dots,X_n$ be independent with law $\xi$. Then
$$\rho^{(n)} := P\big[(X_1,\dots,X_n)\in\cdot\,\big] = E\big[\underbrace{\xi\otimes\cdots\otimes\xi}_{n\text{ times}}\big]$$
is called the $n$-th moment measure of $\xi$.
Lemma. If $(\rho_t)_{t\ge 0}$ solves the higher-level ODE, then its $n$-th moment measures solve the n-variate ODE.

60 Higher-level RTPs. Theorem [MSS 18] Let $\nu$ be a fixed point of the mean-field ODE (2) and let $(\gamma_{\mathbf i},X_{\mathbf i})_{\mathbf i\in\mathbb T}$ be the RTP corresponding to $\nu$. Set
$$\xi_{\mathbf i} := P\big[X_{\mathbf i}\in\cdot\,\big|\,(\gamma_{\mathbf i\mathbf j})_{\mathbf j\in\mathbb T}\big], \qquad \underline\nu := P[\xi_\varnothing\in\cdot\,], \qquad \overline\nu := P[\delta_{X_\varnothing}\in\cdot\,].$$
Then $\underline\nu$ and $\overline\nu$ are fixed points of the higher-level ODE, and $(\check\gamma_{\mathbf i},\xi_{\mathbf i})_{\mathbf i\in\mathbb T}$ and $(\check\gamma_{\mathbf i},\delta_{X_{\mathbf i}})_{\mathbf i\in\mathbb T}$ are higher-level RTPs corresponding to $\underline\nu$ and $\overline\nu$. The original RTP is endogenous if and only if $\underline\nu=\overline\nu$.
Remark 1: $\underline\nu$ and $\overline\nu$ correspond to minimal and maximal knowledge about $X_\varnothing$. The former describes the knowledge contained in $(\gamma_{\mathbf i})_{\mathbf i\in\mathbb T}$, the latter represents perfect knowledge.
Remark 2: the $n$-th moment measure of $\overline\nu$ is $\overline\nu^{(n)} = P[(X,\dots,X)\in\cdot\,]$ where $X$ has law $\nu$, in line with the earlier notation.

61 The convex order. Theorem [Strassen 65, MSS 18] Let $S$ be a compact metrizable space and let $\mathcal C_{\mathrm{cv}}(\mathcal M_1(S))$ denote the space of all convex continuous $\phi:\mathcal M_1(S)\to\mathbb R$. Then for $\rho_1,\rho_2\in\mathcal M_1(\mathcal M_1(S))$, the following are equivalent:
(i) $\int\phi\,\mathrm d\rho_1 \le \int\phi\,\mathrm d\rho_2$ for all $\phi\in\mathcal C_{\mathrm{cv}}(\mathcal M_1(S))$;
(ii) there exist a random variable $X$ and $\sigma$-fields $\mathcal F_1\subset\mathcal F_2$ such that $\rho_i = P\big[P[X\in\cdot\,|\,\mathcal F_i]\in\cdot\,\big]$ ($i=1,2$).
Theorem [MSS 18] If $\rho\in\mathcal M_1(\mathcal M_1(S))_\nu$ solves the higher-level RDE, then $\underline\nu\le_{\mathrm{cv}}\rho\le_{\mathrm{cv}}\overline\nu$, where $\le_{\mathrm{cv}}$ denotes the convex order.

62 The higher-level ODE. In our example of a system with cooperative branching and deaths, we identify $\mathcal M_1(\{0,1\})\ni\mu\mapsto\mu(\{1\})\in[0,1]$ and correspondingly $\mathcal M_1\big(\mathcal M_1(\{0,1\})\big)\cong\mathcal M_1[0,1]$. If $g=\mathrm{cob}$, then $\check g:[0,1]^3\to[0,1]$ is given by
$$\check g(\omega_1,\omega_2,\omega_3) := P\big[X_1\vee(X_2\wedge X_3)=1\big] \quad\text{with } P[X_i=1]=\omega_i,$$
which gives $\check g(\omega_1,\omega_2,\omega_3) = \omega_1+(1-\omega_1)\omega_2\omega_3$ and, for any $\rho\in\mathcal M_1[0,1]$,
$$T_{\check g}(\rho) = P\big[\omega_1+(1-\omega_1)\omega_2\omega_3\in\cdot\,\big] \quad\text{with } \omega_1,\omega_2,\omega_3 \text{ i.i.d. with law }\rho.$$

63 The higher-level ODE. Proposition. For our system with cooperative branching and deaths and $\alpha>4$, the higher-level ODE
$$\partial_t\rho_t = \alpha\big\{T_{\check g}(\rho_t)-\rho_t\big\} + \big\{\delta_0-\rho_t\big\}$$
has precisely four fixed points: three trivial fixed points of the form $\rho = \overline\nu_{\dots} = (1-z_{\dots})\delta_0 + z_{\dots}\delta_1$ with $z_{\dots} = z_{\mathrm{low}}, z_{\mathrm{mid}}, z_{\mathrm{upp}}$, and a nontrivial fixed point $\underline\nu_{\mathrm{mid}}$, which is the law of the $[0,1]$-valued random variable $P\big[X_\varnothing=1\,\big|\,(\gamma_{\mathbf i})_{\mathbf i\in\mathbb T}\big]$, where $(\gamma_{\mathbf i},X_{\mathbf i})_{\mathbf i\in\mathbb T}$ is the RTP with $P[X_\varnothing=1]=z_{\mathrm{mid}}$.

64 Numerical results. Inductively define measures $\mu_n\in\mathcal M_1[0,1]$ by $\mu_0 := \delta_{z_{\mathrm{mid}}}$ and
$$\mu_n := \frac{\alpha}{\alpha+1}\,T_{\check g}(\mu_{n-1}) + \frac{1}{\alpha+1}\,\delta_0.$$
Then $P\big[X_\varnothing=1\,\big|\,\{\gamma_{i_1\cdots i_m}:m<n\}\big]$ has law $\mu_n$, and $\mu_n\to\underline\nu_{\mathrm{mid}}$ as $n\to\infty$. We plot the distribution functions $F_n(s) := \mu_n([0,s])$ ($s\in[0,1]$) for the parameters $\alpha=9/2$, $z_{\mathrm{mid}}=1/3$, $z_{\mathrm{upp}}=2/3$, and various values of $n$.
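A hedged Monte Carlo sketch of this iteration, with $\mu_n$ represented by a large sample of points in $[0,1]$ and $T_{\check g}$ applied by resampling (illustrative code; sample size and number of steps are arbitrary choices):

```python
import random

def iterate(alpha, z_mid, n_steps, sample_size=100000, seed=0):
    """Particle approximation of mu_n: start from mu_0 = delta_{z_mid} and apply
    mu_n = alpha/(alpha+1) * T_ǧ(mu_{n-1}) + 1/(alpha+1) * delta_0."""
    rng = random.Random(seed)
    sample = [z_mid] * sample_size
    p_cob = alpha / (alpha + 1.0)
    for _ in range(n_steps):
        new = []
        for _ in range(sample_size):
            if rng.random() < p_cob:                  # apply check-cob to 3 resampled points
                w1 = rng.choice(sample)
                w2 = rng.choice(sample)
                w3 = rng.choice(sample)
                new.append(w1 + (1.0 - w1) * w2 * w3)
            else:                                     # the death map contributes delta_0
                new.append(0.0)
        sample = new
    return sample

# a few quantiles of mu_n for alpha = 9/2 (so z_mid = 1/3); F_n(s) = mu_n([0,s])
sample = sorted(iterate(alpha=4.5, z_mid=1.0 / 3.0, n_steps=8))
for q in (0.25, 0.5, 0.75, 0.9):
    print(q, sample[int(q * len(sample))])
```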

65-76 Numerical results. [Plots of the distribution functions $F_n$ for increasing values of $n$.]
77-90 Numerical results. [Plots of numerical results for various values of $\alpha$.]

References
T. Mach, A. Sturm, J.M. Swart. Recursive tree processes and the mean-field limit of stochastic flows. Preprint, December 26, 2018.
T. Mach, A. Sturm, J.M. Swart. A new characterization of endogeny. Preprint, October 1, 2018.
A. Sturm, J.M. Swart. Pathwise duals of monotone and additive Markov processes. Preprint, February 23, 2016.
