1 Introduction

The purpose of these notes is to present the discrete time analogs of the results in Markov Loops and Renormalization by Le Jan [1]. A number of the results appear in Chapter 9 of Lawler and Limic [2], but there are additional results. We will tend to use the notation from [2] (although we will use [1] for some quantities not discussed in [2]), but our section headings will match those in [1] so that a reader can read both papers at the same time and compare.

2 Symmetric Markov processes on finite spaces

We let $X$ denote a finite or countably infinite state space and let $q(x,y)$ be the transition probabilities for an irreducible, discrete time, Markov chain $X_n$ on $X$. Let $A$ be a nonempty, finite, proper subset of $X$, and let $Q = [q(x,y)]_{x,y \in A}$ denote the corresponding matrix restricted to states in $A$. For everything we do, we may assume that $X \setminus A$ is a single point denoted $\partial$, and we let

$$\kappa_x = q(x,\partial) = 1 - \sum_{y \in A} q(x,y).$$

We say that $Q$ is strictly submarkov on $A$ if for each $x \in A$, with probability one the chain starting at $x$ eventually leaves $A$. Equivalently, all of the eigenvalues of $Q$ have absolute value strictly less than one. We will call such weights allowable. Let $N = \#(A)$ and let $\alpha_1, \ldots, \alpha_N$ be the eigenvalues of $Q$, all of which have absolute value strictly less than one.

We let $\overline X_n$ denote the path $\overline X_n = [X_0, X_1, \ldots, X_n]$. We will let $\omega$ denote paths in $A$, i.e., finite sequences of points $\omega = [\omega_0, \omega_1, \ldots, \omega_n]$ with $\omega_j \in A$. We call $n$ the length of $\omega$ and sometimes denote this by $|\omega|$. The weight $q$ induces a measure on paths in $A$,

$$q(\omega) = P^{\omega_0}\{\overline X_n = \omega\} = \prod_{j=0}^{n-1} q(\omega_j, \omega_{j+1}).$$

The path is called a (rooted) loop if $\omega_0 = \omega_n$. We let $\eta_x$ denote the trivial loop of length $0$, $\eta_x = [x]$. By definition $q(\eta_x) = 1$ for each $x \in A$.

We have not assumed that $Q$ is irreducible, but only that the chain restricted to each component is strictly submarkov. We do allow $q(x,x) > 0$. Since $q$ is symmetric, we sometimes write $q(e)$ where $e$ denotes an edge. Let

$$\Delta f(x) = (Q - I)f(x) = \sum_{y \in X} q(x,y)\,[f(y) - f(x)].$$
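The definitions above can be sketched numerically. This is a hedged illustration: the matrix `Q` below is my own illustrative choice of an allowable weight, not one from the text. It checks the equivalence between strict submarkovianity and the spectral condition, and that the series $\sum_n Q^n$ for the Green's function converges.

```python
import numpy as np

# Illustrative allowable weight Q on A = {0,1,2} (symmetric, rows sum to
# less than one).  The deficit kappa_x = q(x, d) is the probability of
# stepping to the absorbing boundary point d.
Q = np.array([[0.10, 0.25, 0.15],
              [0.25, 0.05, 0.20],
              [0.15, 0.20, 0.10]])

kappa = 1.0 - Q.sum(axis=1)          # killing probabilities q(x, d)
assert (kappa > 0).all()

# All eigenvalues of an allowable Q lie strictly inside the unit disc...
alphas = np.linalg.eigvalsh(Q)
print("eigenvalues:", alphas, "spectral radius:", max(abs(alphas)))

# ...so G = (I-Q)^{-1} = sum_n Q^n converges (truncated here at n = 200).
I = np.eye(3)
G = np.linalg.inv(I - Q)
G_series = sum(np.linalg.matrix_power(Q, n) for n in range(200))
print("max |G - sum Q^n| =", np.abs(G - G_series).max())
```

The same matrix is reused in later numeric sketches.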
Unless stated otherwise, we will consider $\Delta$ as an operator on functions $f$ on $A$, which can be considered as functions on $X$ that vanish on $X \setminus A$. In this case, we can write

$$\Delta f(x) = -\kappa_x f(x) + \sum_{y \in A} q(x,y)\,[f(y) - f(x)].$$

[1] uses $C_{x,y}$ for $q(x,y)$ and calls these quantities conductances. That paper does not assume that the conductances come from a transition probability; it allows more generality by letting $\kappa_x$ be arbitrary and setting

$$\lambda_x = \kappa_x + \sum_y q(x,y).$$

We do not need to do this; the major difference in our approach is that we allow the discrete loops to stay at the same point, i.e., $q(x,x) > 0$ is allowed. The important thing to remember when reading [1] is that under our assumption $\lambda_x = 1$ for all $x \in A$, and hence one can ignore $\lambda_x$ wherever it appears.

Two important examples are the following.

Suppose $A = \{x\}$ with $q(x,x) = q \in (0,1)$. We will call this the one-point example.

Suppose $q$ is an allowable weight on $A$ and $A' \subset A$. We can consider a Markov chain $Y_n$ with state space $A' \cup \{\partial\}$ given as follows. Suppose $X_0 \in A'$. Then $Y_n = X_{\rho(n)}$, where $\rho(0) = 0$ and $\rho(j) = \min\{n > \rho(j-1) : X_n \in A' \cup \{\partial\}\}$. The corresponding weights on $A'$ are given by the matrix $\hat Q_{A'} = [\hat q_{A'}(x,y)]_{x,y \in A'}$ where

$$\hat q_{A'}(x,y) = P^x\{X_{\rho(1)} = y\}, \qquad x, y \in A'.$$

We call this the chain viewed at $A'$. This is not the same as the chain induced by the weight $q(x,y)$, $x,y \in A'$, which corresponds to a Markov chain killed when it leaves $A'$. Let $G_{A'}$ denote the Green's function $G$ restricted to $A'$. Then

$$\hat Q_{A'} = I - [G_{A'}]^{-1}.$$

Note that $[G_{A'}]^{-1}$ is not the same matrix as $G^{-1}$ restricted to $A'$.
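The identity $\hat Q_{A'} = I - [G_{A'}]^{-1}$ can be checked against the direct first-passage computation. A minimal sketch, assuming an illustrative weight of my own choosing: with $B = A \setminus A'$, the chain viewed at $A'$ steps either directly within $A'$ or after an excursion through $B$, so $\hat Q_{A'} = Q_{A'A'} + Q_{A'B}(I - Q_{BB})^{-1}Q_{BA'}$ (a standard Schur-complement computation, not spelled out in the text).

```python
import numpy as np

Q = np.array([[0.10, 0.25, 0.15],
              [0.25, 0.05, 0.20],
              [0.15, 0.20, 0.10]])   # illustrative allowable weight
G = np.linalg.inv(np.eye(3) - Q)

# Take A' = {0, 1} and B = A \ A' = {2}.
Ap, B = [0, 1], [2]
Qaa = Q[np.ix_(Ap, Ap)]
Qab = Q[np.ix_(Ap, B)]
Qbb = Q[np.ix_(B, B)]
Qba = Q[np.ix_(B, Ap)]

# First-passage form of the chain viewed at A'.
Q_hat = Qaa + Qab @ np.linalg.inv(np.eye(1) - Qbb) @ Qba

# The text's formula: invert the Green's function *restricted* to A'.
Q_hat2 = np.eye(2) - np.linalg.inv(G[np.ix_(Ap, Ap)])
print(np.abs(Q_hat - Q_hat2).max())   # agreement to rounding error
```

Note that the two expressions agree, and that $(I - \hat Q_{A'})^{-1} = G_{A'}$, which is Proposition 2.2 below.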
We will be relating the Markov chain on $A$ to random variables $\{Z_x : x \in A\}$ having a joint normal distribution with covariance matrix $G$. One of the main properties of the joint normal distribution is that if $A' \subset A$, the marginal distribution of $\{Z_x : x \in A'\}$ is joint normal with covariance matrix $G_{A'}$. We have just seen that this can be considered in terms of a Markov chain on $A'$ with a particular matrix $\hat Q_{A'}$. Note that even if $Q$ has no positive diagonal entries, the matrix $\hat Q_{A'}$ may have positive diagonal entries. This is one reason why it is useful to allow such entries from the beginning.

We let $S_t$ denote a continuous time Markov chain with rates $q(x,y)$. Since $q$ is a Markov transition probability (on $A \cup \{\partial\}$), we can construct the continuous time Markov chain from a discrete Markov chain $X_n$ as follows. Let $T_1, T_2, \ldots$ be independent Exp(1) random variables, independent of the chain $X_n$, and let $\tau_n = T_1 + \cdots + T_n$ with $\tau_0 = 0$. Then $S_t = X_n$ if $\tau_n \le t < \tau_{n+1}$. We write $\overline S_t$ for the discrete path obtained from watching the chain when it jumps, i.e.,

$$\overline S_t = [X_0, \ldots, X_n] = \overline X_n \quad \text{if } \tau_n \le t < \tau_{n+1}.$$

If $\omega$ is a path with $\omega_0 = x$ and $\tau_\omega = \inf\{t : \overline S_t = \omega\}$, then one sees immediately that

$$P^x\{\tau_\omega < \infty\} = q(\omega). \qquad (1)$$

We allow $q(x,x) > 0$, so the phrase "when it jumps" is somewhat misleading. Suppose that $X_0 = x$, $X_1 = x$, and $t$ is a time with $\tau_1 \le t < \tau_2$. Then $\overline S_t = [x,x]$. If we only observed the continuous time chain, we would not observe the jump from $x$ to $x$, but in our setup we consider it a jump. It is useful to consider the continuous time chain as the pair of the discrete time chain and the exponential holding times. We are making use of the fact that $q$ is a transition probability, and hence the holding times can be chosen independently of the position of the discrete chain.

2.1 Energy

The Dirichlet form or energy is defined by

$$\mathcal E(f,g) = \sum_e q(e)\,\nabla_e f\,\nabla_e g,$$

where $\nabla_e f = f(x) - f(y)$ for $e = \{x,y\}$.
(This defines $\nabla_e f$ only up to a sign, but we will only use it in products such as $\nabla_e f\,\nabla_e g$, in which we take the same orientation of $e$ for both differences.) We
will consider this as a form on functions on $A$, i.e., on functions on $X$ that vanish on $X \setminus A$. In this case we can write

$$\mathcal E(f,g) = \sum_{e \in e(A)} q(e)\,\nabla_e f\,\nabla_e g + \sum_{e \in e_\partial(A)} q(e)\,\nabla_e f\,\nabla_e g$$
$$= \frac{1}{2}\sum_{x,y \in A} q(x,y)\,[f(x)-f(y)]\,[g(x)-g(y)] + \sum_{x\in A} \kappa_x\, f(x)\, g(x)$$
$$= \sum_{x \in A} f(x)\,g(x) - \sum_{x,y\in A} q(x,y)\, f(x)\, g(y),$$

where $e(A)$ denotes the edges with both endpoints in $A$ and $e_\partial(A)$ the edges joining $A$ to $\partial$. We let $\mathcal E(f) = \mathcal E(f,f)$.

If we write $\mathcal E_q(f,g)$ to denote the dependence on $q$, then it is easy to see that for $a \in \mathbb R$,

$$\mathcal E_{a^2 q}(f,g) = \mathcal E_q(af, ag) = a^2\,\mathcal E_q(f,g).$$

The definition of $\mathcal E$ does not require $q$ to be a submarkov transition matrix. However, we can always find an $a$ such that $a^2 q$ is submarkov, so assuming that $q$ is submarkov is not restrictive.

The set $X$ in [1] corresponds to our $A$. [1] uses $z_x$, $x \in X$, to denote a function on $X$. [1] uses $e(z)$ for $\mathcal E(f)$; we will use $e$ for edges.

Recall that $G = (-\Delta)^{-1} = (I - Q)^{-1}$ is the Green's function defined by

$$G(x,y) = \sum_{\omega : x \to y} q(\omega) = \sum_{n=0}^\infty \sum_{\omega : x\to y,\ |\omega| = n} P^x\{\overline X_n = \omega\} = \sum_{n=0}^\infty P^x\{X_n = y\}.$$

This is also the Green's function for the continuous time chain.

Proposition 2.1.
$$G(x,y) = \int_0^\infty P^x\{S_t = y\}\, dt = \sum_{\omega : x\to y} \int_0^\infty P^x\{\overline S_t = \omega\}\, dt.$$

Proof. The second equality is immediate. For any path $\omega$ in $A$, it is not difficult to verify that

$$q(\omega) = \int_0^\infty P\{\overline S_t = \omega\}\, dt.$$

This follows from (1) and

$$E\left[\int_s^\infty P\{\overline S_t = \omega \mid \tau_\omega = s\}\, dt\right] = 1.$$

The latter equality holds since the expected amount of time spent at each point equals one.
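Proposition 2.1 can be sketched numerically. The semigroup of the continuous time chain with rates $q$ is $e^{t(Q-I)}$; since $Q$ here is symmetric, we can diagonalize it and integrate each eigenmode in closed form. The weight below is an illustrative choice of mine, not from the text.

```python
import numpy as np

Q = np.array([[0.10, 0.25, 0.15],
              [0.25, 0.05, 0.20],
              [0.15, 0.20, 0.10]])   # illustrative symmetric weight
G = np.linalg.inv(np.eye(3) - Q)

# Diagonalize Q = U diag(a) U^T; then the continuous time Green's function is
#   int_0^inf e^{t(Q-I)} dt = U diag( int_0^inf e^{t(a_j - 1)} dt ) U^T
# and each eigenmode integrates to 1/(1 - a_j), valid because |a_j| < 1.
a, U = np.linalg.eigh(Q)
G_ct = U @ np.diag(1.0 / (1.0 - a)) @ U.T
print(np.abs(G - G_ct).max())   # the two Green's functions coincide
```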
The following observation is important. It follows from the definition of the chain viewed at $A'$.

Proposition 2.2. If $q$ is an allowable weight on $A$ with Green's function $G(x,y)$, $x,y \in A$, and $A' \subset A$, then the Green's function for the chain viewed at $A'$ is $G(x,y)$, $x,y \in A'$.

In [1], $\Delta$ is denoted by $L$. There are two Green's functions discussed there, $V$ and $G$. These two quantities are the same under our assumption $\lambda \equiv 1$.

2.2 Feynman-Kac formula

The Feynman-Kac formula describes the effect of a killing rate on a Markov chain. Suppose $q$ is an allowable weight on $A$ and $\chi : A \to [0,\infty)$ is a nonnegative function.

2.2.1 Discrete time

We define another allowable weight $q^\chi$ by

$$q^\chi(x,y) = \frac{q(x,y)}{1+\chi(x)}.$$

If $\omega = [\omega_0, \ldots, \omega_n]$ is a path, then

$$q^\chi(\omega) = q(\omega)\prod_{j=0}^{n-1}\frac{1}{1+\chi(\omega_j)} = q(\omega)\exp\left\{-\sum_{j=0}^{n-1}\log[1+\chi(\omega_j)]\right\}. \qquad (2)$$

We think of $\chi/(1+\chi)$ as an additional killing rate for the chain. More precisely, suppose $T$ is a positive integer valued random variable with distribution

$$P\{T = n+1 \mid T > n,\ X_n = x\} = \frac{\chi(x)}{1+\chi(x)}.$$

Then if $\omega_0 = x$,

$$P^x\{\overline X_n = \omega,\ T > n\} = q(\omega)\prod_{j=0}^{n-1}\frac{1}{1+\chi(\omega_j)} = q^\chi(\omega).$$

This is the Feynman-Kac formula in the discrete case. We will compare it to the continuous time process with killing rate $\chi$. Let $Q^\chi$ denote the corresponding matrix of weights. Then we can write

$$Q^\chi = M_{1/(1+\chi)}\, Q.$$
Here and below we use the following notation: if $g : A \to \mathbb C$ is a function, then $M_g$ is the diagonal matrix with $M_g(x,x) = g(x)$. Note that if $g$ is nonzero, $M_g^{-1} = M_{1/g}$. We let

$$G^\chi = (I - Q^\chi)^{-1} = (I - M_{1/(1+\chi)}\,Q)^{-1} \qquad (3)$$

be the Green's function for $q^\chi$.

Our $G^\chi$ is not the same as $G_\chi$ in [1]. The $G_\chi$ in [1] corresponds to what we call $\hat G_\chi$ below.

2.2.2 Continuous time

Now suppose $T$ is a continuous killing time with rate $\chi$. To be more precise, $T$ is a nonnegative random variable with

$$P\{T \le t + \Delta t \mid T > t,\ S_t = x\} = \chi(x)\,\Delta t + o(\Delta t).$$

In particular, the probability that the chain starting at $x$ is killed before it takes a discrete step is $\chi(x)/[1+\chi(x)]$. We define the corresponding Green's function $\hat G_\chi$ by

$$\hat G_\chi(x,y) = \int_0^\infty P^x\{S_t = y,\ T > t\}\, dt.$$

There is an important difference between discrete and continuous time when considering killing rates. Let us first consider the case without killing. Let $S_t$ denote a continuous time random walk with rates $q(x,y)$. Then $S$ waits an exponential amount of time with mean one before taking jumps. At any time $t$, there is a corresponding discrete path obtained by considering the process when it jumps (this allows jumps to the same site). Let $\overline S_t$ denote this discrete path. For any path $\omega$ in $A$, it is not difficult to verify that

$$q(\omega) = \int_0^\infty P\{\overline S_t = \omega\}\, dt.$$

The basic reason is that if $\tau_\omega = \inf\{t : \overline S_t = \omega\}$, then

$$E\left[\int_s^\infty P\{\overline S_t = \omega \mid \tau_\omega = s\}\, dt\right] = 1,$$

since the expected amount of time spent at each point equals one. With killing, the relevant quantity is instead $\int_0^\infty P\{\overline S_t = \omega,\ T > t\}\, dt$, which leads to the following.
Proposition 2.3.
$$\hat G_\chi = G^\chi\, M_{1/(1+\chi)}. \qquad (4)$$

Proof. This is proved in the same way as Proposition 2.1, except that

$$\int_0^\infty P\{\overline S_t = \omega,\ T > t\}\, dt = \frac{q^\chi(\omega)}{1+\chi(y)},$$

where $y$ is the endpoint of $\omega$. The reason is that the time until one leaves $y$ (by either moving to a new site or being killed) is exponential with rate $1+\chi(y)$.

By considering generators, one could establish in a different way

$$\hat G_\chi = (I - Q + M_\chi)^{-1},$$

which follows from (3) and (4). This is just a matter of personal preference as to which to prove first. In particular,

$$\det[\hat G_\chi]\,\prod_x [1+\chi(x)] = \det[G^\chi], \qquad (5)$$

and

$$\hat G_\chi = [I - Q + M_\chi]^{-1} = \left[(I-Q)(I + GM_\chi)\right]^{-1} = (I + GM_\chi)^{-1}\, G. \qquad (6)$$

Example. Let us consider the one-point example. Then

$$G(x,x) = 1 + q + q^2 + \cdots = \frac{1}{1-q}.$$

For the discrete time walk with killing rate $\lambda = \chi/(1+\chi)$,

$$G^\chi(x,x) = 1 + q(1-\lambda) + [q(1-\lambda)]^2 + \cdots = \frac{1}{1 - q(1-\lambda)} = \frac{1+\chi}{1+\chi-q}.$$

For the continuous time walk with the same killing rate $\chi$, we start the path and consider an exponential time with rate $1+\chi$. The expected time spent at $x$ before jumping for the first time is $(1+\chi)^{-1}$. At the first jump time, the probability that we are not killed is $q/(1+\chi)$. (Here $1/(1+\chi)$ is the probability that the continuous time walk decides to move before being killed.) Therefore,

$$\hat G_\chi(x,x) = \frac{1}{1+\chi} + \frac{q}{1+\chi}\,\hat G_\chi(x,x),$$

which gives

$$\hat G_\chi(x,x) = \frac{1}{1+\chi-q} = \frac{G^\chi(x,x)}{1+\chi}.$$
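Relations (3) through (6) and the one-point example admit a quick numeric check. This is a sketch with illustrative choices of $Q$ and $\chi$ (they are mine, not the text's).

```python
import numpy as np

Q = np.array([[0.10, 0.25, 0.15],
              [0.25, 0.05, 0.20],
              [0.15, 0.20, 0.10]])   # illustrative allowable weight
chi = np.array([0.3, 0.0, 1.2])      # illustrative killing rates
I = np.eye(3)
M = np.diag(1.0 / (1.0 + chi))                      # M_{1/(1+chi)}
G = np.linalg.inv(I - Q)
G_chi = np.linalg.inv(I - M @ Q)                    # discrete Green's fn, (3)
G_hat = np.linalg.inv(I - Q + np.diag(chi))         # continuous, generator form

assert np.allclose(G_hat, G_chi @ M)                               # (4)
assert np.isclose(np.linalg.det(G_hat) * np.prod(1 + chi),
                  np.linalg.det(G_chi))                            # (5)
assert np.allclose(G_hat, np.linalg.inv(I + G @ np.diag(chi)) @ G) # (6)

# One-point example: A = {x}, q(x,x) = q, killing rate c.
q, c = 0.4, 0.7
assert np.isclose((1 + c) / (1 + c - q),
                  sum((q / (1 + c))**n for n in range(200)))       # G^chi(x,x)
assert np.isclose(1.0 / (1 + c - q),
                  ((1 + c) / (1 + c - q)) / (1 + c))               # hat G_chi(x,x)
print("all Feynman-Kac identities verified")
```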
3 Loop measures

3.1 A measure on based loops

Here we expand on the definitions in Section 2, defining (discrete time) unrooted loops as well as continuous time loops and unrooted loops. A (discrete time) unrooted loop $\overline\omega$ is an equivalence class of rooted loops under the equivalence relation

$$[\omega_0, \ldots, \omega_n] \sim [\omega_j, \omega_{j+1}, \ldots, \omega_n, \omega_1, \ldots, \omega_{j-1}, \omega_j].$$

We define $q(\overline\omega) = q(\omega)$ where $\omega$ is any representative of $\overline\omega$.

A nontrivial continuous time rooted loop of length $n > 0$ is a rooted loop $\omega$ of length $n$ combined with times $T = (T_1, \ldots, T_n)$ with $T_j > 0$. We think of $T_j$ as the time for the jump from $\omega_{j-1}$ to $\omega_j$. We will write the loop in one of two ways,

$$(\omega, T) = (\omega_0, T_1, \omega_1, T_2, \ldots, T_n, \omega_n).$$

The continuous time loop also gives a function $\omega(t)$ of period $T_1 + \cdots + T_n$ with

$$\omega(t) = \omega_j, \qquad \tau_j \le t < \tau_{j+1}.$$

Here $\tau_0 = 0$ and $\tau_j = T_1 + \cdots + T_j$. We caution that the function $\omega(t)$ may not carry all the information about the loop; in particular, if $q(x,x) > 0$ for some $x$, then one does not observe the jump from $x$ to $x$ if one only observes $\omega(t)$.

A nontrivial continuous time unrooted loop of length $n$ is an equivalence class where

$$(\omega_0, T_1, \omega_1, T_2, \ldots, T_n, \omega_n) \sim (\omega_1, T_2, \ldots, T_n, \omega_n, T_1, \omega_1).$$

A trivial continuous time rooted loop is an ordered pair $(\eta_x, T)$ where $T > 0$. In both the discrete and continuous time cases, unrooted trivial loops are the same as rooted trivial loops.

A loop functional (discrete or continuous time) is a function on unrooted loops. Equivalently, it is a function on rooted loops that is invariant under the time translations that define the equivalence relation for unrooted loops.

3.1.1 Discrete time measures

Define $q_x$ to be the measure $q$ restricted to loops rooted at $x$. In other words, $q_x(\omega)$ is only nonzero for loops rooted at $x$, and for such loops

$$q_x(\omega) = P^x\{[X_0, \ldots, X_n] = \omega\}, \qquad n = |\omega|.$$

We let $q = \sum_x q_x$, i.e., the measure that assigns measure $q(\omega)$ to each loop.
Although $q$ can also be considered as a measure on paths, when considering loop measures one restricts $q$ to loops, i.e., to paths beginning and ending at the same point.

We use $m$ for the rooted loop measure and $\overline m$ for the unrooted loop measure, as in [2]. Recall that these measures are supported on nontrivial loops and

$$m(\omega) = \frac{q(\omega)}{|\omega|}, \qquad \overline m(\overline\omega) = \sum_{\omega \sim \overline\omega} m(\omega).$$

Here $\omega \sim \overline\omega$ means that $\omega$ is a rooted loop in the equivalence class defining $\overline\omega$. If we let $m_x$ denote $m$ restricted to loops rooted at $x$, then we can write

$$m_x(\omega) = \frac{1}{|\omega|}\, P^x\{\overline X_{|\omega|} = \omega\}. \qquad (7)$$

As in [2], we write

$$F(A) = \exp\left\{\sum_\omega m(\omega)\right\} = \exp\left\{\sum_{\overline\omega} \overline m(\overline\omega)\right\} = \frac{1}{\det(I-Q)} = \det G. \qquad (8)$$

3.1.2 Continuous time measure

We now define a measure on loops with continuous time, which corresponds to the measure introduced in [1]. For each nontrivial discrete loop

$$\omega = [\omega_0, \omega_1, \ldots, \omega_{n-1}, \omega_n],$$

we associate holding times $T_1, \ldots, T_n$ which have the distribution of independent Exp(1) random variables. Given $\omega$ and the values $T_1, \ldots, T_n$, we consider the continuous time loop of time duration $\tau_n = T_1 + \cdots + T_n$ (or we can think of this as period $\tau_n$) given by

$$\omega(t) = \omega_j, \qquad \tau_j \le t < \tau_{j+1},$$

where $\tau_0 = 0$ and $\tau_j = T_1 + \cdots + T_j$. We therefore have a measure $q$ on continuous time loops, which we think of as a measure on $(\omega, T)$, $T = (T_1, \ldots, T_n)$. The analogue of $m$ is the measure $\mu$ defined by

$$\frac{d\mu}{dq}(\omega, T) = \frac{T_1}{T_1 + \cdots + T_n}.$$
Since $T_1, \ldots, T_n$ are identically distributed,

$$E\left[\frac{T_1}{T_1+\cdots+T_n}\right] = \frac{1}{n}\sum_{j=1}^n E\left[\frac{T_j}{T_1+\cdots+T_n}\right] = \frac{1}{n}.$$

Hence, if we integrate out the $T$, we get the measure $m$. Note that this generates a well defined measure on continuous time unrooted loops, which we write (with some abuse of notation, since the vector $T$ must also be translated) as $(\overline\omega, T)$. We let $\mu$ and $\overline\mu$ denote the corresponding measures on rooted and unrooted loops, respectively. They can be considered as measures on discrete time loops by forgetting the time.

So far this defines $\mu$ restricted to nontrivial loops. The measure $\mu$ gives infinite measure to trivial loops. More precisely, if $\omega$ is a trivial loop, then the density for $(\omega, t)$ is $e^{-t}/t$.

We summarize.

Proposition 3.1. The measure $\mu$, considered as a measure on discrete loops, agrees with $m$ when restricted to nontrivial loops. For trivial loops,

$$\mu(\eta_x) = \infty, \qquad \hat m(\eta_x) = 1,$$

where $\hat m$ denotes the extension of $m$ that gives mass one to each trivial loop.

In other words, to sample from $\mu$ restricted to nontrivial loops, we can first sample from $m$ and then choose independent holding times.

We can relate the continuous time measure to the continuous time Markov chain as follows. Suppose $S_t$ is a continuous time Markov chain with rates $q$ and holding times $T_1, T_2, \ldots$. Define the continuous time loop $\hat S_t$ as follows. Recall that $\overline S_t$ is the discrete time path obtained from $S_t$ when it moves. If $t < \tau_1$, $\hat S_t$ is the trivial continuous time loop $(\eta_{S_0}, t)$, which is the same as $(\overline S_t, t)$. If $\tau_n \le t < \tau_{n+1}$, then $\hat S_t = (\overline S_t, T)$ where $T = (T_1, \ldots, T_n)$.

Let $\mu_x$ denote the measure $\mu$ restricted to loops rooted at $x$. Let $Q^{x,x}_t$ denote the measure on $\hat S_t$ obtained by starting the chain at $x$ and restricting to the event $\{S_t = x\}$. Then

$$\mu_x = \int_0^\infty \frac{1}{t}\, Q^{x,x}_t\, dt.$$

One can compare this to (7).

3.1.3 Killing rates

We now consider the measures $m, \overline m, \mu, \overline\mu$ when subjected to a killing rate $\chi : A \to [0,\infty)$. We call the corresponding measures $m^\chi, \overline m^\chi, \mu^\chi, \overline\mu^\chi$. The construction uses the following standard fact about exponential random variables (we omit the proof).
We write Exp($\lambda$) for the exponential distribution with rate $\lambda$, i.e., with mean $1/\lambda$.
Proposition 3.2. Suppose $T_1, T_2$ are independent with distributions Exp($\lambda_1$), Exp($\lambda_2$), respectively. Let $T = T_1 \wedge T_2$ and $Y = 1\{T = T_1\}$. Then $T, Y$ are independent random variables with $T \sim$ Exp($\lambda_1+\lambda_2$) and $P\{Y = 1\} = \lambda_1/(\lambda_1+\lambda_2)$.

The definitions are as follows.

$m^\chi$ is the measure on discrete time paths obtained by using the weight

$$q^\chi(x,y) = \frac{q(x,y)}{1+\chi(x)}.$$

$\mu^\chi$ restricted to nontrivial loops is the measure on continuous time paths obtained from $m^\chi$ by adding holding times as follows. Suppose $\omega = [\omega_0, \ldots, \omega_n]$ is a loop. Let $T_1, \ldots, T_n$ be independent random variables with $T_j \sim$ Exp($1 + \chi(\omega_j)$). Given the holding times, the continuous time loop is defined as before.

$\hat m^\chi$ agrees with $m^\chi$ on nontrivial loops, and $\hat m^\chi(\eta_x) = 1$. For trivial loops $\omega$ rooted at $x$, $\mu^\chi$ gives density $e^{-t(1+\chi(x))}/t$ to $(\omega, t)$.

$\overline m^\chi, \overline\mu^\chi$ are obtained as before by forgetting the root.

There is another way of obtaining $\mu^\chi$ on nontrivial loops. Suppose that we start with the measure $\mu$ on discrete loops. Then we define the measure on $(\omega, T)$ by saying that the density on $(T_1, \ldots, T_n)$ is given by

$$f(t_1, \ldots, t_n) = e^{-(\lambda_1 t_1 + \cdots + \lambda_n t_n)},$$

where $\lambda_j = 1 + \chi(\omega_j)$. Note that this is not a probability density. In fact,

$$\int f(t_1, \ldots, t_n)\, dt = \prod_{j=1}^n \frac{1}{1+\chi(\omega_j)} = \frac{m^\chi(\omega)}{m(\omega)}.$$

If we normalize to make it a probability measure, then the distribution of $T_1, \ldots, T_n$ is that of independent random variables, $T_j \sim$ Exp($1+\chi(\omega_j)$).

The important fact is as follows.

Proposition 3.3. The measure $\mu^\chi$, considered as a measure on discrete loops and restricted to nontrivial loops, is the same as $m^\chi$.

We now consider trivial loops. If $\eta_x$ is a trivial loop with time $T$ with (nonintegrable) density $g(t) = e^{-t}/t$, then

$$\int_0^\infty \left[e^{-rt} - 1\right] g(t)\, dt = \int_0^\infty \frac{e^{-(1+r)t} - e^{-t}}{t}\, dt = -\log(1+r). \qquad (9)$$
Hence, although $\mu$ and $\mu^\chi$ both give infinite measure to the trivial loop $\eta_x$ at $x$, we can write

$$\mu^\chi(\eta_x) - \mu(\eta_x) = -\log[1+\chi(x)].$$

Note that $\mu^\chi(\eta_x) - \mu(\eta_x)$ is not the same as $\hat m^\chi(\eta_x) - \hat m(\eta_x) = 0$. The reason is that the killing in the discrete case does not affect the trivial loops, but it does affect the trivial loops in the continuous case.

3.2 First properties

In [2, Proposition 9.3.3], it is shown that

$$F(A) = \det[(I-Q)^{-1}] = \det G.$$

Here we give another proof of this based on [1]. The key observation is that

$$m\{\omega : \omega_0 = x,\ |\omega| = n\} = \frac{1}{n}\, Q^n(x,x),$$

and hence

$$m\{\omega : |\omega| = n\} = \frac{1}{n}\,\mathrm{Tr}[Q^n].$$

Let $\alpha_1, \ldots, \alpha_N$ denote the eigenvalues of $Q$. Then the eigenvalues of $Q^n$ are $\alpha_1^n, \ldots, \alpha_N^n$, and the total mass of the measure $m$ is

$$\sum_{n=1}^\infty \frac{1}{n}\,\mathrm{Tr}[Q^n] = \sum_{j=1}^N \sum_{n=1}^\infty \frac{\alpha_j^n}{n} = -\sum_{j=1}^N \log[1-\alpha_j] = -\log[\det(I-Q)].$$

Here we use the fact that $|\alpha_j| < 1$ for each $j$.

If we define the logarithm of a matrix by the power series

$$-\log[I - Q] = \sum_{n=1}^\infty \frac{1}{n}\, Q^n,$$

then the argument shows the relation

$$\mathrm{Tr}[\log(I-Q)] = \log\det(I-Q) = -\sum_{n=1}^\infty \frac{1}{n}\,\mathrm{Tr}[Q^n].$$

This is valid for any matrix $Q$ whose eigenvalues are all less than one in absolute value.
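The trace identity above, and hence the formula for the total mass of $m$, is easy to confirm numerically. A sketch with an illustrative weight of mine (the series is truncated, which is harmless here since the spectral radius is about one half):

```python
import numpy as np

Q = np.array([[0.10, 0.25, 0.15],
              [0.25, 0.05, 0.20],
              [0.15, 0.20, 0.10]])   # illustrative allowable weight

# Total mass of m: sum_n Tr[Q^n]/n, accumulated iteratively.
P = np.eye(3)
total_mass = 0.0
for n in range(1, 400):
    P = P @ Q
    total_mass += np.trace(P) / n

via_det = -np.log(np.linalg.det(np.eye(3) - Q))        # -log det(I-Q)
via_eigs = -np.log(1 - np.linalg.eigvalsh(Q)).sum()    # -sum log(1-alpha_j)
print(total_mass, via_det, via_eigs)
```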
3.3 Occupation field

3.3.1 Discrete time

For a nontrivial loop $\omega = [\omega_0, \ldots, \omega_n]$, define its (discrete time) occupation field by

$$N^x(\omega) = \#\{j : 1 \le j \le n,\ \omega_j = x\} = \sum_{j=1}^n 1\{\omega_j = x\}.$$

For trivial loops we set $N^x(\eta_y) = \delta_{x,y}$. Note that $N^x(\omega)$ depends only on the unrooted loop, and hence is a loop functional. If $\chi : A \to \mathbb C$ is a function, we write

$$\langle N, \chi\rangle(\omega) = \sum_{x\in A} N^x(\omega)\,\chi(x).$$

Proposition 3.4. Suppose $x \in A$. Then for any discrete time loop functional $\Phi$,

$$\overline{\hat m}[N^x\,\Phi] = \hat m[N^x\,\Phi] = q_x[\Phi].$$

Proof. The first equality holds since $N^x\,\Phi$ is a loop functional. The second follows from the important relation

$$\sum_{\omega\sim\overline\omega,\ \omega_0=x} q(\omega) = N^x(\overline\omega)\,\overline{\hat m}(\overline\omega). \qquad (10)$$

To see this, assume $|\overline\omega| = n$ and $N^x(\overline\omega) = k > 0$, and let $rn$ denote the number of distinct representatives of $\overline\omega$. Then it is easy to check that the number of distinct representatives of $\overline\omega$ that are rooted at $x$ equals $rk$. Recalling that $\overline m(\overline\omega) = r\,q(\omega)$, we get

$$\sum_{\omega\sim\overline\omega,\ \omega_0=x} q(\omega) = rk\, q(\omega) = N^x(\overline\omega)\, r\, q(\omega) = N^x(\overline\omega)\,\overline m(\overline\omega).$$

(For the trivial loop $\eta_x$, both sides of (10) equal one.)

Example. Setting $\Phi \equiv 1$ gives $\hat m[N^x] = q_x[1] = G(x,x)$. Setting $\Phi = N^y$ with $y \ne x$ gives

$$\overline{\hat m}[N^x N^y] = q_x(N^y).$$

For any loop $\omega = [\omega_0, \ldots, \omega_n]$ rooted at $x$ with $N^y(\omega) = k$, there are $k$ different ways that we can write $\omega$ as a concatenation $[\omega_0, \ldots, \omega_j] \oplus [\omega_j, \ldots, \omega_n]$,
with $\omega_j = y$. Therefore,

$$q_x(N^y) = \sum_{\omega^1,\omega^2} q(\omega^1)\, q(\omega^2),$$

where the sum is over all paths $\omega^1$ from $x$ to $y$ and $\omega^2$ from $y$ to $x$. Summing over all such paths gives

$$q_x(N^y) = G(x,y)\, G(y,x) = G(x,y)^2.$$

More generally, if $x_1, x_2, \ldots, x_k$ are points and $\Phi_{x_1,\ldots,x_k}$ is the functional that counts the number of times we can find $x_1, x_2, \ldots, x_k$ in order on the loop, then

$$\hat m[\Phi_{x_1,\ldots,x_k}] = G'(x_1,x_2)\, G'(x_2,x_3)\cdots G'(x_{k-1},x_k)\, G'(x_k,x_1),$$

where

$$G'(x,y) = G(x,y) - \delta_{x,y}.$$

Consider the case $x_1 = x_2 = x$. Note that

$$\Phi_{x,x} = (N^x)^2 - N^x,$$

and hence

$$\hat m\left[(N^x)^2\right] = \hat m[\Phi_{x,x}] + \hat m[N^x] = [G(x,x)-1]^2 + G(x,x) = G(x,x)^2 - G(x,x) + 1.$$

Let us derive this in a different way by computing $q_x(N^x)$. For the trivial loop $\eta_x$, we have $N^x(\eta_x) = 1$. The total measure of the set of nontrivial loops rooted at $x$ with $N^x(\omega) = k$ is $r^k$, where

$$r = \frac{G(x,x)-1}{G(x,x)}.$$

Hence,

$$q_x(N^x) = 1 + \sum_{k=1}^\infty k\, r^k = 1 + \frac{r}{(1-r)^2} = 1 + G(x,x)^2 - G(x,x).$$

Restricting to a subset

Suppose $A' \subset A$ and $\hat q = \hat q_{A'}$ denotes the weights associated to the chain viewed at $A'$, as introduced in Section 2. For each loop $\omega$ in $A$ rooted at a point in $A'$, there is a corresponding loop in $A'$, which we will call $\Lambda(\omega; A')$, obtained by removing all the vertices that are not in $A'$. Note that

$$N^x(\Lambda(\omega; A')) = N^x(\omega)\, 1\{x \in A'\}.$$

By construction, we know that if $\omega'$ is a loop in $A'$,

$$\hat q(\omega') = \sum_\omega q(\omega)\, 1\{\Lambda(\omega; A') = \omega'\}.$$
We can also define $\Lambda(\overline\omega; A')$ for an unrooted loop $\overline\omega$. Note that $\omega^1 \sim \omega^2$ if and only if $\Lambda(\omega^1; A') \sim \Lambda(\omega^2; A')$. However, some care must be taken, since it is possible to have two different representatives $\omega^1, \omega^2$ of $\overline\omega$ with $\Lambda(\omega^1; A') = \Lambda(\omega^2; A')$.

Let $m_{A'}, \overline m_{A'}$ denote the measures on rooted and unrooted loops, respectively, in $A'$ generated by $\hat q$. The next proposition follows from (10).

Proposition 3.5. Let $A' \subset A$ and let $\overline m_{A'}$ denote the measure on unrooted loops in $A'$ generated by the weight $\hat q$. Then for every loop $\overline\omega'$ in $A'$,

$$\overline m_{A'}(\overline\omega') = \sum_{\overline\omega} \overline m(\overline\omega)\, 1\{\Lambda(\overline\omega; A') = \overline\omega'\}.$$

3.3.2 Continuous time

For a nontrivial continuous time loop $(\omega, T)$ of length $n$, we define the (continuous time) occupation field by

$$\ell^x(\omega, T) = \int_0^{T_1+\cdots+T_n} 1\{\omega(t) = x\}\, dt = \sum_{j=1}^n 1\{\omega_j = x\}\, T_j.$$

For trivial loops, we define

$$\ell^x(\eta_y, T) = \delta_{x,y}\, T.$$

Note that $\ell$ is a loop functional. We also write

$$\langle \ell, \chi\rangle(\omega, T) = \sum_{x\in A} \ell^x(\omega, T)\,\chi(x) = \int_0^{T_1+\cdots+T_n} \chi(\omega(t))\, dt.$$

The second equality is valid for nontrivial loops; for trivial loops, $\langle\ell,\chi\rangle(\eta_x, T) = T\,\chi(x)$.

The continuous time analogue of Proposition 3.4 requires a little more setup.

Proof. We first consider $\mu$ restricted to nontrivial loops. Recall that this is the same as $m$ restricted to nontrivial loops combined with independent choices of holding times $T_1, \ldots, T_n$. Let us fix a discrete unrooted loop $\overline\omega$ of length $n$ and assume that $N^x(\overline\omega) = k > 0$. Then (with some abuse of notation)

$$\ell^x(\overline\omega, T) = \sum_{\omega\sim\overline\omega,\ \omega_0=x} T_1(\omega),$$

where $T_1(\omega)$ denotes the holding time attached to the root in the representative $\omega$. Therefore,

$$\mu[\ell^x\,\Phi\,1_{\overline\omega}] = \sum_{\omega\sim\overline\omega,\ \omega_0=x} q(\omega)\, E[T_1\,\Phi \mid \omega].$$

Here $E[T_1\,\Phi \mid \omega]$ denotes the expected value given the discrete loop $\omega$; i.e., the randomness is over the holding times $T_1, \ldots, T_n$. Summing over nontrivial loops gives

$$\mu[\ell^x\,\Phi;\ \omega \text{ nontrivial}] = \sum_{|\omega|>0,\ \omega_0=x} q(\omega)\, E[T_1\,\Phi \mid \omega].$$
Also,

$$\mu[\ell^x\,\Phi;\ \omega = \eta_x] = \int_0^\infty \Phi(\eta_x, t)\, e^{-t}\, dt.$$

Example. Setting $\Phi \equiv 1$ gives $\mu(\ell^x) = G(x,x)$. Taking $\Phi = (\ell^x)^k$, one can compute higher moments similarly.

3.3.3 More on discrete time

Let

$$N^{x,y}(\omega) = \sum_{j=0}^{n-1} 1\{\omega_j = x,\ \omega_{j+1} = y\}, \qquad N^x(\omega) = \sum_y N^{x,y}(\omega) = \#\{j < |\omega| : \omega_j = x\}.$$

We can also write $N^{x,y}(\overline\omega)$ for an unrooted loop. Let $V(x,k)$ be the set of loops $\omega$ rooted at $x$ with $N^x(\omega) = k$, and let

$$r(x,k) = \sum_{\omega\in V(x,k)} q(\omega),$$

where by definition $r(x,0) = 1$. It is easy to see that $r(x,k) = r(x,1)^k$, and standard Markov chain or generating function arguments show that

$$G(x,x) = \sum_{k=0}^\infty r(x,k) = \sum_{k=0}^\infty r(x,1)^k = \frac{1}{1-r(x,1)}.$$

Note also that

$$\overline m\{\overline\omega : N^x(\overline\omega) = k\} = \frac{r(x,k)}{k}.$$

To see this, we consider any unrooted loop $\overline\omega$ that visits $x$ exactly $k$ times and choose a representative rooted at $x$, with equal probability for each of the $k$ choices. Therefore,

$$\overline m\{\overline\omega : N^x(\overline\omega) \ge 1\} = \sum_{n=1}^\infty \frac{r(x,1)^n}{n} = -\log[1-r(x,1)] = \log G(x,x).$$

Actually, it is slightly more subtle than this. If an unrooted loop $\overline\omega$ of length $n$ has $rn$ representatives as rooted loops, then $\overline m(\overline\omega) = r\, q(\omega)$ and the number of these representatives that are rooted at $x$ is $N^x(\overline\omega)\, r$. Regardless, we can get the unrooted loop measure by giving measure $q(\omega)/k$ to each of the $k$ representatives of $\overline\omega$ rooted at $x$.
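The occupation-field identities $q_x[1] = G(x,x) = 1/(1-r(x,1))$ and $q_x(N^y) = G(x,y)\,G(y,x)$ can be verified by brute force on a small chain, summing over all loops up to a cutoff length. The two-point weight below is an illustrative choice of mine; the sums only match to truncation accuracy.

```python
import numpy as np
from itertools import product

Qp = [[0.20, 0.30],
      [0.30, 0.15]]                  # illustrative 2-state allowable weight
G = np.linalg.inv(np.eye(2) - np.array(Qp))

x, y, LMAX = 0, 1, 18
mass, visits = 1.0, 0.0              # trivial loop eta_x has q = 1, N^y = 0
for n in range(1, LMAX + 1):
    for mid in product(range(2), repeat=n - 1):
        w = (x,) + mid + (x,)        # rooted loop at x of length n
        qw = 1.0
        for j in range(n):
            qw *= Qp[w[j]][w[j + 1]]
        mass += qw
        visits += qw * sum(1 for j in range(1, n + 1) if w[j] == y)

print(mass, G[x, x])                 # q_x[1]    vs  G(x,x)
print(visits, G[x, y] * G[y, x])     # q_x(N^y)  vs  G(x,y) G(y,x)
```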
This is [2, Proposition 9.3.2]. In [1], occupation times are emphasized. If $\Phi$ is a functional on loops, we write $m(\Phi)$ for the corresponding expectation

$$m(\Phi) = \sum_\omega m(\omega)\,\Phi(\omega).$$

If $\Phi$ only depends on the unrooted loop, then we can also write $\overline m(\Phi)$, which equals $m(\Phi)$. Then

$$m(N^x) = \overline m(N^x) = \sum_{n=1}^\infty n\cdot\frac{r(x,n)}{n} = \sum_{n=1}^\infty r(x,1)^n = \frac{r(x,1)}{1-r(x,1)} = G(x,x) - 1.$$

We can state the relationship in terms of Radon-Nikodym derivatives. Consider the measure on unrooted loops that visit $x$ given by

$$q_x(\overline\omega) = \sum_{\omega\sim\overline\omega,\ \omega_0=x} q(\omega),$$

where $\omega\sim\overline\omega$ means that $\omega$ is a rooted representative of $\overline\omega$. Then

$$q_x(\overline\omega) = N^x(\overline\omega)\,\overline m(\overline\omega).$$

It is easy to see that

$$\sum_{|\omega|>0,\ \omega_0=x} q(\omega) = G(x,x) - 1.$$

We can similarly compute $m(N^{x,y})$. Let $V$ denote the set of loops

$$\omega = [\omega_0, \omega_1, \ldots, \omega_n], \qquad \omega_0 = x,\ \omega_1 = y,\ \omega_n = x.$$

Then

$$q(V) = q(x,y)\, G(y,x) = q(x,y)\, F(y,x)\, G(x,x),$$

where $F(y,x)$ denotes the first visit generating function

$$F(y,x) = \sum_\omega q(\omega),$$

the sum being over all paths $\omega = [\omega_0, \ldots, \omega_n]$ with $n \ge 1$, $\omega_0 = y$, $\omega_n = x$, and $\omega_j \ne x$ for $0 < j < n$. This gives

$$m(N^{x,y}) = q(x,y)\, G(y,x).$$

It is slightly more complicated to compute $\overline m\{\overline\omega : N^{x,y}(\overline\omega) \ge 1\}$. The measure of the set of loops $\omega$ rooted at $x$ with $N^x(\omega) = 1$ and $\omega_1 \ne y$ is given by $F(x,x) - q(x,y)\,F(y,x)$.
Note that $N^{x,y}(\omega) = 0$ for all such loops. Therefore, the $q$-measure of the set of loops rooted at $x$ with $N^{x,y}(\omega) = 0$ is

$$\sum_{n=0}^\infty \left[F(x,x) - q(x,y)F(y,x)\right]^n = \frac{1}{1 - F(x,x) + q(x,y)F(y,x)}.$$

Similarly,

$$\sum_{\omega_0=x,\ N^{x,y}(\omega)=1} q(\omega) = \frac{q(x,y)F(y,x)}{\left[1 - F(x,x) + q(x,y)F(y,x)\right]^2},$$

and in general

$$\sum_{\omega_0=x,\ N^{x,y}(\omega)=k} q(\omega) = \frac{\left[q(x,y)F(y,x)\right]^k}{\left[1 - F(x,x) + q(x,y)F(y,x)\right]^{k+1}}.$$

To each unrooted loop $\overline\omega$ with $N^{x,y}(\overline\omega) = k$ and $rn$ different representatives, we give measure $q(\omega)/k$ to each of the $rk$ representatives $\omega$ with $\omega_0 = x$, $\omega_1 = y$. We then get

$$\overline m\{N^{x,y} \ge 1\} = \sum_{k=1}^\infty \frac{1}{k}\left[\frac{q(x,y)F(y,x)}{1 - F(x,x) + q(x,y)F(y,x)}\right]^k = \log\left[\frac{1 - F(x,x) + q(x,y)F(y,x)}{1 - F(x,x)}\right].$$

We will now generalize this. Suppose $\mathbf x = (x_1, x_2, \ldots, x_k)$ are given points in $A$. For any loop $\omega = [\omega_0, \ldots, \omega_n]$, define $N^{\mathbf x}(\omega)$ as follows. First define $\omega_{j+n} = \omega_j$. Then $N^{\mathbf x}$ is the number of increasing sequences of integers $j_1 < j_2 < \cdots < j_k < j_1 + n$ with $0 \le j_1 < n$ and $\omega_{j_l} = x_l$, $l = 1, \ldots, k$. Note that $N^{\mathbf x}(\omega)$ is a function of the unrooted loop $\overline\omega$. Let $V_{\mathbf x}$ denote the set of loops rooted at $x_1$ for which such a sequence exists (for which we can take $j_1 = 0$). Then by concatenating paths, we can see that (counting with multiplicity)

$$q(V_{\mathbf x}) = G'(x_1,x_2)\, G'(x_2,x_3)\cdots G'(x_{k-1},x_k)\, G'(x_k,x_1),$$

and hence, as above,

$$\overline m(N^{\mathbf x}) = G'(x_1,x_2)\, G'(x_2,x_3)\cdots G'(x_{k-1},x_k)\, G'(x_k,x_1).$$

Suppose $\chi$ is a positive function on $A$. As before, let $q^\chi$ denote the measure with weights

$$q^\chi(x,y) = \frac{q(x,y)}{1+\chi(x)}.$$
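Before continuing, the first-visit computations above admit a compact matrix check: $F(y,x)$ can be computed by forbidding interior visits to $x$, and then $G(y,x) = F(y,x)\,G(x,x)$, $F(x,x) = 1 - 1/G(x,x)$, and $m(N^{x,y}) = q(x,y)\,G(y,x)$ all follow. A sketch with illustrative weights of mine:

```python
import numpy as np

Q = np.array([[0.10, 0.25, 0.15],
              [0.25, 0.05, 0.20],
              [0.15, 0.20, 0.10]])   # illustrative allowable weight
G = np.linalg.inv(np.eye(3) - Q)
x, y = 0, 1

# Paths into x that avoid x strictly in between: sum over steps inside
# B = A \ {x}, then one final step into x.
B = [1, 2]
QB = Q[np.ix_(B, B)]
F_to_x = np.linalg.inv(np.eye(2) - QB) @ Q[B, x]   # F(b,x) for b in B
Fyx = F_to_x[B.index(y)]
Fxx = float(Q[x, x] + Q[x, B] @ F_to_x)            # first return to x

assert np.isclose(G[y, x], Fyx * G[x, x])          # G(y,x) = F(y,x) G(x,x)
assert np.isclose(Fxx, 1 - 1 / G[x, x])            # F(x,x) = r(x,1)

# Consistency of the loop decomposition: with b = q(x,y)F(y,x), a = F(x,x)-b
# and c = b/(1-a), summing k * (1/k) c^k recovers m(N^{x,y}) = q(x,y)G(y,x).
b = Q[x, y] * Fyx
a = Fxx - b
c = b / (1 - a)
assert np.isclose(c / (1 - c), Q[x, y] * G[y, x])
print("first-visit identities verified")
```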
Then if $\omega = [\omega_0, \ldots, \omega_n]$, we can write

$$q^\chi(\omega) = q(\omega)\exp\left\{-\sum_{j=1}^n \log(1+\chi(\omega_j))\right\} = q(\omega)\, e^{-\langle\hat\ell,\log(1+\chi)\rangle}.$$

Here we are using a notation from [1]:

$$\langle\hat\ell, f\rangle(\omega) = \sum_{j=1}^n f(\omega_j) = \sum_{x\in A} N^x(\omega)\, f(x).$$

We have the corresponding measures $m^\chi, \overline m^\chi$ with

$$m^\chi(\omega) = e^{-\langle\hat\ell(\omega),\log(1+\chi)\rangle}\, m(\omega), \qquad \overline m^\chi(\overline\omega) = e^{-\langle\hat\ell(\overline\omega),\log(1+\chi)\rangle}\, \overline m(\overline\omega).$$

As before, let $G^\chi$ denote the Green's function for the weight $q^\chi$. The total mass of $m^\chi$ is $\log\det G^\chi$.

Remark. [1] discusses Laplace transforms of the measure $m$. This is just another way of describing the total mass of the measure $m^\chi$ (as a function of $\chi$). Proposition 2 in [1, Section 3.4] states

$$m\left(e^{-\langle\hat\ell,\log(1+\chi)\rangle} - 1\right) = \log\det G^\chi - \log\det G.$$

This is obvious since $m(e^{-\langle\hat\ell,\log(1+\chi)\rangle})$ by definition is the total mass of $m^\chi$:

$$m\left(e^{-\langle\hat\ell,\log(1+\chi)\rangle}\right) = \sum_\omega m(\omega)\exp\left\{-\sum_x N^x(\omega)\log(1+\chi(x))\right\} = \sum_\omega m^\chi(\omega).$$

The same computation applied to $\overline m$ gives

$$\overline m\left(e^{-\langle\hat\ell,\log(1+\chi)\rangle} - 1\right) = \log\det G^\chi - \log\det G.$$

3.3.4 More on continuous time

If $(\omega, T)$ is a continuous time loop, we define the occupation field

$$\ell^x(\omega, T) = \int_0^{T_1+\cdots+T_n} 1\{\omega(t) = x\}\, dt = \sum_{j=1}^n 1\{\omega_j = x\}\, T_j.$$

If $\chi$ is a function, we write

$$\langle\ell,\chi\rangle = \langle\ell,\chi\rangle(\omega, T) = \sum_x \ell^x(\omega, T)\,\chi(x).$$

Note the following.
In the measure $\mu$, the conditional expectation of $\ell^x(\omega; T)$ given $\omega$ is $N^x(\omega)$. In the measure $\mu^\chi$, the conditional expectation of $\ell^x(\omega; T)$ given $\omega$ is $N^x(\omega)/[1+\chi(x)]$.

Note that in the measure $\mu$,

$$E\left[\exp\{-\langle\ell,\chi\rangle\} \mid \omega\right] = \prod_{j=1}^n E\left[e^{-\chi(\omega_j)T_j}\right] = \prod_{j=1}^n \frac{1}{1+\chi(\omega_j)} = \frac{m^\chi(\omega)}{m(\omega)}.$$

Using this we see that

$$\mu\left[\left(e^{-\langle\ell,\chi\rangle} - 1\right) 1\{|\omega| > 0\}\right] = \log\det G^\chi - \log\det G. \qquad (11)$$

Also, (9) shows that

$$\mu\left[\left(e^{-\langle\ell,\chi\rangle} - 1\right) 1\{\text{discrete loop is } \eta_x\}\right] = -\log[1+\chi(x)]. \qquad (12)$$

By (5) we know that

$$\log\det\hat G_\chi = \log\det G^\chi - \sum_x \log[1+\chi(x)],$$

and hence we get the following.

Proposition 3.6.
$$\mu\left[e^{-\langle\ell,\chi\rangle} - 1\right] = \log\det\hat G_\chi - \log\det G.$$

Although we have assumed that $\chi$ is positive, careful examination of the argument shows that we can also establish this for general $\chi$ in a sufficiently small neighborhood of the origin.

4 Poisson process of loops

4.1 Definition

4.1.1 Discrete time

The loop soup with intensity $\alpha > 0$ is a Poissonian realization from the measure $m$ (or $\overline m$). The rooted soup can be considered as an independent collection of Poisson processes $M_\alpha(\omega)$, with $M_\alpha(\omega)$ having intensity $m(\omega)$. We think of $M_\alpha(\omega)$ as the number of times $\omega$ has appeared by time $\alpha$. The total collection of loops $\mathcal C_\alpha$ can be considered as a random increasing multi-set (a set in which elements can appear multiple times). The unrooted soup can be obtained from the rooted soup by forgetting the root. We will write $\mathcal C_\alpha$ for both the rooted and unrooted versions.
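The loop-soup formulas below rest on the discrete Laplace transform identity $m(e^{-\langle\hat\ell,\log(1+\chi)\rangle} - 1) = \log\det G^\chi - \log\det G$ from the previous section. A brute-force check on a small chain (illustrative 2-state weights of mine, loops truncated at length 16):

```python
import numpy as np
from itertools import product

Qp = [[0.20, 0.30],
      [0.30, 0.15]]          # illustrative 2-state allowable weight
chi = [0.5, 1.0]             # illustrative killing rates
LMAX = 16

lhs = 0.0
for n in range(1, LMAX + 1):
    for w0 in range(2):
        for mid in product(range(2), repeat=n - 1):
            w = (w0,) + mid + (w0,)            # rooted loop of length n
            qw = 1.0
            ratio = 1.0                        # e^{-<l-hat, log(1+chi)>}
            for j in range(n):
                qw *= Qp[w[j]][w[j + 1]]
                ratio /= 1.0 + chi[w[j]]
            lhs += (qw / n) * (ratio - 1.0)    # m(w) [e^{-<...>} - 1]

Qa = np.array(Qp)
M = np.diag(1.0 / (1.0 + np.array(chi)))
rhs = (np.log(np.linalg.det(np.linalg.inv(np.eye(2) - M @ Qa)))
       - np.log(np.linalg.det(np.linalg.inv(np.eye(2) - Qa))))
print(lhs, rhs)   # agree to truncation accuracy
```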
If $\Phi$ is a loop functional, we write

$$\Phi_\alpha = \sum_{\omega\in\mathcal C_\alpha} \Phi(\omega) := \sum_\omega M_\alpha(\omega)\,\Phi(\omega).$$

If $\chi : A \to \mathbb C$, we set

$$\langle\mathcal C_\alpha, \chi\rangle = \sum_{x\in A}\sum_{\omega\in\mathcal C_\alpha} N^x(\omega)\,\chi(x).$$

In the particular case $\chi = \delta_x$, we get the occupation field

$$L^x_\alpha = \sum_\omega M_\alpha(\omega)\, N^x(\omega).$$

Using the moment generating function of the Poisson distribution, we see that

$$E\left[e^{-\Phi_\alpha}\right] = \exp\left\{\alpha\sum_\omega m(\omega)\left[e^{-\Phi(\omega)} - 1\right]\right\}.$$

In particular,

$$E\left[e^{-\langle\mathcal C_\alpha,\log(1+\chi)\rangle}\right] = \prod_\omega E\left[e^{-M_\alpha(\omega)\,\langle N(\omega),\log(1+\chi)\rangle}\right]$$
$$= \exp\left\{\sum_\omega \alpha\, m(\omega)\left[e^{-\langle N(\omega),\log(1+\chi)\rangle} - 1\right]\right\} = \exp\left\{\alpha\sum_\omega\left[m^\chi(\omega) - m(\omega)\right]\right\} = \left[\frac{\det G^\chi}{\det G}\right]^\alpha.$$

The last step uses (8) for the weights $q^\chi$ and $q$. Note also that

$$E\left[\langle\mathcal C_t, \delta_x\rangle\right] = t\,[G(x,x) - 1].$$

Proposition 4.1. Suppose $\mathcal C_\alpha$ is a loop soup using weight $q$ on $A$, and suppose that $A' \subset A$. Let

$$\mathcal C'_\alpha = \{\Lambda(\omega; A') : \omega \in \mathcal C_\alpha\},$$

where $\Lambda(\omega; A')$ is defined as in Proposition 3.5. Then $\mathcal C'_\alpha$ is a loop soup for the weight $\hat q_{A'}$ on $A'$. Moreover, the occupation fields $\{L^x_\alpha : x \in A'\}$ are the same for both soups.
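The soup occupation field can be simulated directly from the loop measure. By the computations in the previous section, the unrooted measure of loops visiting $x$ exactly $k$ times is $r(x,1)^k/k$ with $r(x,1) = 1 - 1/G(x,x)$, so $L^x_t$ has the law of a sum of independent weighted Poissons. A Monte Carlo sketch of $E[\langle\mathcal C_t,\delta_x\rangle] = t\,[G(x,x)-1]$; the value of $G(x,x)$ is illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
G_xx = 1.44                  # illustrative value of G(x,x)
t = 1.0
r = 1 - 1 / G_xx             # first-return weight r(x,1)

# L^x_t = sum_k k * Poisson(t r^k / k), independent over k (truncated at 39).
ks = np.arange(1, 40)
rates = t * r**ks / ks
samples = (ks * rng.poisson(rates, size=(100_000, len(ks)))).sum(axis=1)

print(samples.mean(), t * (G_xx - 1))   # sample mean vs t [G(x,x) - 1]
```

The agreement is statistical; with this sample size the standard error is a few thousandths.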
4.1.2 Continuous time

The continuous time loop soup for nontrivial loops can be obtained from the discrete time loop soup by choosing realizations of the holding times from the appropriate distributions. The trivial loops must be added in a different way. It will be useful to consider the loop soup as the union of two independent soups: one for the nontrivial loops and one for the trivial loops.

Start with a discrete loop soup $\mathcal C_\alpha$ of nontrivial loops. For each loop $\omega \in \mathcal C_\alpha$ of length $n$, we choose holding times $T_1, \ldots, T_n$ independently from an Exp(1) distribution. Note that the times for different loops in the soup are independent, as are the different holding times for a particular loop. The occupation field is then defined by

$$L^x_\alpha = \sum_{(\omega,T)\in\mathcal C_\alpha} \ell^x(\omega, T).$$

For each $x \in A$, take a Poisson point process of times $\{t_r(x) : 0 < r < \infty\}$ with intensity measure $t^{-1}e^{-t}\,dt$. We consider a Poissonian realization of the trivial loops $(\eta_x, t_r(x))$ for all $x$ and all $r \le \alpha$. With probability one, at all times $\alpha > 0$, there exist infinitely many loops. We will only need to consider the occupation field

$$\tilde L^x_\alpha = \sum_{(\eta_x, t_r(x))} t_r(x),$$

where the sum is over all trivial loops at $x$ in the soup by time $\alpha$. Note that

$$E\left[e^{-\tilde L^x_\alpha\,\chi(x)}\right] = \exp\left\{\alpha\int_0^\infty \left[e^{-t\chi(x)} - 1\right] t^{-1}e^{-t}\, dt\right\} = [1+\chi(x)]^{-\alpha}.$$

This shows that $\tilde L^x_\alpha$ has a Gamma($\alpha,1$) distribution.

Associated to the pair of loop soups is the occupation field

$$\hat L^x_\alpha = L^x_\alpha + \tilde L^x_\alpha = \sum_{(\omega,T)\in\mathcal C_\alpha} \ell^x(\omega, T) + \sum_{(\eta_y,T)\in\tilde{\mathcal C}_\alpha} \delta_{x,y}\, T.$$

If we are only interested in the occupation field, we can construct it by starting with the discrete occupation field and adding randomness. The next proposition makes this precise. We will call a process $\Gamma(t)$ a Gamma process (with parameter 1) if it has independent increments and $\Gamma(t+s) - \Gamma(t)$ has a Gamma($s,1$) distribution. In particular, the distribution of $\{\Gamma(n) : n = 0, 1, 2, \ldots\}$ is that of partial sums of independent Exp(1) random variables.
Recall that a random variable $Y$ has a Gamma$(s, 1)$, $s > 0$, distribution if it has density
$$f_s(t) = \frac{t^{s-1}\, e^{-t}}{\Gamma(s)}, \qquad t \ge 0.$$
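As a quick sanity check, one can verify numerically that $f_s$ integrates to $1$ and has mean $s$; the value $s = 2.7$ and the integration grid below are arbitrary:

```python
import math
import numpy as np

# Trapezoid-rule check that f_s(t) = t^{s-1} e^{-t} / Gamma(s)
# is a probability density with mean s.
s = 2.7
t = np.linspace(1e-9, 80.0, 1_000_000)
f = t**(s - 1) * np.exp(-t) / math.gamma(s)
h = t[1] - t[0]
mass = h * (f.sum() - 0.5 * (f[0] + f[-1]))
mean = h * ((t * f).sum() - 0.5 * (t[0] * f[0] + t[-1] * f[-1]))
print(mass, mean)   # approximately 1 and approximately s
```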
Note that the moments are given by
$$\mathbb{E}\left[Y^\beta\right] = \frac{1}{\Gamma(s)} \int_0^\infty t^{\beta + s - 1}\, e^{-t}\, dt = (s)_\beta := \frac{\Gamma(s + \beta)}{\Gamma(s)}.$$
For integer $\beta$,
$$\mathbb{E}\left[Y^\beta\right] = (s)_\beta = s\,(s+1)\cdots(s+\beta-1). \qquad (3)$$
More generally, a random variable $Y$ has a Gamma$(s, r)$ distribution if $Y/r$ has a Gamma$(s, 1)$ distribution. The square of a normal random variable with variance $\sigma^2$ has a Gamma$(1/2,\, 2\sigma^2)$ distribution.

Proposition 4.2. Suppose that on the same probability space we have defined a discrete loop soup $\mathcal{C}_\alpha$ and a Gamma process $\{Y^x(t)\}$ for each $x \in A$. Assume that the loop soup and all of the Gamma processes are mutually independent. Let
$$L_\alpha^x = \sum_\omega M_\alpha(\omega)\, N_x(\omega)$$
denote the occupation field generated by $\mathcal{C}_\alpha$. Define
$$\hat{L}_\alpha^x = Y^x(L_\alpha^x + \alpha). \qquad (4)$$
Then $\{\hat{L}_\alpha^x : x \in A\}$ has the distribution of the occupation field for the continuous time soup.

An equivalent, and sometimes more convenient, way to define the occupation field is to take two independent Gamma processes at each site, $\{Y_1^x(t), Y_2^x(t)\}$, and replace (4) with
$$\hat{L}_\alpha^x = \bar{L}_\alpha^x + \tilde{L}_\alpha^x := Y_1^x(L_\alpha^x) + Y_2^x(\alpha).$$
The components of the field $\{\tilde{L}_\alpha^x : x \in A\}$ are independent and independent of $\{\bar{L}_\alpha^x : x \in A\}$. The components of the field $\{\bar{L}_\alpha^x : x \in A\}$ are not independent, but they are conditionally independent given the discrete occupation field $\{L_\alpha^x : x \in A\}$.

If all we are interested in is the occupation field for the continuous loop soup, then we can take the construction in Proposition 4.2 as the definition. If $A' \subset A$, then the occupation field restricted to $A'$ is the same as the occupation field for the chain viewed at $A'$.
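Formula (3) is easy to test by Monte Carlo with sampled Gamma$(s,1)$ variables; the values of $s$ and $k$ below are arbitrary test choices:

```python
import numpy as np

# Monte Carlo check of (3): for Y ~ Gamma(s, 1) and integer k,
# E[Y^k] = (s)_k = s (s+1) ... (s+k-1).
rng = np.random.default_rng(0)
s, k = 1.5, 3
samples = rng.gamma(shape=s, scale=1.0, size=2_000_000)
mc = (samples ** k).mean()
exact = np.prod([s + j for j in range(k)])   # 1.5 * 2.5 * 3.5
print(mc, exact)
```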
Proposition 4.3. If $\hat{L}_\alpha$ is the continuous time occupation field, then there exists $\epsilon > 0$ such that for all $\chi : A \to \mathbb{C}$ with $\|\chi\|_2 < \epsilon$,
$$\mathbb{E}\left[e^{-\langle \hat{L}_\alpha, \chi\rangle}\right] = \left[\frac{\det G_\chi}{\det G}\right]^\alpha. \qquad (5)$$

Proof. Note that
$$\mathbb{E}\left[e^{-\langle \hat{L}_\alpha, \chi\rangle} \,\Big|\, \mathcal{C}_\alpha\right] = \prod_x \left[\frac{1}{1 + \chi(x)}\right]^{L_\alpha^x + \alpha} = \left[\prod_x \frac{1}{1 + \chi(x)}\right]^\alpha \prod_x \prod_\omega \left[\frac{1}{1 + \chi(x)}\right]^{N_x(\omega)\, M_\alpha(\omega)}.$$
Since the $M_\alpha(\omega)$ are independent,
$$\mathbb{E}\left[\prod_\omega \prod_x \left[\frac{1}{1 + \chi(x)}\right]^{N_x(\omega)\, M_\alpha(\omega)}\right] = \prod_\omega \mathbb{E}\left[\prod_x \left[\frac{1}{1 + \chi(x)}\right]^{N_x(\omega)\, M_\alpha(\omega)}\right] = \prod_\omega \mathbb{E}\left[e^{-\langle N(\omega), \log(1+\chi)\rangle\, M_\alpha(\omega)}\right] = \exp\left\{\alpha \sum_\omega m(\omega)\left[e^{-\langle N(\omega), \log(1+\chi)\rangle} - 1\right]\right\}.$$
Combining this with the factor coming from the trivial loops gives $[\det G_\chi / \det G]^\alpha$.

Although the loop soups for trivial loops are different in the discrete and continuous time settings, one can compute moments for the continuous time occupation measure in terms of moments for the discrete occupation measure. For ease, let us choose $\alpha = 1$. Recall that
$$G_\chi = (I - Q + M_\chi)^{-1} = (I + G M_\chi)^{-1}\, G.$$
We can therefore write
$$\frac{\det G_\chi}{\det G} = \frac{1}{\det(I + G M_\chi)} = \frac{1}{\det\left(I + M_\chi^{1/2}\, G\, M_\chi^{1/2}\right)}.$$
To justify the last equality formally, note that
$$M_\chi^{1/2}\left(I + G M_\chi\right) M_\chi^{-1/2} = I + M_\chi^{1/2}\, G\, M_\chi^{1/2}.$$
This argument works if $\chi$ is strictly positive, but we can take limits if $\chi$ is zero in some places.
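The determinant identities above are straightforward to verify numerically for a small example. The matrix $Q$ and the function $\chi$ below are made up; $\chi$ is even zero at one site, illustrating the limiting case mentioned at the end of the argument:

```python
import numpy as np

# Check: det(G_chi)/det(G) = 1/det(I + G M_chi)
#                          = 1/det(I + M^{1/2} G M^{1/2}).
Q = np.array([[0.2, 0.3, 0.1],
              [0.3, 0.1, 0.2],
              [0.1, 0.2, 0.3]])
I = np.eye(3)
chi = np.array([0.5, 0.0, 1.2])      # zero entry: the limiting case
M = np.diag(chi)
Ms = np.diag(np.sqrt(chi))

G = np.linalg.inv(I - Q)
G_chi = np.linalg.inv(I - Q + M)

r1 = np.linalg.det(G_chi) / np.linalg.det(G)
r2 = 1.0 / np.linalg.det(I + G @ M)
r3 = 1.0 / np.linalg.det(I + Ms @ G @ Ms)
print(r1, r2, r3)                    # all three agree
```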
4.2 Moments and polynomials of the occupation field

If $k$ is a positive integer, then using (3) we see that
$$\mathbb{E}\left[(\hat{L}_\alpha^x)^k\right] = \mathbb{E}\left[\mathbb{E}\left[(\hat{L}_\alpha^x)^k \mid L_\alpha^x\right]\right] = \mathbb{E}\left[(L_\alpha^x + \alpha)_k\right].$$
More generally, if $A' \subset A$ and $\{k_x : x \in A'\}$ are positive integers,
$$\mathbb{E}\left[\prod_{x \in A'} (\hat{L}_\alpha^x)^{k_x}\right] = \mathbb{E}\left[\mathbb{E}\left[\prod_{x \in A'} (\hat{L}_\alpha^x)^{k_x} \,\Big|\, \{L_\alpha^x\}\right]\right] = \mathbb{E}\left[\prod_{x \in A'} (L_\alpha^x + \alpha)_{k_x}\right].$$
Although this can get messy, we see that all moments for the continuous field can be given in terms of moments of the discrete field.

5 The Gaussian free field

Recall that the Gaussian free field (with Dirichlet boundary conditions) on $A$ is the measure on $\mathbb{R}^A$ whose Radon-Nikodym derivative with respect to Lebesgue measure is given by $Z^{-1}\, e^{-\mathcal{E}(\phi)/2}$, where $Z$ is a normalization constant. Recall [2, (9.28)] that $\mathcal{E}(\phi) = \langle \phi, (I - Q)\phi\rangle$, so we can write the density as a constant times $e^{-\langle \phi, G^{-1}\phi\rangle/2}$. As calculated in [2] (as well as many other places), the normalization is given by
$$Z = (2\pi)^{\#(A)/2}\, F(A)^{1/2} = (2\pi)^{\#(A)/2} \exp\left\{\frac{1}{2} \sum_\omega m(\omega)\right\} = (2\pi)^{\#(A)/2}\, \sqrt{\det G}.$$
In other words, the field $\{\phi(x) : x \in A\}$ is a mean zero random vector with a joint normal distribution with covariance matrix $G$. Note that if $\mathbb{E}$ denotes expectation under the field measure,
$$\mathbb{E}\left[\exp\left\{-\frac{1}{2} \sum_x \phi(x)^2\, \chi(x)\right\}\right] = \frac{1}{(2\pi)^{\#(A)/2} \sqrt{\det G}} \int \exp\left\{-\frac{\langle f, (I - Q + M_\chi) f\rangle}{2}\right\}\, df = \sqrt{\frac{\det G_\chi}{\det G}}\; \frac{1}{(2\pi)^{\#(A)/2} \sqrt{\det G_\chi}} \int \exp\left\{-\frac{\langle f, G_\chi^{-1} f\rangle}{2}\right\}\, df = \sqrt{\frac{\det G_\chi}{\det G}}. \qquad (6)$$
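Identity (6) can be checked by Monte Carlo: sample $\phi \sim N(0, G)$ and compare the empirical average of $\exp\{-\frac{1}{2}\sum_x \chi(x)\phi(x)^2\}$ with $\sqrt{\det G_\chi/\det G}$. The matrix $Q$ and the vector $\chi$ below are illustrative choices:

```python
import numpy as np

# Monte Carlo check of (6) for phi ~ N(0, G), G = (I - Q)^{-1}.
rng = np.random.default_rng(1)
Q = np.array([[0.2, 0.3, 0.1],
              [0.3, 0.1, 0.2],
              [0.1, 0.2, 0.3]])
I = np.eye(3)
chi = np.array([0.4, 0.7, 0.2])
G = np.linalg.inv(I - Q)
G_chi = np.linalg.inv(I - Q + np.diag(chi))

phi = rng.multivariate_normal(np.zeros(3), G, size=500_000)
mc = np.exp(-0.5 * (phi**2 @ chi)).mean()
exact = np.sqrt(np.linalg.det(G_chi) / np.linalg.det(G))
print(mc, exact)   # empirical average vs sqrt(det G_chi / det G)
```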
Here we use the relation $G_\chi = (I - Q + M_\chi)^{-1}$. The third equality follows from the fact that the term inside the integral in the second line is the normal density with covariance matrix $G_\chi$. Similarly, if $F : \mathbb{R}^A \to \mathbb{R}$ is any function,
$$\mathbb{E}\left[\exp\left\{-\frac{1}{2} \sum_x \phi(x)^2\, \chi(x)\right\} F(\phi)\right] = \sqrt{\frac{\det G_\chi}{\det G}}\; \tilde{\mathbb{E}}\left[F(\phi)\right],$$
where $\tilde{\mathbb{E}} = \mathbb{E}_{G_\chi}$ denotes expectation assuming covariance matrix $G_\chi$.

Theorem. Suppose $q$ is a weight with corresponding loop soup $\mathcal{L}_\alpha$. Let $\phi$ be a Gaussian field with covariance matrix $G$. Then $\hat{L}_{1/2}$ and $\phi^2/2$ have the same distribution.

Proof. By comparing (5) and (6), we see that the moment generating functions of $\hat{L}_{1/2}$ and $\phi^2/2$ agree in a neighborhood of the origin.

References

[1] Y. Le Jan, Markov loops and renormalization.

[2] G. Lawler and V. Limic, Random Walk: A Modern Introduction.