Lecture 5. 1 Chung-Fuchs Theorem. Tel Aviv University Spring 2011


Random Walks and Brownian Motion, Tel Aviv University, Spring 2011
Lecture 5. Lecture date: Feb 28, 2011
Instructor: Ron Peled. Scribe: Yishai Kohn

In today's lecture we return to the Chung-Fuchs theorem regarding recurrence of general random walks on $\mathbb{R}^d$, and provide a proof for the $\mathbb{Z}^d$ case. We then move on to a brief review of martingales, mentioning several of their properties. In particular, we discuss the optional sampling theorem, Doob's maximal inequality and the martingale convergence theorem. Towards the end we begin to discuss harmonic functions on graphs, define a Liouville graph, and finish by concluding that $\mathbb{Z}^2$ is Liouville, using the martingale convergence theorem.

Tags for today's lecture: recurrence criteria for general RWs, Chung-Fuchs theorem, martingales, harmonic functions, Liouville graphs.

1 Chung-Fuchs Theorem

Theorem (Chung-Fuchs). Let $S_n$ be a RW on $\mathbb{R}^d$.

1. Suppose $d = 1$: if the weak law of large numbers holds in the form $S_n/n \to 0$ in probability, then $S_n$ is neighborhood-recurrent.
2. Suppose $d = 2$: if the central limit theorem holds in the form $S_n/\sqrt{n} \Rightarrow$ a normal distribution, then $S_n$ is neighborhood-recurrent.
3. Suppose $d = 3$: if $S_n$ is not contained in a plane (meaning that the group of possible values of $S_n$ does not lie in a plane), then $S_n$ is neighborhood-transient.

Proof. (i) The $d = 1$ case was proved in the previous lecture (under first moment assumptions and for RWs on $\mathbb{Z}$).

(ii) $d = 2$: We will prove the theorem for RWs on $\mathbb{Z}^2$. We need to show that $\sum_n \mathbb{P}(S_n = 0) = \infty$ (according to Proposition (3) from last week's lecture).

We have, from the assumption,
$$\mathbb{P}\big(|S_n| \le b\sqrt{n}\big) \to \int_{|y| \le b} n(y)\,dy$$
(here $|\cdot|$ is the Euclidean norm on $\mathbb{R}^d$, and $n(y)$ is the density of the limiting normal distribution; notice that we may assume $n(y)$ is non-degenerate, since otherwise we are back to the 1-dimensional problem).

Remark. A local limit theorem would give $\mathbb{P}(S_n = 0) \approx c/n$, which would immediately yield the wanted divergence of $\sum_n \mathbb{P}(S_n = 0)$.

Lemma. Let
$$G(x, y) = \mathbb{E}_x[\#\text{ of visits to } y] = \sum_{n \ge 0} \mathbb{P}_x(S_n = y);$$
then
$$G(x, y) = \mathbb{P}_x(\text{visit } y)\, G(y, y).$$
Notice also that $G(y, y) = G(x, x)$, by translation invariance of the walk.

Proof. Define $T = \min\{n : S_n = y\}$, with $T = \infty$ if $y$ is never visited. Now,
$$G(x, y) = \sum_{n \ge 0} \mathbb{P}_x(S_n = y) = \sum_{n \ge 0} \sum_{k=0}^{\infty} \mathbb{P}_x(S_n = y, T = k)$$
(notice that $\mathbb{P}_x(S_n = y, T = k) = 0$ for $k > n$). Every term in the sum is non-negative, so we may change the order of summation:
$$G(x, y) = \sum_{k=0}^{\infty} \sum_{n \ge 0} \mathbb{P}_x(S_n = y, T = k).$$
Using the strong Markov property we get
$$G(x, y) = \sum_{k=0}^{\infty} \sum_{n \ge k} \mathbb{P}_y(S_{n-k} = y)\,\mathbb{P}_x(T = k) = \sum_{k=0}^{\infty} \mathbb{P}_x(T = k)\, G(y, y) = \mathbb{P}_x(\text{visit } y)\, G(y, y).$$
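As a brief aside (not part of the original notes), the identity in the lemma can be checked numerically for a walk whose Green's function is finite, for instance the SRW on $\mathbb{Z}^3$ from part (3) of the theorem. The sketch below estimates $G(0, e_1)$ and $\mathbb{P}_0(\text{visit } e_1)\,G(e_1, e_1)$ by Monte Carlo; the helper names, the truncation at a fixed number of steps and the trial counts are arbitrary illustrative choices, and the truncation introduces a small downward bias in both estimates.

```python
import random

STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def visits(start, target, n_steps, rng):
    """One SRW path on Z^3 from `start`; count visits to `target` (time 0 included)."""
    pos = start
    count = 1 if pos == target else 0
    for _ in range(n_steps):
        dx, dy, dz = rng.choice(STEPS)
        pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        if pos == target:
            count += 1
    return count

def estimate_G_and_hit(start, target, trials, n_steps, rng):
    """Monte-Carlo estimates of G(start, target) and P_start(visit target)."""
    total, hit = 0, 0
    for _ in range(trials):
        c = visits(start, target, n_steps, rng)
        total += c
        hit += (c > 0)
    return total / trials, hit / trials

if __name__ == "__main__":
    rng = random.Random(0)
    origin, e1 = (0, 0, 0), (1, 0, 0)
    G_0_e1, p_hit = estimate_G_and_hit(origin, e1, trials=4000, n_steps=1500, rng=rng)
    G_e1_e1, _ = estimate_G_and_hit(e1, e1, trials=4000, n_steps=1500, rng=rng)
    # The lemma predicts these two numbers agree (up to Monte-Carlo noise).
    print("G(0, e1)                 ~", round(G_0_e1, 3))
    print("P_0(visit e1) * G(e1,e1) ~", round(p_hit * G_e1_e1, 3))
```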

Corollary. Since $\mathbb{P}_x(\text{visit } y) \le 1$, it follows that $G(x, y) \le G(y, y) = G(x, x)$; hence for any $A \subseteq \mathbb{Z}^d$,
$$\sum_{n \ge 0} \mathbb{P}_x(S_n \in A) \le |A|\, G(x, x)$$
(by summing over all $y \in A$).

Back to the proof of (2): Note that for every $m$,
$$(*)\qquad \sum_{n \ge 0} \mathbb{P}(S_n = 0) = G(0, 0) \ge \frac{c}{m^2} \sum_{n \ge 0} \mathbb{P}(|S_n| \le m)$$
for some $c > 0$, from the corollary (here $A = \{x : |x| \le m\}$ and $|A| \approx \mathrm{const}\cdot m^2$). We can change the sum into an integral:
$$(**)\qquad \frac{1}{m^2} \sum_{n \ge 0} \mathbb{P}(|S_n| \le m) = \int_0^{\infty} \mathbb{P}\big(|S_{\lfloor \theta m^2 \rfloor}| \le m\big)\, d\theta,$$
because when $n/m^2 \le \theta < (n+1)/m^2$ we have $\lfloor \theta m^2 \rfloor = n$, and each such segment of the integral has length $1/m^2$. From the assumption,
$$\mathbb{P}\big(|S_{\lfloor \theta m^2 \rfloor}| \le m\big) = \mathbb{P}\Big(\frac{|S_{\lfloor \theta m^2 \rfloor}|}{\sqrt{\lfloor \theta m^2 \rfloor}} \le \frac{m}{\sqrt{\lfloor \theta m^2 \rfloor}}\Big) \;\xrightarrow[m \to \infty]{}\; \int_{|y| \le \theta^{-1/2}} n(y)\,dy,$$
and by Fatou's lemma,
$$\liminf_{m \to \infty} \int_0^{\infty} \mathbb{P}\big(|S_{\lfloor \theta m^2 \rfloor}| \le m\big)\, d\theta \;\ge\; \int_0^{\infty} \liminf_{m \to \infty} \mathbb{P}\big(|S_{\lfloor \theta m^2 \rfloor}| \le m\big)\, d\theta \;=\; \int_0^{\infty} \int_{|y| \le \theta^{-1/2}} n(y)\,dy\, d\theta.$$
Now denote $B_\theta := \{y : |y| \le \theta^{-1/2}\}$. Notice that $\int_{B_\theta} n(y)\,dy \approx n(0)\,|B_\theta|$ as $\theta \to \infty$, and $n(0)\,|B_\theta| \ge c/\theta$ for some $c > 0$. Thus, using $(**)$,
$$\liminf_{m \to \infty} \frac{1}{m^2} \sum_{n \ge 0} \mathbb{P}(|S_n| \le m) \;\ge\; \int_C^{\infty} \frac{c}{\theta}\, d\theta = \infty$$
for some $c, C > 0$. Returning to $(*)$ we get $\sum_n \mathbb{P}(S_n = 0) = \infty$, which proves the $d = 2$ case.
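As a quick empirical aside (not from the notes), the divergence $\sum_n \mathbb{P}(S_n = 0) = \infty$ for the SRW on $\mathbb{Z}^2$ can be seen through the expected number of returns to the origin, which keeps growing as the time horizon grows, though only logarithmically slowly. A minimal simulation sketch; the trial counts and horizons are arbitrary:

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def returns_to_origin(n_steps, rng):
    """Number of visits of a SRW on Z^2 to the origin during steps 1..n_steps."""
    x = y = 0
    count = 0
    for _ in range(n_steps):
        dx, dy = rng.choice(STEPS)
        x += dx
        y += dy
        if x == 0 and y == 0:
            count += 1
    return count

if __name__ == "__main__":
    rng = random.Random(1)
    trials = 100
    for n in (10**3, 10**4, 10**5):
        avg = sum(returns_to_origin(n, rng) for _ in range(trials)) / trials
        # E[# returns up to n] = sum_{k<=n} P(S_k = 0); recurrence means it keeps growing with n.
        print(f"n = {n:>6}:  average number of returns to 0 ~ {avg:.2f}")
```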

(iii) Proof outline for the 3-dimensional case, for RWs on $\mathbb{Z}^3$.

Reminder. A RW on $\mathbb{Z}^d$ is recurrent iff
$$\sup_{r < 1} \int_{[-\pi,\pi]^d} \mathrm{Re}\Big[\frac{1}{1 - r\varphi(t)}\Big]\, dt = \infty,$$
where $\varphi(t) = \mathbb{E}[e^{i\langle t, X\rangle}]$ is the characteristic function of $X$.

Remark. If $z \in \mathbb{C}$ with $\mathrm{Re}[z] > 0$, then $\mathrm{Re}\big[\tfrac{1}{z}\big] \le \tfrac{1}{\mathrm{Re}[z]}$. (exercise)

Thus,
$$\sup_{r < 1} \int_{[-\pi,\pi]^d} \mathrm{Re}\Big[\frac{1}{1 - r\varphi(t)}\Big]\, dt \;\le\; \sup_{r < 1} \int_{[-\pi,\pi]^d} \frac{dt}{\mathrm{Re}[1 - r\varphi(t)]}.$$
But for every $r < 1$ we have $\mathrm{Re}[1 - r\varphi(t)] \ge r\,\mathrm{Re}[1 - \varphi(t)]$, so the integral on the right-hand side is $\infty$ only if
$$\int_{[-\pi,\pi]^d} \frac{dt}{\mathrm{Re}[1 - \varphi(t)]} = \infty.$$
Accordingly, it is sufficient to show that $\int_{[-\pi,\pi]^3} \frac{dt}{\mathrm{Re}[1 - \varphi(t)]} < \infty$ in order to prove the transience of the walk.

Notice that if $\varphi(t) = 1 + \Theta(|t|^2)$ as $t \to 0$, then near the origin $\frac{1}{\mathrm{Re}[1 - \varphi(t)]} = \Theta(|t|^{-2})$, which is integrable near $0$ in dimension 3, so $\int_{[-\pi,\pi]^3} \frac{dt}{\mathrm{Re}[1 - \varphi(t)]} < \infty$. So, for the integral to be $\infty$ (and hence for the RW to be recurrent) we need $\varphi(t) = 1 + o(|t|^2)$ as $t \to 0$.

But for a 1-dimensional RV $Y$ we have (using the Taylor expansion) $\mathbb{E}[e^{i\lambda Y}] = 1 + i\mu\lambda - \frac{a}{2}\lambda^2 + o(\lambda^2)$, where $\mathbb{E}[Y] = \mu$ and $\mathbb{E}[Y^2] = a$. Hence, if $\varphi(t) = \mathbb{E}[e^{i\langle t, X\rangle}] = 1 + o(|t|^2)$, then by writing $t = |t|\, e_\theta$ for a unit vector $e_\theta$ we get that $\langle e_\theta, X\rangle$ is a 1-dimensional RV with both $\mathbb{E}[\langle e_\theta, X\rangle] = 0$ and $\mathrm{Var}[\langle e_\theta, X\rangle] = 0$, and therefore $\langle e_\theta, X\rangle = 0$ (a.s.). Thus, in order to have $\int_{[-\pi,\pi]^3} \frac{dt}{\mathrm{Re}[1 - \varphi(t)]} = \infty$, we must have $\langle e_\theta, X\rangle = 0$ a.s. for almost all directions $e_\theta$. Consequently, a RW on $\mathbb{Z}^3$ is recurrent only if it is contained in a plane, and otherwise it is transient.
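For the SRW on $\mathbb{Z}^3$ the characteristic function is real, $\varphi(t) = (\cos t_1 + \cos t_2 + \cos t_3)/3$, so the criterion above reduces to the finiteness of $\int_{[-\pi,\pi]^3} dt/(1 - \varphi(t))$. The sketch below (not part of the original notes) estimates this integral with a simple midpoint rule, normalized by $(2\pi)^{-3}$ so that it corresponds to the Green's function $G(0,0)$ of the walk. The grid sizes are arbitrary; the point is that the estimates settle near a finite value (roughly 1.5) instead of blowing up as the grid is refined, consistent with transience.

```python
import math

def phi(t1, t2, t3):
    """Characteristic function of one SRW step on Z^3."""
    return (math.cos(t1) + math.cos(t2) + math.cos(t3)) / 3.0

def green_estimate(n):
    """Midpoint-rule estimate of (2*pi)^(-3) * integral over [-pi,pi]^3 of dt / (1 - phi(t)).
    For even n the midpoints avoid t = 0, where the (integrable) singularity sits."""
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        t1 = -math.pi + (i + 0.5) * h
        for j in range(n):
            t2 = -math.pi + (j + 0.5) * h
            for k in range(n):
                t3 = -math.pi + (k + 0.5) * h
                total += 1.0 / (1.0 - phi(t1, t2, t3))
    return total * h ** 3 / (2.0 * math.pi) ** 3

if __name__ == "__main__":
    for n in (10, 20, 40):
        print(f"grid {n}^3: estimate ~ {green_estimate(n):.3f}")
```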

2 Martingales

We now give a quick review of martingales in discrete time. A more complete discussion of the subject appears in [1].

2.1 Definitions

A filtration is a sequence of $\sigma$-fields $\{\mathcal{F}_n\}_{n \ge 0}$ such that $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \cdots$. A martingale (with respect to the filtration $\{\mathcal{F}_n\}_{n \ge 0}$) is a sequence $\{M_n\}_{n \ge 0}$ of integrable RVs such that $M_n$ is measurable with respect to $\mathcal{F}_n$, and $\mathbb{E}[M_m \mid \mathcal{F}_n] = M_n$ for all $m \ge n$. Notice that the last requirement is equivalent to $\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] = M_n$ for all $n$.

Some intuition for this definition: we can think of $M_n$ as the fortune of a gambler at time $n$. The last requirement says that the game is fair: the expected fortune at any time equals the gambler's initial fortune (or the fortune at any given time in the past).

A submartingale is defined the same way, changing the last requirement to $\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] \ge M_n$ for all $n$ (a submartingale is a good game to play...). Accordingly, a supermartingale is defined with the last requirement $\mathbb{E}[M_{n+1} \mid \mathcal{F}_n] \le M_n$ for all $n$ (there is nothing super about a supermartingale).

Remark. If no filtration is given, we take $\mathcal{F}_n = \sigma(M_0, \ldots, M_n)$.

Example. A RW whose steps have mean 0 is a martingale (assuming the step $X$ is integrable).

2.2 Optional Stopping

Definition. $T$ is a stopping time with respect to $\{\mathcal{F}_n\}_{n \ge 0}$ if $T$ takes values in $\{0, 1, 2, \ldots\} \cup \{\infty\}$ and $\{T \le n\}$ is measurable with respect to $\mathcal{F}_n$ for every $n$.

Intuition: the gambler's decision to stop gambling is a function only of the results observed so far.

Proposition. If $M_n$ is a (super/sub)martingale and $T$ is a stopping time, then $\{M_{T \wedge n}\}_{n \ge 0}$ is also a (super/sub)martingale. In particular, $\mathbb{E} M_{T \wedge n} = \mathbb{E} M_0$ for all $n$ (with the corresponding inequality for a super/sub-martingale).

Proof. (exercise)

Example. Let $M_n := S_n$ for a SRW $\{S_n\}_{n \ge 0}$ on $\mathbb{Z}$ starting at 10, and $T = \min\{n : M_n = 0\}$ (the first hitting time of 0). Notice that $\mathbb{E} M_n = \mathbb{E} M_0 = 10$, $\mathbb{P}(T < \infty) = 1$ (by recurrence), and $\mathbb{E} M_T = 0$, so $\mathbb{E} M_T \ne \mathbb{E} M_0$. However, $\mathbb{E} M_{T \wedge n} = 10$ by the above proposition.
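A minimal simulation sketch of this example (not part of the original notes; the trial counts and truncation levels are arbitrary): for every truncation level $n$, the empirical mean of $S_{T \wedge n}$ stays near 10, even though $S_T = 0$ almost surely.

```python
import random

def stopped_value(start, n, rng):
    """Value of S_{T ^ n} for a SRW from `start`, where T is the first hitting time of 0."""
    s = start
    for _ in range(n):
        if s == 0:        # T has already occurred: the stopped walk stays at 0
            return 0
        s += rng.choice((-1, 1))
    return s

if __name__ == "__main__":
    rng = random.Random(2)
    start, trials = 10, 5000
    for n in (100, 1000, 10000):
        avg = sum(stopped_value(start, n, rng) for _ in range(trials)) / trials
        print(f"n = {n:>6}:  E[S_(T^n)] ~ {avg:5.2f}   (the proposition gives exactly {start})")
    # In contrast, S_T = 0 on {T < infinity}, which has probability 1, so E[S_T] = 0 != 10.
```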

When is $\mathbb{E} M_T = \mathbb{E} M_0$?

Theorem (optional stopping theorem). Let $\{M_n\}_{n \ge 0}$ be a martingale, $T$ a stopping time, and suppose $\mathbb{P}(T < \infty) = 1$. Then $\mathbb{E} M_T = \mathbb{E} M_0$ if any of the following holds:

1. $T$ is bounded ($\exists k$ with $\mathbb{P}(T \le k) = 1$).
2. (dominated convergence) There is a RV $Y$ with $\mathbb{E}|Y| < \infty$ and $|M_{T \wedge n}| \le Y$ for all $n$.
3. $\mathbb{E} T < \infty$ and the differences $M_n - M_{n-1}$ are uniformly bounded ($|M_n - M_{n-1}| \le k$ for all $n$ a.s., for some $k$).
4. $\mathbb{E}|M_T| < \infty$ and $\lim_{n \to \infty} \mathbb{E}\big[|M_n|\, 1_{(T > n)}\big] = 0$.
5. $\{M_n\}$ is uniformly integrable, i.e. for every $\varepsilon > 0$ there is $k_\varepsilon$ such that $\mathbb{E}\big[|M_n|\, 1_{(|M_n| \ge k_\varepsilon)}\big] \le \varepsilon$ for all $n$.
6. There is $p > 1$ such that $\{M_n\}$ is bounded in $L^p$, that is, $\exists k$ with $\mathbb{E}|M_n|^p \le k$ for all $n$.

Proof.
1. By the assumption $M_T = M_{T \wedge k}$ (a.s.), so $\mathbb{E} M_T = \mathbb{E} M_{T \wedge k} = \mathbb{E} M_0$.
2. $M_{T \wedge n} \to M_T$ as $n \to \infty$ (since $T < \infty$ a.s.), thus by dominated convergence $\mathbb{E} M_T = \lim_n \mathbb{E} M_{T \wedge n} = \mathbb{E} M_0$.
3. We have $M_{T \wedge n} - M_0 = \sum_{j=1}^{T \wedge n} (M_j - M_{j-1})$, and the right-hand side is bounded in absolute value by $kT$, which is integrable by assumption. Thus, by dominated convergence we obtain $\mathbb{E}[M_T - M_0] = \lim_{n} \mathbb{E}[M_{T \wedge n} - M_0] = 0$.
4. Notice that $M_T = M_{T \wedge n} + (M_T - M_n)\, 1_{(T > n)}$, therefore
$$\mathbb{E} M_T = \mathbb{E}[M_{T \wedge n}] + \mathbb{E}\big[M_T\, 1_{(T > n)}\big] - \mathbb{E}\big[M_n\, 1_{(T > n)}\big].$$
Taking $n \to \infty$ and using the second part of the assumption, we are left with $\mathbb{E} M_T = \mathbb{E} M_0 + \lim_n \mathbb{E}\big[M_T\, 1_{(T > n)}\big]$. The assumption $\mathbb{E}|M_T| < \infty$ allows us to use the dominated convergence theorem once again to conclude that $\lim_n \mathbb{E}\big[M_T\, 1_{(T > n)}\big] = 0$ (here the assumption $\mathbb{P}(T = \infty) = 0$ is necessary), and so $\mathbb{E} M_T = \mathbb{E} M_0$.

We do not provide proofs for (5) and (6), but mention that once (5) is proved, (6) follows easily, since boundedness in $L^p$ for some $p > 1$ implies uniform integrability. (Exercise)

Remark. For a supermartingale, all of the above holds with the slight change that $\mathbb{E} M_T \le \mathbb{E} M_0$.
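As an illustration of condition 3 (not part of the original notes), consider the classical gambler's ruin setup: a SRW started at $a \in \{0, \ldots, N\}$ and stopped at the first exit time $T$ of $\{1, \ldots, N-1\}$. Here $\mathbb{E} T < \infty$ and the increments are bounded by 1, so optional stopping gives $\mathbb{E} S_T = a$, hence $\mathbb{P}(\text{hit } N \text{ before } 0) = a/N$. A minimal simulation check, with arbitrary choices of $a$, $N$ and the number of trials:

```python
import random

def exits_at_top(a, N, rng):
    """Run a SRW from a until it hits 0 or N; return True iff it exits at N."""
    s = a
    while 0 < s < N:
        s += rng.choice((-1, 1))
    return s == N

if __name__ == "__main__":
    rng = random.Random(3)
    a, N, trials = 3, 10, 50000
    hits = sum(exits_at_top(a, N, rng) for _ in range(trials))
    # Condition 3 applies (E[T] < infinity, increments bounded by 1), so E[S_T] = E[S_0] = a.
    # Since S_T is 0 or N, a = N * P(exit at N), i.e. P(exit at N) = a / N.
    print(f"simulated P(hit {N} before 0) ~ {hits / trials:.3f}   theory: a/N = {a / N:.3f}")
```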

2.3 Doob's Maximal Inequality

Theorem (Doob's maximal inequality). If $\{M_n\}_{n \ge 0}$ is a non-negative submartingale, then
$$\mathbb{P}\Big(\max_{0 \le j \le n} M_j \ge \lambda\Big) \le \frac{\mathbb{E} M_n}{\lambda}.$$

This theorem is often combined with the next proposition to get a stronger result:

Proposition (Jensen). If $\{M_n\}_{n \ge 0}$ is a martingale and $f : \mathbb{R} \to \mathbb{R}$ is convex, then $\{f(M_n)\}_{n \ge 0}$ is a submartingale (assuming additionally that $\mathbb{E}|f(M_n)| < \infty$ for all $n$). In particular, we can take $f(x) = |x|^p$ for some $p \ge 1$, or $f(x) = e^{bx}$ for some $b \in \mathbb{R}$.

Given a martingale $\{M_n\}$, we can now use Doob's inequality for the non-negative submartingale $\{f(M_n)\}_{n \ge 0}$ and obtain
$$\mathbb{P}\Big(\max_{0 \le j \le n} |M_j| \ge \lambda\Big) \le \frac{\mathbb{E}|M_n|^p}{\lambda^p} \quad (p \ge 1),$$
or
$$\mathbb{P}\Big(\max_{0 \le j \le n} M_j \ge \lambda\Big) \le \frac{\mathbb{E}\, e^{b M_n}}{e^{b\lambda}} \quad (b > 0).$$
This way we can replace the original linear bound in $\lambda$ with a polynomial or exponential bound.

Proofs of the above theorems can be found in [1].
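A small empirical sanity check (not from the notes), using the SRW as the martingale and $f(x) = x^2$ in the Jensen step, so the bound reads $\mathbb{P}(\max_{j \le n} |S_j| \ge \lambda) \le \mathbb{E}[S_n^2]/\lambda^2 = n/\lambda^2$; the horizon, thresholds and trial count are arbitrary choices:

```python
import random

def max_abs(n, rng):
    """max_{0<=j<=n} |S_j| for a SRW on Z started at 0."""
    s = best = 0
    for _ in range(n):
        s += rng.choice((-1, 1))
        best = max(best, abs(s))
    return best

if __name__ == "__main__":
    rng = random.Random(4)
    n, trials = 400, 20000
    maxima = [max_abs(n, rng) for _ in range(trials)]
    for lam in (20, 40, 60):
        p_hat = sum(m >= lam for m in maxima) / trials
        # Doob for the non-negative submartingale S_j^2 (Jensen with f(x) = x^2):
        # P(max_j |S_j| >= lam) <= E[S_n^2] / lam^2 = n / lam^2.
        print(f"lambda = {lam:>2}: empirical {p_hat:.3f}   Doob/Jensen bound {n / lam ** 2:.3f}")
```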

2.4 Martingale Convergence Theorem

To start the (brief) discussion of the martingale convergence theorem, we recall a quote from the book Probability with Martingales by D. Williams on the significance of this theorem, to the effect that: ∗ signifies something important, ∗∗ signifies something very important, and ∗∗∗ is the martingale convergence theorem.

Theorem. Let $\{M_n\}_{n \ge 0}$ be a supermartingale, and suppose $\sup_n \mathbb{E}(M_n^-) < \infty$ (where $M_n^- = \max(-M_n, 0)$). Then there exists a RV $M$ such that $M_n \to M$ a.s.

E.g., a non-negative martingale always converges (a.s.).

Remark. The limit $M$ is integrable, but the convergence is not necessarily in $L^1$.

For example, consider $T = \min\{n : S_n = -10\}$, where $S_n$ is a SRW started at 0 ($T$ is the first crossing time of $-10$). Then the martingale $S_{T \wedge n}$ converges a.s. to the constant $M = -10$, which is integrable, but the convergence is not in $L^1$, since $\mathbb{E} S_{T \wedge n} = \mathbb{E} S_0 = 0$ (using the fact that $S_{T \wedge n}$ is a martingale), hence $\mathbb{E}[S_{T \wedge n} - M] = 10 \not\to 0$ as $n \to \infty$.

An interesting example of the use of the martingale convergence theorem is Polya's urn: an urn contains $b$ blue balls and $r$ red balls. At each time we draw a ball out, then replace it by $c$ balls of the colour drawn. The proportion of red balls (for instance) in the urn is a non-negative martingale, and hence this proportion converges a.s. to a RV (which must take values in $[0, 1]$). In the special case where initially $r = b = 1$ and $c = 2$ (so each drawn ball is returned together with one extra ball of the same colour), it turns out that the limiting RV is uniformly distributed on $[0, 1]$.
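A quick simulation sketch of Polya's urn (not part of the original notes), under the convention just described, where the drawn ball goes back together with one extra ball of its colour: the histogram of the fraction of red balls after many draws looks flat, consistent with a Uniform$[0,1]$ limit. The numbers of draws and repetitions are arbitrary.

```python
import random

def polya_final_fraction(n_draws, rng, red=1, blue=1):
    """Polya's urn: each drawn ball is returned together with one extra ball of its colour.
    Returns the fraction of red balls after n_draws (a proxy for the a.s. limit)."""
    for _ in range(n_draws):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

if __name__ == "__main__":
    rng = random.Random(5)
    samples = [polya_final_fraction(2000, rng) for _ in range(5000)]
    bins = [0] * 10
    for s in samples:
        bins[min(int(s * 10), 9)] += 1
    # For a Uniform[0,1] limit each decile should hold roughly 10% of the samples.
    for i, b in enumerate(bins):
        print(f"[{i / 10:.1f}, {(i + 1) / 10:.1f}): {100.0 * b / len(samples):5.1f}%")
```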

Theorem (uniformly integrable submartingale). For a submartingale $\{M_n\}_{n \ge 0}$, the following are equivalent:
1. It is uniformly integrable.
2. It converges a.s. and in $L^1$.
3. It converges in $L^1$.
If $\{M_n\}$ is a martingale, these are also equivalent to:
4. There exists an integrable RV $M$ such that $M_n = \mathbb{E}[M \mid \mathcal{F}_n]$.

Corollary (Lévy 0-1 law). Denote $\mathcal{F}_n \uparrow \mathcal{F}_\infty$ (that is, $\mathcal{F}_\infty = \sigma(\bigcup_n \mathcal{F}_n)$, the smallest $\sigma$-field containing the union). Then for every integrable RV $X$ it holds that
$$\mathbb{E}[X \mid \mathcal{F}_n] \to \mathbb{E}[X \mid \mathcal{F}_\infty] \quad \text{a.s. and in } L^1.$$
The special case where $X = 1_A$ for $A \in \mathcal{F}_\infty$, and then $\mathbb{P}(A \mid \mathcal{F}_n) \to 1_A$, generalizes the Kolmogorov 0-1 law (this last remark is left as food for thought for the reader).

3 Harmonic Functions

From this point on we assume $G$ is a locally finite graph (that is, $\deg(v) < \infty$ for every $v \in G$).

Definition. $h : G \to \mathbb{R}$ is said to be harmonic if
$$h(x) = \frac{1}{|N(x)|} \sum_{y \in N(x)} h(y),$$
where $N(x) = \{y : y \text{ is a neighbor of } x \text{ in } G\}$. In other words, for every $x$, if we start a SRW $\{Z_n\}_{n \ge 0}$ on $G$ at $x$, then $h(x) = \mathbb{E}[h(Z_1)]$. Notice that this means that $\{h(Z_n)\}_{n \ge 0}$ is a martingale for any starting vertex $x$ (since $G$ is locally finite, $h(Z_n)$ takes only finitely many values for each fixed $n$ and is thus integrable).

Examples.
1. Constant functions are harmonic.
2. On $\mathbb{Z}^d$, linear functions are harmonic. (exercise)

Definition. The space of bounded harmonic functions is called the Poisson boundary of $G$. $G$ is called Liouville if it has no non-constant bounded harmonic functions.

Remark. All of the above can be defined with respect to RWs other than the SRW.

Example. Consider $T_2$, the infinite binary tree (for our purposes we define the tree to be infinite both upwards and downwards from the origin, in order to be coherent with the general case of Cayley graphs, which will not be discussed further here). We define an end of the tree to be an infinite simple path from the origin. Then, for a collection of ends $A$, we can define
$$h(x) = \mathbb{P}_x(\text{the trajectory beginning at } x \text{ agrees with some end in } A \text{ infinitely often}).$$
It is easy to check that $h$ is a bounded harmonic function; indeed,
$$h(x) = \mathbb{P}_x(\text{ending in } A) = \mathbb{E}\big[\mathbb{P}_x(\text{ending in } A \mid Z_1)\big] = \mathbb{E}\big[\mathbb{P}_{Z_1}(\text{ending in } A)\big] = \mathbb{E}[h(Z_1)].$$
But $h$ is non-constant for certain choices of the collection $A$. For instance, if we choose $A$ to be the bottom half of the tree, then the probability of ending in $A$ is different for vertices in the top half and for vertices in $A$: a RW beginning in the top half has probability $2/3$ of going up at every step (until it possibly crosses the origin downwards), whereas in the bottom half the $2/3$ probability is directed down. Thus, the further up we begin the RW (with respect to the origin), the smaller the probability of ending in the bottom half. Hence $T_2$ is not Liouville.

Remark. $\mathbb{Z}$ is Liouville.

Proof. For $h$ harmonic on $\mathbb{Z}$, the definition implies that $h(x+1) - h(x) = h(x) - h(x-1)$. So, if $h$ is non-constant, there exists $y$ with $h(y+1) - h(y) = a \ne 0$, which means $h$ has a fixed slope $a$, and thus cannot be bounded.

The question is: what about $\mathbb{Z}^2$? Is it Liouville?
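Before turning to that question, here is a tiny sanity check (not from the notes) of Example 2 above: on $\mathbb{Z}^2$ the mean-value property over the four neighbours holds for a linear function but fails, e.g., for $x^2$. The box radius and the concrete test functions are arbitrary illustrative choices.

```python
def neighbour_average(h, x, y):
    """Average of h over the four lattice neighbours of (x, y) in Z^2."""
    return (h(x + 1, y) + h(x - 1, y) + h(x, y + 1) + h(x, y - 1)) / 4.0

def is_harmonic_on_box(h, radius=20):
    """Check the mean-value property h(v) = average over N(v) on a finite box."""
    return all(
        abs(h(x, y) - neighbour_average(h, x, y)) < 1e-9
        for x in range(-radius, radius + 1)
        for y in range(-radius, radius + 1)
    )

if __name__ == "__main__":
    linear = lambda x, y: 2 * x - 3 * y + 7   # linear: harmonic
    square = lambda x, y: x * x               # x^2: neighbour average exceeds the value by 1
    print("2x - 3y + 7 harmonic on the box:", is_harmonic_on_box(linear))
    print("x^2         harmonic on the box:", is_harmonic_on_box(square))
```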

Proposition. On any connected recurrent graph (meaning a graph on which the SRW is recurrent), any non-negative harmonic function is constant.

In particular, the graph is Liouville (because any bounded harmonic function must be constant: otherwise, by adding a large enough constant we would obtain a non-negative non-constant harmonic function). E.g., $\mathbb{Z}^2$ is Liouville.

Proof. Fix $x, y \in G$ and let $h$ be a non-negative harmonic function. We need to show that $h(x) = h(y)$. Define $\{Z_n\}_{n \ge 0}$ to be a SRW on $G$ starting at $x$. Then $\{h(Z_n)\}_{n \ge 0}$ is a non-negative martingale, and so the martingale convergence theorem applies. Consequently, $h(Z_n) \to H$ a.s. (where $H$ is some RV). We have assumed $G$ is recurrent, thus $\mathbb{P}(Z_n = x \text{ i.o.}) = 1$, so $h(Z_n)$ must converge to $h(x)$ a.s. But by recurrence it also holds that $\mathbb{P}(Z_n = y \text{ i.o.}) = 1$, and similarly $h(Z_n) \to h(y)$ a.s. Therefore $h(x) = h(y)$ and we are done.

In the next lecture we will discuss the question: is $\mathbb{Z}^d$ Liouville for all $d$?

References

[1] Rick Durrett, Probability: Theory and Examples (version of January 2011). See the sections on the Chung-Fuchs theorem, Chapter 5 (martingales) and Section 6.7 (harmonic functions).
