THE LAW OF THE ITERATED LOGARITHM FOR TAIL SUMS SANTOSH GHIMIRE. M.Sc., Tribhuvan University, Nepal, 2001 M.S., Kansas State University, USA, 2008


THE LAW OF THE ITERATED LOGARITHM FOR TAIL SUMS

by

SANTOSH GHIMIRE

M.Sc., Tribhuvan University, Nepal, 2001
M.S., Kansas State University, USA, 2008

AN ABSTRACT OF A DISSERTATION

submitted in partial fulfillment of the requirements for the degree

DOCTOR OF PHILOSOPHY

Department of Mathematics
College of Arts and Sciences

KANSAS STATE UNIVERSITY
Manhattan, Kansas

Abstract

The main purpose of this thesis is to derive the law of the iterated logarithm for tail sums in various contexts in analysis: sums of Rademacher functions, general dyadic martingales, independent random variables, and lacunary trigonometric series. We call the law of the iterated logarithm for tail sums the tail law of the iterated logarithm. We first establish the tail law of the iterated logarithm for sums of Rademacher functions, obtaining both the upper and the lower bound; a sum of Rademacher functions is a nicely behaved dyadic martingale. With ideas from the Rademacher case, we then establish the tail law of the iterated logarithm for general dyadic martingales, again obtaining both the upper and the lower bound. A lower bound is obtained for the law of the iterated logarithm for tail sums of bounded symmetric independent random variables. Finally, since lacunary trigonometric series exhibit many of the properties of partial sums of independent random variables, we obtain a lower bound for the tail law of the iterated logarithm for the lacunary trigonometric series introduced by Salem and Zygmund.

THE LAW OF THE ITERATED LOGARITHM FOR TAIL SUMS

by

SANTOSH GHIMIRE

M.Sc., Tribhuvan University, Nepal, 2001
M.S., Kansas State University, USA, 2008

A DISSERTATION

submitted in partial fulfillment of the requirements for the degree

DOCTOR OF PHILOSOPHY

Department of Mathematics
College of Arts and Sciences

KANSAS STATE UNIVERSITY
Manhattan, Kansas

Approved by:
Major Professor
Prof. Charles N. Moore

Copyright

Santosh Ghimire


Table of Contents

List of Figures
Acknowledgments
Dedication

1 Introduction
1.1 Dyadic martingales
1.2 Notation
1.3 Examples
1.4 Useful results
1.5 Origin of the law of the iterated logarithm
1.6 Organization of the thesis

2 Law of the iterated logarithm
2.1 Martingale inequalities
2.2 Law of the iterated logarithm for dyadic martingales

3 The tail law of the iterated logarithm
3.1 The tail law of the iterated logarithm for sums of Rademacher functions
3.2 The tail law of the iterated logarithm for dyadic martingales
3.3 Tail LIL for dyadic martingales is not true in general

4 Lower bound in the tail law of the iterated logarithm
4.1 Lower bound in the tail LIL for sums of Rademacher functions
4.2 The tail law of the iterated logarithm for independent random variables
4.3 Lower bound in the tail law of the iterated logarithm for dyadic martingales

5 The tail law of the iterated logarithm for lacunary series
5.1 Lower bound in the tail law of the iterated logarithm for lacunary series

Conclusion

Bibliography

List of Figures

1.1 Rademacher functions
3.1 Construction of Martingales

Acknowledgments

I would like to thank my advisor, Professor Charles N. Moore, for his continuous support, invaluable advice, and time throughout my student career at K-State. I consider myself very fortunate to have him as my advisor; this project would not have been possible without his help. Next, I would like to thank my committee members, Professor Pietro Poggi-Corradini and Professor Marianne Korten, who have been very friendly, supportive, and helpful throughout my study at K-State. I would also like to thank Professor Haiyan Wang, my major advisor for the Master's degree in Statistics, for her help and continuous support. My thanks also go to Professor William Hsu for serving as outside chairperson of my committee. I would like to thank the Department of Mathematics, KSU, for providing me the opportunity to pursue my Ph.D. degree.

I am thankful to my mother, Jyanti Devi Ghimire, for her continuous motivation and encouragement for my further study; her inspiration has brought me to this stage. My sincere thanks go to my wife, Sarala Acharya, who has been very supportive of my study; her continuous support and help have made it possible for me to achieve this goal. I would also like to thank her for her patience and for taking care of our daughter, Subi Ghimire, letting me focus on my work. Finally, I would like to thank my brother Krishna Ghimire and the entire Ghimire family for their support and motivation.

Dedication

I would like to dedicate this to:
My father, late Madhusudan Upadhayaya
My mother, Jyanti Devi Ghimire
My daughter, Subi Ghimire

Chapter 1

Introduction

This chapter begins with some useful definitions and notation which will be used repeatedly in later chapters. We state some useful results and then discuss the origin of the law of the iterated logarithm.

1.1 Dyadic martingales

Before we define dyadic martingales, we discuss the meaning of the word martingale. Originally, a martingale was a strategy for betting in which you double your bet every time you lose. Consider a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails. The strategy is that the gambler doubles his bet every time he loses and continues the process, so that the first win recovers all previous losses plus a profit equal to the original stake. This process of betting can be represented by a sequence of functions which is an example of a dyadic martingale. Since a gambler with infinite wealth will, almost surely, eventually flip heads, the martingale betting strategy can be seen as a sure thing. Of course, no gambler in fact possesses infinite wealth, and the exponential growth of the bets will eventually bankrupt those who choose to use the martingale strategy.

A dyadic interval of the unit interval $[0,1)$ is of the form
\[ Q_{n,j} = \left[ \frac{j}{2^n}, \frac{j+1}{2^n} \right) \quad \text{for } n, j \in \mathbb{Z}. \]
We reserve the symbol $Q_n$ to denote a generic dyadic interval of length $2^{-n}$. Let $\mathcal{F}_n$ denote

the $\sigma$-algebra generated by the dyadic intervals of the form $\left[\frac{j}{2^n}, \frac{j+1}{2^n}\right)$ on $[0,1)$, and let $E(f_{n+1} \mid \mathcal{F}_n)$ denote the conditional expectation of $f_{n+1}$ with respect to $\mathcal{F}_n$, defined by
\[ E(f_{n+1} \mid \mathcal{F}_n)(x) = \frac{1}{|Q_n|} \int_{Q_n} f_{n+1}(y)\,dy, \qquad x \in Q_n. \]

Definition 1 (Dyadic martingales). A dyadic martingale is a sequence of integrable functions $\{f_n\}_{n=0}^{\infty}$, with $f_n : [0,1) \to \mathbb{R}$, such that for every $n$, $f_n$ is $\mathcal{F}_n$-measurable and $E(f_{n+1} \mid \mathcal{F}_n) = f_n$ for all $n \ge 0$. The sequence $\{f_n\}_{n=0}^{\infty}$ is called a dyadic submartingale (resp. supermartingale) if we replace $=$ by $\ge$ (resp. $\le$) in the expectation condition.

Here $\mathcal{F}_0 = \{[0,1), \phi\}$, $\mathcal{F}_1 = \{[0,1), \phi, [0,1/2), [1/2,1)\}$, and so on. That $f_n$ is measurable with respect to $\mathcal{F}_n$ means that for all $a \in \mathbb{R}$, the set $\{x : f_n(x) > a\}$ belongs to $\mathcal{F}_n$. Consequently, the function $f_n$ is constant on each of the $n$th generation dyadic intervals $Q_n$. The expectation condition tells us that $f_n$ is the average of $f_{n+1}$ on $Q_n$. Moreover, the existence of the conditional expectation can be justified by the Radon-Nikodym theorem.

If we think of $\{f_n\}_{n=0}^{\infty}$ as the fortune of a gambler at the instant $n$ of a game, then the first condition simply says the trivial fact that the result of the game totally determines the state of the fortune at any instant. The second condition expresses that the game is fair, in the sense that the expected fortune after any trial must be the same as the fortune before the trial.

1.2 Notation

For a dyadic martingale we have the following standard associated functions:

(i) Maximal function: $f_m^* = \sup_{k \le m} |f_k|$, $\; f^* = \sup_{k < \infty} |f_k|$.

(ii) Martingale difference sequence: $\{d_k\}$, where $d_k(x) = f_k(x) - f_{k-1}(x)$.

(iii) Martingale square function (or quadratic characteristic):
\[ S_n^2 f(x) = (S_n f(x))^2 = \sum_{k=1}^{n} d_k^2(x). \]

(iv) Martingale square function and tail square function:
\[ S^2 f(x) = (Sf(x))^2 = \sum_{k=1}^{\infty} d_k^2(x), \qquad \mathcal{S}_n^2 f(x) = (\mathcal{S}_n f(x))^2 = \sum_{k=n+1}^{\infty} d_k^2(x). \]

We note that $\int_0^1 d_n(x)\,dx = 0$. To see this, let $Q_{n-1,j}$ be a dyadic interval of length $2^{-(n-1)}$. Then we have
\[ \int_0^1 d_n(x)\,dx = \sum_{j=0}^{2^{n-1}-1} \int_{Q_{n-1,j}} d_n(x)\,dx. \]
Using the fact that $f_{n-1}$ is constant on $Q_{n-1,j}$, we have
\[ \int_0^1 d_n(x)\,dx = \sum_{j=0}^{2^{n-1}-1} \int_{Q_{n-1,j}} [f_n(x) - f_{n-1}(x)]\,dx = \sum_{j=0}^{2^{n-1}-1} \left[ \int_{Q_{n-1,j}} f_n(x)\,dx - f_{n-1}(x)\,|Q_{n-1,j}| \right] = \sum_{j=0}^{2^{n-1}-1} \left[ f_{n-1}(x)\,|Q_{n-1,j}| - f_{n-1}(x)\,|Q_{n-1,j}| \right] = 0, \]
where the expectation condition was used in the last step. Also we have $f_n(x) = \sum_{k=1}^{n} d_k(x) + f_0$.

The martingale square function is a local version of the variance and can also be understood as a discrete counterpart of the area function in harmonic analysis. These functions play an important role in the study of the asymptotic behavior of dyadic martingales; we will see that the asymptotic behavior of a dyadic martingale is governed by the size of its quadratic variation. From the definition, we note that for any $x, y \in Q_n$ we have $S_n^2 f(x) = S_n^2 f(y)$, but the martingale tail square function $\mathcal{S}_n f(x)$ may not be equal to $\mathcal{S}_n f(y)$.
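As a concrete check of the expectation condition and of the identity $\int_0^1 d_n(x)\,dx = 0$, the following sketch evaluates a Rademacher-type martingale $f_n = \sum_{k \le n} a_k r_k$ (built from the Rademacher functions of Example 1 below) on a fine dyadic grid; the coefficients $a_k$ are arbitrary illustrative choices:

```python
import math

def r(k, x):
    # Rademacher function r_k(x) = sgn(sin(2^k pi x)), taking sgn(0) = 1
    return 1.0 if math.sin((2 ** k) * math.pi * x) >= 0 else -1.0

a = [0.5, -1.0, 2.0, 0.25]                 # illustrative coefficients a_k
LEVELS = len(a)
N = 2 ** 10                                 # midpoints of a fine dyadic grid
xs = [(j + 0.5) / N for j in range(N)]

def f(n, x):                                # f_n = sum_{k<=n} a_k r_k, with f_0 = 0
    return sum(a[k - 1] * r(k, x) for k in range(1, n + 1))

for n in range(LEVELS):
    # E(f_{n+1} | F_n): average f_{n+1} over each dyadic interval Q_{n,j}
    pts = N // (2 ** n)                     # grid points per interval of length 2^{-n}
    for j in range(2 ** n):
        block = xs[j * pts:(j + 1) * pts]
        avg = sum(f(n + 1, x) for x in block) / pts
        assert abs(avg - f(n, block[0])) < 1e-9   # the average equals f_n on Q_{n,j}
    # consequently each difference d_{n+1} = f_{n+1} - f_n integrates to zero
    d_int = sum(f(n + 1, x) - f(n, x) for x in xs) / N
    assert abs(d_int) < 1e-9

# the differences are also orthogonal: the integral of r_j * r_k vanishes for j != k
for j in range(1, LEVELS + 1):
    for k in range(j + 1, LEVELS + 1):
        inner = sum(r(j, x) * r(k, x) for x in xs) / N
        assert abs(inner) < 1e-9
```

The grid of interval midpoints makes the averages exact, since each $f_n$ is constant on the $n$th generation dyadic intervals.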

1.3 Examples

Here we give some examples of dyadic martingales.

Example 1. The functions $\{r_k\}_{k=1}^{\infty}$ defined on $[0,1)$ by $r_k(x) = \operatorname{sgn}(\sin 2^k \pi x)$, where $\operatorname{sgn}$ is given by $\operatorname{sgn}(x) = 1$ for $x \ge 0$ and $\operatorname{sgn}(x) = -1$ for $x < 0$, are called the Rademacher functions.

[Figure 1.1: Rademacher functions $r_1(x)$, $r_2(x)$, $r_3(x)$.]

Here $r_k$ alternates between $+1$ and $-1$ on the dyadic intervals of generation $k$, as shown in Figure 1.1. Moreover, the $r_k$ are independent, identically distributed random variables with mean zero and variance one. Define $f_n = \sum_{k=1}^{n} a_k r_k$, where $\{a_k\}$ is a sequence of real numbers. Then $\{f_n\}$ is a dyadic martingale.

Example 2. Let $f \in L^1[0,1)$ and let $Q_n$ be a dyadic interval of length $2^{-n}$ on $[0,1)$. Define
\[ f_n(x) = \frac{1}{|Q_n|} \int_{Q_n} f(y)\,dy, \qquad x \in Q_n. \]
Then $\{f_n\}_{n=1}^{\infty}$ is a dyadic martingale on $[0,1)$.

1.4 Useful results

In this section, we state some useful results which will be used frequently in later chapters.

Lemma 2. If $\{E_n\}$ is a sequence of sets in a $\sigma$-algebra $\mathcal{F}$ with the property that $E_n \subseteq E_{n+1}$ for all $n$ and $E = \bigcup_{n=1}^{\infty} E_n$, then $\lim_{n\to\infty} |E_n| = |E|$.

Proof: Define $\{A_n\}$ as follows: $A_1 = E_1$ and $A_n = E_n \setminus E_{n-1}$ for $n = 2, 3, \dots$. Clearly $A_n \in \mathcal{F}$ and $A_i \cap A_j = \phi$ for all $i \ne j$. Moreover, we have $E_n = A_1 \cup A_2 \cup \dots \cup A_n$ and $E = \bigcup_{i=1}^{\infty} A_i$. Using the disjointness of the $A_i$ we have
\[ |E_n| = \left| \bigcup_{i=1}^{n} A_i \right| = \sum_{i=1}^{n} |A_i| \quad \text{and} \quad |E| = \sum_{i=1}^{\infty} |A_i|. \]
Hence we have
\[ \lim_{n\to\infty} |E_n| = \lim_{n\to\infty} \sum_{i=1}^{n} |A_i| = \sum_{i=1}^{\infty} |A_i| = |E|. \]

Theorem 3. For a dyadic martingale, we have
\[ \{x : f^*(x) < \infty\} \overset{a.s.}{=} \{x : Sf(x) < \infty\} \overset{a.s.}{=} \{x : \lim_{n\to\infty} f_n(x) \text{ exists}\}, \]
where $\overset{a.s.}{=}$ means that the sets are equal up to sets of measure zero.

Proof: For the proof, see [].

Lemma 4 (Borel-Cantelli). If $\{A_n\}$ is a sequence of events and $\sum_{n=1}^{\infty} P(A_n) < \infty$, then $P(A_n \text{ i.o.}) = 0$.

Proof: We first note that
\[ \{A_n \text{ i.o.}\} = \limsup_{n\to\infty} A_n = \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k \]
is the event which occurs if and only if infinitely many of the events $A_n$ occur. Then
\[ P(A_n \text{ i.o.}) = P\left( \limsup_{n\to\infty} A_n \right) = P\left( \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k \right) = \lim_{n\to\infty} P\left( \bigcup_{k=n}^{\infty} A_k \right) \le \lim_{n\to\infty} \sum_{k=n}^{\infty} P(A_k) = 0. \]

Remark. The Borel-Cantelli lemma can also be stated as follows: let $\{E_k\}$ be a sequence of measurable sets in $X$ such that $\sum_{k=1}^{\infty} \mu(E_k) < \infty$. Then almost all $x \in X$ lie in at most finitely many of the sets $E_k$.

Lemma 5 (Borel-Cantelli, general version). If $\{A_n\}$ is a sequence of independent events and $\sum_{n=1}^{\infty} P(A_n) = \infty$, then $P(A_n \text{ i.o.}) = 1$.

Proof: We have
\[ 1 - P(A_n \text{ i.o.}) = P(\{A_n \text{ i.o.}\}^c) = P\left( \left( \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k \right)^c \right) = P\left( \bigcup_{n=1}^{\infty} \bigcap_{k=n}^{\infty} A_k^c \right) = \lim_{n\to\infty} P\left( \bigcap_{k=n}^{\infty} A_k^c \right). \]
Clearly $\{A_k^c\}$ is a sequence of independent events, as $\{A_k\}$ is independent. Then, using $1 - t \le e^{-t}$,
\[ P\left( \bigcap_{k=n}^{\infty} A_k^c \right) = \lim_{N\to\infty} P\left( \bigcap_{k=n}^{N} A_k^c \right) = \lim_{N\to\infty} \prod_{k=n}^{N} P(A_k^c) = \lim_{N\to\infty} \prod_{k=n}^{N} [1 - P(A_k)] \le \lim_{N\to\infty} \prod_{k=n}^{N} \exp(-P(A_k)) = \lim_{N\to\infty} \exp\left( -\sum_{k=n}^{N} P(A_k) \right) = \exp\left( -\sum_{k=n}^{\infty} P(A_k) \right) = 0. \]
Hence we have $P(\{A_n \text{ i.o.}\}^c) = 0$; consequently, $P(A_n \text{ i.o.}) = 1$.
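The two Borel-Cantelli lemmas can be illustrated by simulation (a sketch only; the probabilities $1/k^2$ and $1/k$ and all sample sizes below are arbitrary choices):

```python
import random

random.seed(7)

def count_occurrences(prob, n_events):
    # simulate independent events A_1..A_n with P(A_k) = prob(k); count how many occur
    return sum(1 for k in range(1, n_events + 1) if random.random() < prob(k))

TRIALS, N_EVENTS = 200, 10_000

# summable case: sum 1/k^2 < infinity, so only finitely many A_k occur (Lemma 4)
summable = [count_occurrences(lambda k: 1.0 / k ** 2, N_EVENTS) for _ in range(TRIALS)]

# divergent case: sum 1/k = infinity and the events are independent, so
# infinitely many A_k occur (Lemma 5); the count keeps growing with N_EVENTS
divergent = [count_occurrences(lambda k: 1.0 / k, N_EVENTS) for _ in range(TRIALS)]

avg_summable = sum(summable) / TRIALS      # stays near sum 1/k^2, about 1.64
avg_divergent = sum(divergent) / TRIALS    # grows like the harmonic sum log(N_EVENTS)
assert avg_summable < 5.0
assert avg_divergent > avg_summable
```

In the summable case the expected total number of occurrences is bounded by $\sum 1/k^2$, so the simulated counts stay small no matter how many events are simulated, while in the divergent independent case the counts grow without bound.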

Lemma 6 (Lévy's inequality). Let $X_1, X_2, \dots, X_n$ be independent and symmetric random variables, and let $S_n = X_1 + X_2 + \dots + X_n$. Then for all $\lambda > 0$,
\[ P\left( \max_{1 \le k \le n} S_k \ge \lambda \right) \le 2 P(S_n \ge \lambda) \quad \text{and} \quad P\left( \max_{1 \le k \le n} X_k \ge \lambda \right) \le 2 P(S_n \ge \lambda). \]

Proof: Define
\[ A_k = \left\{ x : \max_{1 \le j < k} S_j(x) < \lambda \le S_k(x) \right\} \]
for $1 \le k \le n$; on $A_k$, $k$ is the smallest index for which $S_k(x) \ge \lambda$. The $A_k$ are disjoint and their union is $\{\max_{1\le k\le n} S_k \ge \lambda\}$. Then, using the fact that $X_1, X_2, \dots, X_k$ are independent from $X_{k+1}, X_{k+2}, \dots, X_n$, we have
\[ P\{x : S_n(x) \ge \lambda\} \ge \sum_{k=1}^{n} P(A_k \cap \{x : S_n(x) \ge \lambda\}) \ge \sum_{k=1}^{n} P(A_k \cap \{x : S_n(x) \ge S_k(x)\}) = \sum_{k=1}^{n} P(A_k)\, P(S_n - S_k \ge 0) \ge \sum_{k=1}^{n} P(A_k)/2 = \frac{1}{2}\, P\left( \max_{1 \le k \le n} S_k \ge \lambda \right), \]
using symmetry in the last inequality, since $S_n - S_k$ is a symmetric random variable and so $P(S_n - S_k \ge 0) \ge 1/2$. This gives the first result. Again we define
\[ A_k = \left\{ x : \max_{1 \le j < k} X_j(x) < \lambda \le X_k(x) \right\} \]

for $1 \le k \le n$. Fix $k$, and let $S_n^0 = 2X_k - S_n$. Then on $A_k$ we have $2\lambda \le 2X_k = S_n + S_n^0$, so either $S_n \ge \lambda$ or $S_n^0 \ge \lambda$. Moreover, by symmetry (reflecting $X_k$ leaves the joint distribution and the event $A_k$ unchanged),
\[ P(A_k) \le P(A_k \cap \{S_n \ge \lambda\}) + P(A_k \cap \{S_n^0 \ge \lambda\}) = 2\, P(A_k \cap \{S_n \ge \lambda\}). \]
Then, summing over $k$, with $A = \bigcup_k A_k$,
\[ P(A) \le 2\, P(A \cap \{S_n \ge \lambda\}) \le 2\, P(S_n \ge \lambda). \]
Thus
\[ P\left( \max_{1 \le k \le n} X_k \ge \lambda \right) \le 2\, P(S_n \ge \lambda). \]

Lemma 7. For any $\lambda > 0$,
\[ \frac{\lambda}{1 + \lambda^2}\, e^{-\lambda^2/2} \le \int_{\lambda}^{\infty} e^{-u^2/2}\,du \le \frac{1}{\lambda}\, e^{-\lambda^2/2}. \]
Proof: For the proof, see [6].

Theorem 8 (Central limit theorem). Let $X_1, X_2, \dots, X_n$ be a sequence of independent identically distributed random variables with finite mean $\mu$ and variance $\sigma^2$. Define $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$. Then $\sqrt{n}\,(\bar{X}_n - \mu)/\sigma$ converges in distribution to a standard normal distribution.
Proof: For the proof, see page 36 of [3].

Theorem 9 (Hoeffding, 1963). Let $Y_1, Y_2, \dots, Y_n$ be independent random variables with zero means and bounded ranges $a_i \le Y_i \le b_i$, $1 \le i \le n$. Then for each $\eta > 0$,
\[ P\left( \left| \sum_{i=1}^{n} Y_i \right| \ge \eta \right) \le 2 \exp\left( \frac{-2\eta^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right). \]
Proof: For the proof, see [9].
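The Gaussian tail bounds of Lemma 7 can be checked numerically, since the integral has the closed form $\sqrt{\pi/2}\,\operatorname{erfc}(\lambda/\sqrt{2})$; the test values of $\lambda$ below are arbitrary:

```python
import math

def gaussian_tail(lam):
    # integral_lam^infinity e^{-u^2/2} du = sqrt(pi/2) * erfc(lam / sqrt(2))
    return math.sqrt(math.pi / 2.0) * math.erfc(lam / math.sqrt(2.0))

for lam in [0.5, 1.0, 2.0, 3.0, 5.0]:
    lower = (lam / (1.0 + lam ** 2)) * math.exp(-lam ** 2 / 2.0)
    upper = (1.0 / lam) * math.exp(-lam ** 2 / 2.0)
    tail = gaussian_tail(lam)
    # both bounds of Lemma 7 hold, and they tighten as lam grows
    assert lower <= tail <= upper
```

For large $\lambda$ the two bounds pinch together, which is what makes the lemma useful in the Borel-Cantelli estimates of later chapters.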

Theorem 10. Let $\{X_n, \mathcal{F}_n\}$ be a submartingale and let $\varphi$ be an increasing convex function defined on $\mathbb{R}$. If $\varphi(X_n)$ is integrable for every $n$, then $\{\varphi(X_n), \mathcal{F}_n\}$ is also a submartingale.
Proof: For the proof, see [6].

Lemma 11. If the $X_i$ are independent random variables with the property $E(X_i) = 0$, then $S_n = \sum_{i=1}^{n} X_i$ is a martingale and $|S_n|$ is a submartingale.
Proof: For the proof, see [6].

Theorem 12 (Doob's maximal inequality). If $\{X_n, \beta_n\}$ is a submartingale, then for any $M > 0$,
\[ P\left( \max_{1 \le k \le n} X_k \ge M \right) \le \frac{1}{M}\, E(X_n^+) = \frac{1}{M}\, E(\max(X_n, 0)). \]
Proof: For the proof, see [6].

1.5 Origin of the law of the iterated logarithm

Before we discuss the origin of the law of the iterated logarithm, we first give the definition of normal numbers.

Definition 13 (Normal numbers). Let us suppose that $N$ takes values in $[0,1]$ and consider its decimal and dyadic expansions
\[ N = \sum_{n=1}^{\infty} \frac{X_n}{10^n}, \;\; X_n \in \{0, 1, 2, \dots, 9\}, \qquad N = \sum_{n=1}^{\infty} \frac{X_n}{2^n}, \;\; X_n \in \{0, 1\}. \]
Now for a fixed $k$, $0 \le k \le 9$, let $\omega_n(k, N)$ denote the number of digits among the first $n$ digits of $N$ that are equal to $k$. Then $\frac{\omega_n(k, N)}{n}$ is the relative frequency of the digit $k$ in the first $n$ places, and the limit $\lim_{n\to\infty} \frac{\omega_n(k, N)}{n}$, when it exists, is the frequency of $k$ in $N$. The number $N$ is called normal to base $10$ if and only if this limit exists for each $k$ and is equal to $\frac{1}{10}$. Similarly, the number $N$ is called normal to base $2$ if and only if the limit exists and is equal to $\frac{1}{2}$.

The first law of the iterated logarithm (LIL), introduced in probability theory, had its origin in attempts to perfect Borel's theorem on normal numbers. Precisely, the first

LIL was introduced to obtain the exact rate of convergence in Borel's theorem. Many mathematicians obtained different rates of convergence, but Khintchine was the one who obtained the exact rate. In order to describe Khintchine's result, we state a simple form of Borel's theorem on normal numbers.

Theorem 14 (Borel). If $N_n(t)$ denotes the number of occurrences of the digit $1$ in the first $n$ places of the binary expansion of a number $t \in [0,1)$, then
\[ \lim_{n\to\infty} \frac{N_n(t)}{n} = \frac{1}{2} \]
for a.e. $t$ in Lebesgue measure.

So by Borel's theorem we can conclude that a.e. $t \in [0,1)$ is a normal number. Here $n/2$ is the expected number of ones, and the theorem gives the limit of the relative frequency of the number of ones. But what can be said about the deviation $N_n(t) - n/2$? In order to answer this, we consider a special case as follows. Suppose that $X_n$ takes the values $\pm 1$ with probabilities $\frac{1}{2}$, $\frac{1}{2}$ (a coin tossing model). We consider the unit interval with Lebesgue measure as a probability space. Then we can write $X_n(t) = 2 b_n(t) - 1$, where $b_n$ is the $n$th digit in the binary expansion of $t \in [0,1)$. Let $S_n = \sum_{i=1}^{n} X_i$. In this context the following results were obtained:

Hausdorff (1913) obtained $S_n = O(n^{1/2+\varepsilon})$ a.e. for any $\varepsilon > 0$.

Hardy and Littlewood (1914) obtained $S_n = O(\sqrt{n \log n})$ a.e.

Khintchine (1923) obtained $S_n = O(\sqrt{n \log \log n})$ a.e.

In 1924, Khintchine obtained the definitive answer to the size of the deviation in Borel's theorem, and his result is given by:

Theorem 15 (Khintchine). If $N_n(t)$ denotes the number of occurrences of the digit $1$ in the first $n$ places of the binary expansion of a number $t \in [0,1)$, then for a.e. $t$ we have
\[ \limsup_{n\to\infty} \frac{S_n}{\sqrt{2n \log \log n}} = 1. \]
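Both theorems can be illustrated numerically. This is only a sketch: a finite simulation can suggest, but never verify, an almost-everywhere limit, and the sample sizes and thresholds below are arbitrary choices. It uses the identity $S_n = 2N_n(t) - n$ discussed below:

```python
import math
import random

random.seed(5)

def binary_digits(t, n):
    # first n digits of the binary expansion of t in [0,1); exact for floats
    digits = []
    for _ in range(n):
        t *= 2
        d = int(t)
        digits.append(d)
        t -= d
    return digits

n = 50                      # stay within double-precision mantissa length
ones_counts = []
for _ in range(20):
    # a pseudo-random t; by Borel's theorem almost every t behaves this way
    ones_counts.append(sum(binary_digits(random.random(), n)))

# Borel: N_n(t)/n -> 1/2, so the average frequency over samples is near 1/2
avg_freq = sum(ones_counts) / (20 * n)
assert abs(avg_freq - 0.5) < 0.1

# Khintchine: S_n = 2 N_n - n is of order sqrt(2 n log log n), far below n
for N_n in ones_counts:
    S_n = 2 * N_n - n
    assert abs(S_n) <= 3.0 * math.sqrt(2 * n * math.log(math.log(n)))
```

At $n = 50$ the Khintchine scale $\sqrt{2n\log\log n} \approx 11.7$ is already much smaller than the trivial bound $|S_n| \le n = 50$.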

This result is popularly known as Khintchine's law of the iterated logarithm (LIL). We note that
\[ S_n(t) = \sum_{i=1}^{n} X_i(t) = \sum_{i=1}^{n} (2 b_i(t) - 1) = 2 N_n(t) - n. \]
Then, substituting this into Khintchine's theorem, we have
\[ \limsup_{n\to\infty} \frac{2 N_n(t) - n}{\sqrt{2n \log \log n}} = 1, \quad \text{i.e.,} \quad \limsup_{n\to\infty} \frac{N_n(t) - \frac{n}{2}}{\frac{1}{2}\sqrt{2n \log \log n}} = 1. \]
So Khintchine's LIL provides the size of the deviation from the expected mean, and the deviation is of order $\sqrt{n \log \log n}$. Because of the factor $\log \log n$ (an iteration of $\log$) in the deviation, Khintchine's law is popularly known as the law of the iterated logarithm.

Borel's theorem immediately follows from Khintchine's theorem. Indeed, since
\[ \limsup_{n\to\infty} \frac{S_n(t)}{\sqrt{2n \log \log n}} = 1, \]
for a.e. $t$ and all sufficiently large $n$ we have $|S_n(t)| < 2\sqrt{2n \log \log n}$, i.e., $|2 N_n(t) - n| < 2\sqrt{2n \log \log n}$. Hence
\[ \left| \frac{N_n(t)}{n} - \frac{1}{2} \right| < \frac{\sqrt{2n \log \log n}}{n}. \]
Then, taking the limit as $n \to \infty$, we have $\lim_{n\to\infty} \frac{\sqrt{2n \log \log n}}{n} = 0$, so $\lim_{n\to\infty} \frac{N_n(t)}{n} = \frac{1}{2}$.

We note that the results of Hausdorff and Hardy-Littlewood also imply the conclusion of Borel's theorem. For this we note that for all $n \ge 2$, $\log n \le n$, so that $\log \log n \le \log n$. Consequently we have $\sqrt{n \log \log n} \le \sqrt{n \log n}$, i.e., a bound of the form $|S_n| \le C\sqrt{n \log \log n}$ implies $|S_n| \le C\sqrt{n \log n}$.

Thus $S_n = O(\sqrt{n \log \log n})$ implies $S_n = O(\sqrt{n \log n})$. Khintchine's result on the rate of convergence is the first law of the iterated logarithm in the theory of probability. A few years later, the result of Khintchine was generalized by Kolmogorov to a wide class of sequences of independent random variables. We now state Kolmogorov's celebrated law of the iterated logarithm.

Theorem 16 (Kolmogorov, 1929). Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of independent random variables with zero means and variances one. Suppose that $|X_n| \le \varepsilon_n \sqrt{\frac{n}{\log \log n}}$ for some constants $\varepsilon_n \to 0$. Then for almost every $\omega$,
\[ \limsup_{n\to\infty} \frac{S_n(\omega)}{\sqrt{2n \log \log n}} = 1, \qquad \text{where } S_n = \sum_{i=1}^{n} X_i. \]

We remark that in the above theorem, the mean of $S_n$ is zero and $\sqrt{n}$ is the standard deviation of $S_n$. So Kolmogorov's LIL provides the size of the oscillation of the partial sums of independent random variables about the expected mean, and the size is approximated in terms of the standard deviation.

Next, we apply Kolmogorov's LIL to random walks to estimate the size of the walk in the long run. Consider the Rademacher functions $\{r_k\}_{k=1}^{\infty}$, and set
\[ f_1(x) = r_1(x), \quad f_2(x) = r_1(x) + r_2(x), \quad \dots, \quad f_n(x) = r_1(x) + r_2(x) + \dots + r_n(x). \]
Here $\{f_n(x)\}$ defines a random walk: we move one unit to the right if $r_i(x) = 1$ and one unit to the left if $r_i(x) = -1$. Clearly, $\{f_n\}$ satisfies all the assumptions of

Kolmogorov's theorem, so by Kolmogorov's LIL we have
\[ \limsup_{n\to\infty} \frac{f_n(x)}{\sqrt{2n \log \log n}} = 1. \]
For $\varepsilon > 0$, this gives us $|f_n(x)| \le (1+\varepsilon)\sqrt{2n \log \log n}$ for $n$ large. The worst bound for the function $f_n(x)$ is $n$, i.e., $|f_n(x)| \le n$; thus Kolmogorov's LIL gives the sharper asymptotic estimate $|f_n(x)| \le (1+\varepsilon)\sqrt{2n \log \log n}$. For sufficiently large $n$, the factor $\sqrt{2n \log \log n}$ is much smaller than $n$. This shows that in the long run the walker will fluctuate between $-\sqrt{2n \log \log n}$ and $\sqrt{2n \log \log n}$.

Over the years many efforts have been made to obtain analogues of Kolmogorov's LIL in various settings in analysis. Some of the existing settings are lacunary trigonometric series, martingales, and harmonic functions, to name just a few. But the first LIL in analysis was obtained in the setting of lacunary trigonometric series.

Definition 17 (Lacunary series). A real trigonometric series with partial sums
\[ S_m(x) = \sum_{k=1}^{m} (a_k \cos n_k x + b_k \sin n_k x) \]
for which $\frac{n_{k+1}}{n_k} > q > 1$ is called a $q$-lacunary series.

In the definition, the condition $\frac{n_{k+1}}{n_k} > q > 1$ is called the gap condition; it states that the sequence $\{n_k\}$ increases at least as rapidly as a geometric progression whose common ratio is bigger than $1$. Lacunary series exhibit many of the properties of partial sums of independent random variables; in modern probability theory, lacunary series are called weakly dependent random variables. The law of the iterated logarithm in the setting of lacunary series was first given by Salem and Zygmund. This result of Salem and Zygmund is the first law of the iterated logarithm in analysis [].

Theorem 18 (R. Salem and A. Zygmund, 1950). Suppose that $S_m$ is a $q$-lacunary series and the $n_k$ are positive integers. Set $B_m^2 = \frac{1}{2}\sum_{k=1}^{m} (a_k^2 + b_k^2)$ and $M_m = \max_{1 \le k \le m} \sqrt{a_k^2 + b_k^2}$. Suppose also that $B_m \to \infty$ as $m \to \infty$ and that $S_m$ satisfies the Kolmogorov-type condition
\[ M_m \le K_m \frac{B_m}{\sqrt{\log \log (e^e + B_m^2)}} \]
for some sequence of numbers $K_m \to 0$. Then
\[ \limsup_{m\to\infty} \frac{S_m}{\sqrt{2 B_m^2 \log \log B_m}} \le 1 \]

for almost every $x$ in $\mathbb{T}$, the unit circle.

Note that $\frac{1}{2\pi}\int_{-\pi}^{\pi} S_m(x)\,dx = 0$; this means that the mean of the partial sums is zero. Again,
\[ \sigma^2 = \operatorname{Var}(S_m(x)) = \frac{1}{2\pi}\int_{-\pi}^{\pi} S_m^2(x)\,dx - \left[ \frac{1}{2\pi}\int_{-\pi}^{\pi} S_m(x)\,dx \right]^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[ \sum_{k=1}^{m} (a_k \cos n_k x + b_k \sin n_k x) \right]^2 dx = \frac{1}{2\pi}\sum_{k=1}^{m} \int_{-\pi}^{\pi} (a_k^2 \cos^2 n_k x + b_k^2 \sin^2 n_k x)\,dx = \frac{1}{2}\sum_{k=1}^{m} (a_k^2 + b_k^2), \]
since the cross terms integrate to zero. Hence
\[ \sigma^2 = B_m^2 = \frac{1}{2}\sum_{k=1}^{m} (a_k^2 + b_k^2). \]
This shows that $B_m^2$ is the variance of the partial sums. So the theorem gives an upper bound for the size of the oscillation of the partial sums about the expected mean, and the order of the size depends on the standard deviation.

Salem and Zygmund assumed the $n_k$ to be positive integers, and they obtained only the upper bound. Erdős and Gál were the first to make progress towards the reverse inequality; they obtained the following result for a particular form of lacunary series.

Theorem 19 (Erdős and Gál, 1955). Suppose $S_m = \sum_{k=1}^{m} \exp(i n_k x)$ is a $q$-lacunary series and the $n_k$ are integers. Then for almost every $x$ in the unit circle,
\[ \limsup_{m\to\infty} \frac{|S_m|}{\sqrt{m \log \log m}} = 1. \]

Later, M. Weiss gave the complete analogue of Kolmogorov's LIL in this setting; this result was part of her Ph.D. thesis.

Theorem 20 (M. Weiss, 1959). Suppose $S_m = \sum_{k=1}^{m} (a_k \cos n_k x + b_k \sin n_k x)$ is a $q$-lacunary series. Set $B_m^2 = \frac{1}{2}\sum_{k=1}^{m} (a_k^2 + b_k^2)$ and $M_m = \max_{1 \le k \le m} \sqrt{a_k^2 + b_k^2}$. Suppose

also that $B_m \to \infty$ as $m \to \infty$ and that $S_m$ satisfies the Kolmogorov-type condition $M_m \le K_m B_m / \sqrt{\log\log(e^e + B_m^2)}$ for some sequence of numbers $K_m \to 0$. Then
\[ \limsup_{m\to\infty} \frac{S_m}{\sqrt{2 B_m^2 \log \log B_m}} = 1 \]
for almost every $x$ in the unit circle.

There is another type of LIL in the case of independent random variables, introduced by Kai Lai Chung.

Theorem 21 (Chung, 1948). Let $\{X_n ; n \ge 1\}$ be a sequence of independent identically distributed random variables with common distribution $F$, with zero mean and variance $\sigma^2$, and with finite third moment $E|X_1|^3 < \infty$. Then with probability $1$,
\[ \liminf_{n\to\infty} \sqrt{\frac{\log \log n}{n}}\, \max_{1 \le j \le n} |S_j| = \frac{\sigma \pi}{\sqrt{8}}. \]

Next, we discuss another law of the iterated logarithm introduced by Salem and Zygmund. In this LIL, they considered tail sums of the lacunary series instead of $n$th partial sums.

Theorem 22 (R. Salem and A. Zygmund, 1950). Suppose a lacunary series $\tilde{S}_N = \sum_{k=N}^{\infty} (a_k \cos n_k x + b_k \sin n_k x)$, where $c_k^2 = a_k^2 + b_k^2$, satisfies $\sum_{k=1}^{\infty} c_k^2 < \infty$. Define $\tilde{B}_N^2 = \frac{1}{2}\sum_{k=N}^{\infty} c_k^2$ and $M_N = \max_{k \ge N} c_k$. Suppose that $\tilde{B}_N^2 < \infty$ and that $M_N \le K_N \tilde{B}_N / \sqrt{\log\log \tilde{B}_N^{-1}}$ for some sequence of numbers $K_N \to 0$ as $N \to \infty$. Then
\[ \limsup_{N\to\infty} \frac{\tilde{S}_N}{\sqrt{2 \tilde{B}_N^2 \log \log \tilde{B}_N^{-1}}} \le 1 \]
for almost every $x$ in the unit circle.

This result is popularly known as the tail law of the iterated logarithm. We remark that the condition $\sum_{k=1}^{\infty} c_k^2 < \infty$ says that the given lacunary series converges a.e., and
\[ \tilde{S}_N = \sum_{k=1}^{\infty} (a_k \cos n_k x + b_k \sin n_k x) - \sum_{k=1}^{N-1} (a_k \cos n_k x + b_k \sin n_k x). \]
This shows that the tail LIL gives the rate of convergence of the partial sums of a lacunary series to its limit function. Furthermore, the rate of convergence depends upon the standard deviation of the tail sums.
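The variance identity $B_m^2 = \frac{1}{2}\sum (a_k^2 + b_k^2)$ and the decay of the tail quantity $\tilde{B}_N^2$ can be illustrated numerically; the frequencies $n_k = 2^k$, the coefficients, and the tail sequence $c_k = 2^{-k}$ below are arbitrary illustrative choices:

```python
import math

n_k = [2 ** k for k in range(1, 7)]        # gap condition: n_{k+1}/n_k = 2 > q for q < 2
assert all(n_k[i + 1] / n_k[i] > 1.5 for i in range(len(n_k) - 1))

a = [1.0, 0.5, 0.25, -2.0, 1.0, 0.125]     # illustrative coefficients a_k
b = [0.0, 1.0, -0.5, 0.5, 0.25, -1.5]      # illustrative coefficients b_k

def S_m(x):
    return sum(a[k] * math.cos(n_k[k] * x) + b[k] * math.sin(n_k[k] * x)
               for k in range(len(n_k)))

# variance of the partial sum over the circle equals B_m^2 = (1/2) sum (a_k^2 + b_k^2);
# an equispaced Riemann sum is exact for trigonometric polynomials
M = 1 << 14
var = sum(S_m(2 * math.pi * j / M) ** 2 for j in range(M)) / M
B2 = 0.5 * sum(a[k] ** 2 + b[k] ** 2 for k in range(len(n_k)))
assert abs(var - B2) < 1e-8

# tail variance for a convergent series with c_k = 2^{-k}:
# B_N^2 = (1/2) sum_{k>=N} c_k^2 = (2/3) 4^{-N}, which decreases to zero
def tail_B2(N, terms=200):
    return 0.5 * sum(4.0 ** -k for k in range(N, N + terms))

for N in range(1, 8):
    assert abs(tail_B2(N) - (2.0 / 3.0) * 4.0 ** -N) < 1e-12
    assert tail_B2(N + 1) < tail_B2(N)
```

Since $\tilde{B}_N \to 0$ here, the tail LIL's iterated logarithm is taken of $\tilde{B}_N^{-1}$, which tends to infinity.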

1.6 Organization of the thesis

The purpose of this thesis is to obtain analogues of Salem-Zygmund's tail law of the iterated logarithm in various contexts in analysis: Rademacher functions, dyadic martingales, independent random variables, and lacunary trigonometric series. We first establish the tail law of the iterated logarithm for sums of Rademacher functions, which are nicely behaved dyadic martingales. Employing the ideas from the Rademacher case, we then derive the tail law of the iterated logarithm for dyadic martingales, and then obtain the tail law of the iterated logarithm for independent random variables and lacunary series.

The thesis is organized as follows. In Chapter 2, we derive some standard inequalities which will be used in later chapters, and we derive the martingale analogue of Kolmogorov's law of the iterated logarithm. Chapter 3 begins with the derivation of the tail law of the iterated logarithm for sums of Rademacher functions; we then derive the tail LIL for dyadic martingales and construct an example of a dyadic martingale which does not obey the tail LIL. In that chapter, we focus only on the upper bound in the LIL for these functions. In Chapter 4, we obtain a lower bound in the tail LIL for sums of Rademacher functions; we also introduce the tail law of the iterated logarithm for sums of independent random variables and obtain a lower bound for it. In Chapter 5, we obtain a lower bound in the tail law of the iterated logarithm for dyadic martingales, and finally, in Chapter 6, we obtain the lower bound in the tail law of the iterated logarithm for lacunary series introduced by Salem and Zygmund.

Chapter 2

Law of the iterated logarithm

In this chapter, we first derive two useful martingale inequalities and then obtain an analogue of Kolmogorov's law of the iterated logarithm in the case of dyadic martingales. The martingale analogue of Kolmogorov's law of the iterated logarithm was first derived by W. Stout, who obtained it using a probabilistic approach. We will derive it using a harmonic analysis approach.

2.1 Martingale inequalities

We first prove a lemma, called Rubin's lemma, which will be used in our martingale inequalities. The proof of this lemma can also be found in [0], [], and [4].

Lemma 23 (Rubin). For a dyadic martingale $\{f_n\}$ with $f_0 = 0$,
\[ \int_0^1 \exp\left( f_n(x) - \frac{1}{2} S_n^2 f(x) \right) dx \le 1. \]

Proof: We claim that
\[ g(n) = \int_0^1 \exp\left( \sum_{k=0}^{n} d_k(x) - \frac{1}{2}\sum_{k=0}^{n} d_k^2(x) \right) dx \]
is a decreasing function of $n$. Let $Q_{n,j}$ be an arbitrary $n$th generation dyadic interval. We

have $\sum_{k=0}^{n} d_k(x) = f_n(x)$, and $f_n$ is constant on $Q_{n,j}$. Using this we have
\[ g(n+1) = \sum_{j} \int_{Q_{n,j}} \exp\left( \sum_{k=0}^{n+1} d_k(x) - \frac{1}{2}\sum_{k=0}^{n+1} d_k^2(x) \right) dx = \sum_{j} \int_{Q_{n,j}} \exp\left( \sum_{k=0}^{n} d_k(x) - \frac{1}{2}\sum_{k=0}^{n} d_k^2(x) \right) \exp\left( d_{n+1}(x) - \frac{1}{2} d_{n+1}^2(x) \right) dx = \sum_{j} \exp\left( \sum_{k=0}^{n} d_k - \frac{1}{2}\sum_{k=0}^{n} d_k^2 \right) \int_{Q_{n,j}} \exp\left( d_{n+1}(x) - \frac{1}{2} d_{n+1}^2(x) \right) dx, \]
since the first factor is constant on $Q_{n,j}$. Let $Q_{n+1,j_1}$ and $Q_{n+1,j_2}$ be the dyadic subintervals of $Q_{n,j}$, and suppose $d_{n+1}$ takes the value $\alpha$ on $Q_{n+1,j_1}$. Then by the expectation condition, $d_{n+1}$ takes the value $-\alpha$ on $Q_{n+1,j_2}$. This gives
\[ \int_{Q_{n,j}} \exp\left( d_{n+1}(x) - \frac{1}{2} d_{n+1}^2(x) \right) dx = |Q_{n+1}| \left[ \exp\left( \alpha - \frac{\alpha^2}{2} \right) + \exp\left( -\alpha - \frac{\alpha^2}{2} \right) \right] = |Q_{n,j}|\, e^{-\alpha^2/2}\, \frac{e^{\alpha} + e^{-\alpha}}{2} = |Q_{n,j}|\, e^{-\alpha^2/2} \cosh\alpha. \]
Now, using the elementary fact that $\cosh t \le e^{t^2/2}$, we have
\[ g(n+1) \le \sum_{j} \exp\left( \sum_{k=0}^{n} d_k - \frac{1}{2}\sum_{k=0}^{n} d_k^2 \right) |Q_{n,j}| = \sum_{j} \int_{Q_{n,j}} \exp\left( \sum_{k=0}^{n} d_k(x) - \frac{1}{2}\sum_{k=0}^{n} d_k^2(x) \right) dx = g(n). \]
This proves the claim. Let $Q_{1,1}$ and $Q_{1,2}$ be the dyadic subintervals of $Q_0 = [0,1)$. Assume that $d_1$ takes the value $\alpha$ on $Q_{1,1}$, so that it takes the value $-\alpha$ on $Q_{1,2}$. Then

\[ g(1) = \int_0^1 \exp\left( d_1(x) - \frac{1}{2} d_1^2(x) \right) dx = \frac{1}{2}\exp\left( \alpha - \frac{\alpha^2}{2} \right) + \frac{1}{2}\exp\left( -\alpha - \frac{\alpha^2}{2} \right) = e^{-\alpha^2/2}\, \frac{e^{\alpha} + e^{-\alpha}}{2} = e^{-\alpha^2/2} \cosh\alpha \le e^{-\alpha^2/2}\, e^{\alpha^2/2} = 1. \]
Since $g(n)$ is decreasing and $g(1) \le 1$, we conclude
\[ \int_0^1 \exp\left( \sum_{k=0}^{n} d_k(x) - \frac{1}{2}\sum_{k=0}^{n} d_k^2(x) \right) dx \le 1, \]
hence
\[ \int_0^1 \exp\left( f_n(x) - \frac{1}{2} S_n^2 f(x) \right) dx \le 1. \]
This completes the proof of Rubin's lemma.

Note that if we rescale the sequence $\{f_n\}$ by $\lambda$, then the lemma gives
\[ \int_0^1 \exp\left( \lambda f_n(x) - \frac{\lambda^2}{2} S_n^2 f(x) \right) dx \le 1. \]
This shows that Rubin's lemma is an inhomogeneous type of inequality. Now we prove our first martingale inequality.

Lemma 24. For a dyadic martingale $\{f_n\}$ and $\lambda > 0$ we have
\[ \left| \left\{ x \in [0,1) : \sup_{m} |f_m(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{-\lambda^2}{2 \|Sf\|_\infty^2} \right). \]

Proof: Fix $n$, and let $\lambda > 0$, $\gamma > 0$. By the expectation condition,
\[ f_{n-1}(x) = \frac{1}{|Q_{n-1}|} \int_{Q_{n-1}} f_n(y)\,dy, \quad x \in Q_{n-1}, \; |Q_{n-1}| = \frac{1}{2^{n-1}}, \]
and so on; hence for every $m \le n$,
\[ f_m(x) = \frac{1}{|Q_m|} \int_{Q_m} f_n(y)\,dy, \quad x \in Q_m, \; |Q_m| = \frac{1}{2^m}. \]
Fix $x$. Then $\sup_{m \le n} |f_m(x)| \le M f_n(x)$, where $M f_n$ is the Hardy-Littlewood maximal function of $f_n$. Using Jensen's inequality we have
\[ \exp(\gamma |f_m(x)|) \le \exp\left( \frac{\gamma}{|Q_m|} \int_{Q_m} |f_n(y)|\,dy \right) \le \frac{1}{|Q_m|} \int_{Q_m} \exp(\gamma |f_n(y)|)\,dy \le M(e^{\gamma |f_n|})(x). \]
Then the Hardy-Littlewood maximal estimate gives
\[ \left| \{x \in [0,1) : \sup_{m \le n} |f_m(x)| > \lambda\} \right| = \left| \{x \in [0,1) : \sup_{m \le n} e^{\gamma |f_m(x)|} > e^{\gamma\lambda}\} \right| \le \left| \{x \in [0,1) : M(e^{\gamma |f_n|})(x) > e^{\gamma\lambda}\} \right| \le \frac{3}{e^{\gamma\lambda}} \int_0^1 \exp(\gamma |f_n(y)|)\,dy. \]
Writing $\exp(\gamma |f_n|) = \exp\left(\gamma |f_n| - \frac{\gamma^2}{2} S_n^2 f\right)\exp\left(\frac{\gamma^2}{2} S_n^2 f\right)$, we get
\[ \left| \{x \in [0,1) : \sup_{m \le n} |f_m(x)| > \lambda\} \right| \le 3 \exp\left( \frac{\gamma^2}{2} \|S_n f\|_\infty^2 - \gamma\lambda \right) \int_0^1 \exp\left( \gamma |f_n(y)| - \frac{\gamma^2}{2} S_n^2 f(y) \right) dy. \]

Applying Rubin's lemma (to both $\{f_n\}$ and $\{-f_n\}$, each rescaled by $\gamma$) we have
\[ \int_0^1 \exp\left( \gamma |f_n(y)| - \frac{\gamma^2}{2} S_n^2 f(y) \right) dy = \int_{\{y : f_n(y) \ge 0\}} \exp\left( \gamma f_n(y) - \frac{\gamma^2}{2} S_n^2 f(y) \right) dy + \int_{\{y : f_n(y) < 0\}} \exp\left( -\gamma f_n(y) - \frac{\gamma^2}{2} S_n^2 f(y) \right) dy \le 1 + 1 = 2. \]
Thus
\[ \left| \left\{ x \in [0,1) : \sup_{m \le n} |f_m(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{\gamma^2}{2} \|S_n f\|_\infty^2 - \gamma\lambda \right). \]
Choose $\gamma = \lambda / \|S_n f\|_\infty^2$. Then we have
\[ \left| \left\{ x \in [0,1) : \sup_{m \le n} |f_m(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{\lambda^2}{2\|S_n f\|_\infty^2} - \frac{\lambda^2}{\|S_n f\|_\infty^2} \right) = 6 \exp\left( \frac{-\lambda^2}{2\|S_n f\|_\infty^2} \right). \]
For the dyadic martingale $\{f_n\}$,
\[ S_n^2 f(x) = \sum_{k=1}^{n} d_k^2(x) \le \sum_{k=1}^{\infty} d_k^2(x) = S^2 f(x). \]
This gives $S_n f \le Sf$, and consequently $\|S_n f\|_\infty \le \|Sf\|_\infty$. So we have
\[ \left| \left\{ x \in [0,1) : \sup_{m \le n} |f_m(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{-\lambda^2}{2\|Sf\|_\infty^2} \right). \]

Define
\[ E_n := \left\{ x \in [0,1) : \sup_{m \le n} |f_m(x)| > \lambda \right\} \quad \text{and} \quad E := \left\{ x \in [0,1) : \sup_{m} |f_m(x)| > \lambda \right\}. \]
Clearly $E_n \subseteq E_{n+1}$ and $E = \bigcup_{k=1}^{\infty} E_k$, so $\lim_{n\to\infty} |E_n| = |E|$ (see Lemma 2 for the proof). Thus
\[ \left| \left\{ x \in [0,1) : \sup_{m} |f_m(x)| > \lambda \right\} \right| = \lim_{n\to\infty} \left| \left\{ x \in [0,1) : \sup_{m \le n} |f_m(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{-\lambda^2}{2\|Sf\|_\infty^2} \right). \]
This completes the proof of the first martingale inequality.

Now, using the above martingale inequality, we prove a martingale inequality for tail sums.

Lemma 25. For a dyadic martingale $\{f_n\}$, with $\lambda > 0$ and $n$ a fixed positive integer, we have
\[ \left| \left\{ x \in [0,1) : \sup_{m \ge n} |f(x) - f_m(x)| > \lambda \right\} \right| \le 12 \exp\left( \frac{-\lambda^2}{8 \|\mathcal{S}_n f\|_\infty^2} \right). \]

Proof: Fix $n$. Define a sequence $\{g_m\}$ as follows:
\[ g_m(x) = \begin{cases} 0, & \text{if } m \le n; \\ f_m(x) - f_n(x), & \text{if } m > n. \end{cases} \]
We first show that $\{g_m\}$ is a dyadic martingale. Clearly, for every $m$, $g_m$ is measurable with respect to the $\sigma$-algebra $\mathcal{F}_m$. Let $m > n$. Then, using the fact that $f_n$ is constant on the interval $Q_m$, we have

\[ E(g_{m+1} \mid \mathcal{F}_m)(x) = \frac{1}{|Q_m|}\int_{Q_m} [f_{m+1}(x) - f_n(x)]\,dx = \frac{1}{|Q_m|}\int_{Q_m} f_{m+1}(x)\,dx - \frac{1}{|Q_m|}\int_{Q_m} f_n(x)\,dx = f_m(x) - f_n(x) = g_m(x). \]
Thus we have $E(g_{m+1} \mid \mathcal{F}_m) = g_m$, which shows that $\{g_m\}$ is a martingale. Then, applying Lemma 24 to this martingale, we get
\[ \left| \left\{ x \in [0,1) : \sup_{m} |g_m(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{-\lambda^2}{2\|Sg\|_\infty^2} \right). \]
But $g_m(x) = 0$ for $m \le n$; hence
\[ \left| \left\{ x \in [0,1) : \sup_{m > n} |f_m(x) - f_n(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{-\lambda^2}{2\|Sg\|_\infty^2} \right). \]
Again,
\[ S^2 g(x) = \sum_{k=0}^{\infty} [g_{k+1}(x) - g_k(x)]^2 = \sum_{k=n}^{\infty} [g_{k+1}(x) - g_k(x)]^2 = \sum_{k=n}^{\infty} [f_{k+1}(x) - f_n(x) - f_k(x) + f_n(x)]^2 = \sum_{k=n}^{\infty} [f_{k+1}(x) - f_k(x)]^2 = \sum_{k=n+1}^{\infty} d_k^2(x) = \mathcal{S}_n^2 f(x), \]
where in the $k = n$ term we read $f_k(x) - f_n(x) = 0$. This gives
\[ \left| \left\{ x \in [0,1) : \sup_{m > n} |f_m(x) - f_n(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{-\lambda^2}{2\|\mathcal{S}_n f\|_\infty^2} \right), \]

i.e., the supremum of the increments past index $n$ obeys a Gaussian-type bound governed by the tail square function. Clearly
\[ \{x : |f(x) - f_n(x)| > \lambda\} \subseteq \left\{ x : \sup_{m > n} |f_m(x) - f_n(x)| > \lambda \right\}, \]
since $|f - f_n| = \lim_{m\to\infty} |f_m - f_n| \le \sup_{m > n} |f_m - f_n|$. So we have
\[ |\{x : |f(x) - f_n(x)| > \lambda\}| \le 6 \exp\left( \frac{-\lambda^2}{2\|\mathcal{S}_n f\|_\infty^2} \right). \]
By the triangle inequality we have
\[ \sup_{m \ge n} |f(x) - f_m(x)| \le |f(x) - f_n(x)| + \sup_{m \ge n} |f_n(x) - f_m(x)|. \]
This gives
\[ \left\{ x : \sup_{m \ge n} |f(x) - f_m(x)| > \lambda \right\} \subseteq \left\{ x : |f(x) - f_n(x)| > \frac{\lambda}{2} \right\} \cup \left\{ x : \sup_{m \ge n} |f_n(x) - f_m(x)| > \frac{\lambda}{2} \right\}. \]
Therefore, using the two estimates above with $\lambda/2$ in place of $\lambda$,
\[ \left| \left\{ x : \sup_{m \ge n} |f(x) - f_m(x)| > \lambda \right\} \right| \le 6 \exp\left( \frac{-\lambda^2}{8\|\mathcal{S}_n f\|_\infty^2} \right) + 6 \exp\left( \frac{-\lambda^2}{8\|\mathcal{S}_n f\|_\infty^2} \right) = 12 \exp\left( \frac{-\lambda^2}{8\|\mathcal{S}_n f\|_\infty^2} \right). \]
This completes the proof of our second martingale inequality.
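Both Rubin's lemma and Lemma 24 can be sanity-checked numerically for the Rademacher martingale of Example 1, where $S_n^2 f = \sum_{k \le n} a_k^2$ is constant in $x$; the coefficients below are arbitrary illustrative choices, so this is only a sketch:

```python
import math

def r(k, x):
    # Rademacher function r_k(x) = sgn(sin(2^k pi x)), with sgn(0) = 1
    return 1.0 if math.sin((2 ** k) * math.pi * x) >= 0 else -1.0

a = [0.7, -0.3, 1.1, 0.5, -0.9]      # illustrative coefficients a_k
n = len(a)
N = 2 ** 10                           # midpoints of a fine dyadic grid
xs = [(j + 0.5) / N for j in range(N)]

def f(m, x):                          # f_m = sum_{k<=m} a_k r_k
    return sum(a[k - 1] * r(k, x) for k in range(1, m + 1))

S2 = sum(c ** 2 for c in a)           # S_n^2 f = sum a_k^2, constant in x

# Rubin's lemma: the integral of exp(f_n - S_n^2 f / 2) over [0,1) is at most 1
rubin = sum(math.exp(f(n, x) - 0.5 * S2) for x in xs) / N
assert rubin <= 1.0

# Lemma 24: |{ sup_m |f_m| > lam }| <= 6 exp(-lam^2 / (2 ||Sf||^2)),
# where here ||Sf||^2 = S2 since the martingale stops after n steps
for lam in [0.5, 1.0, 2.0, 4.0]:
    measure = sum(1 for x in xs
                  if max(abs(f(m, x)) for m in range(1, n + 1)) > lam) / N
    assert measure <= 6.0 * math.exp(-lam ** 2 / (2.0 * S2))
```

For these coefficients the Rubin integral evaluates exactly to $\prod_k \cosh(a_k)\, e^{-\sum a_k^2/2}$, strictly below $1$, mirroring the $\cosh t \le e^{t^2/2}$ step in the proof.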

2.2 Law of the iterated logarithm for dyadic martingales

Burkholder and Gundy proved (see Theorem 3) that
\[ \{x : Sf(x) < \infty\} \overset{a.s.}{=} \{x : \lim_{n\to\infty} f_n \text{ exists}\}, \]
where $\overset{a.s.}{=}$ means the sets are equal up to a set of measure zero. From this result, we observe that dyadic martingales $\{f_n\}$ behave asymptotically well on the set $\{x : Sf(x) < \infty\}$. But what can be said about the asymptotic behavior of dyadic martingales on the complement of this set? The behavior is quite pathological on the set $\{x : Sf(x) = \infty\}$; in particular, $\{f_n\}$ is unbounded a.e. on this set. But is it possible to obtain the size of the growth of $f_n$ on the set $\{x : Sf(x) = \infty\}$? The rate of growth of $f_n$ on $\{x : Sf(x) = \infty\}$ is precisely given by the martingale analogue of Kolmogorov's law of the iterated logarithm. W. Stout proved the law of the iterated logarithm for martingales using a probabilistic approach; here we derive the law of the iterated logarithm for dyadic martingales using a harmonic analysis approach.

Theorem 26. If $\{f_n\}_{n=0}^{\infty}$ is a dyadic martingale on $[0,1)$, then
\[ \limsup_{n\to\infty} \frac{f_n(x)}{\sqrt{2 S_n^2 f(x) \log\log S_n^2 f(x)}} \le \sqrt{2} \]
almost everywhere on the set where $\{f_n\}$ is unbounded.

Proof: Let $\delta > 0$. We note that for every $x \in [0,1)$, either $S_n^2 f(x) > 2^k$ for some $n$, or $S_n^2 f(x) \le 2^k$ for every $n$ and thus $S^2 f(x) \le 2^k$. We define a stopping time as
\[ \gamma_k(x) = \min\{n : S_{n+1}^2 f(x) > 2^k\}, \qquad \gamma_k(x) = \infty \text{ if } S^2 f(x) \le 2^k. \]
So by the stopping time, $S_{\gamma_k}^2 f(x) \le 2^k$, and $\gamma_k$ is the smallest index such that $S_{\gamma_k+1}^2 f(x) > 2^k$. This means the stopped martingale is
\[ f_n^{\gamma_k}(x) = f_{n \wedge \gamma_k}(x) = \begin{cases} f_1(x), f_2(x), \dots, f_{\gamma_k}(x), f_{\gamma_k}(x), \dots, & \text{for } \gamma_k < \infty; \\ f_1(x), f_2(x), f_3(x), \dots, & \text{if } \gamma_k = \infty. \end{cases} \]

We first show that $\|Sf^{\gamma_k}\|_\infty\le\rho^k$. If $n<\gamma_k(x)$, then $Sf_n^{\gamma_k}(x)=S_nf(x)\le S_{\gamma_k}f(x)\le\rho^k$. Again, if $n\ge\gamma_k(x)$, then $Sf_n^{\gamma_k}(x)=S_{\gamma_k}f(x)\le\rho^k$. Thus $Sf_n^{\gamma_k}(x)\le\rho^k$ for every $n$, and letting $n\to\infty$ we get $\|Sf^{\gamma_k}\|_\infty\le\rho^k$.

Choose $\lambda=(1+\delta)\rho^k\sqrt{2\log\log\rho^{2k}}$. Then, using Lemma 4 for the dyadic martingale $\{f_n^{\gamma_k}\}$ with the chosen $\lambda$, we get
$$\Big|\Big\{x\in[0,1):\ \sup_n|f_n^{\gamma_k}(x)|>(1+\delta)\rho^k\sqrt{2\log\log\rho^{2k}}\Big\}\Big|\ \le\ 6\exp\Big(\frac{-(1+\delta)^2\rho^{2k}\cdot2\log\log\rho^{2k}}{2\rho^{2k}}\Big)\ =\ 6\exp\big(-(1+\delta)^2\log\log\rho^{2k}\big)\ =\ \frac{6}{(2k\log\rho)^{(1+\delta)^2}}.$$
Summing over all $k$, we have
$$\sum_{k=1}^\infty\Big|\Big\{x\in[0,1):\ \sup_n|f_n^{\gamma_k}(x)|>(1+\delta)\rho^k\sqrt{2\log\log\rho^{2k}}\Big\}\Big|\ \le\ \sum_{k=1}^\infty\frac{6}{(2k\log\rho)^{(1+\delta)^2}}\ <\ \infty.$$
Then, by the Borel–Cantelli lemma, for a.e. $x$ we have
$$\sup_n|f_n^{\gamma_k}(x)|\ \le\ (1+\delta)\rho^k\sqrt{2\log\log\rho^{2k}}$$
for sufficiently large $k$, say $k\ge M$, where $M$ depends on $x$.

Now choose $x$ such that $\{f_n(x)\}$ is unbounded. From Theorem 3 we have $\{x:\ Sf(x)<\infty\}\stackrel{\mathrm{a.e.}}{=}\{x:\ f_n(x)\ \text{converges}\}$, so $Sf(x)=\infty$. Then every $\gamma_i(x)<\infty$ and $\gamma_1(x)\le\gamma_2(x)\le\gamma_3(x)\le\cdots$. Let $n\ge\gamma_M(x)$, and choose $k\ge M$ such that $\gamma_k(x)<n\le\gamma_{k+1}(x)$. Here $\gamma_k(x)<n$ gives $\gamma_k(x)\le n-1$, and thus
$$S_nf(x)\ \ge\ S_{\gamma_k+1}f(x)\ >\ \rho^k.$$
Using this, together with $n\le\gamma_{k+1}(x)$,
$$|f_n(x)|\ \le\ \sup_m|f_m^{\gamma_{k+1}}(x)|\ \le\ (1+\delta)\rho^{k+1}\sqrt{2\log\log\rho^{2(k+1)}}.$$
Now $\rho^{k+1}\le\rho\,S_nf(x)$, and since $\rho^{2k}<S_nf(x)^2$,
$$\log\log\rho^{2(k+1)}\ =\ \log\big(\log\rho^{2k}+\log\rho^2\big)\ \le\ \log\big(\log S_nf(x)^2+\log\rho^2\big).$$
Hence
$$\frac{|f_n(x)|}{\sqrt{2\,S_nf(x)^2\log\log S_nf(x)^2}}\ \le\ (1+\delta)\,\rho\,\sqrt{\frac{\log\big(\log S_nf(x)^2+\log\rho^2\big)}{\log\log S_nf(x)^2}}.$$
Let $X=\log S_nf(x)^2$. Since $Sf(x)=\infty$, we have $X\to\infty$ as $n\to\infty$, and
$$\lim_{X\to\infty}\frac{\log\big(X+\log\rho^2\big)}{\log X}\ =\ 1.$$
Therefore, for a.e. such $x$,
$$\limsup_{n\to\infty}\frac{f_n(x)}{\sqrt{2\,S_nf(x)^2\log\log S_nf(x)^2}}\ \le\ (1+\delta)\rho.$$
Letting $\rho\to1$ we get $\limsup\le1+\delta$, and this can be done for every $\delta>0$. Hence for a.e. $x$ on the set where $\{f_n\}$ is unbounded,
$$\limsup_{n\to\infty}\frac{f_n(x)}{\sqrt{2\,S_nf(x)^2\log\log S_nf(x)^2}}\ \le\ 1.$$
This completes the proof of the law of the iterated logarithm for dyadic martingales.
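The growth rate in Theorem 6 can be seen numerically. A seeded simulation (an illustration only, not part of the proof: i.i.d. $\pm1$ signs stand in for the Rademacher values, so the walk models $f_n$ with all $a_k=1$ and $S_nf=\sqrt n$) shows the partial sums staying inside the envelope $\sqrt{2n\log\log n}$:

```python
import math
import random

random.seed(0)

# Simulate f_n = r_1 + ... + r_n with i.i.d. +-1 signs (so S_n f = sqrt(n))
# and compare |f_n| with the LIL envelope sqrt(2 n log log n).
n_max = 10**6
checkpoints = (10**3, 10**4, 10**5, 10**6)
f = 0
for n in range(1, n_max + 1):
    f += random.choice((-1, 1))
    if n in checkpoints:
        envelope = math.sqrt(2 * n * math.log(math.log(n)))
        print(n, f, round(abs(f) / envelope, 3))
```

With the fixed seed the printed ratios stay well below $1$; the theorem says the normalized sums can approach the envelope along subsequences but, up to a factor $1+\delta$, cannot exceed it eventually.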

Chapter 3

The tail law of the iterated logarithm

In this chapter, we first establish the tail law of the iterated logarithm for sums of Rademacher functions; sums of Rademacher functions are nicely behaved dyadic martingales. We then derive the tail law of the iterated logarithm for dyadic martingales. Moreover, with the help of an example we will show that the tail law of the iterated logarithm is not true in general.

The tail law of the iterated logarithm for sums of Rademacher functions

We first prove a lemma which will be used in the proof of the tail LIL for sums of Rademacher functions.

Lemma 7. Let $f_n=\sum_{k=1}^na_kr_k$ and $f=\sum_{k=1}^\infty a_kr_k$, where $\{a_k\}$ is a sequence of numbers with $\sum a_k^2<\infty$ and $\{r_k\}$ is the sequence of Rademacher functions. Write $S_nf^2=\sum_{k=n+1}^\infty a_k^2$. Then for a fixed $n$ and $\lambda>0$ we have
$$\Big|\Big\{x:\ \sup_{m\ge n}|f(x)-f_m(x)|>\lambda\Big\}\Big|\ \le\ 24\exp\Big(\frac{-\lambda^2}{2\,S_nf^2}\Big).$$

Proof: Let $d_i=f_i-f_{i-1}$. Then $d_i=\sum_{k=1}^ia_kr_k-\sum_{k=1}^{i-1}a_kr_k=a_ir_i$. Each $d_i$ has mean $0$ and variance $a_i^2$. Moreover, the $d_i$ are independent and symmetric random variables. So, using Lévy's inequality (Lemma 6), we have
$$P\Big(\max_{1\le j\le n}\sum_{i=1}^jd_i>\lambda\Big)\ \le\ 2\,P\Big(\sum_{k=1}^nd_k>\lambda\Big).$$

Let $N\gg n$. Applying Lévy's inequality to the differences in reverse order $d_N,d_{N-1},\dots,d_{n+1}$, we have
$$P\Big(\max_{0\le j\le N-n-1}\big(d_N+d_{N-1}+\cdots+d_{N-j}\big)>\lambda\Big)\ \le\ 2\,P\big(d_N+d_{N-1}+\cdots+d_{n+1}>\lambda\big),$$
that is,
$$P\Big(\max\big\{f_N-f_{N-1},\,f_N-f_{N-2},\dots,\,f_N-f_n\big\}>\lambda\Big)\ \le\ 2\,P(f_N-f_n>\lambda).$$
Using the fact that $\sup_k|a_k|>\lambda$ if and only if $\sup_ka_k>\lambda$ or $\sup_k(-a_k)>\lambda$, we have
$$\Big\{x\in[0,1):\ \max_{n\le m\le N}|f_N(x)-f_m(x)|>\lambda\Big\}=\Big\{x:\ \max_{n\le m\le N}\big(f_N(x)-f_m(x)\big)>\lambda\Big\}\cup\Big\{x:\ \max_{n\le m\le N}\big(f_m(x)-f_N(x)\big)>\lambda\Big\}.$$
Applying the one-sided estimate to each piece (the differences $-d_i$ are again independent and symmetric), we get
$$\Big|\Big\{x:\ \max_{n\le m\le N}|f_N(x)-f_m(x)|>\lambda\Big\}\Big|\ \le\ 2\,\big|\{x:\ f_N(x)-f_n(x)>\lambda\}\big|+2\,\big|\{x:\ f_n(x)-f_N(x)>\lambda\}\big|\ \le\ 4\,\big|\{x:\ |f_N(x)-f_n(x)|>\lambda\}\big|.$$

Now, using the first martingale inequality of Chapter 2, we have
$$\big|\{x\in[0,1):\ \sup_{m\ge n}|f_m(x)-f_n(x)|>\lambda\}\big|\ \le\ 6\exp\Big(\frac{-\lambda^2}{2\,S_nf^2}\Big)$$
(here the square function of $\{f_m-f_n\}_{m\ge n}$ is bounded by the constant $(\sum_{k>n}a_k^2)^{1/2}=S_nf$), and clearly
$$\{x:\ |f_N(x)-f_n(x)|>\lambda\}\ \subseteq\ \Big\{x:\ \sup_{m\ge n}|f_m(x)-f_n(x)|>\lambda\Big\}.$$
Therefore,
$$\Big|\Big\{x\in[0,1):\ \max_{n\le m\le N}|f_N(x)-f_m(x)|>\lambda\Big\}\Big|\ \le\ 24\exp\Big(\frac{-\lambda^2}{2\,S_nf^2}\Big).$$
Let
$$E_N=\Big\{x\in[0,1):\ \max_{n\le m\le N}|f_N(x)-f_m(x)|>\lambda\Big\}$$
and $E=\bigcup_{k=n}^\infty E_k$. Clearly $E_N\subseteq E_{N+1}$, so $|E|=\lim_{N\to\infty}|E_N|$ (by continuity of measure from below). Next we show that
$$\Big\{x\in[0,1):\ \sup_{m\ge n}|f(x)-f_m(x)|>\lambda\Big\}\ \subseteq\ E.$$
Let $x$ be such that $\sup_{m\ge n}|f(x)-f_m(x)|>\lambda$. Since $f_N(x)\to f(x)$, for sufficiently large $N$ we have $\max_{n\le m\le N}|f_N(x)-f_m(x)|>\lambda$. This means $x\in E_N$ for sufficiently large $N$, so that $x\in E$. Then
$$\Big|\Big\{x\in[0,1):\ \sup_{m\ge n}|f(x)-f_m(x)|>\lambda\Big\}\Big|\ \le\ |E|\ =\ \lim_{N\to\infty}|E_N|\ \le\ 24\exp\Big(\frac{-\lambda^2}{2\,S_nf^2}\Big).$$
This completes the proof of the lemma.
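Lévy's inequality used above can be sanity-checked by simulation. The following seeded Monte Carlo sketch (illustrative only; the walk length, level, and trial count are arbitrary choices) compares the two sides of $P(\max_{j\le n}|S_j|>\lambda)\le 2P(|S_n|>\lambda)$ for a symmetric random walk:

```python
import random

random.seed(1)

# Empirical check of Levy's inequality for a symmetric random walk:
#   P(max_{j<=n} |S_j| > lam)  <=  2 P(|S_n| > lam).
n, lam, trials = 100, 15, 20000
hit_max = hit_end = 0
for _ in range(trials):
    s = running_max = 0
    for _ in range(n):
        s += random.choice((-1, 1))
        running_max = max(running_max, abs(s))
    hit_max += running_max > lam
    hit_end += abs(s) > lam
p_max, p_end = hit_max / trials, hit_end / trials
print(p_max, "<=", 2 * p_end)
```

The point of the inequality is that the maximum of a symmetric walk is, in distribution, controlled by its endpoint; this is exactly what lets the proof pass from $f_N-f_n$ to the maximum over intermediate indices.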

Theorem 8 (Tail LIL for Rademacher functions). Let $\{r_k\}_{k=1}^\infty$ be the sequence of Rademacher functions and let $\{a_n\}_{n=1}^\infty$ be a sequence with $\sum_{n=1}^\infty a_n^2<\infty$. Set $f(t)=\sum_{k=1}^\infty a_kr_k(t)$, $f_n(t)=\sum_{k=1}^na_kr_k(t)$, and $S_nf^2=\sum_{j=n+1}^\infty a_j^2$. Then, for a.e. $t$,
$$\limsup_{n\to\infty}\frac{f(t)-f_n(t)}{\sqrt{2\,S_nf^2\,\log\log\big(1/S_nf^2\big)}}\ \le\ 1.$$

Proof: We first show that $\{f_n\}_{n=1}^\infty$ is a dyadic martingale. For $i\le n$, $a_ir_i$ is measurable with respect to $\mathcal F_n$, and so is the sum $\sum_{i=1}^na_ir_i=f_n$. Again, for a dyadic interval $Q_n$ with $|Q_n|=2^{-n}$,
$$E(f_{n+1}\mid\mathcal F_n)\ =\ \frac{1}{|Q_n|}\int_{Q_n}f_{n+1}(x)\,dx\ =\ \frac{1}{|Q_n|}\int_{Q_n}\sum_{k=1}^na_kr_k(x)\,dx+\frac{1}{|Q_n|}\int_{Q_n}a_{n+1}r_{n+1}(x)\,dx\ =\ \sum_{k=1}^na_kr_k+0\ =\ f_n,$$
since each $r_k$ with $k\le n$ is constant on $Q_n$, while $r_{n+1}$ takes the values $+1$ and $-1$ on the two halves of $Q_n$.

Let $\rho>1$. Define $n_1\le n_2\le\cdots$ by
$$n_k=\min\Big\{n:\ \sum_{j=n+1}^\infty a_j^2<\rho^{-k}\Big\}.$$
Using the previous lemma (Lemma 7) at the index $n_k$, we have
$$\Big|\Big\{t:\ \sup_{n\ge n_k}|f(t)-f_n(t)|>\lambda\Big\}\Big|\ \le\ 24\exp\Big(\frac{-\lambda^2}{2\,S_{n_k}f^2}\Big),$$
where $S_{n_k}f^2=\sum_{j=n_k+1}^\infty a_j^2<\rho^{-k}$. We choose $\lambda=(1+\varepsilon)\sqrt{2\rho^{-k}\log\log\rho^k}$, where $\varepsilon>0$. Then, from the above estimate,
$$\Big|\Big\{t:\ \sup_{n\ge n_k}|f(t)-f_n(t)|>(1+\varepsilon)\sqrt{2\rho^{-k}\log\log\rho^k}\Big\}\Big|\ \le\ 24\exp\Big(\frac{-(1+\varepsilon)^2\cdot2\rho^{-k}\log\log\rho^k}{2\rho^{-k}}\Big)\ =\ 24\exp\big(-(1+\varepsilon)^2\log\log\rho^k\big)\ =\ \frac{24}{(k\log\rho)^{(1+\varepsilon)^2}}.$$
Consequently,
$$\sum_{k=1}^\infty\Big|\Big\{t:\ \sup_{n\ge n_k}|f(t)-f_n(t)|>(1+\varepsilon)\sqrt{2\rho^{-k}\log\log\rho^k}\Big\}\Big|\ \le\ \sum_{k=1}^\infty\frac{24}{(k\log\rho)^{(1+\varepsilon)^2}}\ <\ \infty.$$
So by the Borel–Cantelli lemma, for a.e. $t$,
$$\sup_{n\ge n_k}|f(t)-f_n(t)|\ \le\ (1+\varepsilon)\sqrt{2\rho^{-k}\log\log\rho^k}$$
for sufficiently large $k$, say $k\ge M$, where $M$ depends on $t$.

Fix such a $t$ and choose $n\ge n_M$. Then there is $k\ge M$ such that $n_k\le n<n_{k+1}$. Now, by the definition of $n_{k+1}$ and $n<n_{k+1}$,
$$S_nf^2=\sum_{j=n+1}^\infty a_j^2\ \ge\ \rho^{-(k+1)},$$
and since $n\ge n_k$,
$$S_nf^2\ \le\ \sum_{j=n_k+1}^\infty a_j^2\ <\ \rho^{-k}.$$
Thus $\rho^{-(k+1)}\le S_nf^2<\rho^{-k}$. Then
$$|f(t)-f_n(t)|\ \le\ \sup_{m\ge n_k}|f(t)-f_m(t)|\ \le\ (1+\varepsilon)\sqrt{2\rho^{-k}\log\log\rho^k}.$$
Now $\rho^{-k}=\rho\cdot\rho^{-(k+1)}\le\rho\,S_nf^2$, and $\rho^k<1/S_nf^2$ gives $\log\log\rho^k<\log\log\big(1/S_nf^2\big)$. Hence, for a.e. $t$ and all $n\ge n_M$,
$$|f(t)-f_n(t)|\ \le\ (1+\varepsilon)\sqrt{2\rho\,S_nf^2\,\log\log\big(1/S_nf^2\big)},$$
and therefore
$$\limsup_{n\to\infty}\frac{f(t)-f_n(t)}{\sqrt{2\,S_nf^2\,\log\log\big(1/S_nf^2\big)}}\ \le\ (1+\varepsilon)\sqrt{\rho}.$$

Letting $\rho\to1$, we get, for a.e. $t$,
$$\limsup_{n\to\infty}\frac{f(t)-f_n(t)}{\sqrt{2\sum_{j=n+1}^\infty a_j^2\,\log\log\big(\sum_{j=n+1}^\infty a_j^2\big)^{-1}}}\ \le\ 1+\varepsilon.$$
This is true for every $\varepsilon>0$. Hence for a.e. $t$,
$$\limsup_{n\to\infty}\frac{f(t)-f_n(t)}{\sqrt{2\sum_{j=n+1}^\infty a_j^2\,\log\log\big(\sum_{j=n+1}^\infty a_j^2\big)^{-1}}}\ \le\ 1.$$
This completes the proof of the tail law of the iterated logarithm for sums of Rademacher functions.

Remark 2. In the above theorem we have $Sf(x)^2=\sum_{n=1}^\infty a_n^2<\infty$. Then, by Theorem 3 of Chapter 2, $\lim f_n(x)$ exists a.e. So the tail law of the iterated logarithm gives the rate of convergence of the sequence $\{f_n\}$ to its limit function $f$, and the rate of convergence depends on the tail sums of the square function.

The tail law of the iterated logarithm for dyadic martingales

In this section, we employ the ideas from the Rademacher case to obtain the tail law of the iterated logarithm for dyadic martingales. Moreover, we will later note that the tail law of the iterated logarithm is not true in general, which will be justified by an example. Throughout, $S_nf(x)^2=\sum_{k=n+1}^\infty d_k(x)^2$ denotes the tail sum of the martingale square function.

Theorem 9 (Tail LIL for dyadic martingales). Let $\{f_n\}_{n=0}^\infty$ be a dyadic martingale. Assume that there exists a constant $C<\infty$ such that
$$S_nf(x)\ \le\ C\,S_nf(y)\qquad\text{for all }x,y\in I_{n,j},$$
for $n=1,2,3,\dots$ and $j\in\{0,1,2,3,\dots,2^n-1\}$, where $I_{n,j}=\big[\tfrac{j}{2^n},\tfrac{j+1}{2^n}\big)$. Then for a.e. $x$,
$$\limsup_{n\to\infty}\frac{f(x)-f_n(x)}{\sqrt{2\,S_nf(x)^2\,\log\log\big(1/S_nf(x)^2\big)}}\ \le\ 2C.$$

Proof: Let $\rho>1$. Define functions $\gamma_1\le\gamma_2\le\cdots$ by
$$\gamma_k(x)=\min\big\{n:\ x\in I_{n,j}\ \text{for some }j\in\{0,1,2,3,\dots,2^n-1\},\ \text{and}\ S_nf(y)<\rho^{-k}\ \text{for every }y\in I_{n,j}\big\}.$$
Now, by Lemma 5 of Chapter 2, for each $I_{n,j}$ we have
$$\big|\{y\in I_{n,j}:\ \sup_{n\ge m}|f(y)-f_n(y)|>\lambda\}\big|\ \le\ 12\,|I_{n,j}|\exp\Big(\frac{-\lambda^2}{8\,\|S_mf\|_{\infty,I_{n,j}}^2}\Big).$$
Using this estimate with $m=\gamma_k(y)$ (which is constant on $I_{n,j}$), and noting that the definition of $\gamma_k$ gives
$$\|S_{\gamma_k}f\|_{\infty,I_{n,j}}\ \le\ \rho^{-k}\qquad\Longrightarrow\qquad\exp\Big(\frac{-\lambda^2}{8\,\|S_{\gamma_k}f\|_{\infty,I_{n,j}}^2}\Big)\ \le\ \exp\Big(\frac{-\lambda^2}{8\,\rho^{-2k}}\Big),$$
we get
$$\big|\{y\in I_{n,j}:\ \sup_{n\ge\gamma_k(y)}|f_n(y)-f(y)|>\lambda\}\big|\ \le\ 12\,|I_{n,j}|\exp\Big(\frac{-\lambda^2}{8\rho^{-2k}}\Big).$$
Summing over all such $I_{n,j}$, and then over all generations, we have
$$\big|\{y\in[0,1):\ \sup_{n\ge\gamma_k(y)}|f_n(y)-f(y)|>\lambda\}\big|\ \le\ 12\exp\Big(\frac{-\lambda^2}{8\rho^{-2k}}\Big).$$
Choose $\lambda=2(1+\varepsilon)\rho^{-k}\sqrt{2\log\log\rho^{2k}}$, where $\varepsilon>0$. Then, using the above estimate with the chosen $\lambda$,
$$12\exp\Big(\frac{-4(1+\varepsilon)^2\rho^{-2k}\cdot2\log\log\rho^{2k}}{8\rho^{-2k}}\Big)\ =\ 12\exp\big(-(1+\varepsilon)^2\log\log\rho^{2k}\big)\ =\ \frac{12}{(2k\log\rho)^{(1+\varepsilon)^2}}.$$
Thus
$$\sum_{k=1}^\infty\big|\{y\in[0,1):\ \sup_{n\ge\gamma_k(y)}|f_n(y)-f(y)|>2(1+\varepsilon)\rho^{-k}\sqrt{2\log\log\rho^{2k}}\}\big|\ \le\ \sum_{k=1}^\infty\frac{12}{(2k\log\rho)^{(1+\varepsilon)^2}}\ <\ \infty.$$
Hence, by the Borel–Cantelli lemma, for a.e. $y$ we have
$$\sup_{n\ge\gamma_k(y)}|f_n(y)-f(y)|\ \le\ 2(1+\varepsilon)\rho^{-k}\sqrt{2\log\log\rho^{2k}}$$
for sufficiently large $k$, say $k\ge M$, where $M$ depends on $y$.

Fix such a $y$ and choose $n\ge\gamma_M(y)$; then there are $j$ with $y\in I_{n,j}$ and $k\ge M$ with $\gamma_k(y)\le n<\gamma_{k+1}(y)$. By the definition of $\gamma_k$ we have $S_{\gamma_k(y)}f(y)<\rho^{-k}$, and since $\gamma_k(y)\le n$ and the tail sums decrease in $n$, this gives
$$S_nf(y)\ <\ \rho^{-k}.$$
Again, by the definition of $\gamma_{k+1}$ and $n<\gamma_{k+1}(y)$, there is some $y_0\in I_{n,j}$ with $S_nf(y_0)\ge\rho^{-(k+1)}$. But $y,y_0\in I_{n,j}$, so by the assumption $S_nf(y_0)\le C\,S_nf(y)$. Thus
$$C\,S_nf(y)\ \ge\ \rho^{-(k+1)}.$$
Combining the last two displays, we have
$$\frac{\rho^{-(k+1)}}{C}\ \le\ S_nf(y)\ <\ \rho^{-k}.$$

Using these estimates,
$$|f_n(y)-f(y)|\ \le\ \sup_{m\ge\gamma_k(y)}|f_m(y)-f(y)|\ \le\ 2(1+\varepsilon)\rho^{-k}\sqrt{2\log\log\rho^{2k}}\ \le\ 2(1+\varepsilon)\,\rho\,C\,S_nf(y)\sqrt{2\log\log\big(1/S_nf(y)^2\big)},$$
since $\rho^{-k}=\rho\cdot\rho^{-(k+1)}\le\rho\,C\,S_nf(y)$ and, because $S_nf(y)<\rho^{-k}$ implies $\rho^{2k}<1/S_nf(y)^2$,
$$\log\log\rho^{2k}\ <\ \log\log\big(1/S_nf(y)^2\big).$$
Also, we know that as $n\to\infty$, so does $k$. Hence for a.e. $y$,
$$\limsup_{n\to\infty}\frac{f(y)-f_n(y)}{\sqrt{2\,S_nf(y)^2\,\log\log\big(1/S_nf(y)^2\big)}}\ \le\ 2(1+\varepsilon)\,\rho\,C.$$
Letting $\rho\to1$ we get $\limsup\le2(1+\varepsilon)C$, and this is true for every $\varepsilon>0$. Thus for a.e. $y$ we have
$$\limsup_{n\to\infty}\frac{f(y)-f_n(y)}{\sqrt{2\,S_nf(y)^2\,\log\log\big(1/S_nf(y)^2\big)}}\ \le\ 2C.$$
This proves the tail law of the iterated logarithm for dyadic martingales.

Remark 3. The assumption forces $Sf(x)<\infty$ for a.e. $x$. This shows that the sequence $\{f_n(x)\}$ converges, by Theorem 3. Thus the tail law of the iterated logarithm gives the rate of convergence of the dyadic martingale $\{f_n\}$ to its limit function $f$. Moreover, the rate of convergence depends on the tail sums of the martingale square function.

Corollary 30. Let $\{f_n\}_{n=0}^\infty$ be a dyadic martingale. Fix $\rho>1$ and define stopping times
$$n_k(x)=\min\big\{n:\ x\in I_{n,j}\ \text{for some }j,\ \text{and}\ S_nf(y)<\rho^{-k}\ \text{for every }y\in I_{n,j}\big\}.$$
Then, along the sequence of stopping times $n_k(x)$, for a.e. $x$,
$$\limsup_{k\to\infty}\frac{f(x)-f_{n_k(x)}(x)}{\sqrt{2\,S_{n_k(x)}f(x)^2\,\big[\log\log\big(1/S_{n_k(x)}f(x)^2\big)\big]}}\ \le\ \sqrt3.$$

Proof: We first prove the following estimate: for $\lambda>0$ and $\eta>0$,
$$\big|\big\{x\in[0,1):\ |f(x)-f_n(x)|>\lambda,\ S_nf(x)<\eta\lambda\big\}\big|\ \le\ 6\exp\Big(\frac{-1}{2\eta^2}\Big).$$
From the first martingale inequality of Chapter 2 we have
$$\big|\{x:\ |f(x)-f_n(x)|>\lambda\}\big|\ \le\ 6\exp\Big(\frac{-\lambda^2}{2\,\|S_nf\|_\infty^2}\Big).$$

Here $S_nf(x)<\eta\lambda$ gives $\|S_nf\|_\infty\le\eta\lambda$ on the set in question. So
$$\big|\big\{x\in[0,1):\ |f(x)-f_n(x)|>\lambda,\ S_nf(x)<\eta\lambda\big\}\big|\ \le\ 6\exp\Big(\frac{-\lambda^2}{2\eta^2\lambda^2}\Big)\ =\ 6\exp\Big(\frac{-1}{2\eta^2}\Big).$$
Choose
$$\lambda=(1+\varepsilon)\sqrt{2\rho^{-2l}\log\log\rho^{2l}},\qquad\eta=\frac{\rho^{-l}}{\lambda}=\frac{1}{(1+\varepsilon)\sqrt{2\log\log\rho^{2l}}},$$
where $\rho>1$ and $\varepsilon>0$, so that $\eta\lambda=\rho^{-l}$. Then, using the estimate above,
$$\big|\big\{x:\ |f(x)-f_n(x)|>(1+\varepsilon)\sqrt{2\rho^{-2l}\log\log\rho^{2l}},\ S_nf(x)<\rho^{-l}\big\}\big|\ \le\ 6\exp\big(-(1+\varepsilon)^2\log\log\rho^{2l}\big)\ =\ \frac{6}{(2l\log\rho)^{(1+\varepsilon)^2}}.$$
Choose $\varepsilon$ so that $(1+\varepsilon)^2=3$, i.e. $1+\varepsilon=\sqrt3$. Then
$$\big|\big\{x:\ |f(x)-f_n(x)|>\sqrt3\sqrt{2\rho^{-2l}\log\log\rho^{2l}},\ S_nf(x)<\rho^{-l}\big\}\big|\ \le\ \frac{6}{(2l\log\rho)^3}\ =\ \frac{C}{l^3},\quad\text{say}.$$
Let $g(x)=x\sqrt{2\log\log(1/x^2)}$. Clearly $g$ is increasing for small $x>0$. So, decomposing according to the size of $S_{n_k(x)}f(x)$ (which is below $\rho^{-k}$ by the definition of $n_k$), and applying the estimate above for each $l\ge k$,
$$\Big\{x:\ |f(x)-f_{n_k(x)}(x)|>\sqrt3\sqrt{2\,S_{n_k(x)}f(x)^2\log\log\big(1/S_{n_k(x)}f(x)^2\big)}\Big\}\ \subseteq\ \bigcup_{l=k}^\infty\Big\{x:\ |f(x)-f_{n_k(x)}(x)|>\sqrt3\,g\big(\rho^{-(l+1)}\big),\ S_{n_k(x)}f(x)<\rho^{-l}\Big\},$$
and each set in the union has measure at most $C/l^3$. Clearly,
$$\sum_{l=k+1}^\infty\frac{C}{l^3}\ \le\ C\int_k^\infty\frac{dx}{x^3}\ =\ C\Big[\frac{-1}{2x^2}\Big]_k^\infty\ =\ \frac{C}{2k^2}.$$
Thus
$$\Big|\Big\{x:\ |f(x)-f_{n_k(x)}(x)|>\sqrt3\sqrt{2\,S_{n_k(x)}f(x)^2\log\log\big(1/S_{n_k(x)}f(x)^2\big)}\Big\}\Big|\ \le\ \frac{C}{2k^2}.$$
This can be done for every $n_k(x)$. So, summing over all $k$, we have
$$\sum_{k=1}^\infty\Big|\Big\{x:\ |f(x)-f_{n_k(x)}(x)|>\sqrt3\sqrt{2\,S_{n_k(x)}f(x)^2\log\log\big(1/S_{n_k(x)}f(x)^2\big)}\Big\}\Big|\ \le\ \sum_{k=1}^\infty\frac{C}{2k^2}\ <\ \infty.$$

So, by the Borel–Cantelli lemma, for a.e. $x$ there exists $M$, depending on $x$, such that for every $k\ge M$,
$$|f(x)-f_{n_k(x)}(x)|\ \le\ \sqrt3\,\sqrt{2\,S_{n_k(x)}f(x)^2\,\log\log\big(1/S_{n_k(x)}f(x)^2\big)}.$$
We note that as $n\to\infty$, so does $k$. Letting $\rho\to1$, we get for a.e. $x$,
$$\limsup_{k\to\infty}\frac{f(x)-f_{n_k(x)}(x)}{\sqrt{2\,S_{n_k(x)}f(x)^2\,\big[\log\log\big(1/S_{n_k(x)}f(x)^2\big)\big]}}\ \le\ \sqrt3.$$

Remark 4. This holds along every such sequence of stopping times. But we cannot estimate the behavior of the limsup in between any two stopping times, as two consecutive stopping times might be significantly far apart.

Next, let $f$ be an integrable function such that $f'$ is continuous and $0<m\le f'(x)\le M<\infty$ for every $x$. Let us define $f_n(x)=E(f\mid\mathcal F_n)(x)$, where $\mathcal F_n$ is the $\sigma$-algebra generated by the dyadic intervals of length $2^{-n}$ on $[0,1)$. Clearly

the sequence $\{f_n\}$ defines a dyadic martingale. Let $Q_n$ denote the dyadic interval of length $2^{-n}$ containing $x$, and let $Q_n'$ be its sibling, so that $Q_{n-1}=Q_n\cup Q_n'$. Then
$$d_n(x)=f_n(x)-f_{n-1}(x)=\frac{1}{|Q_n|}\int_{Q_n}f(y)\,dy-\frac{1}{|Q_{n-1}|}\int_{Q_{n-1}}f(y)\,dy=\frac{1}{|Q_n|}\int_{Q_n}f(y)\,dy-\frac{1}{2|Q_n|}\Big(\int_{Q_n}f(y)\,dy+\int_{Q_n'}f(y)\,dy\Big)=\frac{1}{2|Q_n|}\Big(\int_{Q_n}f(y)\,dy-\int_{Q_n'}f(y)\,dy\Big).$$
When $Q_n$ is the left half of $Q_{n-1}$, the substitution $y=u+|Q_n|$ turns the second integral into an integral over $Q_n$, and we get
$$d_n(x)=\frac{1}{2|Q_n|}\int_{Q_n}\big[f(y)-f(y+|Q_n|)\big]\,dy.$$
Now, by the mean value theorem, there exists $z$ such that $f(y)-f(y+|Q_n|)=-f'(z)\,|Q_n|$. Since $m\le f'(z)\le M$, we have
$$|d_n(x)|\ \le\ \frac{1}{2|Q_n|}\int_{Q_n}M\,|Q_n|\,dy\ =\ \frac{M|Q_n|}{2}\ =\ \frac{M}{2^{n+1}},$$
and, using $f'(z)\ge m$,
$$|d_n(x)|\ \ge\ \frac{m}{2^{n+1}}.$$
(The same bounds hold, with the opposite sign of $d_n$, when $Q_n$ is the right half of $Q_{n-1}$.)

Now
$$|f(x)-f_n(x)|\ =\ \Big|\sum_{k=n+1}^\infty d_k(x)\Big|\ \le\ \sum_{k=n+1}^\infty|d_k(x)|\ \le\ \sum_{k=n+1}^\infty\frac{M}{2^{k+1}}\ =\ \frac{M}{2^{n+1}},$$
while
$$S_nf(x)^2\ =\ \sum_{k=n+1}^\infty d_k(x)^2\ \ge\ \sum_{k=n+1}^\infty\frac{m^2}{4^{k+1}}\ =\ \frac{m^2}{3\cdot4^{n+1}},\qquad\text{i.e.}\qquad S_nf(x)\ \ge\ \frac{m}{\sqrt3\cdot2^{n+1}}.$$
Since $g(x)=x\sqrt{2\log\log(1/x^2)}$ is an increasing function for small $x$, we get
$$\sqrt{2\,S_nf(x)^2\log\log\big(1/S_nf(x)^2\big)}\ \ge\ \frac{m}{\sqrt3\cdot2^{n+1}}\sqrt{2\log\log\frac{3\cdot4^{n+1}}{m^2}}.$$
Hence
$$\limsup_{n\to\infty}\frac{|f(x)-f_n(x)|}{\sqrt{2\,S_nf(x)^2\log\log\big(1/S_nf(x)^2\big)}}\ \le\ \limsup_{n\to\infty}\frac{M/2^{n+1}}{\dfrac{m}{\sqrt3\cdot2^{n+1}}\sqrt{2\log\log\dfrac{3\cdot4^{n+1}}{m^2}}}\ =\ \limsup_{n\to\infty}\frac{\sqrt3\,M}{m\sqrt{2\log\log\dfrac{3\cdot4^{n+1}}{m^2}}}\ =\ 0.$$
This shows that there is no need to use our theorem to find the limit of the quotient for functions with continuous, bounded derivatives, as the limsup is trivially $0$ in this case. On the other hand, for functions $f(x)=\sum_{n=1}^\infty a_nr_n(x)$, where $\{r_n\}$ is the sequence of Rademacher functions, the limsup in the law of the iterated logarithm is nontrivial. We therefore look for functions of Lip $\alpha$ type, which is slightly more general than differentiable functions.
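The two-sided bound $m/2^{n+1}\le|d_n(x)|\le M/2^{n+1}$ can be verified directly for a concrete smooth function. This sketch is an illustration only: it uses the hypothetical choice $f(x)=x^2/2+x$, so that $f'(x)=x+1$ and $m=1$, $M=2$ on $[0,1)$, with the conditional expectations computed exactly from the antiderivative.

```python
# f(x) = x^2/2 + x on [0,1): f'(x) = x + 1, so m = 1, M = 2.
def F(x):
    # antiderivative of f
    return x**3 / 6 + x**2 / 2

def cond_exp(x, n):
    # E(f | F_n)(x): average of f over the dyadic interval of length 2^-n containing x
    j = int(x * 2**n)
    a, b = j / 2**n, (j + 1) / 2**n
    return (F(b) - F(a)) * 2**n

x, m, M = 0.3, 1.0, 2.0
for n in range(1, 21):
    d_n = cond_exp(x, n) - cond_exp(x, n - 1)
    assert m / 2**(n + 1) <= abs(d_n) <= M / 2**(n + 1)
print("bounds hold for n = 1..20")
```

For instance, $d_1(0.3)=-3/8$, which indeed lies between $m/4=0.25$ and $M/4=0.5$ in absolute value.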

Next, we show that the limsup is nontrivial for functions in Lip $\alpha$ with $\frac12<\alpha<1$. For this we consider $f(x)=\sum_{n=1}^\infty a_n\sin(2^n\pi x)$. Clearly the series satisfies the gap condition, so it is a lacunary series; if $\sum a_n^2<\infty$, it converges a.e. We choose the $a_n$ such that $f\in\text{Lip}\,\alpha$. In order to choose the $a_n$, we recall a theorem from []:

Theorem 31. For a function $f$ whose Fourier series is lacunary to belong to the class Lip $\alpha$ ($0<\alpha<1$), it is necessary and sufficient that its Fourier coefficients be of order $O(n^{-\alpha})$ in the frequency $n$.

We choose $a_k=2^{-k\alpha}$, so that
$$f(x)=\sum_{k=1}^\infty\frac{\sin(2^k\pi x)}{2^{k\alpha}}\ \in\ \text{Lip}\,\alpha.$$
Let us construct martingales from the given function as follows:
$$f_n(x)=E(f\mid\mathcal F_n)(x)=\frac{1}{|Q_n|}\int_{Q_n}\sum_{k=1}^\infty\frac{\sin(2^k\pi y)}{2^{k\alpha}}\,dy=\sum_{k=1}^n\frac{1}{2^{k\alpha}}\,\frac{1}{|Q_n|}\int_{Q_n}\sin(2^k\pi y)\,dy+0,$$
since for $k>n$ the interval $Q_n$ contains a whole number of periods of $\sin(2^k\pi y)$, so those integrals vanish.

We now estimate $|f(x)-f_n(x)|$. Writing
$$f(x)-f_n(x)=\sum_{k=n+1}^\infty\frac{\sin(2^k\pi x)}{2^{k\alpha}}+\sum_{k=1}^n\frac{1}{2^{k\alpha}}\,\frac{1}{|Q_n|}\int_{Q_n}\big[\sin(2^k\pi x)-\sin(2^k\pi y)\big]\,dy,$$
the mean value theorem gives, for some $c$ between $x$ and $y$,
$$\sin(2^k\pi x)-\sin(2^k\pi y)=2^k\pi(x-y)\cos(2^k\pi c),$$
and since $x,y\in Q_n$ implies $|x-y|\le|Q_n|=2^{-n}$,
$$\Big|\frac{1}{|Q_n|}\int_{Q_n}\big[\sin(2^k\pi x)-\sin(2^k\pi y)\big]\,dy\Big|\ \le\ 2^k\pi\,2^{-n}.$$
Therefore
$$|f(x)-f_n(x)|\ \le\ \sum_{k=n+1}^\infty\frac{1}{2^{k\alpha}}+\pi\,2^{-n}\sum_{k=1}^n2^{k(1-\alpha)}\ \le\ \frac{2^{-n\alpha}}{2^\alpha-1}+\frac{\pi\,2^{1-\alpha}}{2^{1-\alpha}-1}\,2^{-n\alpha}.$$

Carrying out this computation, we conclude
$$|f(x)-f_n(x)|\ \le\ C_\alpha\,2^{-n\alpha}$$
for a constant $C_\alpha$ depending only on $\alpha$.

Next, we show that for some constant the law of the iterated logarithm is nontrivial along the martingale $\{f_n\}$. First we bound the martingale differences: exactly as in the previous estimate, using the mean value theorem on each term with frequency $k\le m$ (the terms with $k>m$ average to zero over dyadic intervals of length $2^{-m}$ and $2^{-(m-1)}$),
$$|d_m(x)|\ \le\ \frac{1}{2}\sum_{k=1}^m\frac{1}{2^{k\alpha}}\,2^k\pi\,2^{-m}\ \le\ \frac{\pi\,2^{1-\alpha}}{2^{1-\alpha}-1}\,2^{-m\alpha}\ =\ C_\alpha\,2^{-m\alpha},\quad\text{say}.$$
Consequently, the tail sums of the square function satisfy
$$S_Nf(x)^2\ =\ \sum_{m=N+1}^\infty d_m(x)^2\ \le\ \sum_{m=N+1}^\infty C_\alpha^2\,4^{-m\alpha}\ =\ \frac{C_\alpha^2}{4^\alpha-1}\,4^{-N\alpha}.$$
Now write
$$f(x)-f_N(x)=\sum_{k=N+1}^\infty\frac{\sin(2^k\pi x)}{2^{k\alpha}}+R_N(x),\qquad\text{where }|R_N(x)|\le C_\alpha\,2^{-N\alpha}.$$
Since $g(x)=x\sqrt{2\log\log(1/x^2)}$ is increasing, the upper bound on $S_Nf(x)$ gives
$$\sqrt{2\,S_Nf(x)^2\log\log\big(1/S_Nf(x)^2\big)}\ \le\ \sqrt{\frac{2C_\alpha^2\,4^{-N\alpha}}{4^\alpha-1}\,\log\log\Big(\frac{4^\alpha-1}{C_\alpha^2}\,4^{N\alpha}\Big)}.$$
Therefore
$$\limsup_{N\to\infty}\frac{f(x)-f_N(x)}{\sqrt{2\,S_Nf(x)^2\log\log\big(1/S_Nf(x)^2\big)}}\ \ge\ \limsup_{N\to\infty}\frac{\displaystyle\sum_{k=N+1}^\infty\frac{\sin(2^k\pi x)}{2^{k\alpha}}}{\sqrt{\dfrac{2C_\alpha^2\,4^{-N\alpha}}{4^\alpha-1}\log\log\Big(\dfrac{4^\alpha-1}{C_\alpha^2}4^{N\alpha}\Big)}}\ -\ \limsup_{N\to\infty}\frac{C_\alpha\,2^{-N\alpha}}{\sqrt{\dfrac{2C_\alpha^2\,4^{-N\alpha}}{4^\alpha-1}\log\log\Big(\dfrac{4^\alpha-1}{C_\alpha^2}4^{N\alpha}\Big)}}.$$
The second term equals
$$\limsup_{N\to\infty}\sqrt{\frac{4^\alpha-1}{2\log\log\Big(\dfrac{4^\alpha-1}{C_\alpha^2}4^{N\alpha}\Big)}}\ =\ 0,$$
since the iterated logarithm tends to infinity with $N$. For the first term, the tail $\sum_{k>N}2^{-k\alpha}\sin(2^k\pi x)$ is itself a lacunary series whose coefficient square sum is comparable to $4^{-N\alpha}$, and the tail law of the iterated logarithm for lacunary trigonometric series shows that, for a.e. $x$, the numerator is, infinitely often, at least a fixed multiple of $\sqrt{4^{-N\alpha}\log\log4^{N\alpha}}$. Hence for a.e. $x$,
$$\limsup_{N\to\infty}\frac{f(x)-f_N(x)}{\sqrt{2\,S_Nf(x)^2\,\log\log\big(1/S_Nf(x)^2\big)}}\ \ge\ \frac{1}{C_\alpha'}\ >\ 0$$
for some constant $C_\alpha'$. This shows that there are functions in Lip $\alpha$ with $\frac12<\alpha<1$ for which the law of the iterated logarithm is nontrivial. We note that the LIL is trivial for functions with continuous, bounded derivatives, and we have just shown that it is nontrivial for certain functions in Lip $\alpha$ with $\frac12<\alpha<1$; since a function in Lip $\alpha$ with $\alpha=1$ is differentiable a.e., the gap between the two situations is very narrow. Next, we show that the tail law of the iterated logarithm is not true in general, with the help of an example.
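For the coefficients $a_k=2^{-k\alpha}$ used above, the tail of the coefficient square sum has the closed form $\sum_{k>N}4^{-\alpha k}=4^{-\alpha N}/(4^\alpha-1)$, so both $|f-f_N|$ and $S_Nf$ decay like constants times $2^{-N\alpha}$, which is why the limsup can stay positive. A quick numerical confirmation, with the arbitrary illustrative choice $\alpha=3/4$:

```python
# Tail of the square sum for a_k = 2^(-k*alpha):
#   sum_{k>N} 4^(-alpha k) = 4^(-alpha N) / (4^alpha - 1).
alpha = 0.75
for N in (5, 10, 20):
    tail = sum(4.0 ** (-alpha * k) for k in range(N + 1, 400))
    closed = 4.0 ** (-alpha * N) / (4.0 ** alpha - 1)
    print(N, tail, closed)
    assert abs(tail - closed) < 1e-12
```

Any $\alpha\in(\tfrac12,1)$ gives the same geometric decay; only the constant $1/(4^\alpha-1)$ changes.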


More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

Chapter 1. Sets and probability. 1.3 Probability space

Chapter 1. Sets and probability. 1.3 Probability space Random processes - Chapter 1. Sets and probability 1 Random processes Chapter 1. Sets and probability 1.3 Probability space 1.3 Probability space Random processes - Chapter 1. Sets and probability 2 Probability

More information

Lecture Notes 3 Convergence (Chapter 5)

Lecture Notes 3 Convergence (Chapter 5) Lecture Notes 3 Convergence (Chapter 5) 1 Convergence of Random Variables Let X 1, X 2,... be a sequence of random variables and let X be another random variable. Let F n denote the cdf of X n and let

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

Problem Sheet 1. You may assume that both F and F are σ-fields. (a) Show that F F is not a σ-field. (b) Let X : Ω R be defined by 1 if n = 1

Problem Sheet 1. You may assume that both F and F are σ-fields. (a) Show that F F is not a σ-field. (b) Let X : Ω R be defined by 1 if n = 1 Problem Sheet 1 1. Let Ω = {1, 2, 3}. Let F = {, {1}, {2, 3}, {1, 2, 3}}, F = {, {2}, {1, 3}, {1, 2, 3}}. You may assume that both F and F are σ-fields. (a) Show that F F is not a σ-field. (b) Let X :

More information

13. Examples of measure-preserving tranformations: rotations of a torus, the doubling map

13. Examples of measure-preserving tranformations: rotations of a torus, the doubling map 3. Examples of measure-preserving tranformations: rotations of a torus, the doubling map 3. Rotations of a torus, the doubling map In this lecture we give two methods by which one can show that a given

More information

TEST CODE: MIII (Objective type) 2010 SYLLABUS

TEST CODE: MIII (Objective type) 2010 SYLLABUS TEST CODE: MIII (Objective type) 200 SYLLABUS Algebra Permutations and combinations. Binomial theorem. Theory of equations. Inequalities. Complex numbers and De Moivre s theorem. Elementary set theory.

More information

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A ) 6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined

More information

S. Mrówka introduced a topological space ψ whose underlying set is the. natural numbers together with an infinite maximal almost disjoint family(madf)

S. Mrówka introduced a topological space ψ whose underlying set is the. natural numbers together with an infinite maximal almost disjoint family(madf) PAYNE, CATHERINE ANN, M.A. On ψ (κ, M) spaces with κ = ω 1. (2010) Directed by Dr. Jerry Vaughan. 30pp. S. Mrówka introduced a topological space ψ whose underlying set is the natural numbers together with

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space 1 Professor Carl Cowen Math 54600 Fall 09 PROBLEMS 1. (Geometry in Inner Product Spaces) (a) (Parallelogram Law) Show that in any inner product space x + y 2 + x y 2 = 2( x 2 + y 2 ). (b) (Polarization

More information

Exercise Exercise Homework #6 Solutions Thursday 6 April 2006

Exercise Exercise Homework #6 Solutions Thursday 6 April 2006 Unless otherwise stated, for the remainder of the solutions, define F m = σy 0,..., Y m We will show EY m = EY 0 using induction. m = 0 is obviously true. For base case m = : EY = EEY Y 0 = EY 0. Now assume

More information

1 Sequences of events and their limits

1 Sequences of events and their limits O.H. Probability II (MATH 2647 M15 1 Sequences of events and their limits 1.1 Monotone sequences of events Sequences of events arise naturally when a probabilistic experiment is repeated many times. For

More information

Lecture 7. 1 Notations. Tel Aviv University Spring 2011

Lecture 7. 1 Notations. Tel Aviv University Spring 2011 Random Walks and Brownian Motion Tel Aviv University Spring 2011 Lecture date: Apr 11, 2011 Lecture 7 Instructor: Ron Peled Scribe: Yoav Ram The following lecture (and the next one) will be an introduction

More information

The extreme points of symmetric norms on R^2

The extreme points of symmetric norms on R^2 Graduate Theses and Dissertations Iowa State University Capstones, Theses and Dissertations 2008 The extreme points of symmetric norms on R^2 Anchalee Khemphet Iowa State University Follow this and additional

More information

Measure and integration

Measure and integration Chapter 5 Measure and integration In calculus you have learned how to calculate the size of different kinds of sets: the length of a curve, the area of a region or a surface, the volume or mass of a solid.

More information

Remarks on the Rademacher-Menshov Theorem

Remarks on the Rademacher-Menshov Theorem Remarks on the Rademacher-Menshov Theorem Christopher Meaney Abstract We describe Salem s proof of the Rademacher-Menshov Theorem, which shows that one constant works for all orthogonal expansions in all

More information

An Analysis of Katsuura s Continuous Nowhere Differentiable Function

An Analysis of Katsuura s Continuous Nowhere Differentiable Function An Analysis of Katsuura s Continuous Nowhere Differentiable Function Thomas M. Lewis Department of Mathematics Furman University tom.lewis@furman.edu Copyright c 2005 by Thomas M. Lewis October 14, 2005

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

The Hilbert Transform and Fine Continuity

The Hilbert Transform and Fine Continuity Irish Math. Soc. Bulletin 58 (2006), 8 9 8 The Hilbert Transform and Fine Continuity J. B. TWOMEY Abstract. It is shown that the Hilbert transform of a function having bounded variation in a finite interval

More information

Lecture 1: Overview of percolation and foundational results from probability theory 30th July, 2nd August and 6th August 2007

Lecture 1: Overview of percolation and foundational results from probability theory 30th July, 2nd August and 6th August 2007 CSL866: Percolation and Random Graphs IIT Delhi Arzad Kherani Scribe: Amitabha Bagchi Lecture 1: Overview of percolation and foundational results from probability theory 30th July, 2nd August and 6th August

More information

Inference for Stochastic Processes

Inference for Stochastic Processes Inference for Stochastic Processes Robert L. Wolpert Revised: June 19, 005 Introduction A stochastic process is a family {X t } of real-valued random variables, all defined on the same probability space

More information

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures 36-752 Spring 2014 Advanced Probability Overview Lecture Notes Set 1: Course Overview, σ-fields, and Measures Instructor: Jing Lei Associated reading: Sec 1.1-1.4 of Ash and Doléans-Dade; Sec 1.1 and A.1

More information

On the definition and properties of p-harmonious functions

On the definition and properties of p-harmonious functions On the definition and properties of p-harmonious functions University of Pittsburgh, UBA, UAM Workshop on New Connections Between Differential and Random Turn Games, PDE s and Image Processing Pacific

More information

Singular Integrals. 1 Calderon-Zygmund decomposition

Singular Integrals. 1 Calderon-Zygmund decomposition Singular Integrals Analysis III Calderon-Zygmund decomposition Let f be an integrable function f dx 0, f = g + b with g Cα almost everywhere, with b

More information

A List of Problems in Real Analysis

A List of Problems in Real Analysis A List of Problems in Real Analysis W.Yessen & T.Ma December 3, 218 This document was first created by Will Yessen, who was a graduate student at UCI. Timmy Ma, who was also a graduate student at UCI,

More information

I. ANALYSIS; PROBABILITY

I. ANALYSIS; PROBABILITY ma414l1.tex Lecture 1. 12.1.2012 I. NLYSIS; PROBBILITY 1. Lebesgue Measure and Integral We recall Lebesgue measure (M411 Probability and Measure) λ: defined on intervals (a, b] by λ((a, b]) := b a (so

More information

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018 Math 456: Mathematical Modeling Tuesday, March 6th, 2018 Markov Chains: Exit distributions and the Strong Markov Property Tuesday, March 6th, 2018 Last time 1. Weighted graphs. 2. Existence of stationary

More information

SOLUTIONS TO HOMEWORK ASSIGNMENT 4

SOLUTIONS TO HOMEWORK ASSIGNMENT 4 SOLUTIONS TO HOMEWOK ASSIGNMENT 4 Exercise. A criterion for the image under the Hilbert transform to belong to L Let φ S be given. Show that Hφ L if and only if φx dx = 0. Solution: Suppose first that

More information

Probability and Measure

Probability and Measure Probability and Measure Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Convergence of Random Variables 1. Convergence Concepts 1.1. Convergence of Real

More information

Random Process Lecture 1. Fundamentals of Probability

Random Process Lecture 1. Fundamentals of Probability Random Process Lecture 1. Fundamentals of Probability Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/43 Outline 2/43 1 Syllabus

More information

Mathematical Methods for Physics and Engineering

Mathematical Methods for Physics and Engineering Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory

More information

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME

RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME RANDOM WALKS AND THE PROBABILITY OF RETURNING HOME ELIZABETH G. OMBRELLARO Abstract. This paper is expository in nature. It intuitively explains, using a geometrical and measure theory perspective, why

More information

MATH 117 LECTURE NOTES

MATH 117 LECTURE NOTES MATH 117 LECTURE NOTES XIN ZHOU Abstract. This is the set of lecture notes for Math 117 during Fall quarter of 2017 at UC Santa Barbara. The lectures follow closely the textbook [1]. Contents 1. The set

More information

Chapter 7. Markov chain background. 7.1 Finite state space

Chapter 7. Markov chain background. 7.1 Finite state space Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider

More information

MAKING MONEY FROM FAIR GAMES: EXAMINING THE BOREL-CANTELLI LEMMA

MAKING MONEY FROM FAIR GAMES: EXAMINING THE BOREL-CANTELLI LEMMA MAKING MONEY FROM FAIR GAMES: EXAMINING THE BOREL-CANTELLI LEMMA SAM CANNON Abstract. In this paper we discuss and prove the Borel-Cantelli Lemma. We then show two interesting applications of the Borel-

More information

Prime Number Theory and the Riemann Zeta-Function

Prime Number Theory and the Riemann Zeta-Function 5262589 - Recent Perspectives in Random Matrix Theory and Number Theory Prime Number Theory and the Riemann Zeta-Function D.R. Heath-Brown Primes An integer p N is said to be prime if p and there is no

More information

Sample Spaces, Random Variables

Sample Spaces, Random Variables Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

1.1. MEASURES AND INTEGRALS

1.1. MEASURES AND INTEGRALS CHAPTER 1: MEASURE THEORY In this chapter we define the notion of measure µ on a space, construct integrals on this space, and establish their basic properties under limits. The measure µ(e) will be defined

More information

Useful Probability Theorems

Useful Probability Theorems Useful Probability Theorems Shiu-Tang Li Finished: March 23, 2013 Last updated: November 2, 2013 1 Convergence in distribution Theorem 1.1. TFAE: (i) µ n µ, µ n, µ are probability measures. (ii) F n (x)

More information

Fractals list of fractals by Hausdoff dimension

Fractals list of fractals by Hausdoff dimension from Wiipedia: Fractals list of fractals by Hausdoff dimension Sierpinsi Triangle D Cantor Dust Lorenz attractor Coastline of Great Britain Mandelbrot Set What maes a fractal? I m using references: Fractal

More information

Probability Theory I: Syllabus and Exercise

Probability Theory I: Syllabus and Exercise Probability Theory I: Syllabus and Exercise Narn-Rueih Shieh **Copyright Reserved** This course is suitable for those who have taken Basic Probability; some knowledge of Real Analysis is recommended( will

More information

Advanced Analysis Qualifying Examination Department of Mathematics and Statistics University of Massachusetts. Tuesday, January 16th, 2018

Advanced Analysis Qualifying Examination Department of Mathematics and Statistics University of Massachusetts. Tuesday, January 16th, 2018 NAME: Advanced Analysis Qualifying Examination Department of Mathematics and Statistics University of Massachusetts Tuesday, January 16th, 2018 Instructions 1. This exam consists of eight (8) problems

More information

18.175: Lecture 3 Integration

18.175: Lecture 3 Integration 18.175: Lecture 3 Scott Sheffield MIT Outline Outline Recall definitions Probability space is triple (Ω, F, P) where Ω is sample space, F is set of events (the σ-algebra) and P : F [0, 1] is the probability

More information