Long Time Asymptotics of Ornstein-Uhlenbeck Processes in Poisson Random Media


University of Tennessee, Knoxville
Trace: Tennessee Research and Creative Exchange
Doctoral Dissertations, Graduate School, 8-2013

Long Time Asymptotics of Ornstein-Uhlenbeck Processes in Poisson Random Media
Fei Xing

Recommended Citation: Xing, Fei, "Long Time Asymptotics of Ornstein-Uhlenbeck Processes in Poisson Random Media." PhD diss., University of Tennessee, 2013.

This Dissertation is brought to you for free and open access by the Graduate School at Trace: Tennessee Research and Creative Exchange. It has been accepted for inclusion in Doctoral Dissertations by an authorized administrator of Trace: Tennessee Research and Creative Exchange. For more information, please contact

To the Graduate Council: I am submitting herewith a dissertation written by Fei Xing entitled "Long Time Asymptotics of Ornstein-Uhlenbeck Processes in Poisson Random Media." I have examined the final electronic copy of this dissertation for form and content and recommend that it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, with a major in Mathematics. We have read this dissertation and recommend its acceptance: Jan Rosinski, Vasileios Maroulas, Frank Guess. (Original signatures are on file with official student records.) Xia Chen, Major Professor. Accepted for the Council: Dixie L. Thompson, Vice Provost and Dean of the Graduate School.

Long Time Asymptotics of Ornstein-Uhlenbeck Processes in Poisson Random Media

A Dissertation Presented for the Doctor of Philosophy Degree
The University of Tennessee, Knoxville

Fei Xing
August 2013

© by Fei Xing, 2013. All Rights Reserved.

5 To my beloved parents Guangyou Xing and Yanfei Ding, for their endless understanding and encouragement. iii

Acknowledgements

This dissertation would not have been possible without the help of many people in many ways. I would like to express my deepest appreciation to my advisor Dr. Xia Chen for his continuous support and understanding. He is always there to give me valuable comments and lead me in the right direction in research. I would also like to thank Dr. Jan Rosinski and Dr. Vasileios Maroulas for the many insightful discussions regarding my research, their valuable career advice, and their suggestions on my personal plans. I also thank Dr. Frank Guess for his supportive advice on applying stochastic models to real-life projects. Moreover, I would like to thank my colleagues and friends Ernest Jum, Kai Kang and Liguo Wang, whose fruitful conversations always gave me ideas and solutions for my research. Finally, my sincere gratitude goes to my dear wife Yang Shen for her constant support over the past few years. Without her, life would not have been this colorful.

7 Learning without thought is labor lost; thought without learning is perilous. Confucius v

Abstract

Models of Random Motions in Random Media (RMRM) have found fruitful applications in various scientific areas such as polymer physics, statistical mechanics and oceanography. In this dissertation, we consider a special model of RMRM, the Ornstein-Uhlenbeck process in a Poisson random medium, and investigate the long time evolution of its random energy. We give complete answers to the long time asymptotics of the exponential moments of the random energy with both positive and negative coefficients, under both the quenched and the annealed regimes. Through these results, we find a dramatic difference between the long time behavior of the Brownian motion dynamics and the Ornstein-Uhlenbeck dynamics in the Poisson random medium.

Table of Contents

1 Introduction
2 Model Set Up and the Main Results
  2.1 Notation and Basic Definitions
  2.2 Random Motions: Ornstein-Uhlenbeck Processes
  2.3 Random Media: Poisson Random Media
  2.4 Main Theorems: Long Time Asymptotics
3 Spectral Structures of the Ornstein-Uhlenbeck Semigroup
  3.1 Function Space Notations
  3.2 Spectral Structures of the Ornstein-Uhlenbeck Semigroup
4 Long Time Asymptotics: Quenched Regime
  4.1 Variational Formulas for the Rates
    4.1.1 Exponential Moments with Positive Coefficients
    4.1.2 Exponential Moments with Negative Coefficients
  4.2 Analysis of λ_1 and λ_2
5 Long Time Asymptotics: Annealed Regime
  5.1 Exponential Moments with Positive Coefficients
  5.2 Exponential Moments with Negative Coefficients
    5.2.1 Lower Bound
    5.2.2 Upper Bound
6 Future Research
Bibliography
A Appendix
  A.1 Poisson Integrals
  A.2 Poincaré Inequality for the Normal Distribution
  A.3 Basic Properties of the Poisson Potential
  A.4 W^{1,2}(R^d, μ)
  A.5 Proof of Lemma 4.1.2
  A.6 Spectral Representation of Self-Adjoint Operators
Vita

List of Figures

2.1 Simulation of a Poisson point process (left) and the corresponding Poisson medium V (right).
2.2 x = t, y = log u^ω_+(0, t). Averaging 5 O-U samples in 3 realizations of the Poisson media.

Chapter 1

Introduction

Concerns of Random Motions in Random Media (RMRM) arise when researchers try to understand the interaction between the evolution of a random particle movement and the random environment in which it stays. RMRM has been one of the most active research fields in probability theory over the past few decades, with many applications to areas such as astrophysics, oceanography, chemical reactions, statistical mechanics and partial differential equations (PDE). We refer the readers to [8, 9, 2] for background, motivation, applications and fundamental results.

The general model of RMRM is formulated as follows. Let (X(t, ϖ))_{t ∈ R_+} be a stochastic process representing the evolution of some random movement or curve growth over time. For instance, one can treat X(t, ϖ) as the location of a particle with random movement realization ϖ at time t, or view (X(s, ϖ))_{0 ≤ s ≤ t} as the shape of a random polymer chain up to time t, in the d-dimensional Euclidean space. On the other hand, independent of the law of (X(t, ϖ))_{t ∈ R_+}, the space is filled with a random medium (V(x, ω))_{x ∈ R^d}, where the value of V(x, ω) can be interpreted in different ways, ranging from a reward function to a potential function, at each position x for every random realization ω. (Due to the broad applications of RMRM in polymer physics, random media are sometimes also called random potentials in the literature.) With this set up,

∫_0^t V(X(s, ϖ), ω) ds

quantifies the total energy accumulated by the particle from the starting time 0 up to time t. Notice that, due to the two systems of randomness in the construction of a RMRM (i.e. the particle movements and the media), there are two different regimes in which these models can be studied. Studies of the random energy given the medium ω are called the quenched regime. The annealed regime, on the other hand, is obtained by averaging the quenched objects over all possible random media.

Throughout this dissertation, denote by P and E the law and expectation of the random medium, respectively. Similarly, denote by P_x and E_x the law and expectation of X(t, ϖ) starting at position x, respectively. Without causing confusion, we write X(t) := X(t, ϖ) for simplicity in the rest of the dissertation. The following exponential moments are of great interest due to their strong connections with various fields, such as PDE in mathematics, the survival probability of polymer chains in polymer physics, and random Gibbs measures in statistical physics:

u^ω_±(t, x) := E_x exp{ ± ∫_0^t V(X(s), ω) ds }   (Quenched)

U_±(t, x) := E ⊗ E_x exp{ ± ∫_0^t V(X(s), ω) ds }   (Annealed)

Throughout this paper, we call u^ω_+, u^ω_-, U_+, U_- the quenched exponential moment with positive coefficient, the quenched exponential moment with negative coefficient, the annealed exponential moment with positive coefficient, and the annealed exponential moment with negative coefficient, respectively.

Research on the long time asymptotics of these exponential moments has become very active in the past few decades. To make the idea of long time asymptotics clearer, let us take the quenched exponential moment with positive coefficient as an example. The problem can be formulated as follows: we look for a suitable long time growth rate a(t) and a corresponding constant λ (finite, nonzero, and deterministic or almost surely constant) such that

lim_{t→∞} (1/a(t)) log E_x exp{ ∫_0^t V(X(s), ω) ds } = λ.

As an example of RMRM, the models of Brownian motion (BM) in homogeneous Poisson random media have been studied extensively in the literature due to their applications in a wide range of scientific areas, such as the random polymer model in chemistry [8] and the parabolic Anderson model in physics [5]. The research on the long time asymptotics of Brownian motion in Poisson random media can be traced back to the 1970s. In their seminal paper in 1975, Donsker and Varadhan [9] discovered that the long time asymptotics of the annealed exponential moment U_- for BM in Poisson random media exhibits a decay rate a(t) = t^{d/(d+2)}, using their groundbreaking large deviation theory. The development of the quenched regime for BM in Poisson random media appeared much later, with big breakthroughs in the 1990s. Carmona and Molchanov [3] studied the long time asymptotics of u^ω_+ for BM in Poisson random media and obtained the growth rate a(t) = t log t / log log t. Around the same time, Sznitman [27] proved that a(t) = t / (log t)^{2/d} is the correct rate for u^ω_- for BM in a Poisson potential, using his powerful method of enlargement of obstacles. Since then, there have been many advances in this area. To mention a few: Gärtner et al. [6] obtained the almost sure second order long time asymptotics of the exponential moment with positive coefficient for BM in certain Poisson media. Most recently, Chen [7] investigated BM in a Poisson medium of the gravitational field type and obtained long time asymptotics of renormalized exponential moments for both the positive and the negative coefficient cases.

Brownian motions, as the continuous analogues of simple random walks, have very strong diffusive behavior. However, various real world random dynamics which are influenced by certain known factors, such as friction and mean-reverting effects, often present non-diffusive or even stationary phenomena. One such dynamic is the Ornstein-Uhlenbeck (O-U) process, which was first introduced by Leonard Ornstein and George Eugene Uhlenbeck in the 1930s to describe the velocity of a massive Brownian particle under the influence of friction [29]. Since then O-U processes have been found to have many applications in a wide range of areas, such as noisy relaxation processes and Langevin equations in physics; interest rates, currency exchange rates, and commodity prices in financial mathematics; and models for the peptide bond angle of molecules in biochemistry. See for instance [4, 8, 26] for introductions and applications. One way to formulate the O-U dynamics is through the following stochastic differential equation:

dX(t) = −X(t) dt + dW(t).   (1.1)

It is a classical result [25] that the O-U dynamics is an ergodic Markov process with a normally distributed invariant distribution. Notice that, due to the pull-back effect of the −X(t) dt term in (1.1), the O-U process tends to stay near its equilibrium position, which is quite different from the behavior of the BM dynamics.

Motivated by the crucial roles of O-U processes in various areas of real-world applications, as well as their different dynamical behavior from BM, we are interested in investigating a new type of RMRM model: the O-U process in a homogeneous Poisson potential. In particular, we ask the following question: Are there differences between the long time asymptotic behaviors of O-U processes and BM in a Poisson random medium? The goal of the present work is to give a complete answer to the long time asymptotics of exponential moments with both positive and negative coefficients for O-U processes in homogeneous Poisson random media, under both the quenched and the annealed regimes. The results in this work provide a better understanding of the interaction between the O-U dynamics and the Poisson random media, which is potentially fruitful in statistical physics, finance and biochemistry.

Organization of the Dissertation

The rest of the paper is organized as follows. In Chapter 2 we describe the O-U process in Poisson potential model in detail, present the main results of the dissertation, and compare the results with the counterparts for Brownian motion studied in [3, 9, 27]. Chapter 3 characterizes the spectral structure of the O-U semigroup, which will be used extensively in later chapters for the proofs of the asymptotic results. In Chapter 4 we consider the quenched regime and give the proof of the corresponding long time asymptotics. In Chapter 5 we provide the proof of the long time asymptotics for our model under the annealed regime. Chapter 6 discusses several possible avenues for future work on this topic. Some mathematical background as well as proofs of several technical lemmas are included in Appendix A.
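The contrast drawn above between the diffusive BM and the mean-reverting O-U dynamics can be seen in a few lines of simulation. The sketch below is only an illustration and is not part of the dissertation; the step size, horizon and starting point are arbitrary choices. It integrates (1.1) by the Euler-Maruyama scheme and, alongside it, the Brownian motion built from the same increments.

```python
import numpy as np

def ou_vs_bm(T=20.0, dt=1e-3, x0=3.0, seed=0):
    """Euler-Maruyama for dX = -X dt + dW next to the driving Brownian motion."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    dW = rng.normal(scale=np.sqrt(dt), size=n)
    X = np.empty(n + 1); B = np.empty(n + 1)
    X[0] = B[0] = x0
    for k in range(n):
        X[k + 1] = X[k] - X[k] * dt + dW[k]   # pull-back toward the equilibrium at 0
        B[k + 1] = B[k] + dW[k]               # no pull-back: purely diffusive
    return X, B

X, B = ou_vs_bm()
# The O-U path fluctuates around 0 with variance about 1/2; the BM path wanders away.
print(X[-5000:].mean(), X[-5000:].var(), "  vs  ", B[-1])
```

This confinement of the O-U path is exactly what makes the localization strategies used for BM in Poisson media (discussed in Chapter 2) too costly for the O-U process.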

Chapter 2

Model Set Up and the Main Results

In this chapter we set up the model and then present the main theorems of the dissertation. We first list the notation and basic definitions which will be used throughout the paper in Section 2.1. Section 2.2 introduces the Ornstein-Uhlenbeck process and its related properties that will be used in later proofs. We define the Poisson potential which serves as the random medium in our model, and give a path description of the random potential, in Section 2.3. Equipped with these, we state the main results of the dissertation and compare them with the case of Brownian motion in Section 2.4. Throughout the dissertation, we consider the model on R^d with d ≥ 1.

2.1 Notation and Basic Definitions

Throughout the dissertation, p(x, y, t) denotes the transition probability density of a Markov process X from position x at time 0 to position y at time t > 0.

Z_+ is the set of all positive integers.

P and E stand for the probability law and the corresponding expectation of the random medium, respectively. Similarly, P_x and E_x stand for the probability law and the corresponding expectation of the random motion starting at position x, respectively.

We use ω_d to denote the volume of the unit d-dimensional ball.

"i.o." is short for "infinitely often" and "a.s." is short for "almost surely".

Denote the domain of an operator A by D(A).

B(x, R) is the ball centered at x of radius R. B(R^d) is the collection of all Borel sets on R^d.

supp(K) := the closure of {x : K(x) ≠ 0} is called the support of a function K.

We denote the first exit time of a stochastic process X(t) from the inside of a ball of radius R by τ_R, that is, τ_R = inf{t ≥ 0 : X(t) ∉ B(0, R)}.

R_+ denotes the set of all non-negative real numbers.

2.2 Random Motions: Ornstein-Uhlenbeck Processes

We model the random motion by a d-dimensional Ornstein-Uhlenbeck process (X(t))_{t ∈ R_+} = (X_1(t), ..., X_d(t)) which satisfies the following stochastic differential equation:

dX(t) = −X(t) dt + dW(t)   (2.1)

with X(0) = x, where W(t) is a d-dimensional Brownian motion, each marginal of which is a one-dimensional standard Brownian motion. Under this setting, it is well known (see [25]) that X is a homogeneous ergodic Markov process. Hence, given the present state of X, the future and past behaviors of X are independent. Indeed, X has the transition density

p(x, y, t) = (π(1 − e^{−2t}))^{−d/2} exp{ −|y − x e^{−t}|² / (1 − e^{−2t}) },   x, y ∈ R^d, t > 0,   (2.2)

and the invariant distribution μ ~ N(0, I_d/2), where I_d is the d by d identity matrix. In the following we denote the density function of μ by φ, that is,

μ(dx) = φ(x) dx = π^{−d/2} exp{ −|x|² } dx.   (2.3)

X is a Gaussian process. That is, any finite linear combination of samples of X is Normal (also known as Gaussian) distributed: for all c_1, ..., c_N ∈ R and t_1, ..., t_N ∈ R_+, the sum Σ_{k=1}^N c_k X(t_k) is normally distributed. In fact, X as a stochastic process has the same distribution as a time changed Brownian motion B(·):

X(t) =_d x e^{−t} + (e^{−t}/√2) B(e^{2t} − 1).   (2.4)

According to (2.4), it is also straightforward to see that X has the N(0, I_d/2) invariant distribution.

Remark 1. The following equation (2.5) can be derived from (2.2), (2.3) and the time reversal property of X:

φ(y)^{−1} p(x, y, t) = (1 − e^{−2t})^{−d/2} exp{ |x|² − |x − y e^{−t}|² / (1 − e^{−2t}) } = φ(x)^{−1} p(y, x, t).   (2.5)

We will revisit this equation several times in the later proofs.
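As a quick numerical illustration of (2.2) and (2.4) (this sketch is not part of the dissertation and its parameters are arbitrary), one can sample X(t) exactly from its Gaussian transition law, X(t) ~ N(x e^{−t}, ((1 − e^{−2t})/2) I_d), and check that for large t the empirical law matches the invariant distribution N(0, I_d/2):

```python
import numpy as np

def sample_ou_exact(x0, t, n_samples, rng):
    """Draw X(t) given X(0) = x0 for dX = -X dt + dW, using the exact
    Gaussian transition law N(x0 * e^{-t}, (1 - e^{-2t})/2 * I_d)."""
    x0 = np.asarray(x0, dtype=float)
    mean = x0 * np.exp(-t)
    std = np.sqrt(0.5 * (1.0 - np.exp(-2.0 * t)))
    return mean + std * rng.standard_normal((n_samples, x0.size))

rng = np.random.default_rng(0)
samples = sample_ou_exact(x0=[3.0, -2.0], t=10.0, n_samples=200_000, rng=rng)
print(samples.mean(axis=0))   # close to (0, 0): the starting point is forgotten
print(samples.var(axis=0))    # close to 1/2 per coordinate, i.e. mu = N(0, I_d/2)
```

The same transition law underlies the time-change identity (2.4), since (e^{−t}/√2) B(e^{2t} − 1) has variance (1 − e^{−2t})/2 per coordinate.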

2.3 Random Media: Poisson Random Media

The positions of the random obstacles are modeled by a Poisson point process ω(·) with intensity measure ν(dx) = λ dx (λ > 0). A Poisson point process (ω(A))_{A ∈ B(R^d)} is a measure-valued random variable such that for any Borel set A in R^d:

1. ω(∅) = 0 almost surely.
2. For any disjoint sets A_1 and A_2, ω(A_1) and ω(A_2) are independent random variables.
3. ω(A) is a Poisson distributed random variable with parameter λ · volume(A).

Furthermore, we assume the influence of each Poisson point on the environment is local, captured by a deterministic shape function K(·). More precisely, we make the following assumption:

Assumption 1. The shape function K is nonnegative, continuous and compactly supported. Without loss of generality, assume supp(K) ⊂ B(0, L) and max_x K(x) > 0.

The Poisson medium V(·) defined by

V(x) = ∫_{R^d} K(x − y) ω(dy)

then measures the accumulated impact of all the Poisson points at position x. Figure 2.1 illustrates a simulation of a 2-dimensional Poisson point process on the [−4, 4]² square with λ = .5 and the corresponding Poisson medium V with a compactly supported shape function K. We include some long range estimates of the Poisson random medium V in Appendix A.3.
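A simulation in the spirit of Figure 2.1 can be written in a few lines. This is only an illustrative sketch and not the code behind the dissertation's figure; the window, the intensity and the particular bump chosen for K are arbitrary, but the bump does satisfy Assumption 1.

```python
import numpy as np

def poisson_medium(lam=1.5, box=4.0, L=0.5, seed=2):
    """Sample a Poisson point process on [-box, box]^2 and return V(x) = sum_i K(x - y_i)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam * (2 * box) ** 2)          # number of points in the window
    pts = rng.uniform(-box, box, size=(n, 2))      # their locations

    def K(z):                                      # continuous bump with supp(K) in B(0, L)
        r2 = np.sum(z ** 2, axis=-1)
        out = np.zeros_like(r2)
        inside = r2 < L ** 2
        out[inside] = np.exp(-1.0 / (1.0 - r2[inside] / L ** 2))
        return out

    def V(x):                                      # accumulated impact of all Poisson points at x
        return K(x[None, :] - pts).sum()

    return pts, V

pts, V = poisson_medium()
print(V(np.array([0.0, 0.0])))                     # potential value at the origin
```

Evaluating V on a grid and plotting it next to the scattered points reproduces the qualitative picture of Figure 2.1: isolated bumps around each Poisson point, with higher values where points clump.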

Figure 2.1: Simulation of a Poisson point process (left) and the corresponding Poisson medium V (right).

2.4 Main Theorems: Long Time Asymptotics

As introduced in Chapter 1, we are interested in the long time asymptotics of the following quantities:

u^ω_±(x, t) = E_x exp{ ± ∫_0^t V(X(s)) ds }   (Quenched Regime)   (2.6)

and

U_±(x, t) = E ⊗ E_x exp{ ± ∫_0^t V(X(s)) ds }.   (Annealed Regime)   (2.7)

For the quenched exponential moments u^ω_±, incorporating the Feynman-Kac formula as well as the infinitesimal generator of X, we know that u^ω_+ and u^ω_- solve the following parabolic PDE with random potential V and −V, respectively:

∂u/∂t = (1/2) Δu − x · ∇u ± V(x, ω) u,   (t, x) ∈ [0, ∞) × R^d,   (2.8)
u(0, x) = 1,   x ∈ R^d.

Therefore, understanding the long time asymptotics of (2.6) provides information on the long time behavior of the solution of the PDE (2.8). As to U_-, we provide two ways to visualize this quantity.

For the first viewpoint, take the Poisson integral (see Appendix A.1 for more details on Poisson integrals) of exp{ −∫_0^t V(X(s)) ds }; then we have

E ⊗ E_x exp{ −∫_0^t V(X(s)) ds } = E_x exp{ −λ ∫_{R^d} ( 1 − exp{ −∫_0^t K(X(s) − y) ds } ) dy }.   (2.9)

If we take the Poisson points as hard obstacles, i.e. the shape function K satisfies K(x) = +∞ if |x| < δ and K(x) = 0 otherwise, we see that ∫_{R^d} (1 − exp{ −∫_0^t K(X(s) − y) ds }) dy equals the total volume of the region swept by the δ-neighborhood of the path of X from 0 to t, denoted by C_δ^t(X(·)). In the literature, C_δ^t is called the δ-sausage of the process X [2, 9]. Hence, U_- measures the exponential moment of the δ-sausage of the O-U process.

Another perspective on U_-(x, t) is to view it as the survival probability of an O-U process among the δ Poisson traps up to time t. Indeed, if we take each Poisson point as a trap and assume the O-U process is killed when it first runs into a δ-neighborhood of one of those Poisson points, then the survival probability of the O-U process up to time t can be expressed as

P ⊗ P_x(τ > t) = E ⊗ E_x exp{ −∫_0^t V(X(s)) ds },

where τ = inf{t : X(t) is in the δ-neighborhood of a Poisson point} is the survival time of the O-U particle and V is the hard obstacle potential modeled as above. For details, see Section 2.5 in [2].
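The survival-probability reading suggests a crude Monte Carlo check. The sketch below is purely illustrative and not part of the dissertation; the time discretization, trap radius δ, intensity and sample sizes are arbitrary choices. It kills a discretized O-U path the first time it enters the δ-neighborhood of a Poisson point and averages the survival indicator over both the paths and the media.

```python
import numpy as np

def survival_probability(t=3.0, dt=0.02, delta=0.3, lam=0.5, box=6.0,
                         n_media=20, n_paths=20, seed=3):
    """Estimate P x P_0(tau > t) for an O-U particle among Poisson traps of radius delta."""
    rng = np.random.default_rng(seed)
    steps = int(t / dt)
    alive_total = 0
    for _ in range(n_media):                               # average over media (annealing)
        n = rng.poisson(lam * (2 * box) ** 2)
        traps = rng.uniform(-box, box, size=(n, 2))
        for _ in range(n_paths):                           # average over O-U paths
            x = np.zeros(2)
            alive = True
            for _ in range(steps):                         # Euler-Maruyama step of dX = -X dt + dW
                x = x - x * dt + rng.normal(scale=np.sqrt(dt), size=2)
                if n and np.min(np.sum((traps - x) ** 2, axis=1)) < delta ** 2:
                    alive = False
                    break
            alive_total += alive
    return alive_total / (n_media * n_paths)

print(survival_probability())
```

Averaging over the media first, for a fixed path, is exactly what the Poisson integral (2.9) does in closed form; the Monte Carlo above simply brute-forces both expectations.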

Under the above settings for the RMRM model, we obtain the following long time asymptotics of the exponential moments for the O-U process X in the homogeneous Poisson random medium V, for both the quenched and the annealed regime. For each regime, we consider the exponential moments with both positive and negative coefficients.

Theorem 1 (Quenched Regime). P-almost surely,

lim_{t→∞} (1/t) log E_x exp{ ∫_0^t V(X(s)) ds } = λ_1

and

lim_{t→∞} (1/t) log E_x exp{ −∫_0^t V(X(s)) ds } = −λ_2,

where λ_1, λ_2 ∈ (0, ∞) are non-degenerate random variables with the following variational representations:

λ_1 = sup_{g ∈ F} ∫_{R^d} ( −(1/2)|∇g(x)|² + V(x) g²(x) ) φ(x) dx,

λ_2 = inf_{g ∈ F} ∫_{R^d} ( (1/2)|∇g(x)|² + V(x) g²(x) ) φ(x) dx,

where F = { g ∈ C_0^∞(R^d) : ∫_{R^d} g²(x) φ(x) dx = 1 } and C_0^∞(R^d) is the set of all smooth functions on R^d with compact support.

According to the Feynman-Kac formula, the following corollary is immediate from Theorem 1.

Corollary 2. The solutions of the PDEs in (2.8) have exponential growth/decay speed almost surely. More precisely,

lim_{t→∞} (1/t) log u^ω_+(x, t) = λ_1

and

lim_{t→∞} (1/t) log u^ω_-(x, t) = −λ_2,

where λ_1, λ_2 ∈ (0, ∞) are non-degenerate random variables.

Figure 2.2: x = t, y = log u^ω_+(0, t). Averaging 5 O-U samples in 3 realizations of the Poisson media.

Remark 2. For the BM case, Carmona and Molchanov's result [3] shows that

lim_{t→∞} (log log t / (t log t)) log E_x exp{ ∫_0^t V(B(s)) ds } = d max_x K(x),   P-almost surely,

whereas Sznitman's result [27] shows that

lim_{t→∞} ((log t)^{2/d} / t) log E_x exp{ −∫_0^t V(B(s)) ds } = −c,   P-almost surely.

Comparing their results with ours, we have the following observations. First, both rates are different from the O-U dynamics: u^ω_+ for BM has a faster growth rate exp{c t log t / log log t} compared with the O-U rate e^{ct}, while u^ω_- for BM yields a slower decay rate exp{−c t / (log t)^{2/d}} than the O-U rate e^{−ct}. Second, even though u^ω_+ and u^ω_- are random variables whose values depend on each realization ω of the random medium V, the constants in both cases of the quenched exponential moments under the BM dynamics are almost surely not affected by the randomness of the Poisson potential. However, the constants we obtain for the O-U dynamics take random values that are highly influenced by the random medium. These phenomena reveal that BM has a relatively stabilized interaction with the Poisson random medium.

Remark 3. Due to the dramatic path behavior differences between O-U processes (non-diffusive) and BMs (diffusive), the strategies that work well for the BM case no longer work for the O-U dynamics. For instance, the approach proposed by Carmona and Molchanov for the quenched exponential moment of BM in Poisson media needs to quickly send the random motion to a small ball far away from the origin and let it stay inside for the remaining time (see also [7] for an excellent detailed summary). The effectiveness of this strategy for BM relies on its diffusive nature. However, to require the same behavior of an O-U process is extremely hard, since an O-U particle that has moved far from the equilibrium position has a strong tendency to come back to it. Indeed, it turns out that the cost of such a procedure is not affordable if one wants to achieve the correct long time asymptotics. Therefore, we need an alternative method to handle the O-U model. Our proof of the long time exponential moment asymptotics for the quenched regime proceeds by analyzing the spectral structure of the following semigroup (T_t^f)_{t ∈ R_+}:

T_t^f g(x) = E_x [ exp{ ∫_0^t f(X(s)) ds } g(X(t)) ].

For the case of a bounded and deterministic potential function f, classical potential theory and large deviation theory for Markov processes ensure that the long time limit of (1/t) log E_x exp{ ∫_0^t f(X(s)) ds } is closely related to the principal eigenvalue of the infinitesimal generator of T_t^f. Furthermore, the principal eigenvalue has a variational representation [, 28]. Inspired by this idea, we aim to derive a similar variational representation in our quenched model, in which the potential function V is random and blows up at infinity. We achieve this by applying local approximation techniques to the semigroup in Chapter 4. By analyzing the variational representation formula, we manage to obtain the desired long time asymptotics.

Theorem 3 (Annealed Positive Regime). Let the Poisson potential V(·) be defined as before. For all d ∈ Z_+, we have

lim_{t→∞} (1/t) log log E ⊗ E_x exp{ ∫_0^t V(X(s)) ds } = max_x K(x).

Remark 4. Following an argument similar to Section 5.1 with some mild adjustments, a careful reader will find that the same asymptotic result holds if we replace the O-U process X(·) by a Brownian motion B(·). This phenomenon indicates that, in the positive exponential moment case, it is the overall impact of the Poisson potential, rather than the random motion, that plays the dominant role in the long time asymptotics of the annealed exponential moment.

Theorem 4 (Annealed Negative Regime). Let the Poisson potential V(·) be defined as in Section 2.3. For all d ∈ Z_+, we have

lim_{t→∞} (log t)^{−d/2} log E ⊗ E_x exp{ −∫_0^t V(X(s)) ds } = −λ ω_d,

where ω_d is the volume of the unit d-dimensional ball and λ > 0 is the intensity of the intensity measure ν(dx) = λ dx of the Poisson point process (ω(A))_{A ∈ B(R^d)}.

Remark 5. Applying an approach very similar to the one covered in Section 5.2, the same asymptotic result also holds for the hard obstacle situation introduced earlier in Section 2.4.

Remark 6. In their seminal paper [9], Donsker and Varadhan showed that the negative exponential moments of both the soft obstacle and the hard obstacle (also known as the "Wiener sausage") have an exponential decay with rate t^{d/(d+2)}, i.e.

lim_{t→∞} t^{−d/(d+2)} log E ⊗ E_x exp{ −∫_0^t V(B(s)) ds } = −c   for some c > 0.

The results for the O-U process and the BM are consistent, because the O-U process generates a smaller sausage than Brownian motion in general, due to the pull-back force toward the equilibrium position, which is the origin in our model.
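Remark 6 can be made tangible with a small simulation. The sketch below is an illustration only and not from the dissertation; the discretization, trap radius and grid are arbitrary choices. It estimates the expected area of the δ-sausage in d = 2 for a discretized O-U path and for the Brownian path built from the same increments, by counting grid cells swept by the δ-neighborhood.

```python
import numpy as np

def sausage_volumes(t=10.0, dt=0.02, delta=0.3, n_paths=20, seed=4):
    """Monte Carlo estimate of E[area of the delta-sausage] in d = 2 for an O-U path
    and for the Brownian path built from the same Gaussian increments."""
    rng = np.random.default_rng(seed)
    steps = int(t / dt)
    xs = np.linspace(-8, 8, 81)                            # grid of cells of size 0.2 x 0.2
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    cell_area = 0.2 ** 2
    vol_ou = vol_bm = 0.0
    for _ in range(n_paths):
        dW = rng.normal(scale=np.sqrt(dt), size=(steps, 2))
        x_ou = np.zeros(2); x_bm = np.zeros(2)
        hit_ou = np.zeros(len(grid), dtype=bool)
        hit_bm = np.zeros(len(grid), dtype=bool)
        for k in range(steps):
            x_ou = x_ou - x_ou * dt + dW[k]                # mean-reverting step
            x_bm = x_bm + dW[k]                            # diffusive step
            hit_ou |= np.sum((grid - x_ou) ** 2, axis=1) < delta ** 2
            hit_bm |= np.sum((grid - x_bm) ** 2, axis=1) < delta ** 2
        vol_ou += hit_ou.sum() * cell_area
        vol_bm += hit_bm.sum() * cell_area
    return vol_ou / n_paths, vol_bm / n_paths

print(sausage_volumes())   # the O-U sausage is markedly smaller than the Wiener sausage
```

As t grows, the O-U sausage essentially saturates near the equilibrium position while the Wiener sausage keeps expanding, which is the heuristic behind the different decay scales (log t)^{d/2} and t^{d/(d+2)}.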

Chapter 3

Spectral Structures of the Ornstein-Uhlenbeck Semigroup

In this chapter, we characterize the spectral structure of certain global as well as killed O-U semigroups. In particular, we derive the variational representations (see (3.5) and (3.6)) of the principal eigenvalues of the infinitesimal generators of these O-U semigroups. These variational representations will play a crucial role in the proofs for the quenched regime in Chapter 4. Section 3.1 lists the function spaces which will be used extensively in the current and subsequent chapters. Section 3.2 presents the spectral structure of certain O-U semigroups. A summary of background knowledge on self-adjoint operators and their spectral structure can be found in Section A.6 of the Appendix.

3.1 Function Space Notations

In the following, we present some analytic results for functionals of (X(t))_{t ≥ 0}. First we list some notation for the function spaces which will be used extensively in this section:

L²(R^d, μ): the L² space on R^d with reference measure μ;

L²(B(0, R), μ): the L² space on B(0, R) with reference measure μ;

Poly(R^d): the space of all polynomials on R^d;

C_0^∞(R^d): smooth functions on R^d with compact support;

W^{1,2}(R^d, μ) = { g ∈ L²(R^d, μ) : ∇g ∈ L²(R^d, μ) }, where ∇g is defined in the weak derivative sense;

F = { g ∈ C_0^∞(R^d) : ||g||_μ = 1 }, where || · ||_μ is the L²(R^d, μ) norm;

F_R = { g ∈ C_0^∞(R^d) : supp(g) ⊂ B(0, R), ||g||_μ = 1 };

P = { g ∈ Poly(R^d) : ||g||_μ = 1 }.

Remark 7. W^{1,2}(R^d, μ) is a Hilbert space under the Sobolev norm (||g||²_μ + ||∇g||²_μ)^{1/2} (see Section A.4 in the Appendix for the proof). C_0^∞(R^d) and Poly(R^d) are both dense in W^{1,2}(R^d, μ) under the Sobolev norm. Hence, any function g ∈ Poly(R^d) can be approximated by functions in C_0^∞(R^d) in the Sobolev norm sense.

3.2 Spectral Structures of the Ornstein-Uhlenbeck Semigroup

Let f(x) be a bounded continuous function on R^d. We define the following family of linear operators (T_t^f)_{t ≥ 0} on L²(R^d, μ): for each g ∈ L²(R^d, μ),

T_t^f g(x) := E_x [ exp{ ∫_0^t f(X(s)) ds } g(X(t)) ].   (3.1)

Similarly, for g ∈ L²(B(0, R), μ), we define

T_t^{f,R} g(x) = E_x [ exp{ ∫_0^t f(X(s)) ds } g(X(t)) 1_{τ_R > t} ],   (3.2)

where τ_R := inf{t ≥ 0 : X(t) ∉ B(0, R)} is the first exit time of X from the ball B(0, R). Since the O-U process X is a time-reversible Markov process, (T_t^f)_{t ≥ 0} and (T_t^{f,R})_{t ≥ 0} are semigroups in which each operator is bounded and self-adjoint. In particular, in the case f ≡ 0, T_t^0 and T_t^{0,R} correspond to the semigroup of the Markov process X and the semigroup of the Markov process X killed at the boundary ∂B(0, R), respectively.

Let L^f and L^{f,R} be the infinitesimal generators of (T_t^f)_{t ≥ 0} and (T_t^{f,R})_{t ≥ 0}, respectively. In particular, when f ≡ 0, L^0 and L^{0,R} are the infinitesimal generators of the Markov process X and of the Markov process X killed at the boundary ∂B(0, R), respectively. The following Feynman-Kac formula for (T_t^f)_{t ≥ 0} on C_0^∞(R^d) holds (see, e.g., Chapter VII and Chapter VIII of [25]):

Proposition 5. For all g ∈ C_0^∞(R^d),

L^f g(x) = lim_{t → 0+} ( T_t^f g(x) − g(x) ) / t = −x · ∇g(x) + (1/2) Δg(x) + f(x) g(x).

Proof. In the case f ≡ 0, applying Itō's formula to g(X(t)) gives

dg(X(t)) = ( −X(t) · ∇g(X(t)) + (1/2) Δg(X(t)) ) dt + ∇g(X(t)) · dW(t).

Hence the infinitesimal generator L^0 can be written as

L^0 g(x) = −x · ∇g(x) + (1/2) Δg(x), for all g ∈ C_0^∞(R^d).

See for instance [22].

31 As to general f, first notice that exp fxs)) ds = + fxs)) exp fxr)) dr ds. s Multiplying both sides by gxt)), taking expectation and then applying Markov property on the right side, we get T f t gx) = T t gx) + = T t gx) + which yields to s )) E x fxs))e Xs) exp fxr)) dr gxt s)) ds E x fxs))tt s gxt s)) ) ds, L f T f t gx) gx) gx) = lim t + t = L gx) + fx)gx) = x gx) + gx) + fx)gx). 2 From Proposition 5, we observe that L f have the following symmetric quadratic form on C ): Proposition 6. For g, h C ), L f g, h µ = fx)gx)hx)φx) dx g h) φx) dx, 3.3) R 2 d which admits that L f L f g, h µ = g, L f h µ. is a symmetric operator on C ) with respect to µ, i.e. Proof. From Proposition 5, L f g, h µ = x g) hφ dx + g h φ dx + f g h φ dx. 3.4) R 2 d 2

32 Recall φx) = π d/2 exp x 2. By divergence theorem, the second integral on the right side of 3.4) becomes Therefore, R 2 d = = R 2 d gx)) hx)φx) dx 2 g hx)φx)) dx g h) φx) dx + x g) hx)φx) dx. L f g, h µ = fx)gx)hx)φx) dx g h) φx) dx. R 2 d Notice that C ) and P oly ) are both dense in W,2, µ) and the quadratic form on the right side of 3.3) is continuous both in g and h) under the Sobolev norm, we know that the same quadratic form in 3.3) also holds on P oly ): Corollary 7. For all g, h P oly ), we have L f g, h µ = g, L f h µ = fx)gx)hx)φx) dx g h) φx) dx. 3.5) R 2 d In order to apply the powerful spectral representation toolbox for self-adjoint operators to L f, we need to extend the description of L f to a larger function space than P oly ) and C ). In fact, from 3.5) we have Lg, g µ sup x fx) g 2 µ for all g P oly ) C ), which implies that L is upper semi-bounded due to the boundedness of f. According to the Friedrichs extension theorem in Section A.6, L f admits a self-adjoint extension. For simplicity, we still use the same notation for the Friedrichs s extension of L f and still call it the infinitesimal generator of the semigroup T f t. Denote DL f ) as the domain of the self-adjoint operator L f, that is, DL f ) is the collection of all the L 2, µ) functions g such that L f g L 2, µ). 2

33 From Proposition 6 and Corollary 7, it is clear that C ) P oly ) DL f ) L 2, µ). Next, we aim to describe L f on the domain DL f ) by the same quadratic form formulated in 3.5). This could be achieved by approximation using Hermite polynomials. For n = n, n 2,..., n d ) N d and x = x,..., x d ), define Ĥ n x) = d H ni x i ), i= where H n n N is the family of one dimensional Hermite polynomials, that is, H n x) = ) n e x2 dn dx n e x2 We know that each Ĥn is an eigenfunction of L with eigenvalue n, where n = d i= n i. That is L Ĥ n = x Ĥn + 2 Ĥn = n Ĥn.. Furthermore, normalize these eigenvalues by e n = Ĥn/ Ĥn µ, n N d. Then e n n N becomes an orthonormal basis of L 2, µ). See section 2.3.4, Dunkl and Xu [] for details. Using standard approximation techniques in L 2, µ), we have the following isometry result: Proposition 8. Given g L 2, µ), then g W,2, µ) if and only if 2 n + ) g, e n 2 µ <. 3.6) n N d 22

34 Furthermore, for any g, h W,2, µ), we have gx)hx)φx) dx = n N d g, e n µ h, e n µ, g h)φx) dx = n N d 2 n g, e n µ h, e n µ. 3.7) With the above isometry identity, we can prove that the quadratic form g, L f g µ has the following representation on DL f ): Lemma 3... We have DL f ) W,2, µ). Furthermore, g, L f g µ = fx)g 2 x)φx) dx g 2 φx) dx for g DL). 3.8) R 2 d Proof. Let g DL f ). For any n N, write g n x) = k n g, e k µ e k x) P oly ). Then L f g n, g µ = fx)g n x)gx)φx) dx k g, e k 2 µ. 3.9) k n Since e k k N d is an orthonormal basis of L 2, µ), we know g n g as n in L 2, µ). Consequently, lim n Lf g n, g µ = lim g n, L f g µ = g, L f g µ <. 3.) n On the other hand, due to the boundedness of fx), lim n fx)g n x)gx)φx) dx = fx)g 2 x)φx) dx <. 3.) Let n tend to infinity in 3.9). By 3.) and 3.), we have k N d k g, e k 2 µ <. This implies that g W,2, µ) by 3.6) from Proposition 8. Furthermore, from 23

35 3.7) and 3.9), g, L f g µ = fx)g 2 x)φx) dx k g, e k 2 µ k N d = fx)g 2 x)φx) dx gx) 2 φx) dx. R 2 d From the classical result of relations between a semigroup and its infinitesimal operator for instance, see [23]), T f t = exp tl f 3.2) on L 2, µ). From 3.2) and the spectral representation of self-adjoint operator T f t = exptλ E f dλ), where E f λ); < λ < is the corresponding resolution of identity for selfadjoint operator L f. In addition, for any g L 2, µ), g, T f t g µ = exptλm f gdλ), 3.3) where m f g is the spectral measure on R induced by the distribution function F f λ) g, E f λ)g µ with m f gr) = g 2 µ. Moreover, the measure m f g is bounded above by λ f sup g, L f g µ. 3.4) g DL f ), g µ= 24

36 Recall that C ) is dense in DL f ) under the Soblev norm and F = g C ) : g µ =, then we have T f,r t λ f = sup fx)g 2 x)φx) dx gx) 2 φx) dx. 3.5) g F R 2 d Next, we would like to transfer similar spectral properties from T f t and L f to and L f,r. Indeed, L 2 B, R), µ) can be imbedded in L 2, µ) by the mapping U : L 2 B, R), µ) L 2, µ), where gx) if x B, R) Ug)x) = if x / B, R) Thus L 2 B, R), µ) and W,2 B, R), µ) can be regarded as a closed subspace of L 2, µ) and W,2, µ), respectively. The following definition of local operator can be found in different literatures, for instance, Getoor [7]: Definition. An operator Q in L 2, µ) is called a local operator if for any h DQ) and any open set G with Lebesgue measure on the boundary, one has hi G DQ) and I G Qh = QI G h) as elements of L 2, µ). We know from lemma 3.. that L f is a local operator. Therefore, Theorem 4.2 and Theorem 4.3 in Getoor [7] yields to the fact that DL f,r ) = DL f ) L 2 B, R), µ) and L f gx) = L f,r gx) for all g DL f,r ). Combine with 3.8), we have g, L f,r g µ = fx)g 2 x)φx) dx B,R) 2 for any g DL f,r ). B,R) gx) 2 φx) dx 25

37 Now we turn to T f,r t. Repeat the similar argument carried out for T f t, we know that T f,r t has the spectral representation: T f,r t = exptλ E f,λ), where E f,r λ); < λ < is the corresponding resolution of identity for selfadjoint operator L f,r. In addition, for any g L 2 B, R), µ), where m f,r g g, T f,r t g µ = exptλm f,r g dλ), is known as spectral measure on R induced by the distribution function F f,r λ) g, E f,r λ)g µ with m f,r g R) = g 2 µ. Furthermore, m f,r g is bounded above by λ f,r sup g DL f,r ), g µ= = sup g F R g, L f,r g µ fx)g 2 x)φx) dx gx) 2 φx) dx. R 2 d 3.6) 26
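The Hermite structure used in this chapter, namely the eigenfunction relation L^0 Ĥ_n = −|n| Ĥ_n and the orthonormality of the normalized family e_n in L²(R^d, μ), is easy to verify numerically in dimension one (the multi-dimensional Ĥ_n are just products of one-dimensional factors). The sketch below is illustrative only and not part of the dissertation; it uses Gauss-Hermite quadrature with weight e^{−x²} to check that the normalized "physicists'" Hermite polynomials are orthonormal under μ and satisfy (1/2)H_n'' − x H_n' = −n H_n.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite as H

# Gauss-Hermite nodes/weights: integral of f(x) e^{-x^2} dx ~ sum_i w_i f(x_i)
nodes, weights = H.hermgauss(60)

def e_n(n, x):
    """Normalized Hermite polynomial: ||e_n||_mu = 1 for mu(dx) = pi^{-1/2} e^{-x^2} dx."""
    c = np.zeros(n + 1); c[n] = 1.0
    return H.hermval(x, c) / np.sqrt(2.0 ** n * factorial(n))

# Orthonormality in L^2(mu): <e_m, e_n>_mu = pi^{-1/2} * sum_i w_i e_m(x_i) e_n(x_i)
gram = np.array([[np.sum(weights * e_n(m, nodes) * e_n(n, nodes)) / np.sqrt(np.pi)
                  for n in range(6)] for m in range(6)])
print(np.round(gram, 6))          # identity matrix

# Eigenfunction relation L^0 H_n = (1/2) H_n'' - x H_n' = -n H_n, checked pointwise
n, x = 4, np.linspace(-2.0, 2.0, 5)
c = np.zeros(n + 1); c[n] = 1.0
lhs = 0.5 * H.hermval(x, H.hermder(c, 2)) - x * H.hermval(x, H.hermder(c, 1))
print(np.allclose(lhs, -n * H.hermval(x, c)))   # True
```

The same expansion underlies the isometry in Proposition 8: the coefficient of e_n picks up a factor 2|n| under the Dirichlet form, which is exactly what the quadratic forms in this chapter record.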

38 Chapter 4 Long Time Asymptotics: Quenched Regime In this Chapter, we give the proof of Theorem. The proof is followed by two steps. First, we derive variational formulas for λ and λ 2. Second, we investigate the variational formulas and obtain that λ, λ 2, ). 4. Variational Formulas for the Rates Proposition 9. The following large deviation result holds P-a.s.: lim t = sup g F t log E x exp ± V Xs)) ds ) 2 g 2 ± V x)g 2 x) φx) dx, 4.) where F = g C ) : g 2 x)φx) dx =. In the following subsections, we will discuss the proof of Proposition 9 under the position exponential as well as negative exponential situations, respectively. As we mentioned earlier, the main challenge here is to deal with unbounded potential function V. This challenge is highlighted more for the positive exponential situation 27

39 and the solution we provide is to use local approximation. With this regard, we will give full proof with details for the positive exponential situation. As to the negative exponential case, since the argument is very similar, we will sketch the proof and highlight those parts which need different attention from the positive exponential situation. 4.. Exponential Moments with Positive Coefficients Proof. For n Z +, define V n = V n. Since V n is a bounded function, the spectral representation techniques discussed in Chapter 3 can be applied here. In fact, choose g F and notice that V. Then we have E x exp V Xs)) ds E x exp V n Xs)) ds g 2 E x gx)) exp = g 2 E x gx))e X) exp = g 2 px, y, )gy)tt gy) Vn dy, ) V n Xs)) ds gxt)) ) V n Xs)) ds gxt )) 4.2) where recall that px, y, ) is the transition density of X from x at time to y at time and T Vn t is the semigroup defined in 3.). Recall from 2.5) that px, y, )φ y) = c exp x 2 x ye 2. e 2 So px, y, )φ y), as a function of y, is bounded below by a positive number on the compact support of g. Therefore, combine with 4.2), we get E x exp V Xs)) ds c gy)tt gy)φy) Vn dy = c g, Tt g Vn µ. 4.3) 28

40 By applying spectral representation and Jensen s inequality, we have g, Tt g Vn µ = e t )λ m g dλ) e t ) λ mgdλ) = exp t ) g, L Vn g µ = exp t ) ) 2 g 2 + V n x)g 2 x) φx) dx. 4.4) From 4.3) and 4.4), we get lim inf t t t log E x exp V Xs)) ds g 2 2V n x)g 2 x) ) φx) dx. 2 Let n and then take supreme over all g C ), we obtain the lower bound. Next, we turn to the upper bound. To prove the upper bound, we need the following localization estimate, of which proof is given in the Appendix, Section A.5. Lemma Put γ t = αt /2 log t for some constant α >. Then P-a.s., E x exp ) t V Xs)) ds lim >t τγt t t =, E x exp V Xs)) ds where τ R = inft > : Xt) / B, R). By Lemma 4..2 and Lemma A..8, P a.s. there exist c, c 2 > the choice of c, c 2 depends on the realization of random media ω )) such that for all large t ) E x exp V Xs)) ds c E x exp V Xs)) ds τγt >t, and sup V x) c 2 log t. x B,γ t) 29

41 Therefore, for all t sufficiently large, we have E x exp V Xs)) ds ) c E x exp V Xs)) ds τγt >t c t c 2 E x exp V Xs)) ds sup Xt) γ t s t =c t c 2 E x X) γt exp V Xs)) ds sup s t Xt) γt. Xt) γ t 4.5) Let g t be a smooth function such that, g t y) on B, γ t ) and g t y) outside B, γ t +2). Denote h t = c t g t, such that h t µ =. Clearly, the normalizing constant Therefore, E x ) /2 /2 c t = gt 2 x)φx) dx φx) dx) <. B,γ t+2) X) γt exp E x g t X)) exp E x h t X)) exp = px, y, )φy) h t y)t V,γ t+2 t c exp x 2 h t, T V,γt+2 t h t µ, V Xs)) ds Xt) γt sup Xt) γ t s t V Xs)) ds sup Xt) γ t+2 s t V Xs)) ds sup Xt) γ t+2 s t h t y)φy) dy where the notation T V,γt+2 t is defined as in 3.2): g t Xt)) h t Xt)) ) T V,R t gx) = E x exp V Xs)) ds gxt)) τr >t, 4.6) 3

42 and the last inequality in 4.6) holds since px, y, )φy) c exp x 2 by 2.5). has the spectral representation Recall from??) and 3.6) that the semigroup T V,γt+2 t h t, T V,γt+2 t h t µ = e t )λ m γt+2 h t dλ) and the smallest supporting set of probability measure m γt+2 h t is bounded above by sup h, L γt+2 h µ = sup gx) 2 2V x)gx) 2) φx) dx, h DL V,γ t +2 ) g F γt +2 2 where Hence, F γt+2 = h t, T V,γt+2 t h t µ exp g C B, γ t + 2)) : g 2 x)φx) dx =. B,γ t+2) t ) sup g F γt +2 exp t ) sup g F 2 2 g 2 2V g 2) φx) dx g 2 2V g 2) φx) dx. 4.7) Combine 4.5), 4.6) and 4.7), we obtain the upper bound t lim sup t t log E x exp 2 sup g F V Xs)) ds gx) 2 2V x)gx) 2) φx) dx Exponential Moments with Negative Coefficients Here we sketch the proof for the negative exponential moment situation. 3

43 Proof. First, we consider the lower bound. Keep the same notation V n = V n as before. Notice that E x exp V Xs)) ds e n E x exp Repeat the similar procedures in 4.2), 4.3) and 4.4), we get E x exp V n Xs)) ds. ) V Xs)) ds C exp t ) R 2 g 2 + V n x)g 2 x) dx, d where C is constant determined by g and n. Hence, we have lim inf t t log E x exp ) V Xs)) ds R 2 g 2 + V n x)g 2 x) dx. d Let n go to infinity and take supreme over g C ), then we obtain the lower bound: lim inf t t log E x exp ) V Xs)) ds sup g F R 2 g 2 + V x)g 2 x) dx. d 4.8) Next, we turn to the upper bound. Since V is bounded above by, the proof of the upper bound is straight forward and do not need localization treatment as before. Indeed, we have E x exp V Xs)) ds E x exp = V Xs)) ds px, y, )φy) T V t φy) dy c exp x 2, T V t µ, where the last inequality holds once again due to the fact that px, y, )φy) c exp x ) Apply the spectral representation 3.3) and the spectral measure 32

44 estimate 3.5) to, T V t µ, we have ), Tt V µ exp t ) sup g F R 2 g 2 + V x)g 2 x) dx. 4.) d Combine 4.9) and 4.), we obtain the upper bound lim sup t t log E x exp ) V Xs)) ds sup g F R 2 g 2 + V x)g 2 x) dx. d 4.) Put 4.8) and 4.) together, we get the desired result. Using standard approximation treatment, we have, t lim t t log E x exp ± V Xs)) ds ) = inf f F R 2 f 2 V x)f 2 x) φx) dx, d ) = inf f P 2 f 2 V x)f 2 x) φx) dx, 4.2) where P = g Poly ), g µ =. For the convenience of the analysis in Section 4.2, we rewrite 4.2) with respect to def Lebesgue measure. Let E = fx) = fx)e x 2 2 : f P, then f 2 = π d/2, where 2 is the classic L 2 -norm. Hence, f 2 φx) dx = π d/2 f + x fx) 2 dx R d = π d/2 f 2 + x 2 f 2) f) dx + 2π d/2 x f dx. 4.3) Applying divergence theorem to the second integral in 4.3), x f ) f dx = x 2 f 2 x) dx = d f 2 x) dx = d R 2 d R 2 πd/2. 4.4) d 33

45 Hence, by 4.3) and 4.2), the quenched long time asymptotic results become lim t t log E x exp ± = 2 π d/2 inf g E V Xs)) ds g 2 + x 2 2V x) ) g 2 dx + d2 4.5). 4.2 Analysis of λ and λ 2 In this section, we analyze 4.5) and prove theorem. Lemma inf g 2 + x 2 g 2 dx = dπ d/2, 4.6) g E where the minimizers are g x) = ±e x 2 /2. Proof. By 4.4), ) /2 2 dπd/2 = x g)g dx x 2 g 2 dx g 2 dx ) x 2 g 2 x) dx + gx) 2 dx. 2 To make both inequalities equal, we need g x) x = g x). Under the condition that g 2 =, we have g x) = ±e x 2 /2. Clearly, g E. ) /2 To get what stated in theorem we need to show the following Proposition. Throughout the proof, use the same notation as in lemma 4..3: g x) = e x 2 /2. Proposition. Let λ = 2 π d/2 inf g 2 + x 2 2V x) ) g 2 dx + d g E R 2, d λ 2 = 2 π d/2 inf g 2 + x 2 + 2V x) ) g 2 dx d g E R 2. d Then P a.s., λ, λ 2, ) are non-degenerate random variables. 34

46 Proof. First, consider λ. By Lemma A..8, x 2 2V x) has a random) lower bound Cω) on. Then, we have inf g E inf g E g 2 + x 2 2V x) ) g 2 dx R d Cω) gx) 2 dx = Cω)π d/2. inf x 2 2V x) ) g 2 dx g E Therefore, P a.s., λ d Cω)) <. On the other hand, 2 inf g 2 + x 2 2V x) ) g 2 x) ) dx g E R d g 2 + x 2 2V x) ) gx) 2 dx = dπ d/2 2 V x)e x 2 /2 dx < dπ d/2 P a.s. The last inequality holds since PV on ) =. Therefore, P a.s. α >. λ = 2 π d/2 inf g 2 + x 2 2V x) ) g 2) dx + d g E R 2 >. d To prove the non-degeneracy of λ, it suffices to show Pλ > α) > for any By continuity of K, there exists r > such that Kx) > K)/2 for all x B, r). Then for any x B, r/2) V x) = Kx y) ωdy) Kx y) ωdy) B,r/2) K) ωdy) = K) ωb, r/2)). 2 2 B,r/2) 35

47 Therefore, V x)gx) 2 dx V x)gx) 2 dx B,r/2) K) ωb, r/2)) 2 c ωb, r/2)). B,r/2) e x 2 dx 4.7) From 4.7) we get λ 2 π d/2 cωb, r/2)), e x x 2 e x 2 V x)e x 2) dx + d 2 which implies that Pλ cn) PωB, r/2)) = n) >. As to λ 2, the upper bound holds since λ 2 2 π d/2 g 2 + x 2 + 2V x) ) g 2 dx d R 2 d =π d/2 V x)e x 2 dx <, where the last inequality holds by Lemma A..8. For the lower bound, denote F g) := g 2 + x 2 + 2V x)) g 2 x) dx. Notice that for g = g a.s., for some δ >. F g ) = g 2 + x 2 + 2V x) ) gx) 2 dx =dπ d/2 + 2 V x)e x 2 /2 dx > dπ d/2 + δ P a.s., 4.8) 36

48 For g g, by lemma 4..3, F g) = g 2 + x 2 + 2V x) ) g 2 x) ) dx g 2 + x 2 gx) ) 2 dx + δ 2 = dπ d/2 + δ 2 P a.s. 4.9) for some δ 2 >. Therefore, from 4.8), 4.9) and the continuity of F on E under Sobolev norm, λ 2 = 2 π d/2 inf g 2 + x 2 + 2V x) ) g 2) dx d g E R 2 >. d As to the non-degeneracy of λ 2, by continuity of K and the construction of V, we know that V has a positive probability of greater than any large value in a compact set. Therefore, λ 2 π d/2 inf x 2 + 2V x) ) g 2 x) dx d g E R 2 inf x 2 + 2V x)) d d x 2 := c happens with a positive probability. 37
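The variational characterization of λ_1 in Theorem 1, rewritten over Lebesgue measure as in (4.15), can be probed numerically in dimension one. After the substitution g ↦ g e^{−|x|²/2} used in Section 4.1, the rate takes the form λ_1 = (d − μ_min)/2, where μ_min is the bottom eigenvalue of the Schrödinger operator −Δ + (|x|² − 2V(x)). The sketch below is illustrative only and not from the dissertation; the truncation to a finite interval, the grid size and the sample potential are arbitrary choices. For V ≡ 0 it returns λ_1 ≈ 0, as it must, since then E_x exp{∫_0^t V(X(s)) ds} = 1.

```python
import numpy as np

def quenched_rate_1d(V, box=8.0, n=1200):
    """Approximate lambda_1 = (1 - mu_min)/2 in d = 1, where mu_min is the bottom
    eigenvalue of -g'' + (x^2 - 2 V(x)) g on [-box, box] with Dirichlet boundary,
    discretized by central finite differences."""
    x = np.linspace(-box, box, n)
    h = x[1] - x[0]
    diag = 2.0 / h ** 2 + x ** 2 - 2.0 * V(x)     # -d^2/dx^2 contributes 2/h^2 on the diagonal
    off = -np.ones(n - 1) / h ** 2                # and -1/h^2 on the off-diagonals
    Hmat = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    mu_min = np.linalg.eigvalsh(Hmat)[0]
    return 0.5 * (1.0 - mu_min)

print(quenched_rate_1d(lambda x: np.zeros_like(x)))   # ~ 0.0: no potential, no growth
print(quenched_rate_1d(lambda x: np.exp(-x ** 2)))    # > 0: a positive bump raises the rate
```

The second call uses a single bump potential standing in for V; for a genuine sample of the Poisson potential one would sum shifted copies of the shape function K over a Poisson configuration, as in the Section 2.3 sketch, and recover a realization-dependent (random) value of λ_1, consistent with Proposition 10.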

49 Chapter 5 Long Time Asymptotics: Annealed Regime In this Chapter, we give the detailed proofs of Theorem 3 and Theorem Exponential Moments with Positive Coefficients Proof of Theorem 3. Notice that V Xs)) ds = KXs) y) ds ωdy). Using Fubini Theorem and Poisson integrals see Appendix A.), we have ) E E x exp V Xs)) ds = E x exp λ exp KXs) y) ds dy. 5.) First we establish the upper bound of 5.). Jensen s inequality yields to exp KXs) y) ds t exp tkxs) y) ds. 38

50 Hence, we have E x exp λ λ E x exp t = exp λ R d t exp KXs) y) ds e tkxs) y) ) dy ds e tky) ) dy, ) dy 5.2) where the last equality holds due to the shift invariance of the Lebesgue measure. Since K ) is compactly supported and suppk) B, L), the integral in the last line of 5.2) equals to the restriction of its domain on B, L). Therefore, combining 5.) and 5.2) we obtain the upper bound: lim sup t lim sup t t t log log E E x exp t log λ B, L) exp V Xs)) ds t max x Kx) ) = max x Kx). 5.3) Next, we consider the lower bound. For any ɛ >, by the continuity of K there exists a ball Bx, δ) such that Ky) > max x Kx) ɛ, for all y Bx, δ). 5.4) Hence, our strategy for the lower bound is to restrict the O-U process X ) inside a small ball up to time t so that the exponentials will get main contribution from the maximum of the shape function K. More precisely, 39

51 E x exp λ E x exp λ ) exp KXs) y) ds dy Bx x,δ/2) exp λ Bx x, δ/2) e ) t KXs) y) ds dy sup Xs) x <δ/2 s t ) t max e Kx) ɛ) x P x sup s t ) Xs) x < δ/2, 5.5) where the last inequality holds due to 5.4) and the fact that Xs) y Bx, δ) given the condition that Xs) Bx, δ/2) and y Bx x ) for all s t. Using the classical small ball estimate for Gaussian processes, for instance [2], the cost of restricting Gaussian process X in a small ball up to t is exponentially small: P x sup s t ) Xs) x < δ/2 e ct, for some c >. 5.6) Therefore, combine with 5.), 5.5) and 5.6) we have lim inf t t t log log E E x exp V Xs)) ds max Kx) ɛ. x The lower bound is obtained by letting ɛ go to +. Together with 5.3) we get the full result of Theorem Exponential Moments with Negative Coefficients In this section, we shall prove the Theorem 4. Notice that by using Poisson integral again, to prove Theorem 4 is equivalent to prove the following Proposition: 4

52 Proposition. For any bounded, compactly supported shape function K ), we have lim t log t) log E d/2 x exp λ exp ) KXs) y) ds dy = λ ω d, where ω d denotes the volume of d dimensional unit ball and λ > is the intensity of the Poisson point process ω ), the same notations as we described before in Theorem Lower Bound For any given t, denote R β,t = β log t with β >. By restricting X ) in the ball B, R β,t ) up to time t, we have E x exp λ R d E x exp λ =E x exp λ exp exp B,R β,t +L) ) KXs) y) ds dy exp ) KXs) y) ds dy sup s t ) KXs) y) ds dy Xs) <R β,t sup s t Xs) <R β,t where the last equality holds simply due to the fact that the support of K ) is inside the ball B, L) hence the function inside the spacial integral vanishes outside B, R β,t + L). Using the simple fact that e x < for all x R, we have E x exp λ exp e λ ω dr β,t +L) d P x sup s t Xs) < R β,t ). ) KXs) y) ds dy, 4

53 Applying the Lemma 5..4 below and noticing that β >, we thus obtain lim inf t lim inf t = β d/2 λ ω d. λ exp log t) log E d/2 x exp λ ω log t) d/2 d β log t + L) d ct β 2 log t) d 2 ) KXs) y) ds dy ) Therefore, we get the lower bound of Proposition by letting β go to +. Now we turn to prove the technical Lemma used early in the proof of the lower bound. This Lemma tells us that the probability of restricting O-U process X up to time t is close to if we select the radius of the ball carefully. Lemma Take R β,t = β log t β > ), then for all t large enough, P x sup s t ) log Xs) < R β,t exp t β 2 log t) d ) Proof. Let γt) be an increasing function of t of which growth speed is slow enough. For instance, choose γ t log t. Observe that: ) P x sup Xs) R β,t γt) s t ) =P x sup Xs) > R β,t and sup Xs) R β,t + P x sup s<γt) γt) s t s t Xs) R β,t ). 5.8) Therefore, in order to prove 5.7), it suffices to check ) log P x sup Xs) > R β,t exp t β 2 log t) d 2 s<γt) 5.9) and ) log P x sup Xs) R β,t exp t β 2 log t) d 2. 5.) γt) s t ft) log gt) means log ft)/ log gt) + as t +. 42

54 Indeed, by Lemma A..9 in Section A.5, the following inequality holds for all t large enough ) γt) P x sup Xs) > R β,t c s<γt) t, where c, c c 2 >. 5.) 2 Hence, since γt) log t, 5.9) holds. Next, we show 5.). For any g C B, R β,t )) with g 2,µ =, we have P x sup γt) s t Xs) < R β,t ) = E x sup γt) s t Xs) <R β,t g 2 E x gxγt)))gxt)) supγt) s t Xs) <R β,t. 5.2) Using Markov property of X, E x gxγt)))gxt)) supγt) s t Xs) <R β,t = E x gxγt)))e Xγt)) gxt γt))) sup s t γt) Xs) <R β,t. 5.3) Hence, P x sup γt) s t Xs) < R β,t ) g 2 B,R β,t ) px, y, γt))gy)t,r β,t t γt) gy) dy, 5.4) where px, y, t) is the probability density of X staring from x and ending at y at time t. The semigroup T,R t on L 2 B, R), µ) is defined as in Chapter 3: T,R t gx) def. = E x gxt)). sup s t Xs) R Notice from 2.5) and the fact that γt) log t, we know px, y, γt))φy) uniformly bounded below on B, R β,t ) for large t. That is, is lim inf t inf px, y, y B,R β,t ) γt))φy) > C. 43

55 Hence, combining with 5.4), P x supγt) s t Xs) < R β,t ) has the following Dirichlet form lower bound: P x ) sup Xs) < R β,t γt) s t By using spectral representation for T,R β,t t g, T R β,t t γt) g µ = e t γt))λ m,r β,t g = exp t γt))l,r β,t g = exp C g 2 gy)t R β,t t γt gy)φy) dy B,R β,t ) = C g 2 g, T,R β,t t γt) g µ. in 5.4), we have dλ) exp t γt)) t γt) 2 B,R β,t ) λ m,r β,t g gy) 2 φy) dy dλ), 5.5) 5.6) where the inequality holds due to Jensen s inequality. Choose h t : R [, ] as a smooth function such that h t x) for x < R β,t 2, h t x) for x > R β,t, and h t < for all x R. Define g t : R as g t x) = c t h t x ), where c t > is the normalizing constant such that g t 2,µ =. Use g t in 5.5), 5.6) and notice c t = g t, we have ) P x sup Xs) < R β,t C exp t γt) γt) s t 2 B,R β,t ) h t y ) 2 φy) dy. 5.7) To achieve the desired lower bound, we need to estimate B,R β,t ) h t y ) 2 φy) dy. In fact, using the sphere integral, we have for t sufficiently large B,R β,t ) Therefore, we have h t y ) 2 φy) dy c β log t P β log t 2 r d e r2 dr c 2 log t) d 2 e β+ log 2 t. log sup Xs) R β,t exp t β 2 log t) d ) γt) s t 44


More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Annealed Brownian motion in a heavy tailed Poissonian potential

Annealed Brownian motion in a heavy tailed Poissonian potential Annealed Brownian motion in a heavy tailed Poissonian potential Ryoki Fukushima Tokyo Institute of Technology Workshop on Random Polymer Models and Related Problems, National University of Singapore, May

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Harnack Inequalities and Applications for Stochastic Equations

Harnack Inequalities and Applications for Stochastic Equations p. 1/32 Harnack Inequalities and Applications for Stochastic Equations PhD Thesis Defense Shun-Xiang Ouyang Under the Supervision of Prof. Michael Röckner & Prof. Feng-Yu Wang March 6, 29 p. 2/32 Outline

More information

Introduction to Random Diffusions

Introduction to Random Diffusions Introduction to Random Diffusions The main reason to study random diffusions is that this class of processes combines two key features of modern probability theory. On the one hand they are semi-martingales

More information

Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula

Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula Partial Differential Equations with Applications to Finance Seminar 1: Proving and applying Dynkin s formula Group 4: Bertan Yilmaz, Richard Oti-Aboagye and Di Liu May, 15 Chapter 1 Proving Dynkin s formula

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

Week 9 Generators, duality, change of measure

Week 9 Generators, duality, change of measure Week 9 Generators, duality, change of measure Jonathan Goodman November 18, 013 1 Generators This section describes a common abstract way to describe many of the differential equations related to Markov

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

GENERATORS WITH INTERIOR DEGENERACY ON SPACES OF L 2 TYPE

GENERATORS WITH INTERIOR DEGENERACY ON SPACES OF L 2 TYPE Electronic Journal of Differential Equations, Vol. 22 (22), No. 89, pp. 3. ISSN: 72-669. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu GENERATORS WITH INTERIOR

More information

ON PARABOLIC HARNACK INEQUALITY

ON PARABOLIC HARNACK INEQUALITY ON PARABOLIC HARNACK INEQUALITY JIAXIN HU Abstract. We show that the parabolic Harnack inequality is equivalent to the near-diagonal lower bound of the Dirichlet heat kernel on any ball in a metric measure-energy

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Lecture No 1 Introduction to Diffusion equations The heat equat

Lecture No 1 Introduction to Diffusion equations The heat equat Lecture No 1 Introduction to Diffusion equations The heat equation Columbia University IAS summer program June, 2009 Outline of the lectures We will discuss some basic models of diffusion equations and

More information

Nash Type Inequalities for Fractional Powers of Non-Negative Self-adjoint Operators. ( Wroclaw 2006) P.Maheux (Orléans. France)

Nash Type Inequalities for Fractional Powers of Non-Negative Self-adjoint Operators. ( Wroclaw 2006) P.Maheux (Orléans. France) Nash Type Inequalities for Fractional Powers of Non-Negative Self-adjoint Operators ( Wroclaw 006) P.Maheux (Orléans. France) joint work with A.Bendikov. European Network (HARP) (to appear in T.A.M.S)

More information

Potential theory of subordinate killed Brownian motions

Potential theory of subordinate killed Brownian motions Potential theory of subordinate killed Brownian motions Renming Song University of Illinois AMS meeting, Indiana University, April 2, 2017 References This talk is based on the following paper with Panki

More information

Sobolev spaces. May 18

Sobolev spaces. May 18 Sobolev spaces May 18 2015 1 Weak derivatives The purpose of these notes is to give a very basic introduction to Sobolev spaces. More extensive treatments can e.g. be found in the classical references

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Improved diffusion Monte Carlo

Improved diffusion Monte Carlo Improved diffusion Monte Carlo Jonathan Weare University of Chicago with Martin Hairer (U of Warwick) October 4, 2012 Diffusion Monte Carlo The original motivation for DMC was to compute averages with

More information

The parabolic Anderson model on Z d with time-dependent potential: Frank s works

The parabolic Anderson model on Z d with time-dependent potential: Frank s works Weierstrass Institute for Applied Analysis and Stochastics The parabolic Anderson model on Z d with time-dependent potential: Frank s works Based on Frank s works 2006 2016 jointly with Dirk Erhard (Warwick),

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t))

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t)) Notations In this chapter we investigate infinite systems of interacting particles subject to Newtonian dynamics Each particle is characterized by its position an velocity x i t, v i t R d R d at time

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

The Schwartz Space: Tools for Quantum Mechanics and Infinite Dimensional Analysis

The Schwartz Space: Tools for Quantum Mechanics and Infinite Dimensional Analysis Mathematics 2015, 3, 527-562; doi:10.3390/math3020527 Article OPEN ACCESS mathematics ISSN 2227-7390 www.mdpi.com/journal/mathematics The Schwartz Space: Tools for Quantum Mechanics and Infinite Dimensional

More information

Lecture Notes on PDEs

Lecture Notes on PDEs Lecture Notes on PDEs Alberto Bressan February 26, 2012 1 Elliptic equations Let IR n be a bounded open set Given measurable functions a ij, b i, c : IR, consider the linear, second order differential

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

Kolmogorov equations in Hilbert spaces IV

Kolmogorov equations in Hilbert spaces IV March 26, 2010 Other types of equations Let us consider the Burgers equation in = L 2 (0, 1) dx(t) = (AX(t) + b(x(t))dt + dw (t) X(0) = x, (19) where A = ξ 2, D(A) = 2 (0, 1) 0 1 (0, 1), b(x) = ξ 2 (x

More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

Some Aspects of Universal Portfolio

Some Aspects of Universal Portfolio 1 Some Aspects of Universal Portfolio Tomoyuki Ichiba (UC Santa Barbara) joint work with Marcel Brod (ETH Zurich) Conference on Stochastic Asymptotics & Applications Sixth Western Conference on Mathematical

More information

Interest Rate Models:

Interest Rate Models: 1/17 Interest Rate Models: from Parametric Statistics to Infinite Dimensional Stochastic Analysis René Carmona Bendheim Center for Finance ORFE & PACM, Princeton University email: rcarmna@princeton.edu

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Introduction to Diffusion Processes.

Introduction to Diffusion Processes. Introduction to Diffusion Processes. Arka P. Ghosh Department of Statistics Iowa State University Ames, IA 511-121 apghosh@iastate.edu (515) 294-7851. February 1, 21 Abstract In this section we describe

More information

Lecture I: Asymptotics for large GUE random matrices

Lecture I: Asymptotics for large GUE random matrices Lecture I: Asymptotics for large GUE random matrices Steen Thorbjørnsen, University of Aarhus andom Matrices Definition. Let (Ω, F, P) be a probability space and let n be a positive integer. Then a random

More information

A REPRESENTATION FOR THE KANTOROVICH RUBINSTEIN DISTANCE DEFINED BY THE CAMERON MARTIN NORM OF A GAUSSIAN MEASURE ON A BANACH SPACE

A REPRESENTATION FOR THE KANTOROVICH RUBINSTEIN DISTANCE DEFINED BY THE CAMERON MARTIN NORM OF A GAUSSIAN MEASURE ON A BANACH SPACE Theory of Stochastic Processes Vol. 21 (37), no. 2, 2016, pp. 84 90 G. V. RIABOV A REPRESENTATION FOR THE KANTOROVICH RUBINSTEIN DISTANCE DEFINED BY THE CAMERON MARTIN NORM OF A GAUSSIAN MEASURE ON A BANACH

More information

NEW FUNCTIONAL INEQUALITIES

NEW FUNCTIONAL INEQUALITIES 1 / 29 NEW FUNCTIONAL INEQUALITIES VIA STEIN S METHOD Giovanni Peccati (Luxembourg University) IMA, Minneapolis: April 28, 2015 2 / 29 INTRODUCTION Based on two joint works: (1) Nourdin, Peccati and Swan

More information

Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem

Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem 56 Chapter 7 Locally convex spaces, the hyperplane separation theorem, and the Krein-Milman theorem Recall that C(X) is not a normed linear space when X is not compact. On the other hand we could use semi

More information

Preface and Overview. vii

Preface and Overview. vii This book is designed as an advanced text on unbounded self-adjoint operators in Hilbert space and their spectral theory, with an emphasis on applications in mathematical physics and various fields of

More information

Harmonic Functions and Brownian motion

Harmonic Functions and Brownian motion Harmonic Functions and Brownian motion Steven P. Lalley April 25, 211 1 Dynkin s Formula Denote by W t = (W 1 t, W 2 t,..., W d t ) a standard d dimensional Wiener process on (Ω, F, P ), and let F = (F

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Introduction to Infinite Dimensional Stochastic Analysis

Introduction to Infinite Dimensional Stochastic Analysis Introduction to Infinite Dimensional Stochastic Analysis By Zhi yuan Huang Department of Mathematics, Huazhong University of Science and Technology, Wuhan P. R. China and Jia an Yan Institute of Applied

More information

Asymptotic distribution of eigenvalues of Laplace operator

Asymptotic distribution of eigenvalues of Laplace operator Asymptotic distribution of eigenvalues of Laplace operator 23.8.2013 Topics We will talk about: the number of eigenvalues of Laplace operator smaller than some λ as a function of λ asymptotic behaviour

More information

The extreme points of symmetric norms on R^2

The extreme points of symmetric norms on R^2 Graduate Theses and Dissertations Iowa State University Capstones, Theses and Dissertations 2008 The extreme points of symmetric norms on R^2 Anchalee Khemphet Iowa State University Follow this and additional

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Separation of Variables in Linear PDE: One-Dimensional Problems

Separation of Variables in Linear PDE: One-Dimensional Problems Separation of Variables in Linear PDE: One-Dimensional Problems Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). We start with a particular example,

More information

Nonlinear Systems Theory

Nonlinear Systems Theory Nonlinear Systems Theory Matthew M. Peet Arizona State University Lecture 2: Nonlinear Systems Theory Overview Our next goal is to extend LMI s and optimization to nonlinear systems analysis. Today we

More information

Malliavin calculus and central limit theorems

Malliavin calculus and central limit theorems Malliavin calculus and central limit theorems David Nualart Department of Mathematics Kansas University Seminar on Stochastic Processes 2017 University of Virginia March 8-11 2017 David Nualart (Kansas

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

1.5 Approximate Identities

1.5 Approximate Identities 38 1 The Fourier Transform on L 1 (R) which are dense subspaces of L p (R). On these domains, P : D P L p (R) and M : D M L p (R). Show, however, that P and M are unbounded even when restricted to these

More information

1.3.1 Definition and Basic Properties of Convolution

1.3.1 Definition and Basic Properties of Convolution 1.3 Convolution 15 1.3 Convolution Since L 1 (R) is a Banach space, we know that it has many useful properties. In particular the operations of addition and scalar multiplication are continuous. However,

More information

Potential Theory on Wiener space revisited

Potential Theory on Wiener space revisited Potential Theory on Wiener space revisited Michael Röckner (University of Bielefeld) Joint work with Aurel Cornea 1 and Lucian Beznea (Rumanian Academy, Bukarest) CRC 701 and BiBoS-Preprint 1 Aurel tragically

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Analysis in weighted spaces : preliminary version

Analysis in weighted spaces : preliminary version Analysis in weighted spaces : preliminary version Frank Pacard To cite this version: Frank Pacard. Analysis in weighted spaces : preliminary version. 3rd cycle. Téhéran (Iran, 2006, pp.75.

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES

SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES Communications on Stochastic Analysis Vol. 4, No. 3 010) 45-431 Serials Publications www.serialspublications.com SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES YURI BAKHTIN* AND CARL MUELLER

More information