Annealed Brownian motion in a heavy tailed Poissonian potential


Annealed Brownian motion in a heavy tailed Poissonian potential
Ryoki Fukushima, Tokyo Institute of Technology
Workshop on Random Polymer Models and Related Problems, National University of Singapore, May 21-25, 2012

1. Setting

$(\{B_t\}_{t \ge 0}, P_x)$: $\kappa$-Brownian motion on $\mathbb{R}^d$.
$\omega = \sum_i \delta_{\omega_i}$, $\mathbb{P}$: Poisson point process on $\mathbb{R}^d$ with unit intensity.

Potential: for a non-negative and integrable function $v$,
$$V_\omega(x) := \sum_i v(x - \omega_i).$$
(Typically $v(x) = 1_{B(0,1)}(x)$ or $|x|^{-\alpha} \wedge 1$ with $\alpha > d$.)

Annealed measure

We are interested in the behavior of Brownian motion under the measure
$$Q_t(\,\cdot\,) = \frac{\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\}\ \mathbb{P}\otimes P_0(\,\cdot\,)}{\mathbb{E}\otimes E_0\left[\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\}\right]}.$$
The configuration is not fixed, and hence the Brownian motion and the $\omega_i$'s tend to avoid each other.
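A useful identity behind the annealed asymptotics (standard, filled in by me, not on the slides): averaging over the environment with the Laplace functional of the Poisson point process, $\mathbb{E}\left[e^{-\sum_i f(\omega_i)}\right] = \exp\left\{-\int_{\mathbb{R}^d}\left(1 - e^{-f(y)}\right) dy\right\}$, applied to $f(y) = \int_0^t v(B_s - y)\,ds$ gives
$$\mathbb{E}\otimes E_0\left[\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\}\right] = E_0\left[\exp\left\{-\int_{\mathbb{R}^d}\left(1 - e^{-\int_0^t v(B_s - y)\,ds}\right) dy\right\}\right].$$
For hard obstacles this reduces to the classical Wiener sausage functional; for the soft potentials considered here it is the natural starting point for asymptotics of this type.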

[Figure: a path $B_t$ started at 0 among Poisson obstacles; in this scenario $\exp\{-\int_0^t V_\omega(B_s)\,ds\}$ is large, the $\mathbb{P}$-probability is large, and the $P_0$-probability is small.]

[Figure: the complementary scenario; $\exp\{-\int_0^t V_\omega(B_s)\,ds\}$ is large, the $\mathbb{P}$-probability is small, and the $P_0$-probability is large.]

2. Light tailed case

Donsker and Varadhan (1975): When $v(x) = o(|x|^{-d-2})$ as $|x| \to \infty$,
$$\mathbb{E}\otimes E_0\left[\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\}\right] = \exp\left\{-c(d,\kappa)\, t^{\frac{d}{d+2}}(1+o(1))\right\}$$
as $t \to \infty$, and the right-hand side is also the order of the "clearing" strategy
$$P_0\left(B_{[0,t]} \subset B\left(x, t^{\frac{1}{d+2}} R_0\right)\right)\, \mathbb{P}\left(\omega\left(B\left(x, t^{\frac{1}{d+2}} R_0\right)\right) = 0\right).$$

Remark: $c(d,\kappa) = \inf_U \{\kappa\lambda_d(U) + |U|\}$, where $\lambda_d(U)$ is the principal Dirichlet eigenvalue of $-\Delta$ on $U$.
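A quick heuristic for the exponent and for the radius $t^{1/(d+2)} R_0$ (my own filling-in, not on the slides; I take the $\kappa$-Brownian motion to have generator $\kappa\Delta$): if the obstacles vacate a region $t^{1/(d+2)} U$ and the path stays inside it, the two costs are
$$P_0\left(B_{[0,t]} \subset t^{\frac{1}{d+2}} U\right) \approx \exp\left\{-\kappa\lambda_d(U)\, t^{\frac{d}{d+2}}\right\}, \qquad \mathbb{P}\left(\omega\left(t^{\frac{1}{d+2}} U\right) = 0\right) = \exp\left\{-|U|\, t^{\frac{d}{d+2}}\right\},$$
so the total cost is $\exp\{-(\kappa\lambda_d(U) + |U|)\, t^{d/(d+2)}\}$. Optimizing over $U$ gives $c(d,\kappa)$, and by Faber–Krahn the optimal $U$ is a ball, whose radius is the $R_0$ appearing above.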

[Figure: the path $B_t$, started at 0, confined to an obstacle-free ball of radius $R_0\, t^{\frac{1}{d+2}}$.]

One specific strategy gives the dominant contribution to the partition function, and it occurs with high probability under the annealed path measure.

Sznitman (1991, $d = 2$) and Povel (1999, $d \ge 3$): When $v$ has compact support, there exists $D_t(\omega) \in B\left(0, t^{\frac{1}{d+2}}(R_0 + o(1))\right)$ such that
$$Q_t\left(B_{[0,t]} \subset B\left(D_t(\omega),\ t^{\frac{1}{d+2}}(R_0 + o(1))\right)\right) \xrightarrow[t\to\infty]{} 1.$$

Remark: Bolthausen (1994) proved the corresponding result for the two-dimensional random walk model.

3. Heavy tailed case

Pastur (1977): When $v(x) \sim |x|^{-\alpha}$ ($\alpha \in (d, d+2)$) as $|x| \to \infty$,
$$\mathbb{E}\otimes E_0\left[\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\}\right] = \exp\left\{-a_1 t^{\frac{d}{\alpha}}(1+o(1))\right\},$$
where $a_1 = |B(0,1)|\, \Gamma\left(\frac{\alpha-d}{\alpha}\right)$.

In fact, Pastur's proof goes as follows:
$$\mathbb{E}\otimes E_0\left[\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\}\right] \approx \mathbb{E}\left[\exp\{-tV_\omega(0)\}\right] \approx \exp\left\{-a_1 t^{\frac{d}{\alpha}}\right\}.$$
The effort of the Brownian motion is hidden in the lower order terms.
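The second approximation is an explicit computation with the same Laplace functional; the following sketch (my reconstruction of the standard computation, not spelled out on the slide) also identifies $a_1$:
$$\mathbb{E}\left[e^{-tV_\omega(0)}\right] = \exp\left\{-\int_{\mathbb{R}^d}\left(1 - e^{-t v(x)}\right) dx\right\} \approx \exp\left\{-t^{\frac{d}{\alpha}} \int_{\mathbb{R}^d}\left(1 - e^{-|y|^{-\alpha}}\right) dy\right\},$$
by the substitution $x = t^{1/\alpha} y$ together with $v(x) \sim |x|^{-\alpha}$, and a radial integration by parts gives
$$\int_{\mathbb{R}^d}\left(1 - e^{-|y|^{-\alpha}}\right) dy = |B(0,1)|\, \Gamma\left(1 - \tfrac{d}{\alpha}\right) = |B(0,1)|\, \Gamma\left(\tfrac{\alpha - d}{\alpha}\right) = a_1.$$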

[Figure: in the Pastur regime the obstacle-free region around 0 has radius of order $t^{1/\alpha}$, while the path $B_t$ only moves on a scale $o(t^{1/\alpha})$.]

F. (2011): When $v(x) = |x|^{-\alpha} \wedge 1$ ($d < \alpha < d+2$),
$$\mathbb{E}\otimes E_0\left[\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\}\right] = \exp\left\{-a_1 t^{\frac{d}{\alpha}} - (a_2 + o(1))\, t^{\frac{\alpha+d-2}{2\alpha}}\right\},$$
where
$$a_2 := \inf_{\|\phi\|_2 = 1} \int_{\mathbb{R}^d}\left[\kappa|\nabla\phi(x)|^2 + C(d,\alpha)\,|x|^2\phi(x)^2\right] dx.$$

Remark: The proof is an application of the general machinery developed by Gärtner–König (2000).

Recalling the Donsker–Varadhan LDP
$$P_0\left(\frac{1}{t}\int_0^t \delta_{B_s}\,ds \approx \phi^2(x)\,dx\right) \approx \exp\left\{-t\,\kappa\int|\nabla\phi(x)|^2\,dx\right\},$$
we expect that the second term explains the behavior of the Brownian motion.

In particular, since $P_0\left(B_{[0,t]} \subset B(x,R)\right) \approx \exp\{-t R^{-2}\}$, the localization scale should be
$$t R^{-2} = t^{\frac{\alpha+d-2}{2\alpha}} \iff R = t^{\frac{\alpha-d+2}{4\alpha}}.$$
In addition, the term $C(d,\alpha)\int |x|^2\phi(x)^2\,dx$ says that $V_\omega$ (locally) looks like a quadratic function.
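Why the quadratic coefficient should scale like $t^{-\frac{\alpha-d+2}{\alpha}}$ (a heuristic of mine, not on the slides): in the optimal configuration the obstacles are pushed out to the Pastur scale $t^{1/\alpha}$, and Taylor-expanding $V_\omega$ around its minimum (the linear term vanishes there) gives
$$V_\omega(x) - V_\omega(0) \approx \tfrac{1}{2}\, x \cdot \Big(\sum_i \nabla^2 v(-\omega_i)\Big) x, \qquad \Big\|\sum_i \nabla^2 v(-\omega_i)\Big\| \approx \int_{|y| \gtrsim t^{1/\alpha}} |y|^{-\alpha-2}\, dy \asymp t^{-\frac{\alpha-d+2}{\alpha}}.$$
With this coefficient, on the localization scale $R = t^{\frac{\alpha-d+2}{4\alpha}}$ the time-$t$ quadratic cost $t \cdot t^{-\frac{\alpha-d+2}{\alpha}} R^2$ and the confinement cost $t R^{-2}$ are both of order $t^{\frac{\alpha+d-2}{2\alpha}}$, which is exactly the balance expressed by the variational problem defining $a_2$.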

[Figure: the obstacle-free region still has radius of order $t^{1/\alpha}$, while the path $B_t$ is now localized on the much smaller scale $O(t^{\frac{\alpha-d+2}{4\alpha}})$.]

[Figure: section of $V_\omega(x)$ along a line through 0, heavy tailed case, annotated with the depth $h(t)$, the width $r(t)$, and the corresponding costs $t\,h(t)$ and $t/r(t)^2$.]

[Figure: the same section for the light tailed case, with the same annotations $h(t)$, $r(t)$, $t\,h(t)$, $t/r(t)^2$.]

Main Theorem (F. 2012)

$$Q_t\left(B_{[0,t]} \subset B\left(0,\ t^{\frac{\alpha-d+2}{4\alpha}}(\log t)^{\frac12+\epsilon}\right)\right) \xrightarrow[t\to\infty]{} 1,$$
$$Q_t\left(V_\omega(x) - V_\omega(m_t(\omega)) \approx t^{-\frac{\alpha-d+2}{\alpha}}\, C(d,\alpha)\,|x - m_t(\omega)|^2 \ \text{ in } B\left(0, t^{\frac{\alpha-d+2}{4\alpha}+\epsilon}\right)\right) \xrightarrow[t\to\infty]{} 1,$$
$$\left\{t^{-\frac{\alpha-d+2}{4\alpha}}\, B_{t^{\frac{\alpha-d+2}{2\alpha}} s}\right\}_{s \ge 0} \xrightarrow{\ \text{in law}\ } \text{Ornstein–Uhlenbeck process with random center},$$
where $m_t(\omega)$ is the minimizer of $V_\omega$ in $B\left(0, t^{\frac{\alpha-d+2}{4\alpha}}\log t\right)$.

Remark: with $X_s := t^{-\frac{\alpha-d+2}{4\alpha}} B_{t^{\frac{\alpha-d+2}{2\alpha}} s}$, one has $B_t = t^{\frac{\alpha-d+2}{4\alpha}} X_{t^{\frac{\alpha+d-2}{2\alpha}}}$.
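A quick check of the time change in the remark (my own verification): writing $B_t = t^{\frac{\alpha-d+2}{4\alpha}} X_s$ requires $t^{\frac{\alpha-d+2}{2\alpha}} s = t$, i.e.
$$s = t^{1 - \frac{\alpha-d+2}{2\alpha}} = t^{\frac{\alpha+d-2}{2\alpha}} \longrightarrow \infty,$$
so the spatial scale in the first statement and the diffusive time scale in the third are consistent: over the whole time interval $[0,t]$ the rescaled path runs through an Ornstein–Uhlenbeck time of order $t^{\frac{\alpha+d-2}{2\alpha}}$.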

4. Outline of the proof (of localization)

Observation: The 1st statement implies the 2nd one and vice versa.

Indeed, it is easy to believe 2nd $\Rightarrow$ 1st. To see 1st $\Rightarrow$ 2nd, let $L_t := \frac{1}{t}\int_0^t \delta_{B_s}\,ds$ and rewrite
$$Q_t(d\omega) = \frac{1}{Z_t}\, E_0\left[\mathbb{E}\left[e^{-t\langle L_t, V_\omega\rangle}\right] \mathbb{P}^{L_t}_t(d\omega)\right],$$
where $Z_t$ is the normalizing constant and
$$\mathbb{P}^{L_t}_t(d\omega) := \frac{\exp\{-t\langle L_t, V_\omega\rangle\}\,\mathbb{P}(d\omega)}{\mathbb{E}\left[\exp\{-t\langle L_t, V_\omega\rangle\}\right]}.$$
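Unpacking the rewrite (a one-line verification of mine, not on the slides): since $\int_0^t V_\omega(B_s)\,ds = t\langle L_t, V_\omega\rangle$, the tilted measure simply absorbs the environment part of the Gibbs weight,
$$\mathbb{E}\left[e^{-t\langle L_t, V_\omega\rangle}\right] \mathbb{P}^{L_t}_t(d\omega) = e^{-t\langle L_t, V_\omega\rangle}\, \mathbb{P}(d\omega),$$
so integrating over the path with $E_0$ and normalizing by $Z_t$ indeed recovers the $\omega$-marginal of $Q_t$.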

Assuming the 1st statement, we may replace $L_t$ by $\delta_{m_{L_t}}$ with $m_{L_t} = \int x\, L_t(dx)$ in the following:
$$\mathbb{P}^{L_t}_t\left(V_\omega(x) - V_\omega(m_t(\omega)) \approx C(d,\alpha)\, t^{-\frac{\alpha-d+2}{\alpha}}\, |x|^2\right) \to 1\,?$$
Let $m_{L_t} = 0$ for simplicity:
$$\mathbb{P}^{\delta_0}_t\left(V_\omega(x) - V_\omega(0) \approx C(d,\alpha)\, t^{-\frac{\alpha-d+2}{\alpha}}\, |x|^2\right) \to 1\,?$$
This can be easily proved since $(\omega, \mathbb{P}^{\delta_0}_t)$ is nothing but the Poisson point process with intensity $e^{-t(|x|^{-\alpha}\wedge 1)}\,dx$.

Remark: In fact, a slightly weaker localization bound is enough to do the above replacement.
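The description of $\mathbb{P}^{\delta_0}_t$ follows from a general fact about exponentially tilted Poisson point processes (a standard computation, filled in by me): tilting a unit-intensity Poisson point process by $e^{-\sum_i f(\omega_i)}$, $f \ge 0$, yields again a Poisson point process, with intensity $e^{-f(x)}\, dx$. Indeed, for every $g \ge 0$,
$$\frac{\mathbb{E}\left[e^{-\sum_i f(\omega_i)}\, e^{-\sum_i g(\omega_i)}\right]}{\mathbb{E}\left[e^{-\sum_i f(\omega_i)}\right]} = \exp\left\{-\int_{\mathbb{R}^d} e^{-f(x)}\left(1 - e^{-g(x)}\right) dx\right\},$$
which is the Laplace functional of that process. Here $t\langle \delta_0, V_\omega\rangle = t V_\omega(0) = \sum_i t\, v(\omega_i)$ (using that $v$ is radially symmetric), so $f(x) = t\, v(x) = t\,(|x|^{-\alpha}\wedge 1)$, matching the intensity stated above.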

This observation is useless (a circular argument) as it stands. But thanks to the last remark, there is a chance to proceed as follows: crude control on the potential $\Rightarrow$ crude control on the trajectory $\Rightarrow$ fine control on the potential $\Rightarrow$ fine control on the trajectory.

Assume $V_\omega$ attains its local minimum at 0 for simplicity.

4.1 Crude control on the potential

Lemma 1:
$$Q_t\left(V_\omega(0) \in \frac{d}{\alpha}\, a_1\, t^{-\frac{\alpha-d}{\alpha}} + t^{-\frac{3\alpha-3d+2}{4\alpha}}(-M_1, M_1)\right) \to 1.$$

Idea:
$$Z_t \approx \mathbb{E}\left[\exp\{-tV_\omega(0)\}\right] = \exp\left\{-a_1 t^{\frac{d}{\alpha}}\right\} \approx \sup_{h>0}\left[e^{-th}\, \mathbb{P}(V_\omega(0) \le h)\right],$$
and $\frac{d}{\alpha}\, a_1\, t^{-\frac{\alpha-d}{\alpha}} = h(t)$ is the maximizer. Then
$$\mathbb{E}\otimes E_0\left[\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\};\ V_\omega(0)\text{ far from } h(t)\right] \approx \mathbb{E}\left[\exp\{-tV_\omega(0)\};\ V_\omega(0)\text{ far from } h(t)\right] = o(Z_t).$$
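A consistency check for $h(t)$ (mine, not on the slides): the maximizer in the Laplace-type optimization should be the typical value of $V_\omega(0)$ under the tilt $e^{-tV_\omega(0)}$, i.e. the logarithmic derivative of the one-point functional,
$$h(t) = -\frac{d}{dt}\log \mathbb{E}\left[e^{-t V_\omega(0)}\right] \approx \frac{d}{dt}\left(a_1 t^{\frac{d}{\alpha}}\right) = \frac{d}{\alpha}\, a_1\, t^{-\frac{\alpha-d}{\alpha}},$$
which matches the centering in Lemma 1.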

Lemma 2:
$$Q_t\left(V_\omega(0) + V_\omega(x) \ge 2h(t) + c_1\, t^{-\frac{\alpha-d+2}{\alpha}}|x|^2 \ \text{ for } t^{\frac{\alpha-d+6}{8\alpha}} < |x| < M_2\, t^{\frac{\alpha-d+6}{8\alpha}}\right) \to 1.$$

Idea: By Lemma 1,
$$\exp\left\{-\int_0^t V_\omega(B_s)\,ds\right\} \approx \exp\{-t\,h(t)\} = \exp\left\{-\frac{d}{\alpha}\, a_1\, t^{\frac{d}{\alpha}}\right\}.$$
Then, use
$$\mathbb{E}\left[\exp\left\{-\frac{t}{2}\left(V_\omega(0) + V_\omega(x)\right)\right\}\right] \le \exp\left\{-a_1 t^{\frac{d}{\alpha}} - c_2\, t^{\frac{d-2}{\alpha}}|x|^2\right\}$$
and Chebyshev's inequality.

4.2 Crude control on the trajectory

[Figure: the profile $V_\omega(x)$ near 0 (value $V_\omega(0)$ at the center), with the scales $t^{\frac{\alpha-d+6}{8\alpha}}$ and $M_2\, t^{\frac{\alpha-d+6}{8\alpha}}$ marked on the $x$-axis.]

By penalizing a crossing,
$$Q_t\left(B_{[0,t]} \subset B\left(0,\ M_2\, t^{\frac{\alpha-d+6}{8\alpha}}\right)\right) \to 1.$$
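Why crossings are penalized at exactly this scale (an exponent check of mine): by Lemma 2, visiting a point with $|x| \ge t^{\frac{\alpha-d+6}{8\alpha}}$ costs, over time $t$, an extra quadratic penalty of at least the order of the fluctuation window that Lemma 1 allows for $t\,V_\omega(0)$:
$$t \cdot c_1\, t^{-\frac{\alpha-d+2}{\alpha}}\, |x|^2\Big|_{|x| = t^{\frac{\alpha-d+6}{8\alpha}}} = c_1\, t^{\frac{\alpha+3d-2}{4\alpha}}, \qquad t \cdot t^{-\frac{3\alpha-3d+2}{4\alpha}} = t^{\frac{\alpha+3d-2}{4\alpha}}.$$
Beyond the scale $M_2\, t^{\frac{\alpha-d+6}{8\alpha}}$, with $M_2$ large, the penalty therefore dominates everything the potential fluctuations leave room for, and paths crossing out of that ball carry negligible weight under $Q_t$.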

4.3 Fine control on the potential

The crude control on the trajectory is good enough to yield
$$Q_t\left(V_\omega(x) - V_\omega(0) \approx C(d,\alpha)\, t^{-\frac{\alpha-d+2}{\alpha}}|x|^2 \ \text{ in } B\left(0, t^{\frac{\alpha-d+2}{4\alpha}+\epsilon}\right)\right) \to 1.$$

4.4 Fine control on the trajectory

[Figure: the profile $V_\omega(x) - V_\omega(0)$ near 0, with the scale $t^{\frac{\alpha-d+2}{4\alpha}}(\log t)^{\frac12+\epsilon}$ marked on the $x$-axis.]

By penalizing a crossing,
$$Q_t\left(B_{[0,t]} \subset B\left(0,\ t^{\frac{\alpha-d+2}{4\alpha}}(\log t)^{\frac12+\epsilon}\right)\right) \to 1.$$

Thank you!