Brownian survival and Lifshitz tail in perturbed lattice disorder
Ryoki Fukushima (Kyoto University)
Random Processes and Systems, February 16, 2009
1. Model

$(\{B_t\}_{t \ge 0}, P_x)$: standard Brownian motion on $\mathbb{R}^d$.

$(\{\xi_q\}_{q \in \mathbb{Z}^d}, P_\theta)$: i.i.d. with $P_\theta(\xi_q \in dx) \propto \exp\{-|x|^\theta\}\,dx$. We call $\xi := \sum_{q \in \mathbb{Z}^d} \delta_{q + \xi_q}$ the perturbed lattice.

Killing traps: we define killing traps by
$$V_\xi(x) := \sum_{q \in \mathbb{Z}^d} W(x - q - \xi_q).$$
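To make the model concrete, here is a minimal numerical sketch of one sample of the perturbed lattice and its trap potential, assuming $d = 2$, $\theta = 1$, trap shape $W = \mathbf{1}_{B(0,1)}$, and a lattice truncated to a finite box; all of these choices are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 10  # truncate the lattice Z^2 to {-M,...,M}^2 (illustrative)

# Lattice sites q.
sites = np.array([(i, j) for i in range(-M, M + 1)
                  for j in range(-M, M + 1)], dtype=float)

# Displacements xi_q with density proportional to exp(-|x|^theta), theta = 1:
# in d = 2 the radius then has density ~ r e^{-r}, i.e. Gamma(shape=2, scale=1),
# and the direction is uniform.
radii = rng.gamma(2.0, 1.0, size=len(sites))
angles = rng.uniform(0.0, 2.0 * np.pi, size=len(sites))
points = sites + radii[:, None] * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def V(x, a=1.0):
    """Trap potential V_xi(x) = sum_q W(x - q - xi_q), W = indicator of B(0, a)."""
    return int(np.sum(np.linalg.norm(points - x, axis=1) < a))
```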
Survival probability

The main object of this talk is
$$S_{T,\xi} = E_0\Big[\exp\Big\{-\int_0^T V_\xi(B_s)\,ds\Big\}\Big].$$

Example 1. When $\xi$ is the lattice points, $\log S_{T,\xi} \sim -c_1 T$ as $T \to \infty$.

Example 2 (Donsker–Varadhan '75, Sznitman '90, '93). When $\xi$ is the Poisson points,
$$\log E_\theta[S_{T,\xi}] \sim -c_2\,T^{\frac{d}{d+2}}, \quad T \to \infty,$$
$$\log S_{T,\xi} \sim -c_3\,\frac{T}{(\log T)^{2/d}}, \quad T \to \infty, \quad \text{a.s.}$$
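The survival probability itself can be estimated by a crude Feynman–Kac Monte Carlo. This is only a sanity-check sketch, assuming $d = 2$, $\theta = 1$, trap radius $1$, a truncated lattice, and an Euler discretization of the time integral; none of these choices come from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
M, a = 8, 1.0  # lattice truncation and trap radius (illustrative)

# One sample of the perturbed lattice (theta = 1: Gamma(2,1) radius, uniform angle).
sites = np.array([(i, j) for i in range(-M, M + 1)
                  for j in range(-M, M + 1)], dtype=float)
r = rng.gamma(2.0, 1.0, size=len(sites))
phi = rng.uniform(0.0, 2.0 * np.pi, size=len(sites))
points = sites + np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)

def survival(T, n_paths=200, dt=0.05):
    """Monte Carlo estimate of S_{T,xi} = E_0[exp(-int_0^T V_xi(B_s) ds)]."""
    x = np.zeros((n_paths, 2))
    weight = np.ones(n_paths)
    for _ in range(int(T / dt)):
        x += np.sqrt(dt) * rng.standard_normal((n_paths, 2))
        # V_xi at each path position = number of trap balls covering it.
        v = np.array([np.sum(np.linalg.norm(points - p, axis=1) < a) for p in x])
        weight *= np.exp(-v * dt)
    return float(weight.mean())
```

The estimate decays quickly in $T$ for a dense trap configuration, in line with Example 1.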
2. Annealed asymptotics

2.1 Reduction step

We take $W = \mathbf{1}_{B(0,1)}$ for simplicity. Then
$$E_\theta[S_{T,\xi}] = E_\theta\big[P_0(H_{\operatorname{supp} V_\xi} > T)\big] \ge P_\theta(\xi(U) = 0)\,P_0(T_U > T) \qquad (U \subset \mathbb{R}^d \setminus \operatorname{supp} V_\xi),$$
where $H_A$ denotes the hitting time of $A$ and $T_U$ the exit time from $U$. By the Laplace principle (writing $S_T$ for $E_\theta[S_{T,\xi}]$),
$$\log S_T \approx \sup_U\big\{\log P_\theta(\xi(U) = 0) + \log P_0(T_U > T)\big\},$$
where the supremum is taken over $U \subset \mathbb{R}^d \setminus \operatorname{supp} V_\xi$.
2.2 Hole probability

It is well known that $\log P_0(T_U > T) \sim -\lambda_1(U)\,T$, where $\lambda_1(U)$ is the smallest eigenvalue of $-\frac{1}{2}\Delta$ with Dirichlet boundary condition in $U$.

Lemma. Under some regularity assumptions on $U \subset \mathbb{R}^d$,
$$\log P_\theta(\xi(U) = 0) \asymp -\int_U \operatorname{dist}(x, U^c)^\theta\,dx.$$

Consequently,
$$\log S_T \approx -\inf_U \Big\{\lambda_1(U)\,T + \int_U \operatorname{dist}(x, U^c)^\theta\,dx\Big\}.$$
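As a consistency check on the Lemma's cost functional: for a ball $U = B(0,R)$ one has $\operatorname{dist}(x, U^c) = R - |x|$, and the integral scales like $R^{d+\theta}$. A sympy sketch with the sample values $d = 3$, $\theta = 2$ (chosen only for illustration):

```python
import sympy as sp

R, rho = sp.symbols('R rho', positive=True)
d, theta = 3, 2  # sample values for illustration

# Radial form of int_U dist(x, U^c)^theta dx for U = B(0, R),
# dropping the constant angular factor:
I = sp.integrate((R - rho)**theta * rho**(d - 1), (rho, 0, R))

ratio = sp.simplify(I / R**(d + theta))  # should be a pure constant
```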
By the scaling $U = rU'$,
$$\inf_U\Big\{\lambda_1(U)\,T + \int_U \operatorname{dist}(x, U^c)^\theta\,dx\Big\} = \inf_{U'}\Big\{\lambda_1(U')\,Tr^{-2} + r^{d+\theta}\int_{U'}\operatorname{dist}(x, (U')^c)^\theta\,dx\Big\}$$
$$= Tr^{-2}\inf_{U'}\Big\{\lambda_1(U') + \frac{r^{d+\theta}}{Tr^{-2}}\int_{U'}\operatorname{dist}(x, (U')^c)^\theta\,dx\Big\}.$$

Natural choice: $Tr^{-2} = r^{d+\theta}$? ($r = T^{1/(d+\theta+2)}$)

An inconvenient truth:
$$\inf_{U'}\Big\{\lambda_1(U') + \int_{U'}\operatorname{dist}(x, (U')^c)^\theta\,dx\Big\} \to 0 \quad (r \to \infty),$$
the infimum degenerating along domains with structure on the scale $1/r$.
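The "natural choice" balance can be checked symbolically: writing $r = T^a$, the terms $Tr^{-2} = T^{1-2a}$ and $r^{d+\theta} = T^{a(d+\theta)}$ match exactly when $a = 1/(d+\theta+2)$. A small sympy sketch:

```python
import sympy as sp

d, theta, a = sp.symbols('d theta a', positive=True)

# Match the exponents of T in T r^{-2} and r^{d+theta} with r = T^a.
a_star = sp.solve(sp.Eq(1 - 2 * a, a * (d + theta)), a)[0]
```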
[Figure: a box of side $r$ compared with a domain carrying structure on the scale $1/r$; inserting fine structure makes $\int \operatorname{dist}(x, \cdot)^\theta\,dx$ collapse while $\lambda_1$ stays of the same order, which is why the infimum above degenerates.]
2.3 Constant capacity regime

The degeneration is cured by retaining, inside the domain, a sparse array of traps whose spacing (in the scaled picture) is
$$\begin{cases} (\log r)^{-1/2} & (d = 2),\\ r^{-(d-2)/d} & (d \ge 3), \end{cases}$$
so that the retained traps carry constant total capacity. For such $U'_r \subset U'$,
$$\int_{U'_r}\operatorname{dist}(x, (U'_r)^c)^\theta\,dx \ll \int_{U'}\operatorname{dist}(x, (U')^c)^\theta\,dx, \qquad \lambda_1(U') + \mathrm{const.} > \lambda_1(U'_r).$$
[Figure: a box with traps of radius $1/r$ retained at the above spacing.]
In this picture ($d \ge 3$),
$$\int_U \operatorname{dist}(x, U^c)^\theta\,dx \asymp r^{\,d + \frac{2\theta}{d}}.$$
This is optimal in the following sense:

Proposition.
$$\inf_{r \ge 1}\ \inf_{U'}\Big\{\lambda_1(U') + r^{\frac{(d-2)\theta}{d}}\int_{U'}\operatorname{dist}(x, (U')^c)^\theta\,dx\Big\} > 0,$$
where $U'$ ranges over domains whose complement components have inradius at least $r^{-1}$.

Therefore we should take
$$\frac{r^{d+\theta}}{Tr^{-2}} = r^{\frac{(d-2)\theta}{d}}$$
in
$$Tr^{-2}\inf_{U'}\Big\{\lambda_1(U') + \frac{r^{d+\theta}}{Tr^{-2}}\int_{U'}\operatorname{dist}(x, (U')^c)^\theta\,dx\Big\}.$$
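Taking the constant-capacity cost $\int_U \operatorname{dist}(x, U^c)^\theta\,dx \asymp r^{d+2\theta/d}$ at face value, the balance $Tr^{-2} = r^{d+2\theta/d}$ reproduces the $d \ge 3$ exponent of the annealed result. A sympy check of this algebra (the modelling input, not the computation, is the nontrivial step):

```python
import sympy as sp

d, theta, a = sp.symbols('d theta a', positive=True)

# Balance T r^{-2} = r^{d + 2*theta/d} with r = T^a:
a_star = sp.solve(sp.Eq(1 - 2 * a, a * (d + 2 * theta / d)), a)[0]

# The annealed cost is then T r^{-2} = T^{1 - 2*a_star}.
cost_exponent = sp.simplify(1 - 2 * a_star)
```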
2.4 Results

Theorem 1. For any $\theta > 0$, as $T \to \infty$,
$$-\log E_\theta[S_{T,\xi}] \asymp \begin{cases} T^{\frac{2+\theta}{4+\theta}}(\log T)^{\frac{\theta}{4+\theta}} & (d = 2),\\[2pt] T^{\frac{d^2+2\theta}{d^2+2d+2\theta}} & (d \ge 3). \end{cases}$$

Remarks.
(1) Weak disorder limit: $\frac{d^2+2\theta}{d^2+2d+2\theta} \to 1$ as $\theta \to \infty$.
(2) Strong disorder limit: $\frac{d^2+2\theta}{d^2+2d+2\theta} \to \frac{d}{d+2}$ as $\theta \to 0$.
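The two limiting regimes in the Remarks can be verified directly:

```python
import sympy as sp

d, theta = sp.symbols('d theta', positive=True)
expo = (d**2 + 2 * theta) / (d**2 + 2 * d + 2 * theta)

# Weak disorder: the exponent tends to 1 (lattice-like decay, Example 1).
weak = sp.limit(expo, theta, sp.oo)

# Strong disorder: the exponent tends to d/(d+2) (Poissonian decay, Example 2).
strong = sp.simplify(sp.limit(expo, theta, 0))
```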
3. Lifshitz tail

Define the integrated density of states of $-\frac{1}{2}\Delta + V_\xi$ by
$$N(\lambda) := \lim_{N\to\infty}\frac{1}{(2N)^d}\,E_\theta\big[\#\big\{k : \lambda_k^\xi((-N,N)^d) \le \lambda\big\}\big],$$
where $\lambda_k^\xi((-N,N)^d)$ are the Dirichlet eigenvalues of $-\frac{1}{2}\Delta + V_\xi$ in $(-N,N)^d$.

Corollary 1. As $\lambda \to 0$,
$$\log N(\lambda) \asymp \begin{cases} -\lambda^{-1-\frac{\theta}{2}}\big(\log \lambda^{-1}\big)^{-\frac{\theta}{2}} & (d = 2),\\[2pt] -\lambda^{-\frac{d}{2}-\frac{\theta}{d}} & (d \ge 3). \end{cases}$$

[Figure: sketch of the integrated density of states near $\lambda = 0$; for the complete lattice $N(\lambda)$ vanishes below a positive threshold, while for the perturbed lattice it has a Lifshitz tail down to $\lambda = 0$.]
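For $d \ge 3$, the Lifshitz exponent is consistent with the annealed exponent of Theorem 1 through the standard heuristic correspondence between survival and the Lifshitz tail: if $-\log E_\theta[S_{T,\xi}] \asymp T^{\gamma}$, then $\log N(\lambda) \asymp -\lambda^{-\gamma/(1-\gamma)}$ (stated here as a heuristic, not as part of the talk's proof). A sympy check that this yields the exponent $\frac{d}{2} + \frac{\theta}{d}$:

```python
import sympy as sp

d, theta = sp.symbols('d theta', positive=True)

gamma = (d**2 + 2 * theta) / (d**2 + 2 * d + 2 * theta)  # Theorem 1, d >= 3
lifshitz = sp.simplify(gamma / (1 - gamma))
```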
4. Quenched asymptotics

4.1 The smallest eigenvalue and IDS

We start with a standard estimate:
$$S_{T,\xi} \le E_0\Big[\exp\Big\{-\int_0^T V_\xi(B_s)\,ds\Big\};\ T_{(-T,T)^d} > T\Big] + e^{-cT} \lesssim \exp\big\{-\lambda_1^\xi((-T,T)^d)\,T\big\} + e^{-cT}.$$

Key estimate:
$$N(\lambda) = \sup_{N \in \mathbb{N}} \frac{1}{(2N)^d}\,E_\theta\big[\#\big\{k : \lambda_k^\xi((-N,N)^d) \le \lambda\big\}\big] \ge \sup_{N \in \mathbb{N}} \frac{1}{(2N)^d}\,P_\theta\big(\lambda_1^\xi((-N,N)^d) \le \lambda\big).$$
Using the Corollary, we have (when $d \ge 3$)
$$P_\theta\big(\lambda_1^\xi((-N,N)^d) \le \lambda\big) \le (2N)^d \exp\big\{-c_1 \lambda^{-L}\big\}, \qquad L = \frac{d}{2} + \frac{\theta}{d},$$
for any $N \in \mathbb{N}$ and small $\lambda > 0$. If we take $N = T$ and $\lambda = (c_2^{-1}\log T)^{-1/L}$,
$$P_\theta\Big(\lambda_1^\xi((-T,T)^d) \le (c_2^{-1}\log T)^{-1/L}\Big) \le 2^d\,T^{\,d - c_1/c_2}.$$
Thus for small $c_2 > 0$ the right-hand side is summable, and Borel–Cantelli gives
$$\lambda_1^\xi((-T,T)^d) \ge (c_2^{-1}\log T)^{-1/L}$$
for large $T$, almost surely.
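The Borel–Cantelli step needs the bound $2^d T^{d - c_1/c_2}$ to be summable over integer $T$, which holds once $c_2$ is so small that $c_1/c_2 > d + 1$. A sympy sketch with the sample values $d = 3$ and $c_1/c_2 = d + 2$ (illustrative only):

```python
import sympy as sp

T = sp.symbols('T', positive=True, integer=True)
d = 3
c1_over_c2 = d + 2  # c_2 chosen small enough that c_1/c_2 > d + 1

# Sum of the probability bounds 2^d T^{d - c_1/c_2} over integer T:
total = sp.Sum(2**d * T**(d - c1_over_c2), (T, 1, sp.oo)).doit()
```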
4.2 Results

We can also prove an (essentially) matching lower bound and obtain:

Theorem 2. For any $\theta > 0$, we have
$$-\log S_{T,\xi} \asymp \begin{cases} T\,(\log T)^{-\frac{2}{2+\theta}}(\log\log T)^{-\frac{\theta}{2+\theta}} & (d = 2),\\[2pt] T\,(\log T)^{-\frac{2d}{d^2+2\theta}} & (d \ge 3), \end{cases}$$
with $P_\theta$-probability one.
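The $d \ge 3$ rate in Theorem 2 matches the eigenvalue bound from the previous slide: with $\lambda_1^\xi((-T,T)^d) \asymp (\log T)^{-1/L}$ and $L = \frac{d}{2} + \frac{\theta}{d}$, the quenched cost is $T\lambda_1^\xi \asymp T(\log T)^{-1/L}$, and $1/L = \frac{2d}{d^2+2\theta}$. A sympy check of the last identity:

```python
import sympy as sp

d, theta = sp.symbols('d theta', positive=True)

L = d / 2 + theta / d
inv_L = sp.simplify(1 / L)
```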