Some remarks on Large Deviations


S.R.S. Varadhan
Courant Institute, NYU
Conference in memory of Larry Shepp, April 25, 2014

We want to estimate certain probabilities.
Large Deviation Theory is a tool.
It needs to be set up properly.
We look at three examples.

LDP

$\log P_n[A] \simeq -n\,\inf_{x \in A} I(x)$

Estimates are local.
The upper bound is valid for compact sets.
The lower bound is valid for open sets.
Upper bound for closed sets? Fine if the space $X$ is compact.
If not, we need some estimates: compactification or some control.
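A numerical illustration (my addition, not in the slides): for fair coin tosses the rate function is $I(x) = x\log(2x) + (1-x)\log(2(1-x))$, and the exact binomial tail already exhibits the exponential decay rate at moderate $n$.

```python
import math

def rate(x):
    # Cramer rate function for fair coin tosses: I(x) = x log 2x + (1-x) log 2(1-x)
    return x * math.log(2 * x) + (1 - x) * math.log(2 * (1 - x))

def log_tail(n, a):
    # log P[S_n >= n*a] for S_n ~ Binomial(n, 1/2), via a stable log-sum-exp
    terms = [math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
             - n * math.log(2) for k in range(math.ceil(n * a), n + 1)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

n, a = 2000, 0.7
print(-log_tail(n, a) / n, rate(a))  # the two numbers should be close
```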

$\frac{1}{n}\log E^{P_n}\big[\exp[nF(x)]\big] \to \sup_x\,[F(x) - I(x)]$

If $F(y) - I(y) < F(x_0) - I(x_0)$ for $y \ne x_0$, then

$\frac{1}{Z_n}\exp[nF(x)]\,dP_n \to \delta_{x_0}$

Lower semicontinuity implies the upper bound.

Contraction: if $G : X \to Y$ and $Q_n = P_n G^{-1}$, the rate function for $Q_n$ is $J(y) = \inf_{x \in G^{-1}(y)} I(x)$.
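This limit can be checked by hand in the coin-tossing case (a sketch I am adding, not from the talk): for $F(x) = \theta x$ one has $\frac{1}{n}\log E[e^{\theta S_n}] = \log\frac{1+e^\theta}{2}$ exactly, which should equal $\sup_x[\theta x - I(x)]$; a grid search confirms the Legendre duality.

```python
import math

def rate(x):
    # fair-coin rate function, with the convention I(0) = I(1) = log 2
    if x in (0.0, 1.0):
        return math.log(2)
    return x * math.log(2 * x) + (1 - x) * math.log(2 * (1 - x))

def legendre(theta, steps=100000):
    # sup_x [theta*x - I(x)] over a fine grid of x in [0, 1]
    return max(theta * (k / steps) - rate(k / steps) for k in range(steps + 1))

for theta in (-2.0, -0.5, 0.0, 1.0, 3.0):
    exact = math.log((1 + math.exp(theta)) / 2)
    print(theta, legendre(theta), exact)  # middle and right columns agree
```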

1967. Larry Shepp.

$x(t)$ is a Gaussian process on $[0,1]$ with mean 0.
$\rho(s,t)$ is its covariance. Smooth.

$P[(\lambda G)^c] \simeq \exp[-c(G)\lambda^2 + o(\lambda^2)]$

$c(G) = \inf_{f \in G^c} I(f)$, where

$I(f) = \sup_g \Big[\int f(t)g(t)\,dt - \frac{1}{2}\iint \rho(s,t)g(s)g(t)\,ds\,dt\Big]$

If $G = \{f : \sup_{0 \le t \le 1} |f(t)| \le 1\}$, then

$c(G) = \big[2\sup_{0 \le t \le 1} \rho(t,t)\big]^{-1}$

The tail is coming from the point with the largest variance.
Is this always true?
Does every almost surely bounded Gaussian process have a Gaussian tail?
Do the constants always match?

1970. Landau and Shepp proved a Gaussian bound. Sankhya.

It is enough to prove an exponential tail:

$P[\|X\| \ge C\sqrt{n}] = P\Big[\Big\|\frac{X_1 + X_2 + \cdots + X_n}{\sqrt{n}}\Big\| \ge C\sqrt{n}\Big] \le P[\|X_1\| + \cdots + \|X_n\| \ge Cn] \le \exp[-cn]$

provided $C > E[\|X\|]$ and $E[e^{\theta\|X\|}] < \infty$.
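The last step is a Chernoff estimate, $c = \sup_\theta[\theta C - \log E[e^{\theta\|X\|}]]$. For a standard normal in one dimension $E[e^{\theta|X|}] = e^{\theta^2/2}\,\mathrm{erfc}(-\theta/\sqrt{2})$, and the sketch below (my own illustration, not from the talk) checks that the rate is strictly positive once $C > E[|X|] = \sqrt{2/\pi}$.

```python
import math

def log_mgf(theta):
    # log E[exp(theta*|X|)] for X ~ N(0,1): theta^2/2 + log erfc(-theta/sqrt(2))
    return theta * theta / 2 + math.log(math.erfc(-theta / math.sqrt(2)))

def chernoff_rate(C):
    # c = sup_theta [theta*C - log E[e^{theta|X|}]], searched on a grid theta >= 0
    return max(theta / 1000 * C - log_mgf(theta / 1000) for theta in range(5001))

mean = math.sqrt(2 / math.pi)      # E|X|, about 0.798
print(chernoff_rate(1.0))          # positive: exponential tail for C = 1 > E|X|
print(chernoff_rate(mean))         # 0: no exponential decay exactly at the mean
```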

Skorokhod published a proof of an exponential bound in a Banach space.

$X = X(1)$, where $X(t)$ is a continuous process with independent increments.

$P[\|X(1)\| \ge n] \le P\Big[\sup_{0 \le t \le 1} \|X(t)\| \ge n\Big] \le P[\tau_1 + \tau_2 + \cdots + \tau_n \le 1] \le e\,E\big[e^{-(\tau_1 + \cdots + \tau_n)}\big] = e\,\big[E[e^{-\tau}]\big]^n$

where the $\tau_i$ are the successive times the running supremum of $\|X(t)\|$ takes to increase by 1.

Fernique, 1970.

$X, Y$ are two independent copies. So are $\frac{X+Y}{\sqrt{2}}$ and $\frac{X-Y}{\sqrt{2}}$.

$F(t) = P[\|X\| \ge t]$

$F(t)\,[1 - F(s)] \le \Big[F\Big(\frac{t-s}{\sqrt{2}}\Big)\Big]^2$

Uses this to show the Gaussian estimate with some constant for $\|X\|$, i.e. $\log F(t) \le -ct^2$.
Improves it to get the right constant.
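For a one-dimensional standard normal, where $F(t) = P[|X| \ge t] = \mathrm{erfc}(t/\sqrt{2})$, Fernique's inequality can be checked directly at a few points (my illustration of this special case, not a proof):

```python
import math

def F(t):
    # F(t) = P[|X| >= t] for X ~ N(0,1)
    return math.erfc(t / math.sqrt(2))

def fernique_gap(s, t):
    # right side minus left side of F(t)[1 - F(s)] <= F((t-s)/sqrt(2))^2
    return F((t - s) / math.sqrt(2)) ** 2 - F(t) * (1 - F(s))

for s, t in [(0.5, 2.0), (1.0, 3.0), (1.0, 5.0), (2.0, 6.0)]:
    print(s, t, fernique_gap(s, t))  # all gaps are nonnegative
```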

This would follow from a general LDP for sums of i.i.d. random variables in a Banach space.
This was done in 1977.
It requires $E[e^{\theta\|X\|}] < \infty$ for all $\theta > 0$.
For a Gaussian this follows from $E[e^{\theta\|X\|}] < \infty$ for some $\theta > 0$.

Example. Sourav Chatterjee.

If $r = nx$, by Stirling's formula

$\binom{n}{r} = \exp\big[-n[x\log x + (1-x)\log(1-x)] + o(n)\big]$

For coin tossing with a biased coin,

$I(x) = x\log\frac{x}{p} + (1-x)\log\frac{1-x}{1-p}$
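The binomial asymptotics are easy to verify numerically with log-gamma (a check I am adding, using only the standard library):

```python
import math

def log_binom(n, r):
    # log of the binomial coefficient C(n, r)
    return math.lgamma(n + 1) - math.lgamma(r + 1) - math.lgamma(n - r + 1)

def stirling_rate(x):
    # -[x log x + (1-x) log(1-x)]: exponential growth rate of C(n, nx)
    return -(x * math.log(x) + (1 - x) * math.log(1 - x))

n, x = 100000, 0.3
print(log_binom(n, round(n * x)) / n, stirling_rate(x))  # nearly equal for large n
```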

Counting the number of graphs with specified subgraph counts.

$N$ vertices. The number of possible subgraphs $\Gamma$ with $k$ vertices in a complete graph of size $N$ is $c(N,\Gamma) \simeq c(\Gamma)N^k$.
In a given graph $G$ this may be smaller, and the ratio is some fraction $r(N,G,\Gamma) \le 1$.

Count the number of graphs $G$ having $N$ vertices with specified values $r(N,G,\Gamma_i) = r_i$ for a finite number of $\Gamma$'s.
Their number is

$\exp\big[N^2 J(\Gamma_1,r_1;\ldots;\Gamma_k,r_k) + o(N^2)\big]$

out of a total of $2^{\binom{N}{2}}$ possible graphs with $N$ vertices.

$0 \le J \le \frac{1}{2}\log 2$

Expression for $J$?

$0 \le x,y \le 1$; $f(x,y) = f(y,x)$; $0 \le f \le 1$

$r(\Gamma,f) = \int_{[0,1]^{V(\Gamma)}} \prod_{(i,j)\in E(\Gamma)} f(x_i,x_j) \prod_{i \in V(\Gamma)} dx_i$

$H(f) = -\frac{1}{2}\iint \big[f\log f + (1-f)\log(1-f)\big]\,dx\,dy$

$J = \sup_{f:\ r(\Gamma_i,f)=r_i,\ 1 \le i \le k} H(f)$

Let $f(x,y)$ be a continuous function.
Consider a "random" graph with $N$ vertices labeled $\{1,2,\ldots,N\}$: $(i,j)$ is an edge with probability $f(\frac{i}{N},\frac{j}{N})$, independently over pairs.
The "expected number" of subgraphs $\Gamma$ can be easily calculated.

Consider a one-to-one map $\varphi$ of $\Gamma$ into $\{1,2,\ldots,N\}$.
There are $N(N-1)\cdots(N-k+1)$ of them.
The chance that one of them maps edges in $\Gamma$ to edges in our random graph is

$\prod_{(v,v')\in E(\Gamma)} f\Big(\frac{\varphi(v)}{N},\frac{\varphi(v')}{N}\Big)$

The ratio of the expected number of subgraphs of type $\Gamma$ to the number in a complete graph, for large $N$, is

$r(\Gamma,f) = \int_{[0,1]^{V(\Gamma)}} \prod_{(i,j)\in E(\Gamma)} f(x_i,x_j) \prod_{i \in V(\Gamma)} dx_i$

The law of large numbers is valid.

$w(G) = \prod_{(i,j)\in E(G)} f\Big(\frac{i}{N},\frac{j}{N}\Big) \prod_{(i,j)\notin E(G)} \Big[1 - f\Big(\frac{i}{N},\frac{j}{N}\Big)\Big]$
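For a product kernel $f(x,y) = g(x)g(y)$ the triangle density factorizes as $r(\Gamma,f) = (\int_0^1 g^2\,dx)^3$ when $\Gamma$ is a triangle, which makes a convenient test case: $g(x) = x$ gives $1/27$. A small midpoint-rule sketch (my own check, not part of the slides):

```python
def triangle_density(f, m=60):
    # midpoint-rule approximation of the triangle density
    # r(triangle, f) = \int f(x,y) f(y,z) f(z,x) dx dy dz over [0,1]^3
    pts = [(i + 0.5) / m for i in range(m)]
    total = 0.0
    for x in pts:
        for y in pts:
            fxy = f(x, y)
            for z in pts:
                total += fxy * f(y, z) * f(z, x)
    return total / m ** 3

print(triangle_density(lambda x, y: x * y))  # close to (1/3)^3 = 1/27
print(triangle_density(lambda x, y: 0.5))    # constant kernel p: exactly p^3
```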

$\sum_{G \in \mathcal{G}_{N,\epsilon,r_1,r_2,\ldots,r_k}} w(G) \simeq 1$

The typical probability $w(G)$ under the distribution determined by $f$ has the property

$\log w(G) = \sum_{(i,j)\in E(G)} \log f\Big(\frac{i}{N},\frac{j}{N}\Big) + \sum_{(i,j)\notin E(G)} \log\Big[1 - f\Big(\frac{i}{N},\frac{j}{N}\Big)\Big] \simeq -N^2 H(f)$

You must have at least $\exp[N^2 H(f) + o(N^2)]$ graphs.
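With the convention $H \ge 0$ used here, a constant kernel $f \equiv p$ gives $H(p) = -\frac{1}{2}[p\log p + (1-p)\log(1-p)]$, and the number of graphs with exactly $m = p\binom{N}{2}$ edges is $\binom{\binom{N}{2}}{m}$, whose logarithm is $\approx N^2 H(p)$. A numeric check of this count (my addition):

```python
import math

def log_binom(n, k):
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def H(p):
    # entropy functional evaluated at the constant kernel f = p
    return -0.5 * (p * math.log(p) + (1 - p) * math.log(1 - p))

N, p = 400, 0.3
M = N * (N - 1) // 2                  # number of vertex pairs
m = round(p * M)                      # prescribed edge count
print(log_binom(M, m), N * N * H(p))  # agree to leading order in N^2
```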

A graph on $n$ vertices is recorded by its adjacency matrix

$X = \begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,n} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,n} \\ \vdots & & & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,n} \end{pmatrix}$

which is rescaled to a kernel $k \in K_n$ on $[0,1]^2$, equal to $x_{i,j}$ on the square $\big[\frac{i-1}{n},\frac{i}{n}\big) \times \big[\frac{j-1}{n},\frac{j}{n}\big)$.

$K = \{k : k(x,y),\ [0,1]^2 \to [0,1]\}$

$K_N$ is the range of $N \times N$ matrices.

$P(k_N) = \exp\Big[-\frac{N^2}{2}\log 2\Big]$

$\log P[k_N \simeq f] \simeq -N^2 I(f)$, where

$I(f) = \frac{1}{2}\iint \big[f\log(2f) + (1-f)\log(2(1-f))\big]\,dx\,dy = \frac{1}{2}\log 2 - H(f)$
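For a constant kernel $f \equiv p$ both functionals reduce to numbers, and the relation $I(p) = \frac{1}{2}\log 2 - H(p)$ can be confirmed directly (my check, same sign conventions as above):

```python
import math

def H(p):
    # entropy functional at the constant kernel p
    return -0.5 * (p * math.log(p) + (1 - p) * math.log(1 - p))

def I(p):
    # rate for observing the constant kernel p under uniform random graphs
    return 0.5 * (p * math.log(2 * p) + (1 - p) * math.log(2 * (1 - p)))

for p in (0.1, 0.25, 0.5, 0.9):
    print(p, I(p), 0.5 * math.log(2) - H(p))  # identical columns

print(I(0.5))  # the uniform density p = 1/2 is typical: zero rate
```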

Topology? Weak?
$k \to r(\Gamma,k)$ is not continuous.
If the LDP holds in a topology in which it is continuous, then

$J(\Gamma_1,r_1;\ldots;\Gamma_k,r_k) = \frac{1}{2}\log 2 - \inf_{k:\ r(\Gamma_i,k)=r_i,\ 1 \le i \le k} I(k) = \sup_{k:\ r(\Gamma_i,k)=r_i,\ 1 \le i \le k} H(k)$

A strong topology like $L_p$ would be OK. No chance: the fluctuations are too large.

Enter the "cut" topology. The cut metric is

$d_\square(k_1,k_2) = \sup_{\|\varphi\|_\infty \le 1,\ \|\psi\|_\infty \le 1} \Big|\iint [k_1(x,y) - k_2(x,y)]\,\varphi(x)\psi(y)\,dx\,dy\Big| \simeq \sup_{A,B} \Big|\int_{A \times B} [k_1(x,y) - k_2(x,y)]\,dx\,dy\Big|$
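On step kernels coming from $n \times n$ 0/1 matrices, the rectangle form of the cut distance can be evaluated exactly by brute force over vertex subsets (an illustrative sketch of my own; the exhaustive search is exponential and only feasible for tiny $n$):

```python
from itertools import product

def cut_distance(A, B):
    # sup over subsets S, T of |sum_{i in S, j in T} (A[i][j] - B[i][j])| / n^2:
    # the step-kernel version of the cut metric
    n = len(A)
    best = 0.0
    for S in product([0, 1], repeat=n):
        for T in product([0, 1], repeat=n):
            s = sum(A[i][j] - B[i][j]
                    for i in range(n) if S[i]
                    for j in range(n) if T[j])
            best = max(best, abs(s))
    return best / n ** 2

n = 6
ones = [[1] * n for _ in range(n)]
zeros = [[0] * n for _ in range(n)]
print(cut_distance(ones, zeros))  # 1.0: maximal disagreement
print(cut_distance(ones, ones))   # 0.0
```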

If $d_\square(k_n,k) \to 0$ and $\sup_{n,x,y} |k_n(x,y)| \le C$, then $r(\Gamma,k_n) \to r(\Gamma,k)$.

Limits of large graphs: count the number of occurrences of $\Gamma$ in the graph and divide by the number in a complete graph.

Assume the limit $\psi(\Gamma)$ of the ratio exists for every $\Gamma$. What are the possible limits? Graphons.

Representation: there is a symmetric function $f(x,y)$ on $[0,1] \times [0,1]$ such that for any graph $\Gamma$ with vertices $V(\Gamma)$ and edges $E(\Gamma)$,

$\psi(\Gamma) = r(\Gamma,f) = \int_{[0,1]^{V(\Gamma)}} \prod_{(i,j)\in E(\Gamma)} f(x_i,x_j) \prod_{i \in V(\Gamma)} dx_i$

$r(\Gamma,f) = r(\Gamma,g)$ for all $\Gamma$ if and only if $f(x,y) = g(\sigma x,\sigma y)$ for some $\sigma \in H$, the measure preserving transformations of $[0,1]$.

The cut topology is the smallest topology on $K/H$ that makes $f \to r(\Gamma,f)$ continuous for every $\Gamma$.
This topology works for the LLN. $2^n\,2^n \ll 2^{n^2}$

The upper bound needs compactness, or exponential tightness.
$K$ is not compact under the cut topology.
But $K/H$ is, by a theorem of Lovász-Szegedy.
It may be possible to prove the large deviation estimate in the topology induced by the "cut" topology on $K/H$.

Need to estimate the probability of a neighborhood of the orbit.
Szemerédi's regularity lemma.
Replaces the $H$-orbit by an orbit of the permutation group $\Pi_n$.
$\log n! = o(n^2)$

Example. Chiranjib Mukherjee. Brownian motion.

$L_T = \frac{1}{T}\int_0^T \delta_{x(s)}\,ds$

$\lambda(V) = \lim_{T\to\infty} \frac{1}{T}\log E\Big[\exp\Big[\int_0^T V(x(s))\,ds\Big]\Big] = \sup_{\|f\|_2=1} \Big[\int V(x)[f(x)]^2\,dx - \frac{1}{2}\int |\nabla f|^2\,dx\Big] = \sup_{f \ge 0,\ \|f\|_1=1} \Big[\int V(x)f(x)\,dx - \frac{1}{8}\int \frac{|\nabla f|^2}{f}\,dx\Big]$
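The two suprema agree because the substitution $f = g^2$ turns $\frac{1}{2}\int |\nabla g|^2\,dx$ into $\frac{1}{8}\int \frac{|\nabla f|^2}{f}\,dx$ (since $\nabla f = 2g\nabla g$). A one-dimensional quadrature check of this identity for a sample Gaussian $g$ (my own verification, not from the slides):

```python
import math

def quad(h, a=-8.0, b=8.0, n=4000):
    # simple midpoint quadrature on [a, b]
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

g  = lambda x: math.pi ** -0.25 * math.exp(-x * x / 2)  # L^2-normalized Gaussian
gp = lambda x: -x * g(x)                                # g'(x)
f  = lambda x: g(x) ** 2                                # f = g^2, a probability density
fp = lambda x: 2 * g(x) * gp(x)                         # f'(x)

lhs = 0.5 * quad(lambda x: gp(x) ** 2)
rhs = 0.125 * quad(lambda x: fp(x) ** 2 / f(x))
print(lhs, rhs)  # both equal 1/4 for this Gaussian
```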

$P[L_T \simeq f] = \exp[-T I(f) + o(T)]$

$I(f) = \frac{1}{8}\int \frac{|\nabla f|^2}{f}\,dx$

$E\Big[\exp\Big[\frac{1}{T}\int_0^T\!\!\int_0^T V(\beta(s) - \beta(t))\,ds\,dt\Big]\Big]$

$V(x) \to 0$ as $x \to \infty$.

Is this $\exp[cT + o(T)]$, with

$c = \sup_{f \ge 0,\ \|f\|_1=1} \Big[\iint V(x-y)f(x)f(y)\,dx\,dy - \frac{1}{8}\int \frac{|\nabla f|^2}{f}\,dx\Big]$?

Compactification of $\mathcal{M}(R^d)/R^d$.

If we only need to estimate

$E\Big[\exp\Big[\int_0^T V(\beta(s))\,ds\Big]\Big]$

the one-point compactification of $R^d$ is enough: $\{f : f \ge 0,\ \int f(x)\,dx \le 1\}$, and the vague topology is OK.

Translation invariant compactification?

$\{\widetilde{\mu}_\alpha\}$, with $\sum_\alpha \mu_\alpha(R^d) \le 1$, $d\mu_\alpha = f_\alpha\,dx$

$I(\{f_\alpha\}) = \sum_\alpha \frac{1}{8}\int \frac{|\nabla f_\alpha|^2}{f_\alpha}\,dx = \sum_\alpha I(f_\alpha)$

$c = \sup_{\{f_\alpha\}} \Big[\sum_\alpha \iint V(x-y)f_\alpha(x)f_\alpha(y)\,dx\,dy - \sum_\alpha I(f_\alpha)\Big] = \sup_f \Big[\iint V(x-y)f(x)f(y)\,dx\,dy - I(f)\Big]$

Needs compactness.

$D(\widetilde{\mu}_1, \widetilde{\mu}_2) = \sum_j \frac{1}{2^j} \Big|\int F_j(x_1,\ldots,x_{k_j})\Big[\prod_{r=1}^{k_j} \mu_1(dx_r) - \prod_{r=1}^{k_j} \mu_2(dx_r)\Big]\Big|$

The $\{F_j\}$ are translation invariant:

$F(x_1+x,\ldots,x_k+x) = F(x_1,\ldots,x_k)$

Complete with this metric. The completion is compact.

What is in the completion? $\{\widetilde{\mu}_\alpha\}$.

$D(\{\widetilde{\mu}_\alpha\},\{\widetilde{\mu}_\beta\}) = \sum_j \frac{1}{2^j} \Big|\sum_\alpha \int F_j(x_1,\ldots,x_{k_j}) \prod_r \mu_\alpha(dx_r) - \sum_\beta \int F_j(x_1,\ldots,x_{k_j}) \prod_r \mu_\beta(dx_r)\Big|$

Need to show that if $D(\{\widetilde{\mu}_\alpha\},\{\widetilde{\mu}_\beta\}) = 0$ then $\{\widetilde{\mu}_\alpha\} = \{\widetilde{\mu}_\beta\}$.

Identification of $\{\widetilde{\mu}_\alpha\}$ from

$\sum_\alpha \int F(x_1,\ldots,x_k) \prod_r \mu_\alpha(dx_r)$

and

$\sum_\alpha \Big[\int F(x_1,\ldots,x_k) \prod_r \mu_\alpha(dx_r)\Big]^m$

Take $F = \exp[\sqrt{-1}\sum_i \langle t_i, x_i\rangle]$, which is translation invariant provided $\sum_i t_i = 0$. This yields $\sum_\alpha \prod_i \varphi_\alpha(t_i)$, provided $\sum_i t_i = 0$, where $\varphi_\alpha$ is the characteristic function of $\mu_\alpha$.
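The point of the condition $\sum_i t_i = 0$ is that translating $\mu$ by $a$ multiplies $\varphi(t)$ by $e^{ita}$, so the product $\prod_i \varphi(t_i)$ picks up a factor $e^{ia\sum_i t_i} = 1$ and is translation invariant. A quick complex-arithmetic check for a shifted standard normal (my one-dimensional illustration):

```python
import cmath

def phi(t, a=0.0):
    # characteristic function of N(a, 1): exp(i t a - t^2 / 2)
    return cmath.exp(1j * t * a - t * t / 2)

ts = [1.0, 2.5, -3.5]   # note sum(ts) == 0

def prod_phi(a):
    p = 1
    for t in ts:
        p *= phi(t, a)
    return p

print(prod_phi(0.0))
print(prod_phi(1.7))    # same value: the product does not see the shift
```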

From these one recovers $|\varphi(t)|^2$. Write $\varphi(t) = |\varphi(t)|\,\chi(t)$.

$\prod_i \chi(t_i) = 1$ if $\sum_i t_i = 0$

$\chi(t+s) = \chi(t)\chi(s), \quad \chi(nt) = [\chi(t)]^n$

$\chi(t) = e^{i t a}$

$a$ is not determined: the measure is recovered only up to translation.

THANK YOU


More information

A new method to bound rate of convergence

A new method to bound rate of convergence A new method to bound rate of convergence Arup Bose Indian Statistical Institute, Kolkata, abose@isical.ac.in Sourav Chatterjee University of California, Berkeley Empirical Spectral Distribution Distribution

More information

ψ(t) := log E[e tx ] (1) P X j x e Nψ (x) j=1

ψ(t) := log E[e tx ] (1) P X j x e Nψ (x) j=1 Seminars 15/01/018 Ex 1. Exponential estimate Let X be a real valued random variable, and suppose that there exists r > 0 such that E[e rx ]

More information

International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994

International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994 International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994 1 PROBLEMS AND SOLUTIONS First day July 29, 1994 Problem 1. 13 points a Let A be a n n, n 2, symmetric, invertible

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Graph Limits: Some Open Problems

Graph Limits: Some Open Problems Graph Limits: Some Open Problems 1 Introduction Here are some questions from the open problems session that was held during the AIM Workshop Graph and Hypergraph Limits, Palo Alto, August 15-19, 2011.

More information

Sample path large deviations of a Gaussian process with stationary increments and regularily varying variance

Sample path large deviations of a Gaussian process with stationary increments and regularily varying variance Sample path large deviations of a Gaussian process with stationary increments and regularily varying variance Tommi Sottinen Department of Mathematics P. O. Box 4 FIN-0004 University of Helsinki Finland

More information

Probability and Measure

Probability and Measure Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability

More information

Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1

Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1 Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1 Feng Wei 2 University of Michigan July 29, 2016 1 This presentation is based a project under the supervision of M. Rudelson.

More information

Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines

Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Maximilian Kasy Department of Economics, Harvard University 1 / 37 Agenda 6 equivalent representations of the

More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

Lecture 19 : Brownian motion: Path properties I

Lecture 19 : Brownian motion: Path properties I Lecture 19 : Brownian motion: Path properties I MATH275B - Winter 2012 Lecturer: Sebastien Roch References: [Dur10, Section 8.1], [Lig10, Section 1.5, 1.6], [MP10, Section 1.1, 1.2]. 1 Invariance We begin

More information

MATH 6337 Second Midterm April 1, 2014

MATH 6337 Second Midterm April 1, 2014 You can use your book and notes. No laptop or wireless devices allowed. Write clearly and try to make your arguments as linear and simple as possible. The complete solution of one exercise will be considered

More information

2a Large deviation principle (LDP) b Contraction principle c Change of measure... 10

2a Large deviation principle (LDP) b Contraction principle c Change of measure... 10 Tel Aviv University, 2007 Large deviations 5 2 Basic notions 2a Large deviation principle LDP......... 5 2b Contraction principle................ 9 2c Change of measure................. 10 The formalism

More information

18.175: Lecture 14 Infinite divisibility and so forth

18.175: Lecture 14 Infinite divisibility and so forth 18.175 Lecture 14 18.175: Lecture 14 Infinite divisibility and so forth Scott Sheffield MIT 18.175 Lecture 14 Outline Infinite divisibility Higher dimensional CFs and CLTs Random walks Stopping times Arcsin

More information

EE4601 Communication Systems

EE4601 Communication Systems EE4601 Communication Systems Week 2 Review of Probability, Important Distributions 0 c 2011, Georgia Institute of Technology (lect2 1) Conditional Probability Consider a sample space that consists of two

More information

CMS winter meeting 2008, Ottawa. The heat kernel on connected sums

CMS winter meeting 2008, Ottawa. The heat kernel on connected sums CMS winter meeting 2008, Ottawa The heat kernel on connected sums Laurent Saloff-Coste (Cornell University) December 8 2008 p(t, x, y) = The heat kernel ) 1 x y 2 exp ( (4πt) n/2 4t The heat kernel is:

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

Brownian survival and Lifshitz tail in perturbed lattice disorder

Brownian survival and Lifshitz tail in perturbed lattice disorder Brownian survival and Lifshitz tail in perturbed lattice disorder Ryoki Fukushima Kyoto niversity Random Processes and Systems February 16, 2009 6 B T 1. Model ) ({B t t 0, P x : standard Brownian motion

More information

Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016

Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016 8. For any two events E and F, P (E) = P (E F ) + P (E F c ). Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016 Sample space. A sample space consists of a underlying

More information

is a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications.

is a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications. Stat 811 Lecture Notes The Wald Consistency Theorem Charles J. Geyer April 9, 01 1 Analyticity Assumptions Let { f θ : θ Θ } be a family of subprobability densities 1 with respect to a measure µ on a measurable

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Real Analysis Qualifying Exam May 14th 2016

Real Analysis Qualifying Exam May 14th 2016 Real Analysis Qualifying Exam May 4th 26 Solve 8 out of 2 problems. () Prove the Banach contraction principle: Let T be a mapping from a complete metric space X into itself such that d(tx,ty) apple qd(x,

More information

Notes for Functional Analysis

Notes for Functional Analysis Notes for Functional Analysis Wang Zuoqin (typed by Xiyu Zhai) November 6, 2015 1 Lecture 18 1.1 The convex hull Let X be any vector space, and E X a subset. Definition 1.1. The convex hull of E is the

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Singular value decomposition (SVD) of large random matrices. India, 2010

Singular value decomposition (SVD) of large random matrices. India, 2010 Singular value decomposition (SVD) of large random matrices Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu India, 2010 Motivation New challenge of multivariate statistics:

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d

More information

Brownian Bridge and Self-Avoiding Random Walk.

Brownian Bridge and Self-Avoiding Random Walk. Brownian Bridge and Self-Avoiding Random Walk. arxiv:math/02050v [math.pr] 9 May 2002 Yevgeniy Kovchegov Email: yevgeniy@math.stanford.edu Fax: -650-725-4066 November 2, 208 Abstract We derive the Brownian

More information

Problem 1A. Use residues to compute. dx x

Problem 1A. Use residues to compute. dx x Problem 1A. A non-empty metric space X is said to be connected if it is not the union of two non-empty disjoint open subsets, and is said to be path-connected if for every two points a, b there is a continuous

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

lim n C1/n n := ρ. [f(y) f(x)], y x =1 [f(x) f(y)] [g(x) g(y)]. (x,y) E A E(f, f),

lim n C1/n n := ρ. [f(y) f(x)], y x =1 [f(x) f(y)] [g(x) g(y)]. (x,y) E A E(f, f), 1 Part I Exercise 1.1. Let C n denote the number of self-avoiding random walks starting at the origin in Z of length n. 1. Show that (Hint: Use C n+m C n C m.) lim n C1/n n = inf n C1/n n := ρ.. Show that

More information

Multivariable Calculus

Multivariable Calculus 2 Multivariable Calculus 2.1 Limits and Continuity Problem 2.1.1 (Fa94) Let the function f : R n R n satisfy the following two conditions: (i) f (K ) is compact whenever K is a compact subset of R n. (ii)

More information

Chapter 6: Random Processes 1

Chapter 6: Random Processes 1 Chapter 6: Random Processes 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

1. Principle of large deviations

1. Principle of large deviations 1. Principle of large deviations This section will provide a short introduction to the Law of large deviations. The basic principle behind this theory centers on the observation that certain functions

More information

Section 27. The Central Limit Theorem. Po-Ning Chen, Professor. Institute of Communications Engineering. National Chiao Tung University

Section 27. The Central Limit Theorem. Po-Ning Chen, Professor. Institute of Communications Engineering. National Chiao Tung University Section 27 The Central Limit Theorem Po-Ning Chen, Professor Institute of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 3000, R.O.C. Identically distributed summands 27- Central

More information

General Theory of Large Deviations

General Theory of Large Deviations Chapter 30 General Theory of Large Deviations A family of random variables follows the large deviations principle if the probability of the variables falling into bad sets, representing large deviations

More information

Lecture 23. Random walks

Lecture 23. Random walks 18.175: Lecture 23 Random walks Scott Sheffield MIT 1 Outline Random walks Stopping times Arcsin law, other SRW stories 2 Outline Random walks Stopping times Arcsin law, other SRW stories 3 Exchangeable

More information

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space.

2) Let X be a compact space. Prove that the space C(X) of continuous real-valued functions is a complete metric space. University of Bergen General Functional Analysis Problems with solutions 6 ) Prove that is unique in any normed space. Solution of ) Let us suppose that there are 2 zeros and 2. Then = + 2 = 2 + = 2. 2)

More information

RAJASTHAN P.E.T. MATHS-1995

RAJASTHAN P.E.T. MATHS-1995 RAJASTHAN P.E.T. MATHS-1995 1. The equation of the normal to the circle x2 + y2 = a2 at point (x y ) will be : (1) x y - xy = (2) xx - yy = (3) x y + xy = (4) xx + yy = 2. Equation of the bisector of the

More information

Problem Set 5. 2 n k. Then a nk (x) = 1+( 1)k

Problem Set 5. 2 n k. Then a nk (x) = 1+( 1)k Problem Set 5 1. (Folland 2.43) For x [, 1), let 1 a n (x)2 n (a n (x) = or 1) be the base-2 expansion of x. (If x is a dyadic rational, choose the expansion such that a n (x) = for large n.) Then the

More information

Overview of normed linear spaces

Overview of normed linear spaces 20 Chapter 2 Overview of normed linear spaces Starting from this chapter, we begin examining linear spaces with at least one extra structure (topology or geometry). We assume linearity; this is a natural

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

Renyi divergence and asymptotic theory of minimal K-point random graphs Alfred Hero Dept. of EECS, The University of Michigan, Summer 1999

Renyi divergence and asymptotic theory of minimal K-point random graphs Alfred Hero Dept. of EECS, The University of Michigan, Summer 1999 Renyi divergence and asymptotic theory of minimal K-point random graphs Alfred Hero Dept. of EECS, The University of Michigan, Summer 999 Outline Rényi Entropy and Rényi Divergence ffl Euclidean k-minimal

More information

Szemerédi s regularity lemma revisited. Lewis Memorial Lecture March 14, Terence Tao (UCLA)

Szemerédi s regularity lemma revisited. Lewis Memorial Lecture March 14, Terence Tao (UCLA) Szemerédi s regularity lemma revisited Lewis Memorial Lecture March 14, 2008 Terence Tao (UCLA) 1 Finding models of large dense graphs Suppose we are given a large dense graph G = (V, E), where V is a

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

1. Let A R be a nonempty set that is bounded from above, and let a be the least upper bound of A. Show that there exists a sequence {a n } n N

1. Let A R be a nonempty set that is bounded from above, and let a be the least upper bound of A. Show that there exists a sequence {a n } n N Applied Analysis prelim July 15, 216, with solutions Solve 4 of the problems 1-5 and 2 of the problems 6-8. We will only grade the first 4 problems attempted from1-5 and the first 2 attempted from problems

More information

Spring 2012 Math 541A Exam 1. X i, S 2 = 1 n. n 1. X i I(X i < c), T n =

Spring 2012 Math 541A Exam 1. X i, S 2 = 1 n. n 1. X i I(X i < c), T n = Spring 2012 Math 541A Exam 1 1. (a) Let Z i be independent N(0, 1), i = 1, 2,, n. Are Z = 1 n n Z i and S 2 Z = 1 n 1 n (Z i Z) 2 independent? Prove your claim. (b) Let X 1, X 2,, X n be independent identically

More information

Chapter 1. Sets and probability. 1.3 Probability space

Chapter 1. Sets and probability. 1.3 Probability space Random processes - Chapter 1. Sets and probability 1 Random processes Chapter 1. Sets and probability 1.3 Probability space 1.3 Probability space Random processes - Chapter 1. Sets and probability 2 Probability

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Asymptotic results for empirical measures of weighted sums of independent random variables

Asymptotic results for empirical measures of weighted sums of independent random variables Asymptotic results for empirical measures of weighted sums of independent random variables B. Bercu and W. Bryc University Bordeaux 1, France Seminario di Probabilità e Statistica Matematica Sapienza Università

More information

The Schwartz Space: Tools for Quantum Mechanics and Infinite Dimensional Analysis

The Schwartz Space: Tools for Quantum Mechanics and Infinite Dimensional Analysis Mathematics 2015, 3, 527-562; doi:10.3390/math3020527 Article OPEN ACCESS mathematics ISSN 2227-7390 www.mdpi.com/journal/mathematics The Schwartz Space: Tools for Quantum Mechanics and Infinite Dimensional

More information

A = A U. U [n] P(A U ). n 1. 2 k(n k). k. k=1

A = A U. U [n] P(A U ). n 1. 2 k(n k). k. k=1 Lecture I jacques@ucsd.edu Notation: Throughout, P denotes probability and E denotes expectation. Denote (X) (r) = X(X 1)... (X r + 1) and let G n,p denote the Erdős-Rényi model of random graphs. 10 Random

More information

Stationary independent increments. 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process.

Stationary independent increments. 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process. Stationary independent increments 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process. 2. If each set of increments, corresponding to non-overlapping collection of

More information

Chapter 6 - Random Processes

Chapter 6 - Random Processes EE385 Class Notes //04 John Stensby Chapter 6 - Random Processes Recall that a random variable X is a mapping between the sample space S and the extended real line R +. That is, X : S R +. A random process

More information

Connection to Branching Random Walk

Connection to Branching Random Walk Lecture 7 Connection to Branching Random Walk The aim of this lecture is to prepare the grounds for the proof of tightness of the maximum of the DGFF. We will begin with a recount of the so called Dekking-Host

More information

Compact orbit spaces in Hilbert spaces and limits of edge-colouring models

Compact orbit spaces in Hilbert spaces and limits of edge-colouring models Compact orbit spaces in Hilbert spaces and limits of edge-colouring models Guus Regts 1 and Alexander Schrijver 2 Abstract. Let G be a group of orthogonal transformations of a real Hilbert space H. Let

More information

Empirical Processes and random projections

Empirical Processes and random projections Empirical Processes and random projections B. Klartag, S. Mendelson School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540, USA. Institute of Advanced Studies, The Australian National

More information

Annealed Brownian motion in a heavy tailed Poissonian potential

Annealed Brownian motion in a heavy tailed Poissonian potential Annealed Brownian motion in a heavy tailed Poissonian potential Ryoki Fukushima Tokyo Institute of Technology Workshop on Random Polymer Models and Related Problems, National University of Singapore, May

More information

Large deviations, weak convergence, and relative entropy

Large deviations, weak convergence, and relative entropy Large deviations, weak convergence, and relative entropy Markus Fischer, University of Padua Revised June 20, 203 Introduction Rare event probabilities and large deviations: basic example and definition

More information

Concentration of Measures by Bounded Couplings

Concentration of Measures by Bounded Couplings Concentration of Measures by Bounded Couplings Subhankar Ghosh, Larry Goldstein and Ümit Işlak University of Southern California [arxiv:0906.3886] [arxiv:1304.5001] May 2013 Concentration of Measure Distributional

More information

Correlation dimension for self-similar Cantor sets with overlaps

Correlation dimension for self-similar Cantor sets with overlaps F U N D A M E N T A MATHEMATICAE 155 (1998) Correlation dimension for self-similar Cantor sets with overlaps by Károly S i m o n (Miskolc) and Boris S o l o m y a k (Seattle, Wash.) Abstract. We consider

More information

Invariance Principle for Variable Speed Random Walks on Trees

Invariance Principle for Variable Speed Random Walks on Trees Invariance Principle for Variable Speed Random Walks on Trees Wolfgang Löhr, University of Duisburg-Essen joint work with Siva Athreya and Anita Winter Stochastic Analysis and Applications Thoku University,

More information

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics Jiti Gao Department of Statistics School of Mathematics and Statistics The University of Western Australia Crawley

More information

arxiv: v3 [math.pr] 18 Aug 2017

arxiv: v3 [math.pr] 18 Aug 2017 Sparse Exchangeable Graphs and Their Limits via Graphon Processes arxiv:1601.07134v3 [math.pr] 18 Aug 2017 Christian Borgs Microsoft Research One Memorial Drive Cambridge, MA 02142, USA Jennifer T. Chayes

More information

18.175: Lecture 13 Infinite divisibility and Lévy processes

18.175: Lecture 13 Infinite divisibility and Lévy processes 18.175 Lecture 13 18.175: Lecture 13 Infinite divisibility and Lévy processes Scott Sheffield MIT Outline Poisson random variable convergence Extend CLT idea to stable random variables Infinite divisibility

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ. Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

Austin Mohr Math 704 Homework 6

Austin Mohr Math 704 Homework 6 Austin Mohr Math 704 Homework 6 Problem 1 Integrability of f on R does not necessarily imply the convergence of f(x) to 0 as x. a. There exists a positive continuous function f on R so that f is integrable

More information

Infinitely iterated Brownian motion

Infinitely iterated Brownian motion Mathematics department Uppsala University (Joint work with Nicolas Curien) This talk was given in June 2013, at the Mittag-Leffler Institute in Stockholm, as part of the Symposium in honour of Olav Kallenberg

More information

Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems. p. 1/1

Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems. p. 1/1 Nonlinear Systems and Control Lecture # 12 Converse Lyapunov Functions & Time Varying Systems p. 1/1 p. 2/1 Converse Lyapunov Theorem Exponential Stability Let x = 0 be an exponentially stable equilibrium

More information

SOLVABLE VARIATIONAL PROBLEMS IN N STATISTICAL MECHANICS

SOLVABLE VARIATIONAL PROBLEMS IN N STATISTICAL MECHANICS SOLVABLE VARIATIONAL PROBLEMS IN NON EQUILIBRIUM STATISTICAL MECHANICS University of L Aquila October 2013 Tullio Levi Civita Lecture 2013 Coauthors Lorenzo Bertini Alberto De Sole Alessandra Faggionato

More information

5 Introduction to the Theory of Order Statistics and Rank Statistics

5 Introduction to the Theory of Order Statistics and Rank Statistics 5 Introduction to the Theory of Order Statistics and Rank Statistics This section will contain a summary of important definitions and theorems that will be useful for understanding the theory of order

More information