Mathematical Methods for Neurosciences. ENS - Master MVA / Paris 6 - Master Maths-Bio (2014-2015)

Mathematical Methods for Neurosciences
ENS - Master MVA / Paris 6 - Master Maths-Bio (2014-2015)
Etienne Tanré - Olivier Faugeras
INRIA - Team Tosca
October 22nd, 2014

Outline

1 Introduction and Definitions
2 On the convergence rate of Monte Carlo methods
3 Simulation Methods of Classical Laws
4 Variance Reduction
5 Low discrepancy sequences

Motivation

What is really random?
Stochastic models in general
Sources of noise in neuronal activities
Monte Carlo methods
Efficiency

Noise in neuronal activity

Thermal noise
Channel noise
Electrical noise
Synaptic noise

Two approaches: a strong link between deterministic and stochastic approaches

Toy example
A = \int_0^1 f(\theta)\, d\theta = E[f(U)]
\tilde{A} = \int_{[0,1]^3} f(\theta_1, \theta_2, \theta_3)\, d\theta_3\, d\theta_2\, d\theta_1 = E[f(U_1, U_2, U_3)]

Heat Equation and Brownian Motion
\frac{\partial u}{\partial t}(t, x) - \frac{1}{2} \frac{\partial^2 u}{\partial x^2}(t, x) = 0, \quad u(T, x) = \Psi(x)
u(t, x) = E[\Psi(W_T) \mid W_t = x]
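As an illustration, here is a minimal Python/NumPy sketch of the toy example: both integrals are estimated by averaging f over uniform draws. The integrand cos is an illustrative assumption, not taken from the slides.

import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    return np.cos(theta)  # illustrative integrand (an assumption)

# A = \int_0^1 f(theta) dtheta = E[f(U)], with U uniform on (0,1)
n = 100_000
A_hat = f(rng.uniform(size=n)).mean()
print(A_hat, np.sin(1.0))  # exact value of \int_0^1 cos is sin(1)

# A_tilde = \int_{[0,1]^3} f(t1, t2, t3) dt = E[f(U_1, U_2, U_3)]
def f3(u):
    return np.cos(u).prod(axis=1)  # separable example

U = rng.uniform(size=(n, 3))
A3_hat = f3(U).mean()
print(A3_hat, np.sin(1.0) ** 3)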

Probability Space

σ-field
Consider an arbitrary nonempty space Ω. A class F of subsets of Ω is called a σ-field if
Ω ∈ F
A ∈ F implies A^c ∈ F
A_1, A_2, ... ∈ F implies \bigcup_{k \in N} A_k ∈ F

Probability measures
Consider a real-valued function P defined on a σ-field F of Ω. P is a probability measure if it satisfies:
1 0 \le P(A) \le 1, for A ∈ F
2 P(\emptyset) = 0, P(Ω) = 1
3 if A_1, A_2, ... is a disjoint sequence of F-sets, then P(\bigcup_{k=1}^{\infty} A_k) = \sum_{k=1}^{\infty} P(A_k)

Limit sets

Definition
For a sequence A_1, A_2, ... of sets, we define
\limsup_n A_n = \bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k, \quad \liminf_n A_n = \bigcup_{n=1}^{\infty} \bigcap_{k=n}^{\infty} A_k
If \limsup_n A_n = \liminf_n A_n, we write \lim_n A_n.

Theorem
1 For each sequence A_n,
P(\liminf_n A_n) \le \liminf_n P(A_n) \le \limsup_n P(A_n) \le P(\limsup_n A_n)
2 If A_n \to A then P(A_n) \to P(A)

Borel-Cantelli Lemmas

Theorem (First Borel-Cantelli Lemma)
If \sum_n P(A_n) converges, then P(\limsup_n A_n) = 0.

Theorem (Second Borel-Cantelli Lemma)
If {A_n} is an independent sequence of events and \sum_n P(A_n) diverges, then P(\limsup_n A_n) = 1.

Random Variables

Definition (Random variable)
A random variable on a probability space (Ω, F, P) is a real-valued, F-measurable function X = X(ω).

Definition (Cumulative distribution function)
The cumulative distribution function F_X of a random variable X is F_X(y) := P(X \le y).

Properties
F_X characterizes the law of X
F_X is a non-decreasing function
F_X is right continuous
\lim_{y \to -\infty} F_X(y) = 0
\lim_{y \to +\infty} F_X(y) = 1

Expected value

For a general real random variable, the expectation is built in steps:
E(1_A) = P(A)
E\left(\sum_i a_i 1_{A_i}\right) = \sum_i a_i P(A_i)
If X \ge 0, \quad E(X) = \sup \{ E(Y) : Y \le X,\ Y = \sum_i a_i 1_{A_i} \}
In general, E(X) = E(X^+) - E(X^-).

Convergence of sequences of random variables

Convergence in distribution
A sequence X_1, X_2, ... of random variables is said to converge in distribution to a random variable X if
\lim_{n \to \infty} F_{X_n}(x) = F_X(x)
for every x such that F_X is continuous at x.

Properties
E(f(X_n)) \to E(f(X)) for all bounded, continuous functions f;
E(f(X_n)) \to E(f(X)) for all bounded, Lipschitz functions f;
\limsup E(f(X_n)) \le E(f(X)) for every upper semi-continuous function f bounded from above;
\liminf E(f(X_n)) \ge E(f(X)) for every lower semi-continuous function f bounded from below;
\limsup P(X_n \in C) \le P(X \in C) for all closed sets C;
\liminf P(X_n \in U) \ge P(X \in U) for all open sets U;
It does not imply convergence of the probability density functions.

Convergence of sequences of random variables

Convergence in probability
A sequence X_1, X_2, ... of random variables is said to converge in probability to a random variable X if for every ε > 0,
\lim_{n \to \infty} P(|X_n - X| \ge \varepsilon) = 0.

Almost sure convergence
A sequence X_1, X_2, ... of random variables is said to converge almost surely to a random variable X if
P(\lim_{n \to \infty} X_n = X) = 1.

Convergence in mean
A sequence X_1, X_2, ... of random variables is said to converge in r-th mean to a random variable X if
\lim_{n \to \infty} E(|X_n - X|^r) = 0.

Implications

X_n \xrightarrow{a.s.} X \implies X_n \xrightarrow{P} X
X_n \xrightarrow{P} X \implies there exists a subsequence with X_{\varphi(n)} \xrightarrow{a.s.} X
X_n \xrightarrow{P} X \implies X_n \xrightarrow{L} X
X_n \xrightarrow{L^r} X \implies X_n \xrightarrow{P} X
If r \ge s \ge 1, \quad X_n \xrightarrow{L^r} X \implies X_n \xrightarrow{L^s} X

Independence

Independent events
Events A and B are independent if P(A ∩ B) = P(A)P(B). More generally, a finite collection A_1, ..., A_n of events is independent if
P(A_{k_1} ∩ \cdots ∩ A_{k_j}) = P(A_{k_1}) \cdots P(A_{k_j})
for 2 \le j \le n and 1 \le k_1 < \cdots < k_j \le n.

Independent random variables
Random variables X and Y are independent if the σ-fields σ(X) and σ(Y) are independent (i.e. for all A ∈ σ(X) and B ∈ σ(Y), A and B are independent).

Conditional probability

A first definition
P(A \mid B) = \frac{P(A ∩ B)}{P(B)} \quad \text{if } P(B) \ne 0.
If B_1, B_2, ... is a countable partition of Ω, set
f(ω) = P(A \mid B_i) \quad \text{if } ω ∈ B_i.
The random variable f is the conditional probability of A given G = σ(B_1, B_2, ...).

Remark
It is necessary to extend the definition to non-empty sets B_i such that P(B_i) = 0.

Conditional Expectation

Definition
Let X be an integrable random variable on (Ω, F, P) and G a σ-field in F. There exists a random variable E(X \mid G), called the conditional expected value of X given G, having the two properties:
(i) E(X \mid G) is G-measurable and integrable
(ii) E[1_G E(X \mid G)] = E[1_G X] for G ∈ G.

Outline

1 Introduction and Definitions
2 On the convergence rate of Monte Carlo methods
3 Simulation Methods of Classical Laws
4 Variance Reduction
5 Low discrepancy sequences

Strong Law of Large Numbers

Theorem (Law of Large Numbers)
Consider a random variable X such that E(|X|) < ∞. We denote the mean µ := E(X). We consider a sample of n independent random variables X_1, ..., X_n with the same law as X. Then
\frac{X_1 + \cdots + X_n}{n} \xrightarrow[n \to \infty]{a.s.} \mu.

Central Limit Theorem

Theorem
Consider a random variable X such that E(X^2) < ∞. Denote the mean and variance by µ := E(X) and σ^2 := E[(X - E(X))^2] = E(X^2) - (E(X))^2. Let us consider a sample of n i.i.d. random variables X_1, ..., X_n with the same law as X. Then
\frac{\sqrt{n}}{\sigma} \left( \frac{X_1 + \cdots + X_n}{n} - \mu \right) \xrightarrow[n \to \infty]{L} \mathcal{N}(0, 1).

Confidence intervals

Assume µ = E(X). An estimator of µ is
\hat{\mu}_n := \frac{1}{n}(X_1 + \cdots + X_n)
Assume n is large enough to be in the asymptotic regime. Then
P\left[ \frac{\sqrt{n}}{\sigma}(\hat{\mu}_n - \mu) \in A \right] \approx P(G \in A), \quad \text{where } G \sim \mathcal{N}(0, 1)
For a level α there exists y_α such that P(|G| \le y_α) = α.

An example of the size of the confidence interval
For α = 95%, y_α = 1.96:
P\left( \mu \in \left[ \hat{\mu}_n - \frac{1.96\,\sigma}{\sqrt{n}},\ \hat{\mu}_n + \frac{1.96\,\sigma}{\sqrt{n}} \right] \right) \approx 95\%
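A short Python/NumPy sketch of this asymptotic confidence interval; the exponential law with mean 2 is an illustrative assumption, and σ is replaced by its empirical estimate, as is usual in practice.

import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimate of mu = E[X] with an asymptotic 95% confidence
# interval, using the CLT: sqrt(n)(mu_hat - mu)/sigma ~ N(0,1) for large n.
n = 10_000
X = rng.exponential(scale=2.0, size=n)  # illustrative law, mu = 2

mu_hat = X.mean()
sigma_hat = X.std(ddof=1)                   # sigma is usually estimated too
half_width = 1.96 * sigma_hat / np.sqrt(n)  # y_alpha = 1.96 for alpha = 95%

print(f"mu_hat = {mu_hat:.4f}, "
      f"95% CI = [{mu_hat - half_width:.4f}, {mu_hat + half_width:.4f}]")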

A non-asymptotic estimate

Theorem (Berry-Esseen)
Let (X_i)_{i \ge 1} be a sequence of independent and identically distributed random variables with zero mean. Denote by σ the common standard deviation. Suppose that E|X_1|^3 < +∞. Then
\varepsilon_N := \sup_{x \in \mathbb{R}} \left| P\left( \frac{X_1 + \cdots + X_N}{\sigma \sqrt{N}} \le x \right) - \int_{-\infty}^{x} e^{-u^2/2} \frac{du}{\sqrt{2\pi}} \right| \le \frac{C\, E|X_1|^3}{\sigma^3 \sqrt{N}}.
In addition, C \le 0.8. For a proof, see, e.g., Shiryayev (1984).

A more precise result

We now give a result which is slightly more precise than the Berry-Esseen Theorem: the estimate is non-uniform in x. See Petrov (1975) for a proof and extensions.

Theorem (Bikelis)
Let (X_i)_{i \ge 1} be a sequence of independent real random variables, not necessarily identically distributed. Suppose that E X_i = 0 for all i, and that there exists 0 < δ \le 1 such that E|X_i|^{2+δ} < +∞ for all i. Set
\sigma_i^2 := E X_i^2, \quad B_N := \sum_{i=1}^{N} \sigma_i^2, \quad F_N(x) := P\left[ \frac{\sum_{i=1}^{N} X_i}{\sqrt{B_N}} \le x \right].
Denote by Φ the distribution function of a Gaussian law with zero mean and unit variance. There exists a universal constant A in (1/\sqrt{2\pi}, 1), independent of N and of the sequence (X_i)_{i \ge 1}, such that, for all x,
|F_N(x) - \Phi(x)| \le \frac{A}{B_N^{1+\delta/2} (1 + |x|)^{2+\delta}} \sum_{i=1}^{N} E|X_i|^{2+\delta}.

Outline

1 Introduction and Definitions
2 On the convergence rate of Monte Carlo methods
3 Simulation Methods of Classical Laws
4 Variance Reduction
5 Low discrepancy sequences

Uniform Law

Generating a sequence U_1, ..., U_n of i.i.d. uniform random variables

Properties
\frac{1}{n} \sum_{i=1}^{n} 1_{a \le U_i \le b} \approx b - a, \quad a, b \in [0, 1]
\frac{1}{n} \sum_{i} \left( U_{2i+1} - \tfrac{1}{2} \right) \left( U_{2i+2} - \tfrac{1}{2} \right) \approx 0, \quad \text{etc.}

Congruential generator
Fix N_{max} \in \mathbb{N} and a seed n_0 \in \mathbb{N}, and iterate
n_{k+1} = a n_k + b \pmod{N_{max}}, \quad u_k = \frac{n_k}{N_{max}}
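A minimal congruential generator in Python. The Park-Miller constants a = 16807, b = 0, N_max = 2^31 - 1 are a classical illustrative choice, not necessarily the ones used in the course.

def lcg(seed, a=16807, b=0, n_max=2**31 - 1):
    # n_{k+1} = a n_k + b (mod N_max), u_k = n_k / N_max
    n = seed
    while True:
        n = (a * n + b) % n_max
        yield n / n_max

gen = lcg(seed=12345)
sample = [next(gen) for _ in range(5)]
print(sample)  # deterministic, but mimics i.i.d. uniforms on (0,1)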

Remarks
1 This sequence (u_k)_{k \ge 1} mimics a sequence of independent random variables uniformly distributed on (0, 1), but it is a deterministic sequence.
2 It allows us to compare several methods with the same random events.
3 These sequences are periodic.
4 We have to take care that the period is as long as possible.
5 A good choice: Mersenne Twister.

Do not forget
If you want to use a software package or a given language in order to apply stochastic numerical methods, you have to check the quality of its built-in uniform random generator or download a good uniform generator.

Inverting the Cumulative Distribution Function

Proposition
Let X be a random variable with cumulative distribution function F_X. Let U be a random variable uniformly distributed on (0, 1). Then
F_X^{-1}(U) \overset{law}{=} X
where F_X^{-1}(u) := \inf\{x : F_X(x) \ge u\} is the generalized inverse of F_X.
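A sketch of the inversion method for the exponential law, an illustrative choice: F(x) = 1 - e^{-λx}, hence F^{-1}(u) = -log(1 - u)/λ.

import numpy as np

rng = np.random.default_rng(2)

# Inversion method: if U ~ U(0,1) then F_X^{-1}(U) has the law of X.
lam = 1.5
U = rng.uniform(size=100_000)
X = -np.log(1.0 - U) / lam   # exponential of rate lam

print(X.mean(), 1 / lam)  # empirical mean vs exact mean 1/lam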

Rejection Procedure

Principle
Our aim is to estimate E[F G] with 0 \le G \le 1 almost surely. The idea: introduce an event X whose conditional probability given (F, G) is G, i.e. G = P(X \mid F, G). Then
E[F G] = E[F 1_X] = E[F \mid X]\, E[G]
since P(X) = E[G].

Simulation of a random variable with a rejection procedure

Let X be a r.v. with density f. We do not know how to simulate it.
Let Y be a r.v. with density g. We know how to simulate it.
Assumption: for all x \in \mathbb{R}, 0 \le f(x) \le C g(x). We set
h(x) := \frac{f(x)}{C g(x)} 1_{\{g(x) > 0\}}
Then, with U uniform on (0, 1) and independent of Y,
E[\varphi(X)] = \int \varphi(x) f(x)\, dx = \int \varphi(x) \frac{f(x)}{g(x)} g(x)\, dx = E\left[\varphi(Y) \frac{f(Y)}{g(Y)}\right] = C E\left[\varphi(Y) \frac{f(Y)}{C g(Y)}\right] = C E[\varphi(Y) h(Y)]
= C E\left[\varphi(Y) 1_{\{U \le h(Y)\}}\right] = C E[\varphi(Y) \mid U \le h(Y)]\, P[U \le h(Y)]

41 P [U h(y )] = E [h(y )] f (y) f (y) = Cg(y) g(y)dy = C dy = 1 C E [ϕ(x)] = CE [ϕ(y ) U < h(y )] P [U h(y )] = E [ϕ(y ) U < h(y )] E. Tanré (INRIA - Team Tosca) Mathematical Methods for Neurosciences October 22nd, / 50

Algorithm
1 Generate Y.
2 Compute, for this realisation, f(Y) / (C g(Y)).
3 Generate a random variable U, independent of Y, with uniform law on (0, 1).
4 If U \le f(Y) / (C g(Y)), accept the realisation, that is X = Y.
5 Else (if U > f(Y) / (C g(Y))), reject the realisation and start again from the first step.

Remark
You have to wait a random time to obtain each realisation. The probability of acceptance is equal to 1/C: the smaller C is, the better the algorithm.
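A sketch of this algorithm with the Beta(2,2) density f(x) = 6x(1-x) as target and the uniform density on (0,1) as proposal, both illustrative assumptions; here C = sup f/g = 1.5, so the acceptance probability is 1/C = 2/3.

import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return 6.0 * x * (1.0 - x)  # Beta(2,2) density (illustrative target)

C = 1.5  # sup of f/g on (0,1), with g = 1

def sample_rejection(n):
    out = []
    while len(out) < n:
        y = rng.uniform()        # step 1: generate Y ~ g
        u = rng.uniform()        # step 3: U ~ U(0,1), independent of Y
        if u <= f(y) / C:        # step 4: accept if U <= f(Y)/(C g(Y))
            out.append(y)        # X = Y
        # else: reject and start again from the first step
    return np.array(out)

X = sample_rejection(10_000)
print(X.mean(), 0.5)  # Beta(2,2) has mean 1/2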

Gaussian r.v.: Box-Muller Procedure

Algorithm
Generate (U_1, U_2), two independent random variables uniformly distributed on (0, 1). Set
G_1 = \sqrt{-2 \log(U_1)} \cos(2\pi U_2)
G_2 = \sqrt{-2 \log(U_1)} \sin(2\pi U_2)
G_1 and G_2 are two independent Gaussian random variables \mathcal{N}(0, 1).
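A direct transcription of the Box-Muller procedure in Python/NumPy:

import numpy as np

rng = np.random.default_rng(4)

n = 50_000
U1 = 1.0 - rng.uniform(size=n)  # values in (0,1], avoids log(0)
U2 = rng.uniform(size=n)

R = np.sqrt(-2.0 * np.log(U1))
G1 = R * np.cos(2.0 * np.pi * U2)
G2 = R * np.sin(2.0 * np.pi * U2)

print(G1.mean(), G1.std(), G2.mean(), G2.std())  # approx 0, 1, 0, 1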

Gaussian r.v.: Marsaglia Polar Procedure

Algorithm
Generate (X_1, X_2), a point uniformly distributed in the unit disc.
Compute S = X_1^2 + X_2^2. Set
G_1 = X_1 \sqrt{\frac{-2 \log(S)}{S}}
G_2 = X_2 \sqrt{\frac{-2 \log(S)}{S}}
G_1 and G_2 are two independent Gaussian random variables \mathcal{N}(0, 1).

Marsaglia Polar Procedure (2)

Remark
To generate (X_1, X_2) uniformly distributed on the unit disc, we use a rejection procedure:
Generate U_1 \sim U(0, 1) and U_2 \sim U(0, 1) independent.
Set Y_1 = 2U_1 - 1, Y_2 = 2U_2 - 1.
If Y_1^2 + Y_2^2 \le 1, ACCEPT: (X_1, X_2) = (Y_1, Y_2).
Else, REJECT and go back to the first step.
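A sketch of the full polar procedure, combining the rejection step above with the transformation of the previous slide:

import numpy as np

rng = np.random.default_rng(5)

def marsaglia_polar():
    # Rejection step: draw (Y1, Y2) uniform on [-1,1]^2 until it falls
    # inside the unit disc, giving (X1, X2) uniform on the disc.
    while True:
        y1 = 2.0 * rng.uniform() - 1.0
        y2 = 2.0 * rng.uniform() - 1.0
        s = y1 * y1 + y2 * y2
        if 0.0 < s <= 1.0:
            break
    factor = np.sqrt(-2.0 * np.log(s) / s)
    return y1 * factor, y2 * factor  # two independent N(0,1) variables

G = np.array([marsaglia_polar() for _ in range(20_000)])
print(G.mean(axis=0), G.std(axis=0))  # approx (0, 0) and (1, 1)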

Outline

1 Introduction and Definitions
2 On the convergence rate of Monte Carlo methods
3 Simulation Methods of Classical Laws
4 Variance Reduction
5 Low discrepancy sequences

Context

Notation
\mu = E(\Psi(Z)), \quad \hat{\mu}_N = \frac{1}{N} \sum_{i=1}^{N} \Psi(Z_i)

Analysis of the error
The Central Limit Theorem gives
\mu - \hat{\mu}_N \approx \frac{\sigma}{\sqrt{N}}, \quad \text{where } \sigma^2 = \text{Var}(\Psi(Z))
Our aim is to obtain the best accuracy.

Comparison of methods

We want to approximate µ = E(X). We also know that µ = E(Y). We have two ways to approximate µ:
\mu \approx \frac{1}{N_X} \sum_{i=1}^{N_X} X_i, \quad \mu \approx \frac{1}{N_Y} \sum_{i=1}^{N_Y} Y_i

Constraints
The best method should give the result with the best accuracy in a fixed execution time (say A).

Comparison of methods

Each simulation of X takes a time T_X; each simulation of Y takes a time T_Y.
Within the budget A, we have time to generate N_X = A / T_X draws of X (and N_Y = A / T_Y draws of Y).
The accuracy of the first method is
\frac{\sigma_X}{\sqrt{N_X}} = \sqrt{\frac{\sigma_X^2 T_X}{A}}
so we have to compare \sigma_X^2 T_X and \sigma_Y^2 T_Y.
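A rough sketch of this comparison in Python: time each sampler, estimate its variance, and compare the products σ²T. The two samplers are hypothetical stand-ins for two estimators of the same mean.

import time
import numpy as np

rng = np.random.default_rng(6)

def score(sampler, n=100_000):
    # Estimate sigma^2 * T: the smaller the product, the better the
    # accuracy per unit of CPU time.
    t0 = time.perf_counter()
    x = sampler(n)
    t_per_draw = (time.perf_counter() - t0) / n
    return x.var(ddof=1) * t_per_draw

# Two hypothetical estimators of mu = E[X]: direct draws vs an average of 10
# draws per sample (variance / 10, but roughly 10x the time, so the two
# scores should be comparable up to overhead).
sampler_X = lambda n: rng.exponential(size=n)
sampler_Y = lambda n: rng.exponential(size=(n, 10)).mean(axis=1)

print(score(sampler_X), score(sampler_Y))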

Variance Reduction

Importance Sampling
You want to estimate E[\varphi(X)] with a Monte Carlo procedure, where X has a density g:
E[\varphi(X)] = \int \varphi(x) g(x)\, dx
You choose a random variable Y with density f:
E[\varphi(X)] = \int \varphi(x) g(x)\, dx = \int \frac{\varphi(x) g(x)}{f(x)} f(x)\, dx = E\left[\frac{\varphi(Y) g(Y)}{f(Y)}\right].

Control Variates
E(f(X)) = E(f(X) - h(X)) + E(h(X))
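A sketch of importance sampling for a rare-event probability; the choices of φ, g and f below are illustrative assumptions. Shifting the sampling law toward the region that matters reduces the variance dramatically.

import numpy as np

rng = np.random.default_rng(7)

# E[phi(X)] with X ~ g, rewritten as E[phi(Y) g(Y)/f(Y)] with Y ~ f.
# Example: phi(x) = 1_{x > 3}, X ~ N(0,1) (rare event), Y ~ N(3,1).
n = 100_000

def g(x):  # N(0,1) density
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def f(x):  # N(3,1) density (shifted proposal)
    return np.exp(-(x - 3)**2 / 2) / np.sqrt(2 * np.pi)

# Naive estimator: very few samples exceed 3, so the variance is large.
X = rng.normal(0.0, 1.0, size=n)
naive = (X > 3).mean()

# Importance sampling estimator: most samples contribute.
Y = rng.normal(3.0, 1.0, size=n)
is_est = ((Y > 3) * g(Y) / f(Y)).mean()

print(naive, is_est)  # exact value: P(N(0,1) > 3) ~ 1.35e-3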

Variance Reduction

Antithetic Variables
You want to estimate I = \int_0^1 g(x)\, dx. Thanks to the change of variable y = 1 - x,
I = \int_0^1 g(1 - y)\, dy = \frac{1}{2} \int_0^1 \left( g(x) + g(1 - x) \right) dx = \frac{1}{2} E\left( g(U) + g(1 - U) \right).

Antithetic Variables

Theorem
Let us consider a monotone function g, a non-increasing function T and a sequence of i.i.d. random variables (X_i, i \in \mathbb{N}) such that X_i and T(X_i) have the same law. Then,
\text{Var}\left[\frac{1}{2n} \sum_{i=1}^{n} \left( g(X_i) + g(T(X_i)) \right)\right] \le \text{Var}\left[\frac{1}{2n} \sum_{i=1}^{2n} g(X_i)\right]
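A sketch with T(x) = 1 - x and a monotone integrand (an illustrative assumption), comparing the plain and antithetic estimators at equal numbers of g-evaluations:

import numpy as np

rng = np.random.default_rng(8)

def g(x):
    return np.exp(x)  # monotone illustrative integrand

n = 100_000
U = rng.uniform(size=n)

plain = g(rng.uniform(size=2 * n)).mean()       # 2n independent draws
antithetic = 0.5 * (g(U) + g(1.0 - U)).mean()   # n draws, 2n evaluations

print(plain, antithetic, np.e - 1.0)  # exact value e - 1

# The antithetic estimator has the smaller variance because g(U) and
# g(1-U) are negatively correlated when g is monotone.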

Stratification

Idea
X is a r.v. with values in E; you want to estimate E[g(X)]. You split the space E into E_1, E_2, ..., E_k:
I = \sum_{i=1}^{k} E[g(X) \mid X \in E_i]\, P[X \in E_i] = \sum_{i=1}^{k} p_i I_i, \quad \hat{I} = \sum_{i=1}^{k} p_i \hat{I}_i
\sigma_i^2 = \text{Var}(g(X) \mid X \in E_i).

You use a Monte Carlo procedure with n_i realisations to estimate I_i:
\text{Var}(\hat{I}) = \sum_{i=1}^{k} \frac{p_i^2 \sigma_i^2}{n_i}
Under the constraint n = \sum_i n_i, the optimal choice of n_i is
n_i = n \frac{p_i \sigma_i}{\sum_{j=1}^{k} p_j \sigma_j}
which gives \text{Var}(\hat{I}) = \frac{1}{n} \left( \sum_{i=1}^{k} p_i \sigma_i \right)^2.
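A sketch of stratified sampling on [0,1] with k equal strata, using a small pilot run to estimate the σ_i and then the optimal allocation n_i ∝ p_i σ_i; the integrand is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(9)

# Stratified estimation of I = E[g(X)], X ~ U(0,1), with k equal strata
# E_i = [(i-1)/k, i/k), so p_i = 1/k and X | X in E_i is uniform on E_i.
def g(x):
    return np.sin(np.pi * x)  # illustrative integrand

k, n = 10, 10_000
p = np.full(k, 1.0 / k)

# Pilot run to estimate the within-stratum standard deviations sigma_i.
sigma = np.array([g(rng.uniform(i / k, (i + 1) / k, size=200)).std(ddof=1)
                  for i in range(k)])

# Optimal allocation: n_i proportional to p_i * sigma_i.
n_i = np.maximum(1, np.round(n * p * sigma / (p * sigma).sum()).astype(int))

I_hat = sum(p[i] * g(rng.uniform(i / k, (i + 1) / k, size=n_i[i])).mean()
            for i in range(k))
print(I_hat, 2.0 / np.pi)  # exact value of \int_0^1 sin(pi x) dx is 2/pi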

Outline

1 Introduction and Definitions
2 On the convergence rate of Monte Carlo methods
3 Simulation Methods of Classical Laws
4 Variance Reduction
5 Low discrepancy sequences

Low discrepancy Sequences

Using sequences of points more regular than random points may sometimes improve Monte Carlo methods. We look for deterministic sequences (x_i, i \ge 1) in [0, 1]^d such that
\int_{[0,1]^d} f(x)\, dx \approx \frac{1}{n}\left( f(x_1) + \cdots + f(x_n) \right)
for all functions f in a large enough set.

Definition
These methods with deterministic sequences are called quasi Monte Carlo methods.

One can find sequences such that the speed of convergence of the previous approximation is of order
K \frac{(\log n)^d}{n}

Low discrepancy Sequences (2)

Definition (Uniformly distributed sequences)
For all y, z \in [0, 1]^d, we say that y \le z if y_i \le z_i for all i = 1, ..., d. A sequence (x_i, i \ge 1) is said to be uniformly distributed on [0, 1]^d if one of the following equivalent properties is fulfilled:
1 For all y = (y_1, ..., y_d) \in [0, 1]^d,
\lim_{n \to +\infty} \frac{1}{n} \sum_{k=1}^{n} 1_{x_k \in [0, y]} = \text{Volume}([0, y])
2 Let D_n^*(x) = \sup_{y \in [0,1]^d} \left| \frac{1}{n} \sum_{k=1}^{n} 1_{x_k \in [0, y]} - \text{Volume}([0, y]) \right| be the discrepancy of the sequence; then
\lim_{n \to \infty} D_n^*(x) = 0
3 For every (bounded) continuous function f on [0, 1]^d,
\lim_{n \to +\infty} \frac{1}{n} \sum_{k=1}^{n} f(x_k) = \int_{[0,1]^d} f(x)\, dx

Low discrepancy Sequences (3)

Remark
If (U_n)_{n \ge 1} is a sequence of independent random variables with uniform law on [0, 1], the random sequence (U_n(ω), n \ge 1) is almost surely uniformly distributed. The discrepancy fulfills an iterated logarithm law:
\limsup_{n} \sqrt{\frac{2n}{\log(\log n)}}\, D_n^*(U) = 1 \quad a.s.

Lower bound for the discrepancy: Roth's Theorem
The discrepancy of any infinite sequence satisfies
D_n^* > C_d \frac{(\log n)^{\frac{d-1}{2}}}{n} \quad \text{for } d \ge 3
for an infinite number of values of n, where C_d is a constant which depends on d only.

Low discrepancy Sequences (4)

Koksma-Hlawka inequality
Let g be a function of finite variation in the sense of Hardy and Krause, and denote by V(g) its variation. Then, for N \ge 1,
\left| \frac{1}{N} \sum_{k=1}^{N} g(x_k) - \int_{[0,1]^d} g(u)\, du \right| \le V(g)\, D_N^*(x)

Finite variation in the sense of Hardy and Krause
If the function g is d times continuously differentiable, the variation V(g) is given by
V(g) = \sum_{k=1}^{d} \sum_{1 \le i_1 < \cdots < i_k \le d} \int_{\{x \in [0,1]^d :\ x_j = 1 \text{ for } j \notin \{i_1, ..., i_k\}\}} \left| \frac{\partial^k g(x)}{\partial x_{i_1} \cdots \partial x_{i_k}} \right| dx_{i_1} \cdots dx_{i_k}

Popular Quasi Monte Carlo Sequences
1 Faure sequences
2 Halton sequences
3 Sobol sequences
4 van der Corput sequences

An upper bound
For such sequences, we obtain an upper bound on the discrepancy:
D_n^* \le C \frac{(\log n)^d}{n}

Remark
For small d: deterministic methods
For moderate d: quasi Monte Carlo methods
For large d: Monte Carlo methods
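A sketch of a Halton sequence built from van der Corput sequences in bases 2 and 3, used for a small quasi-Monte Carlo integration; the integrand is an illustrative assumption.

import numpy as np

def van_der_corput(n, base):
    # Reverse the base-b digits of n after the radix point.
    q, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def halton(n_points, bases=(2, 3)):
    # One van der Corput sequence per dimension, with pairwise coprime bases.
    return np.array([[van_der_corput(i, b) for b in bases]
                     for i in range(1, n_points + 1)])

# Quasi-Monte Carlo estimate of \int_{[0,1]^2} x*y dx dy.
pts = halton(10_000)
print((pts[:, 0] * pts[:, 1]).mean(), 0.25)  # exact value 1/4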

Low discrepancy Sequences

[Figures: Halton points vs. (pseudo-)uniform random points on the unit square.]
