Math 341: Probability Eighteenth Lecture (11/12/09)


1 Math 341: Probability Eighteenth Lecture (11/12/09). Steven J Miller, Williams College. Steven.J.Miller@williams.edu, http://www.williams.edu/go/math/sjmiller/public_html/341/. Bronfman Science Center, Williams College, November 12, 2009.

2 Summary for the Day

3 Summary for the day. Complex analysis: review of definitions and statements; accumulation point theorems. Clicker questions: statement of the CLT; rate of convergence. Central Limit Theorem: Poisson example; proof with MGFs; proof with Fourier analysis.

4 Review

5 Accumulation points and functions. Theorem: Let f be an analytic function on an open set U with infinitely many zeros z_1, z_2, z_3, .... If lim_{n→∞} z_n ∈ U, then f is identically zero on U. In other words, if a function is zero along a sequence in U whose accumulation point is also in U, then that function is identically zero in U.

6 Schwartz space. The Schwartz space S(R) is the set of all infinitely differentiable functions f such that, for any non-negative integers m and n, sup_{x ∈ R} (1 + x^2)^m |d^n f / dx^n| < ∞. Inversion theorem for the Fourier transform: let f ∈ S(R); then f(x) = ∫_{-∞}^{∞} f̂(y) e^{2πixy} dy. In particular, if f, g ∈ S(R) with f̂ = ĝ, then f(x) = g(x).
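
A quick numerical sketch of the inversion theorem (an illustration, using the standard Gaussian as the test function; since it is real and even, both the transform and its inverse reduce to cosine integrals):

import numpy as np
from scipy import integrate

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard Gaussian, a Schwartz function
# real, even f: fhat(y) = integral of f(x) cos(2 pi x y) dx
fhat = lambda y: integrate.quad(lambda x: f(x) * np.cos(2 * np.pi * x * y), -10, 10)[0]
x0 = 0.7
# inversion formula: f(x0) = integral of fhat(y) cos(2 pi x0 y) dy
inv, _ = integrate.quad(lambda y: fhat(y) * np.cos(2 * np.pi * x0 * y), -10, 10)
print(inv, f(x0))   # both approximately 0.3123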

7 Accumulation and Moments

8 Accumulation Points and Moments. Theorem: Assume the MGFs M_X(t) and M_Y(t) exist in a neighborhood of zero (i.e., there is some δ > 0 such that both functions exist for |t| < δ). If M_X(t) = M_Y(t) in this neighborhood, then F_X(u) = F_Y(u) for all u. As the densities are the derivatives of the cumulative distribution functions, we have f = g.

9 Accumulation Points and Moments. Theorem: Let {X_i}_{i ∈ I} be a sequence of random variables with MGFs M_{X_i}(t). Assume there is a δ > 0 such that all the MGFs converge for |t| < δ and lim_{i→∞} M_{X_i}(t) = M_X(t) for some MGF M_X(t). Then there exists a unique cumulative distribution function F_X whose moments are determined from M_X(t), and for all x where F_X is continuous, lim_{i→∞} F_{X_i}(x) = F_X(x).

10 Accumulation Points and Moments. Theorem: Let X and Y be continuous random variables on [0, ∞) with continuous densities f and g, all of whose moments are finite and agree, and assume: (1) there is a C > 0 such that for all c ≤ C, e^{(c+1)t} f(e^t) and e^{(c+1)t} g(e^t) are Schwartz functions; (2) the (not necessarily integral) moments μ_{r_n}(f) = ∫_0^∞ x^{r_n} f(x) dx and μ_{r_n}(g) = ∫_0^∞ x^{r_n} g(x) dx agree for some sequence of non-negative real numbers {r_n}_{n=0}^∞ which has a finite accumulation point (i.e., lim_{n→∞} r_n = r < ∞). Then f = g (in other words, knowing all these moments uniquely determines the probability density).

11 Application to equal integral moments. Return to the two densities causing trouble: f_1(x) = (1/√(2π x^2)) e^{-(log^2 x)/2} and f_2(x) = f_1(x)[1 + sin(2π log x)].

12 Application to equal integral moments. Return to the two densities causing trouble: f_1(x) = (1/√(2π x^2)) e^{-(log^2 x)/2} and f_2(x) = f_1(x)[1 + sin(2π log x)]. They have the same integral moments (the k-th moment of each is e^{k^2/2}), and they have the correct decay. Using complex analysis (specifically, contour integration), we can calculate the (a+ib)-th moments: for f_1 it is e^{(a+ib)^2/2}, while for f_2 it is e^{(a+ib)^2/2} + (i/2)(e^{(a+i(b-2π))^2/2} - e^{(a+i(b+2π))^2/2}).

13 Application to equal integral moments. Return to the two densities causing trouble: f_1(x) = (1/√(2π x^2)) e^{-(log^2 x)/2} and f_2(x) = f_1(x)[1 + sin(2π log x)]. There is no sequence of real moments with an accumulation point on which they agree: the a-th moment of f_2 is e^{a^2/2} + (i/2) e^{(a-2iπ)^2/2}(1 - e^{4iaπ}), and the extra term is never zero unless a is a half-integer, as the only way it can vanish is if 1 = e^{4iaπ}.
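
A numerical check of these claims (an illustration; the substitution x = e^t turns each moment into a Gaussian integral, and the difference of the a-th moments has the closed form e^{(a^2-4π^2)/2} sin(2πa), which vanishes exactly at the half-integers):

import numpy as np
from scipy import integrate

# a-th moment of f_1 after substituting x = e^t
m1 = lambda a: integrate.quad(lambda t: np.exp(a * t - t**2 / 2) / np.sqrt(2 * np.pi), -30, 30)[0]
# difference of the a-th moments of f_2 and f_1
diff = lambda a: integrate.quad(lambda t: np.exp(a * t - t**2 / 2) * np.sin(2 * np.pi * t) / np.sqrt(2 * np.pi), -30, 30, epsabs=1e-13, limit=400)[0]

for k in (1, 2, 3):   # integral moments agree: both densities give e^{k^2/2}
    print(k, m1(k), np.exp(k**2 / 2))
a = 0.25              # a generic real moment: the difference is tiny but nonzero
print(diff(a), np.exp((a**2 - 4 * np.pi**2) / 2) * np.sin(2 * np.pi * a))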

14 Clicker Questions

15 Normalization of a random variable. Normalization (standardization) of a random variable: let X be a random variable with mean μ and standard deviation σ, both of which are finite. The normalization, Y, is defined by Y := (X - E[X]) / StDev(X) = (X - μ)/σ. Note that E[Y] = 0 and StDev(Y) = 1.
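
In code the normalization is one line; a small sketch (the exponential sample is an arbitrary choice):

import numpy as np

X = np.random.default_rng(0).exponential(3.0, size=100_000)
Y = (X - X.mean()) / X.std()   # Y = (X - mu) / sigma, with mu and sigma estimated from the sample
print(Y.mean(), Y.std())       # approximately 0 and 1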

16 Statement of the Central Limit Theorem. Normal distribution: a random variable X is normally distributed (or has the normal distribution, or is a Gaussian random variable) with mean μ and variance σ^2 if the density of X is f(x) = (1/√(2πσ^2)) exp(-(x-μ)^2 / (2σ^2)). We often write X ~ N(μ, σ^2) to denote this. If μ = 0 and σ^2 = 1, we say X has the standard normal distribution.

17 Statement of the Central Limit Theorem. Central Limit Theorem: Let X_1, ..., X_N be independent, identically distributed random variables whose moment generating functions converge for |t| < δ for some δ > 0 (this implies all the moments exist and are finite). Denote the mean by μ and the variance by σ^2, let X̄_N = (X_1 + ··· + X_N)/N, and set Z_N = (X̄_N - μ) / (σ/√N). Then as N → ∞, the distribution of Z_N converges to the standard normal.

18 Statement of the Central Limit Theorem. Why are there only tables of values of the standard normal?

19 Statement of the Central Limit Theorem. Why are there only tables of values of the standard normal? Answer: normalization. This is similar to log tables: we only need tables for one base, by the change of base formula.

20 Alternative Statement of the Central Limit Theorem. Central Limit Theorem: Let X_1, ..., X_N be independent, identically distributed random variables whose moment generating functions converge for |t| < δ for some δ > 0 (this implies all the moments exist and are finite). Denote the mean by μ and the variance by σ^2, let S_N = X_1 + ··· + X_N, and set Z_N = (S_N - Nμ) / √(Nσ^2). Then as N → ∞, the distribution of Z_N converges to the standard normal.

21 Key Probabilities. Key probabilities for Z ~ N(0, 1) (i.e., Z has the standard normal distribution): Prob(|Z| ≤ 1) ≈ 68.2%; Prob(|Z| ≤ 1.96) ≈ 95%; Prob(|Z| ≤ 2.575) ≈ 99%.
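
A minimal simulation sketch of the CLT and of these three probabilities (assuming Exp(1) summands, for which μ = σ^2 = 1; nothing is special about that choice):

import numpy as np

rng = np.random.default_rng(0)
N, trials = 100, 100_000
mu, sigma2 = 1.0, 1.0                         # Exp(1) has mean 1 and variance 1
S = rng.exponential(1.0, size=(trials, N)).sum(axis=1)
Z = (S - N * mu) / np.sqrt(N * sigma2)        # Z_N = (S_N - N mu) / sqrt(N sigma^2)
for z in (1.0, 1.96, 2.575):
    print(z, np.mean(np.abs(Z) <= z))         # approaches 68.2%, 95%, 99%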

22 Convergence to the standard normal. Question: Let X_1, X_2, ... be iidrv with mean 0 and variance 1, and let Z_N = X̄_N / (1/√N). By the CLT, Z_N → N(0, 1); which choice converges fastest? Slowest? (1) Uniform: X ~ Unif(-√3, √3). (2) Laplace: f_X(x) = e^{-√2|x|}/√2. (3) Normal: X ~ N(0, 1). (4) Millered Cauchy: f_X(x) = (4a sin(π/8)/π) · 1/(1 + (ax)^8), with a = √(√2 - 1).

23 Convergence to the standard normal. Question: Let X_1, X_2, ... be iidrv with mean 0 and variance 1, and let Z_N = X̄_N / (1/√N). By the CLT, Z_N → N(0, 1); which choice converges fastest? Slowest? (1) Uniform: X ~ Unif(-√3, √3). Kurtosis: 1.8. (2) Laplace: f_X(x) = e^{-√2|x|}/√2. Kurtosis: 6. (3) Normal: X ~ N(0, 1). Kurtosis: 3. (4) Millered Cauchy: f_X(x) = (4a sin(π/8)/π) · 1/(1 + (ax)^8), with a = √(√2 - 1). Kurtosis: √2 + 1 ≈ 2.41.

24 Convergence to the standard normal. Question: Let X_1, X_2, ... be iidrv with mean 0 and variance 1, and let Z_N = X̄_N / (1/√N). By the CLT, Z_N → N(0, 1); which choice converges fastest? Slowest? (1) Uniform: X ~ Unif(-√3, √3). Kurtosis: 1.8. (2) Laplace: f_X(x) = e^{-√2|x|}/√2. Kurtosis: 6. (3) Normal: X ~ N(0, 1). Kurtosis: 3. (4) Millered Cauchy: f_X(x) = (4a sin(π/8)/π) · 1/(1 + (ax)^8), with a = √(√2 - 1). Kurtosis: √2 + 1 ≈ 2.41. The relevant expansion is log M_X(t) = t^2/2 + ((μ_4 - 3)/4!) t^4 + O(t^6).

25 Convergence to the standard normal. Question: Let X_1, X_2, ... be iidrv with mean 0 and variance 1, and let Z_N = X̄_N / (1/√N). By the CLT, Z_N → N(0, 1); which choice converges fastest? Slowest? (1) Uniform: X ~ Unif(-√3, √3). Excess kurtosis: -1.2. (2) Laplace: f_X(x) = e^{-√2|x|}/√2. Excess kurtosis: 3. (3) Normal: X ~ N(0, 1). Excess kurtosis: 0. (4) Millered Cauchy: f_X(x) = (4a sin(π/8)/π) · 1/(1 + (ax)^8), with a = √(√2 - 1). Excess kurtosis: √2 - 2 ≈ -0.59. The relevant expansion is log M_X(t) = t^2/2 + ((μ_4 - 3)/4!) t^4 + O(t^6).
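
A simulation sketch of the ranking the expansion predicts (it compares the empirical distribution of Z_N for N = 5 summands against the standard normal via the Kolmogorov-Smirnov statistic; the (μ_4 - 3) t^4 term is what drives the differences, as the figures on the next slides show):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, trials = 5, 400_000
samples = {
    "uniform": rng.uniform(-np.sqrt(3), np.sqrt(3), (trials, N)),   # excess kurtosis -1.2
    "laplace": rng.laplace(0.0, 1 / np.sqrt(2), (trials, N)),       # excess kurtosis 3
    "normal": rng.normal(0.0, 1.0, (trials, N)),                    # excess kurtosis 0
}
for name, X in samples.items():
    Z = X.sum(axis=1) / np.sqrt(N)            # summands have mean 0, variance 1
    print(name, stats.kstest(Z, "norm").statistic)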

26 Convergence to the standard normal Figure: Convolutions of 5 Uniforms.

27 Convergence to the standard normal Figure: Convolutions of 5 Laplaces.

28 Convergence to the standard normal Figure: Convolutions of 5 Normals.

29 Convergence to the standard normal Figure: Convolutions of 1 Millered Cauchy.

30 Convergence to the standard normal Figure: Convolutions of 2 Millered Cauchy.

31 Central Limit Theorem

32 MGF and the CLT. Moment generating function of normal distributions: let X be a normal random variable with mean μ and variance σ^2. Its moment generating function satisfies M_X(t) = e^{μt + σ^2 t^2/2}. In particular, if Z has the standard normal distribution, its moment generating function is M_Z(t) = e^{t^2/2}.

33 MGF and the CLT. Moment generating function of normal distributions: let X be a normal random variable with mean μ and variance σ^2. Its moment generating function satisfies M_X(t) = e^{μt + σ^2 t^2/2}. In particular, if Z has the standard normal distribution, its moment generating function is M_Z(t) = e^{t^2/2}. Proof: complete the square.
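
The complete-the-square computation can be confirmed symbolically; a sympy sketch (just a check, not part of the proof):

import sympy as sp

x, t, mu = sp.symbols('x t mu', real=True)
sigma = sp.symbols('sigma', positive=True)
density = sp.exp(-(x - mu)**2 / (2 * sigma**2)) / sp.sqrt(2 * sp.pi * sigma**2)
M = sp.integrate(sp.exp(t * x) * density, (x, -sp.oo, sp.oo))
print(sp.simplify(M))   # exp(mu*t + sigma**2*t**2/2)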

34 Poisson Example of the CLT. Example: Let X, X_1, ..., X_N be Poisson random variables with parameter λ. Let X̄_N = (X_1 + ··· + X_N)/N and Y_N = (X̄_N - E[X]) / (StDev(X)/√N). Then as N → ∞, Y_N converges to having the standard normal distribution.

35 Poisson Example of the CLT. Example: Let X, X_1, ..., X_N be Poisson random variables with parameter λ. Let X̄_N = (X_1 + ··· + X_N)/N and Y_N = (X̄_N - E[X]) / (StDev(X)/√N). Then as N → ∞, Y_N converges to having the standard normal distribution. Moment generating function: M_X(t) = exp(λ(e^t - 1)). Independence formula: M_{X_1 + X_2}(t) = M_{X_1}(t) M_{X_2}(t). Shift formula: M_{aX+b}(t) = e^{bt} M_X(at).
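
Combining the three formulas gives the MGF of the standardized Poisson sum in closed form, and we can watch it approach e^{t^2/2} numerically (a sketch; λ = 4 and t = 0.7 are arbitrary choices):

import numpy as np

lam, t = 4.0, 0.7
for N in (1, 10, 100, 10_000):
    s = np.sqrt(N * lam)   # StDev(S_N) for S_N = X_1 + ... + X_N
    # shift formula with a = 1/s and b = -N*lam/s = -s, applied to M_{S_N}(t) = exp(N lam (e^t - 1))
    M = np.exp(-s * t) * np.exp(N * lam * (np.exp(t / s) - 1.0))
    print(N, M)
print("limit", np.exp(t**2 / 2))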

36 General proof via Moment Generating Functions. The X_i's are iidrv, and Z_N = (X̄ - μ)/(σ/√N) = Σ_{i=1}^N (X_i - μ)/(σ√N).

37 General proof via Moment Generating Functions. The X_i's are iidrv, and Z_N = (X̄ - μ)/(σ/√N) = Σ_{i=1}^N (X_i - μ)/(σ√N). The moment generating function is M_{Z_N}(t) = Π_{i=1}^N e^{-μt/(σ√N)} M_X(t/(σ√N)) = e^{-μt√N/σ} [M_X(t/(σ√N))]^N.

38 General proof via Moment Generating Functions. The X_i's are iidrv, and Z_N = (X̄ - μ)/(σ/√N) = Σ_{i=1}^N (X_i - μ)/(σ√N). The moment generating function is M_{Z_N}(t) = Π_{i=1}^N e^{-μt/(σ√N)} M_X(t/(σ√N)) = e^{-μt√N/σ} [M_X(t/(σ√N))]^N. Taking logarithms: log M_{Z_N}(t) = -μt√N/σ + N log M_X(t/(σ√N)).

39 General proof via Moment Generating Functions (cont). Expansion of the MGF: M_X(t) = 1 + μt + μ_2 t^2/2! + ··· = 1 + t(μ + μ_2 t/2 + ···).

40 General proof via Moment Generating Functions (cont). Expansion of the MGF: M_X(t) = 1 + μt + μ_2 t^2/2! + ··· = 1 + t(μ + μ_2 t/2 + ···). The expansion for log(1 + u) is log(1 + u) = u - u^2/2 + u^3/3 - ···.

41 General proof via Moment Generating Functions (cont). Expansion of the MGF: M_X(t) = 1 + μt + μ_2 t^2/2! + ··· = 1 + t(μ + μ_2 t/2 + ···). The expansion for log(1 + u) is log(1 + u) = u - u^2/2 + u^3/3 - ···. Combining gives log M_X(t) = t(μ + μ_2 t/2 + ···) - [t(μ + μ_2 t/2 + ···)]^2/2 + ··· = μt + ((μ_2 - μ^2)/2) t^2 + terms in t^3 or higher.

42 General proof via Moment Generating Functions (cont). Therefore log M_X(t/(σ√N)) = μt/(σ√N) + (σ^2 t^2)/(2σ^2 N) + terms of size t^3/N^{3/2} or lower in N.

43 General proof via Moment Generating Functions (cont). log M_X(t/(σ√N)) = μt/(σ√N) + (σ^2 t^2)/(2σ^2 N) + terms of size t^3/N^{3/2} or lower in N. Denote the lower order terms by O(N^{-3/2}). Collecting gives log M_{Z_N}(t) = -μt√N/σ + N(μt/(σ√N) + t^2/(2N) + O(N^{-3/2})) = -μt√N/σ + μt√N/σ + t^2/2 + O(N^{-1/2}) = t^2/2 + O(N^{-1/2}). Thus log M_{Z_N}(t) → t^2/2, so M_{Z_N}(t) → e^{t^2/2}, the MGF of the standard normal, and the convergence theorem for MGFs (slide 9) completes the proof.
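
The bookkeeping can be checked symbolically; since carrying a generic μ and σ through sympy is messy, this sketch assumes a concrete mean-zero, variance-one summand (the Laplace density from the clicker question, whose MGF is 1/(1 - t^2/2)):

import sympy as sp

t = sp.symbols('t')
N = sp.symbols('N', positive=True)
M = 1 / (1 - t**2 / 2)                          # Laplace(0, 1/sqrt(2)): mean 0, variance 1
logMZ = N * sp.log(M.subs(t, t / sp.sqrt(N)))   # mu = 0, so the drift term vanishes
print(sp.series(logMZ, t, 0, 6))                # t**2/2 + t**4/(8*N) + O(t**6)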

44 Central Limit Theorem and Fourier Analysis

45 Convolutions. Convolution of f and g: h(y) = (f ∗ g)(y) = ∫_R f(x) g(y-x) dx = ∫_R f(y-x) g(x) dx.

46 Convolutions. Convolution of f and g: h(y) = (f ∗ g)(y) = ∫_R f(x) g(y-x) dx = ∫_R f(y-x) g(x) dx. Let X_1 and X_2 be independent random variables with probability density p. Then Prob(X_i ∈ [x, x+Δx]) = ∫_x^{x+Δx} p(t) dt ≈ p(x)Δx, and Prob(X_1 + X_2 ∈ [x, x+Δx]) = ∫_{x_1=-∞}^{∞} ∫_{x_2=x-x_1}^{x+Δx-x_1} p(x_1) p(x_2) dx_2 dx_1.

47 Convolutions. Convolution of f and g: h(y) = (f ∗ g)(y) = ∫_R f(x) g(y-x) dx = ∫_R f(y-x) g(x) dx. Let X_1 and X_2 be independent random variables with probability density p. Then Prob(X_i ∈ [x, x+Δx]) = ∫_x^{x+Δx} p(t) dt ≈ p(x)Δx, and Prob(X_1 + X_2 ∈ [x, x+Δx]) = ∫_{x_1=-∞}^{∞} ∫_{x_2=x-x_1}^{x+Δx-x_1} p(x_1) p(x_2) dx_2 dx_1. As Δx → 0 we obtain the convolution of p with itself: Prob(X_1 + X_2 ∈ [a, b]) = ∫_a^b (p ∗ p)(z) dz. It is an exercise to show that p ∗ p is non-negative and integrates to 1.
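
The convolution plots from the clicker discussion (slides 26-30) can be reproduced by convolving a density with itself on a grid; a numpy sketch for the Unif(-√3, √3) density (the factor h keeps the Riemann-sum scaling of the convolution integral):

import numpy as np

h = 0.01
x = np.arange(-15.0, 15.0, h)
p = np.where(np.abs(x) <= np.sqrt(3), 1 / (2 * np.sqrt(3)), 0.0)
conv = p.copy()
for _ in range(4):                    # p * p * p * p * p: the sum of five uniforms
    conv = np.convolve(conv, p) * h   # discrete version of (f*g)(y) = int f(x) g(y-x) dx
print(conv.sum() * h)                 # still integrates to about 1
print(conv.max(), 1 / np.sqrt(2 * np.pi * 5))   # peak is already close to the N(0,5) peak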

48 Statement of Central Limit Theorem. WLOG p has mean zero, variance one, a finite third moment, and decays rapidly enough that all convolution integrals converge: p is an infinitely differentiable function satisfying ∫ x p(x) dx = 0, ∫ x^2 p(x) dx = 1, and ∫ |x|^3 p(x) dx < ∞. X_1, X_2, ... are iidrv with density p; define S_N = Σ_{i=1}^N X_i. The standard Gaussian (mean zero, variance one) density is (1/√(2π)) e^{-x^2/2}.

49 Statement of Central Limit Theorem. WLOG p has mean zero, variance one, a finite third moment, and decays rapidly enough that all convolution integrals converge: p is an infinitely differentiable function satisfying ∫ x p(x) dx = 0, ∫ x^2 p(x) dx = 1, and ∫ |x|^3 p(x) dx < ∞. X_1, X_2, ... are iidrv with density p; define S_N = Σ_{i=1}^N X_i. The standard Gaussian (mean zero, variance one) density is (1/√(2π)) e^{-x^2/2}. Central Limit Theorem: Let X_i and S_N be as above, and assume the third moment of each X_i is finite. Then S_N/√N converges in distribution to the standard Gaussian: lim_{N→∞} Prob(S_N/√N ∈ [a, b]) = (1/√(2π)) ∫_a^b e^{-x^2/2} dx.

50 Proof of the Central Limit Theorem. The Fourier transform: p̂(y) = ∫_{-∞}^{∞} p(x) e^{-2πixy} dx.

51 Proof of the Central Limit Theorem. The Fourier transform: p̂(y) = ∫_{-∞}^{∞} p(x) e^{-2πixy} dx. The derivative of ĝ is the Fourier transform of -2πix g(x), so differentiation (hard) is converted to multiplication (easy): ĝ′(y) = ∫ (-2πix) g(x) e^{-2πixy} dx. If g is a probability density, ĝ′(0) = -2πi E[x] and ĝ″(0) = -4π^2 E[x^2].

52 Proof of the Central Limit Theorem. The Fourier transform: p̂(y) = ∫_{-∞}^{∞} p(x) e^{-2πixy} dx. The derivative of ĝ is the Fourier transform of -2πix g(x), so differentiation (hard) is converted to multiplication (easy): ĝ′(y) = ∫ (-2πix) g(x) e^{-2πixy} dx. If g is a probability density, ĝ′(0) = -2πi E[x] and ĝ″(0) = -4π^2 E[x^2]. This is natural: the mean and variance are simple multiples of the derivatives of p̂ at zero: p̂′(0) = 0 and p̂″(0) = -4π^2.

53 Proof of the Central Limit Theorem. The Fourier transform: p̂(y) = ∫_{-∞}^{∞} p(x) e^{-2πixy} dx. The derivative of ĝ is the Fourier transform of -2πix g(x), so differentiation (hard) is converted to multiplication (easy): ĝ′(y) = ∫ (-2πix) g(x) e^{-2πixy} dx. If g is a probability density, ĝ′(0) = -2πi E[x] and ĝ″(0) = -4π^2 E[x^2]. This is natural: the mean and variance are simple multiples of the derivatives of p̂ at zero: p̂′(0) = 0 and p̂″(0) = -4π^2. We Taylor expand p̂ (this requires technical conditions on p): p̂(y) = 1 + (p̂″(0)/2) y^2 + ··· = 1 - 2π^2 y^2 + O(y^3). Near the origin, p̂ is a concave-down parabola.
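
For the Unif(-√3, √3) density the transform has the closed form p̂(y) = sin(2π√3 y)/(2π√3 y), so the quadratic approximation is easy to check (a numerical sketch):

import numpy as np

y = np.array([0.01, 0.05, 0.1])
phat = np.sinc(2 * np.sqrt(3) * y)   # np.sinc(u) = sin(pi u)/(pi u)
print(phat)                          # exact transform values
print(1 - 2 * np.pi**2 * y**2)       # the quadratic approximation: close for small y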

54 Proof of the Central Limit Theorem (cont). Prob(X_1 + ··· + X_N ∈ [a, b]) = ∫_a^b (p ∗ ··· ∗ p)(z) dz.

55 Proof of the Central Limit Theorem (cont). Prob(X_1 + ··· + X_N ∈ [a, b]) = ∫_a^b (p ∗ ··· ∗ p)(z) dz. The Fourier transform converts convolution to multiplication: if FT[f](y) denotes the Fourier transform of f evaluated at y, then FT[p ∗ ··· ∗ p](y) = p̂(y) ··· p̂(y) = p̂(y)^N.

56 Proof of the Central Limit Theorem (cont). Prob(X_1 + ··· + X_N ∈ [a, b]) = ∫_a^b (p ∗ ··· ∗ p)(z) dz. The Fourier transform converts convolution to multiplication: if FT[f](y) denotes the Fourier transform of f evaluated at y, then FT[p ∗ ··· ∗ p](y) = p̂(y) ··· p̂(y) = p̂(y)^N. We do not want the distribution of X_1 + ··· + X_N = x, but rather that of S_N = (X_1 + ··· + X_N)/√N = x.

57 Proof of the Central Limit Theorem (cont). Prob(X_1 + ··· + X_N ∈ [a, b]) = ∫_a^b (p ∗ ··· ∗ p)(z) dz. The Fourier transform converts convolution to multiplication: if FT[f](y) denotes the Fourier transform of f evaluated at y, then FT[p ∗ ··· ∗ p](y) = p̂(y) ··· p̂(y) = p̂(y)^N. We do not want the distribution of X_1 + ··· + X_N = x, but rather that of S_N = (X_1 + ··· + X_N)/√N = x. If B(x) = A(cx) for some fixed c ≠ 0, then B̂(y) = (1/c) Â(y/c).

58 Proof of the Central Limit Theorem (cont). Prob(X_1 + ··· + X_N ∈ [a, b]) = ∫_a^b (p ∗ ··· ∗ p)(z) dz. The Fourier transform converts convolution to multiplication: if FT[f](y) denotes the Fourier transform of f evaluated at y, then FT[p ∗ ··· ∗ p](y) = p̂(y) ··· p̂(y) = p̂(y)^N. We do not want the distribution of X_1 + ··· + X_N = x, but rather that of S_N = (X_1 + ··· + X_N)/√N = x. If B(x) = A(cx) for some fixed c ≠ 0, then B̂(y) = (1/c) Â(y/c). Prob((X_1 + ··· + X_N)/√N = x) = (√N p ∗ ··· ∗ √N p)(x√N).

59 Proof of the Central Limit Theorem (cont). Prob(X_1 + ··· + X_N ∈ [a, b]) = ∫_a^b (p ∗ ··· ∗ p)(z) dz. The Fourier transform converts convolution to multiplication: if FT[f](y) denotes the Fourier transform of f evaluated at y, then FT[p ∗ ··· ∗ p](y) = p̂(y) ··· p̂(y) = p̂(y)^N. We do not want the distribution of X_1 + ··· + X_N = x, but rather that of S_N = (X_1 + ··· + X_N)/√N = x. If B(x) = A(cx) for some fixed c ≠ 0, then B̂(y) = (1/c) Â(y/c). Prob((X_1 + ··· + X_N)/√N = x) = (√N p ∗ ··· ∗ √N p)(x√N), and FT[(√N p ∗ ··· ∗ √N p)(x√N)](y) = [p̂(y/√N)]^N.

60 Proof of the Central Limit Theorem (cont). We can thus find the Fourier transform of the distribution of S_N: [p̂(y/√N)]^N.

61 Proof of the Central Limit Theorem (cont). We can thus find the Fourier transform of the distribution of S_N: [p̂(y/√N)]^N. Take the limit as N → ∞ for fixed y.

62 Proof of the Central Limit Theorem (cont). We can thus find the Fourier transform of the distribution of S_N: [p̂(y/√N)]^N. Take the limit as N → ∞ for fixed y. We know p̂(y) = 1 - 2π^2 y^2 + O(y^3), so we study [1 - 2π^2 y^2/N + O(y^3/N^{3/2})]^N.

63 Proof of the Central Limit Theorem (cont). We can thus find the Fourier transform of the distribution of S_N: [p̂(y/√N)]^N. Take the limit as N → ∞ for fixed y. We know p̂(y) = 1 - 2π^2 y^2 + O(y^3), so we study [1 - 2π^2 y^2/N + O(y^3/N^{3/2})]^N. For any fixed y, lim_{N→∞} [1 - 2π^2 y^2/N + O(y^3/N^{3/2})]^N = e^{-2π^2 y^2}.

64 Proof of the Central Limit Theorem (cont). We can thus find the Fourier transform of the distribution of S_N: [p̂(y/√N)]^N. Take the limit as N → ∞ for fixed y. We know p̂(y) = 1 - 2π^2 y^2 + O(y^3), so we study [1 - 2π^2 y^2/N + O(y^3/N^{3/2})]^N. For any fixed y, lim_{N→∞} [1 - 2π^2 y^2/N + O(y^3/N^{3/2})]^N = e^{-2π^2 y^2}. The Fourier transform of (1/√(2π)) e^{-x^2/2} at y is e^{-2π^2 y^2}.
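
The pointwise limit can be watched numerically with the same uniform density as in the earlier sketch (y = 0.3 is an arbitrary choice):

import numpy as np

y = 0.3
for N in (1, 5, 25, 625):
    phat = np.sinc(2 * np.sqrt(3) * y / np.sqrt(N))   # hat-p(y/sqrt(N)) for Unif(-sqrt(3), sqrt(3))
    print(N, phat**N)
print("limit", np.exp(-2 * np.pi**2 * y**2))          # e^{-2 pi^2 y^2}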

65 Proof of the Central Limit Theorem (cont). We have shown: the Fourier transform of the distribution of S_N converges to e^{-2π^2 y^2}, and the Fourier transform of (1/√(2π)) e^{-x^2/2} at y is e^{-2π^2 y^2}. Therefore the density of S_N converges to (1/√(2π)) e^{-x^2/2}.

66 Proof of the Central Limit Theorem (cont). We have shown: the Fourier transform of the distribution of S_N converges to e^{-2π^2 y^2}, and the Fourier transform of (1/√(2π)) e^{-x^2/2} at y is e^{-2π^2 y^2}. Therefore the density of S_N converges to (1/√(2π)) e^{-x^2/2}. We need complex analysis to justify this inversion. Must be careful: consider g(x) = e^{-1/x^2} for x ≠ 0 and g(0) = 0. All the Taylor coefficients about x = 0 are zero, but the function is not identically zero in a neighborhood of x = 0.
