MATH 2730: Expectation
Benjamin V.C. Collins, James A. Swenson

Average value

Definition. If $(S, P)$ is a sample space, then any function with domain $S$ is called a random variable.

Idea. Pick a real-valued random variable $X : S \to \mathbb{R}$. If we repeat our experiment many times, what would we expect the average value of $X$ over all trials to be?

Expectation of a pair of dice

Example. Let $X$ denote the sum shown on two fair dice. Roll the dice 36 times, recording $X$ each time. Estimate the most likely average value of $X$.

Solution. Let $D = \{1, 2, 3, 4, 5, 6\}$, $S = D \times D$, and $P(a, b) = \frac{1}{36}$ for all $(a, b) \in S$. Let $X : S \to \mathbb{R}$ be defined by $X(a, b) = a + b$. Out of 36 rolls, we expect $X = 2$ once, $X = 3$ twice, etc. So our expected average value is
$$\frac{1 \cdot 2 + 2 \cdot 3 + 3 \cdot 4 + 4 \cdot 5 + 5 \cdot 6 + 6 \cdot 7 + 5 \cdot 8 + 4 \cdot 9 + 3 \cdot 10 + 2 \cdot 11 + 1 \cdot 12}{36} = \frac{252}{36} = 7.$$
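
A quick way to see this long-run average in action is to simulate many rolls. A minimal sketch in Python (the 36-roll breakdown above is the idealized case; an actual simulation only approximates 7):

```python
import random

def average_dice_sum(trials: int) -> float:
    """Roll two fair dice `trials` times and return the average sum."""
    total = sum(random.randint(1, 6) + random.randint(1, 6) for _ in range(trials))
    return total / trials

print(average_dice_sum(36))         # noisy: only 36 rolls
print(average_dice_sum(1_000_000))  # close to the expectation, 7
```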

Generalizing from the example

Summary. Let $a \in S$. If $N$ is large, we expect $a$ to occur about $N \cdot P(a)$ times, which adds $N \cdot P(a) X(a)$ to our sum. Thus the average value is expected to be
$$\frac{1}{N} \sum_{a \in S} N P(a) X(a), \quad \text{or just} \quad \sum_{a \in S} P(a) X(a).$$
In the example, we grouped together outcomes where $X$ has the same value. In general, for any $t \in \mathbb{R}$, we expect $X$ to equal $t$ about $N \cdot P(X = t)$ times, which adds $N \cdot P(X = t) \cdot t$ to our sum. Thus the average value is expected to be $\sum_{t \in \mathbb{R}} t \, P(X = t)$.

Formulæ

Definition. Let $(S, P)$ be a sample space and $X : S \to \mathbb{R}$ a random variable. The expectation (or expected value, or mean) of $X$ is
$$\mu = E(X) = \sum_{a \in S} P(a) X(a).$$

Proposition. If $(S, P)$ is a sample space and $X : S \to \mathbb{R}$ is a random variable, then
$$E(X) = \sum_{t \in \mathbb{R}} t \, P(X = t).$$
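
Both formulas are easy to check numerically. A sketch using the two-dice sample space from the previous example: summing $P(a)X(a)$ over outcomes and summing $t \, P(X = t)$ over values must agree.

```python
from collections import defaultdict
from fractions import Fraction

# Sample space: ordered pairs of dice, each outcome with probability 1/36.
S = [(a, b) for a in range(1, 7) for b in range(1, 7)]
P = {s: Fraction(1, 36) for s in S}
X = {s: s[0] + s[1] for s in S}

# E(X) summed over outcomes: sum of P(a) * X(a).
E_outcomes = sum(P[s] * X[s] for s in S)

# E(X) summed over values: sum of t * P(X = t).
P_X = defaultdict(Fraction)
for s in S:
    P_X[X[s]] += P[s]
E_values = sum(t * prob for t, prob in P_X.items())

print(E_outcomes, E_values)  # 7 7
```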

Face value of a card

Example. Let $(S, P)$ model the draw of one card from a 52-card deck, and let $Z : S \to \mathbb{N}$ give the face value of the card (aces count 1; tens, jacks, queens, and kings all count 10). The expectation of $Z$ is
$$E(Z) = \sum_{t \in \mathbb{R}} t \, P(Z = t) = \sum_{t=1}^{10} t \, P(Z = t) = 1 \cdot P(Z = 1) + 2 \cdot P(Z = 2) + \cdots + 9 \cdot P(Z = 9) + 10 \cdot P(Z = 10)$$
$$= 1 \cdot \tfrac{1}{13} + 2 \cdot \tfrac{1}{13} + \cdots + 9 \cdot \tfrac{1}{13} + 10 \cdot \tfrac{4}{13} = \tfrac{1}{13}(1 + 2 + \cdots + 9 + 40) = \tfrac{85}{13} \approx 6.54.$$
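
A quick numerical check of this computation, a sketch that just averages the 52 face values directly:

```python
from fractions import Fraction

# Face values in one suit: A = 1, then 2..9, and 10, J, Q, K all worth 10.
values_one_suit = list(range(1, 10)) + [10, 10, 10, 10]
deck = values_one_suit * 4  # four suits, 52 cards

E_Z = sum(Fraction(v, 52) for v in deck)
print(E_Z, float(E_Z))  # 85/13, approximately 6.54
```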

Center of mass

Example. Suppose we have $n$ numbered weights on a seesaw. Let $S = \{1, \ldots, n\}$, set $P(k) = \frac{1}{n}$ for all $k \in S$, and define $X : S \to \mathbb{R}$ so that $X(k)$ is the position of the $k$th weight, treating the seesaw as a number line. If the system balances, where is the fulcrum?

  k     1   2   3   4   5   6   7   8
  X(k) -2  -2   1   1   1   3   3   7

Center of mass

Solution. Fix units so that each weight has mass $P(k)$. Let $c \in \mathbb{R}$ be the location of the fulcrum. The moment of the $k$th weight is $P(k)[X(k) - c]$, the mass times the signed distance from the fulcrum. The system balances when its net moment is zero:
$$\sum_{k \in S} P(k)[X(k) - c] = 0$$
$$\sum_{k \in S} [P(k)X(k) - cP(k)] = 0$$
$$\sum_{k \in S} P(k)X(k) - \sum_{k \in S} cP(k) = 0$$
$$\sum_{k \in S} P(k)X(k) = c \sum_{k \in S} P(k)$$
$$E(X) = c,$$
since $\sum_{k \in S} P(k) = 1$.

Center of mass

Example. In our example,
$$c = E(X) = \tfrac{2}{8}(-2) + \tfrac{3}{8}(1) + \tfrac{2}{8}(3) + \tfrac{1}{8}(7) = \frac{-4 + 3 + 6 + 7}{8} = \frac{12}{8} = \frac{3}{2}.$$
Even if not all outcomes are equally likely, we can picture $E(X)$ as the balancing point of a seesaw with mass $P(X = t)$ at each $t \in \mathbb{R}$. We don't even have to change our proof!
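
The balancing point is just the probability-weighted average of the positions. A sketch for the seesaw table above:

```python
from fractions import Fraction

positions = [-2, -2, 1, 1, 1, 3, 3, 7]  # X(k) for k = 1..8
p = Fraction(1, 8)                      # each weight has the same mass

c = sum(p * x for x in positions)
print(c)  # 3/2, the fulcrum position E(X)
```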

Working with multiple random variables

Proposition. If $(S, P)$ is a sample space and $X : S \to \mathbb{R}$ and $Y : S \to \mathbb{R}$ are random variables, then there is a random variable $X + Y$ defined by $(X + Y)(a) = X(a) + Y(a)$, and its expectation is
$$E(X + Y) = E(X) + E(Y).$$

Proof.
$$E(X + Y) = \sum_{a \in S} P(a)[X(a) + Y(a)] = \sum_{a \in S} [P(a)X(a) + P(a)Y(a)] = \sum_{a \in S} P(a)X(a) + \sum_{a \in S} P(a)Y(a) = E(X) + E(Y).$$

Working with multiples of random variables

Proposition. If $(S, P)$ is a sample space, $X : S \to \mathbb{R}$ is a random variable, and $c \in \mathbb{R}$, then there is a random variable $cX$ defined by $(cX)(a) = c \cdot X(a)$, and its expectation is $E(cX) = cE(X)$.

Proof.
$$E(cX) = \sum_{a \in S} P(a)[cX(a)] = c \sum_{a \in S} P(a)X(a) = cE(X).$$

Working with multiples of multiple random variables

Theorem (expectation is linear). If $(S, P)$ is a sample space, $X : S \to \mathbb{R}$ and $Y : S \to \mathbb{R}$ are random variables, and $c_1$ and $c_2$ are real numbers, then
$$E(c_1 X + c_2 Y) = c_1 E(X) + c_2 E(Y).$$

Proof.
$$E(c_1 X + c_2 Y) = E(c_1 X) + E(c_2 Y) = c_1 E(X) + c_2 E(Y).$$
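
Linearity is easy to sanity-check on a small sample space. A sketch with the two-dice space, taking $X$ and $Y$ to be the individual dice and picking arbitrary constants (these particular choices are hypothetical, just for illustration):

```python
from fractions import Fraction

S = [(a, b) for a in range(1, 7) for b in range(1, 7)]
P = Fraction(1, 36)  # every outcome equally likely

def E(f):
    """Expectation of a random variable f : S -> R."""
    return sum(P * f(s) for s in S)

X = lambda s: s[0]  # first die
Y = lambda s: s[1]  # second die
c1, c2 = 3, -5

lhs = E(lambda s: c1 * X(s) + c2 * Y(s))
rhs = c1 * E(X) + c2 * E(Y)
print(lhs, rhs)  # -7 -7
```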

Working with products of random variables

Proposition. If $(S, P)$ is a sample space and $X : S \to \mathbb{R}$ and $Y : S \to \mathbb{R}$ are random variables, then there is a random variable $XY$ defined by $(XY)(a) = X(a)Y(a)$. The expectation of $XY$ is
$$E(XY) = E(X)E(Y)$$
if $X$ and $Y$ are independent random variables.

Working with products of random variables

Proof.
$$E(XY) = \sum_{t \in \mathbb{R}} t \, P(XY = t) = \sum_{t \in \mathbb{R}} t \sum_{\substack{(u,v) \in \mathbb{R} \times \mathbb{R} \\ uv = t}} P(X = u \text{ and } Y = v)$$
$$= \sum_{t \in \mathbb{R}} \sum_{\substack{(u,v) \in \mathbb{R} \times \mathbb{R} \\ uv = t}} uv \, P(X = u) P(Y = v) \qquad \text{(independence)}$$
$$= \sum_{(u,v) \in \mathbb{R} \times \mathbb{R}} uv \, P(X = u) P(Y = v) = \left[\sum_{u \in \mathbb{R}} u \, P(X = u)\right] \left[\sum_{v \in \mathbb{R}} v \, P(Y = v)\right] = E(X)E(Y).$$

Pay attention to details!

Warning. This is not an "if and only if" theorem! It is possible that $E(XY) = E(X)E(Y)$ even when $X$ and $Y$ are not independent!
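
One standard example of this (not from the slides): let $X$ be uniform on $\{-1, 0, 1\}$ and $Y = X^2$. Then $Y$ is a function of $X$, so they are certainly not independent, yet $E(XY) = E(X^3) = 0 = E(X)E(Y)$. A sketch verifying this:

```python
from fractions import Fraction

p = Fraction(1, 3)
xs = [-1, 0, 1]  # X uniform on {-1, 0, 1}; Y = X^2

E_X  = sum(p * x for x in xs)         # 0
E_Y  = sum(p * x**2 for x in xs)      # 2/3
E_XY = sum(p * x * x**2 for x in xs)  # E(X^3) = 0

print(E_XY == E_X * E_Y)  # True, even though Y is a function of X
```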

Spread

What is the variance?

Definition. Let $(S, P)$ be a sample space and $X : S \to \mathbb{R}$ a random variable; let $\mu = E(X)$. The variance of $X$ is
$$\sigma^2 = \mathrm{Var}(X) = E[(X - \mu)^2].$$
[The number $\sigma = \sqrt{\mathrm{Var}(X)}$ is the standard deviation of $X$.]

Idea. Qualitatively: $\mathrm{Var}(X)$ is large if $X$ is often far from $\mu$; $\mathrm{Var}(X)$ is small if $X$ is usually close to $\mu$. That is, $\mathrm{Var}(X)$ is a measure of spread in the values of $X$.

Rolling one die

Example. Roll a fair die; let $X$ denote the number rolled. The expectation of $X$ is
$$E(X) = \sum_{t \in \mathbb{R}} t \, P(X = t) = \sum_{t=1}^{6} \frac{t}{6} = \frac{1}{6} \sum_{t=1}^{6} t = \frac{1}{6} \left( \frac{6 \cdot 7}{2} \right) = \frac{21}{6} = \frac{7}{2}.$$

Rolling one die

Example. The variance of $X$ is
$$\mathrm{Var}(X) = E\left[(X - \mu)^2\right] = E\left[\left(X - \tfrac{7}{2}\right)^2\right] = \sum_{t \in \mathbb{R}} \left(t - \tfrac{7}{2}\right)^2 P(X = t)$$
$$= \frac{1}{6}\left[\left(1 - \tfrac{7}{2}\right)^2 + \cdots + \left(6 - \tfrac{7}{2}\right)^2\right] = \frac{1}{6} \cdot \frac{25 + 9 + 1 + 1 + 9 + 25}{4} = \frac{70}{24} = \frac{35}{12} \approx 2.917.$$

An easier formula

Proposition. If $X$ is a real-valued random variable, then $\mathrm{Var}(X) = E[X^2] - E[X]^2$.

Proof.
$$\mathrm{Var}(X) = E[(X - \mu)^2] = E(X^2 - 2\mu X + \mu^2 \cdot 1) = E(X^2) - 2\mu E(X) + \mu^2 E(1) = E(X^2) - 2E(X)^2 + E(X)^2(1) = E(X^2) - E(X)^2.$$

Rolling one die

Example. Roll a fair die; let $X$ denote the number rolled. The variance of $X$ is
$$\mathrm{Var}(X) = E(X^2) - E(X)^2 = \left[\sum_{t \in \mathbb{R}} t^2 P(X = t)\right] - \left(\tfrac{7}{2}\right)^2 = \frac{1}{6} \sum_{t=1}^{6} t^2 - \frac{49}{4}$$
$$= \frac{1}{6} \cdot \frac{6 \cdot 7 \cdot 13}{6} - \frac{49}{4} = \frac{546 - 441}{36} = \frac{105}{36} = \frac{35}{12} \approx 2.917.$$
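
Both routes to $\mathrm{Var}(X)$ are easy to verify exactly. A sketch using rational arithmetic:

```python
from fractions import Fraction

p = Fraction(1, 6)
faces = range(1, 7)

mu   = sum(p * t for t in faces)                 # 7/2
var1 = sum(p * (t - mu) ** 2 for t in faces)     # definition: E[(X - mu)^2]
var2 = sum(p * t ** 2 for t in faces) - mu ** 2  # shortcut: E(X^2) - E(X)^2

print(var1, var2)  # 35/12 35/12
```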

Gardening

Example. Given: the probability that a beet seed germinates is 0.8. I'm planting a seed; let $X = 1$ if the seed germinates and $X = 0$ otherwise. [$X$ is called a zero-one random variable, or an indicator random variable.] Find the expectation and variance of $X$.

Remark. My seed is a Bernoulli trial with $p = 0.8$.

Gardening

Solution.
$$E(X) = \sum_{k \in \mathbb{R}} k \, P(X = k) = P(X = 1) = p = 0.8.$$
Since $X^2 = X$,
$$\mathrm{Var}(X) = E(X^2) - E(X)^2 = E(X) - E(X)^2 = p - p^2 = p(1 - p) = pq = (0.8)(0.2) = 0.16.$$

Moral of the example

Summary. If $X$ is a zero-one random variable modeling a Bernoulli trial, then
$$E(X) = p; \qquad \mathrm{Var}(X) = pq = p(1 - p).$$
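
A direct check of these two formulas for the germination example, in exact arithmetic:

```python
from fractions import Fraction

p = Fraction(8, 10)  # germination probability 0.8
q = 1 - p

E_X   = 1 * p + 0 * q   # expectation of a zero-one variable
Var_X = E_X - E_X ** 2  # E(X^2) = E(X) since X^2 = X

print(E_X, Var_X, p * q)  # 4/5 4/25 4/25, i.e. 0.8 and 0.16
```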

Gardening

Example. Given: the probability that a beet seed germinates is 0.8. I will plant 9 beet seeds; let $X$ denote the number of seeds that germinate. Find $E(X)$ and $\mathrm{Var}(X)$.

Remark. Assuming that the seeds germinate independently, my garden is a binomial experiment with $p = 0.8$ and $n = 9$. Hence
$$P(X = k) = \binom{n}{k} p^k q^{n-k} = \binom{9}{k} (0.8)^k (0.2)^{9-k}.$$

Gardening

Solution.
$$E(X) = \sum_{k=0}^{9} k \binom{9}{k} (0.8)^k (0.2)^{9-k} = \sum_{k=1}^{9} k \cdot \frac{9!}{k(k-1)!(9-k)!} (0.8)^k (0.2)^{9-k}$$
$$= \sum_{j=0}^{8} \frac{9!}{j!\,(9-(j+1))!} (0.8)^{j+1} (0.2)^{9-(j+1)} \qquad (j = k - 1)$$
$$= 9(0.8) \sum_{j=0}^{8} \frac{8!}{j!\,(8-j)!} (0.8)^j (0.2)^{8-j} = 9(0.8) \sum_{j=0}^{8} \binom{8}{j} (0.8)^j (0.2)^{8-j}$$
$$= 9(0.8)[0.8 + 0.2]^8 = 9(0.8) = 7.2.$$

Gardening

Remark. By the same proof, the expectation of any binomial random variable is $E(X) = np$. But this was too much work! Instead, let $X_i = 1$ if the $i$th seed germinates and $X_i = 0$ otherwise. Then $X = X_1 + X_2 + \cdots + X_9$, so
$$E(X) = E(X_1 + X_2 + \cdots + X_9) = E(X_1) + E(X_2) + \cdots + E(X_9) \qquad \text{(linearity)}$$
$$= p + p + \cdots + p \qquad \text{(Bernoulli example)}$$
$$= np = 9(0.8) = 7.2.$$
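
Both the direct sum and the linearity shortcut give $np$. A sketch comparing them for $n = 9$, $p = 0.8$:

```python
from math import comb

n, p = 9, 0.8
q = 1 - p

# Direct sum: E(X) = sum over k of k * C(n,k) p^k q^(n-k).
E_direct = sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))

# Linearity: X = X_1 + ... + X_n with E(X_i) = p.
E_linear = n * p

print(E_direct, E_linear)  # both 7.2, up to float rounding
```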

Gardening

We can try to find $\mathrm{Var}(X)$ in the same way, but we don't have a formula for $\mathrm{Var}(X + Y)$. So...

Lemma. If $X : S \to \mathbb{R}$ and $Y : S \to \mathbb{R}$ are independent random variables on $(S, P)$, then
$$\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y).$$

Gardening

Proof of lemma.
$$\mathrm{Var}(X + Y) = E[(X + Y)^2] - E[X + Y]^2 = E(X^2 + 2XY + Y^2) - [E(X) + E(Y)]^2$$
$$= E(X^2) + 2E(XY) + E(Y^2) - [E(X)^2 + 2E(X)E(Y) + E(Y)^2]$$
$$= E(X^2) - E(X)^2 + E(Y^2) - E(Y)^2 + 2[E(XY) - E(X)E(Y)]$$
$$= \mathrm{Var}(X) + \mathrm{Var}(Y),$$
since $E(XY) = E(X)E(Y)$ when $X$ and $Y$ are independent.

Gardening

Solution to the beet problem. Let $X_i = 1$ if the $i$th seed germinates and $X_i = 0$ otherwise. We have assumed that these are independent variables, so $X = X_1 + X_2 + \cdots + X_9$ and
$$\mathrm{Var}(X) = \mathrm{Var}(X_1 + X_2 + \cdots + X_9) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2) + \cdots + \mathrm{Var}(X_9) \qquad \text{(lemma)}$$
$$= pq + pq + \cdots + pq \qquad \text{(Bernoulli example)}$$
$$= npq = 9(0.8)(0.2) = 1.44.$$

Moral of the example

Summary. If $X$ is a binomial random variable, then
$$E(X) = np; \qquad \mathrm{Var}(X) = npq = np(1 - p).$$
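
The same kind of check works for the variance formula. A sketch comparing the exact sum $E(X^2) - E(X)^2$ against $npq$ for the beet example:

```python
from math import comb

n, p = 9, 0.8
q = 1 - p

# Binomial pmf for k = 0..n.
pmf = [comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]
E_X  = sum(k * pk for k, pk in enumerate(pmf))
E_X2 = sum(k**2 * pk for k, pk in enumerate(pmf))

print(E_X2 - E_X**2, n * p * q)  # both 1.44, up to float rounding
```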