Properties of Random Variables

1 Definitions

A discrete random variable is defined by a probability distribution that lists each possible outcome and the probability of obtaining that outcome. If the random variable $x$ is discrete with $n$ possible outcomes, its probability distribution is:

    Outcome    Probability
    $x_1$      $p_1$
    $x_2$      $p_2$
    ...        ...
    $x_n$      $p_n$

The commonly used discrete Binomial random variable counts the number of successes in $n$ independent random trials when the probability of a single success is $p$. Here is the Binomial probability $f$ of obtaining $k$ successes out of $n$ trials:

$$f(k \mid n, p) = \binom{n}{k} p^k (1-p)^{n-k}.$$

Example: What is the probability distribution for the number of successes when shooting 3 basketball free throws if you are an 80% free throw shooter?

Answer:

    Outcome    Probability
    0          0.008
    1          0.096
    2          0.384
    3          0.512

A continuous random variable is defined by its probability density function. To find the probability that $a \le x \le b$ for the continuous random variable $x$, find the area under its probability density $p$:

$$P(a \le x \le b) = \int_a^b p(x)\,dx.$$

Note that for any single value $a$, $P(x = a) = 0$ for a continuous random variable $x$.
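The free-throw table above can be reproduced directly from the Binomial formula. A minimal Python sketch (the helper name `binom_pmf` is ours, not from the notes):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability f(k | n, p) of k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Free-throw example: n = 3 shots, p = 0.8 chance of making each one
for k in range(4):
    print(k, round(binom_pmf(k, 3, 0.8), 3))  # 0.008, 0.096, 0.384, 0.512
```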

The most popular continuous distribution is the normal distribution, which is discussed in Section 5.

2 Expected Value

The expected value of a random variable is the weighted average of its possible values, where each value is weighted by the probability that the outcome occurs. If a discrete random variable $x$ has outcomes $x_1, x_2, \ldots, x_n$ with probabilities $p_1, p_2, \ldots, p_n$, respectively, the expected value of $x$ is

$$E(x) = \sum_{i=1}^n x_i p_i.$$

If a continuous random variable has the probability density $p$, its expected value is defined by

$$E(x) = \int_{-\infty}^{\infty} x\,p(x)\,dx.$$

In the justifications of the properties of random variables later in this section, we assume continuous random variables. The justifications for discrete random variables are obtained by replacing the integrals with summations. For the remainder of this section, the letters $x$ and $y$ represent random variables and the letter $c$ represents a constant.

Property 1: $E(x + y) = E(x) + E(y)$.

Let $p(x)$ and $q(y)$ be the probability density functions of the random variables $x$ and $y$, respectively. Also let $r(x, y)$ be the joint density of $x$ and $y$ considered together. We have

$$p(x) = \int_{-\infty}^{\infty} r(x, y)\,dy \qquad \text{and} \qquad q(y) = \int_{-\infty}^{\infty} r(x, y)\,dx.$$

$p(x)$ and $q(y)$ are called the marginal densities of $r(x, y)$.
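The discrete definition of $E(x)$ is a one-line sum. A short sketch reusing the free-throw distribution from Section 1 (the helper name `expected_value` is ours):

```python
def expected_value(outcomes, probs):
    """E(x) = sum over i of x_i * p_i for a discrete random variable."""
    return sum(x * p for x, p in zip(outcomes, probs))

# Free-throw distribution from Section 1; note E(x) matches n*p = 3 * 0.8
ev = expected_value([0, 1, 2, 3], [0.008, 0.096, 0.384, 0.512])
print(ev)  # 2.4 (up to floating-point rounding)
```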

Then

$$\begin{aligned}
E(x + y) &= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x + y)\,r(x, y)\,dy\,dx \\
&= \int_{-\infty}^{\infty} x \left[\int_{-\infty}^{\infty} r(x, y)\,dy\right] dx + \int_{-\infty}^{\infty} y \left[\int_{-\infty}^{\infty} r(x, y)\,dx\right] dy \\
&= \int_{-\infty}^{\infty} x\,p(x)\,dx + \int_{-\infty}^{\infty} y\,q(y)\,dy \\
&= E(x) + E(y).
\end{aligned}$$

Property 2: $E(c) = c$.

$c$ can be thought of as a discrete random variable that takes on the value $c$ with probability 1. $E(c)$ is then simply the value $c$ times 1.

Property 3: $E(cx) = c\,E(x)$.

Let $p(x)$ be the probability density function of the random variable $x$. Then

$$E(cx) = \int_{-\infty}^{\infty} cx\,p(x)\,dx = c \int_{-\infty}^{\infty} x\,p(x)\,dx = c\,E(x).$$

3 Independence

Two random variables $x$ and $y$ are independent if their joint density $r(x, y)$ factors into two one-dimensional densities: $r(x, y) = p(x)\,q(y)$.

Property 4: If $x$ and $y$ are independent, $E(xy) = E(x)\,E(y)$.
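Property 4 can be checked exactly on a small discrete example, using the factored joint density $r(x, y) = p(x)\,q(y)$ (the two distributions below are made up for illustration):

```python
# Independent discrete random variables: outcomes paired with probabilities
x_dist = [(1, 0.5), (2, 0.5)]
y_dist = [(0, 0.25), (10, 0.75)]

E_x = sum(x * p for x, p in x_dist)
E_y = sum(y * q for y, q in y_dist)
# Joint probability of (x, y) is p(x) * q(y) because x and y are independent
E_xy = sum(x * y * p * q for x, p in x_dist for y, q in y_dist)
print(E_xy, E_x * E_y)  # both 11.25
```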

$$\begin{aligned}
E(xy) &= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} xy\,r(x, y)\,dy\,dx = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} xy\,p(x)\,q(y)\,dy\,dx \\
&= \int_{-\infty}^{\infty} x\,p(x) \left[\int_{-\infty}^{\infty} y\,q(y)\,dy\right] dx = \left[\int_{-\infty}^{\infty} x\,p(x)\,dx\right] \left[\int_{-\infty}^{\infty} y\,q(y)\,dy\right] \\
&= E(x)\,E(y).
\end{aligned}$$

4 Variance

The variance of a random variable $x$ is defined as

$$\mathrm{Var}(x) = E\big((x - E(x))^2\big).$$

Property 5: $\mathrm{Var}(x) = E(x^2) - E(x)^2$.

$$\mathrm{Var}(x) = E\big((x - E(x))^2\big) = E\big(x^2 - 2x\,E(x) + E(x)^2\big) = E(x^2) - 2\,E(x)\,E(x) + E(x)^2 = E(x^2) - E(x)^2.$$

Property 6: If $x$ and $y$ are independent, $\mathrm{Var}(x + y) = \mathrm{Var}(x) + \mathrm{Var}(y)$.

$$\begin{aligned}
\mathrm{Var}(x + y) &= E\big((x + y)^2\big) - E(x + y)^2 \\
&= E(x^2 + 2xy + y^2) - \big(E(x) + E(y)\big)^2 \\
&= E(x^2) + 2\,E(xy) + E(y^2) - \big(E(x)^2 + 2\,E(x)\,E(y) + E(y)^2\big) \\
&= E(x^2) + 2\,E(x)\,E(y) + E(y^2) - E(x)^2 - 2\,E(x)\,E(y) - E(y)^2 \quad \text{(by Property 4)} \\
&= E(x^2) - E(x)^2 + E(y^2) - E(y)^2 \\
&= \mathrm{Var}(x) + \mathrm{Var}(y).
\end{aligned}$$

Property 7: $\mathrm{Var}(cx) = c^2\,\mathrm{Var}(x)$.
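Properties 5 and 6 can likewise be verified exactly on small discrete distributions (the values below are illustrative only):

```python
def moments(dist):
    """Return (E(x), E(x^2)) for a discrete distribution [(outcome, prob), ...]."""
    ex = sum(x * p for x, p in dist)
    ex2 = sum(x * x * p for x, p in dist)
    return ex, ex2

x_dist = [(0, 0.5), (2, 0.5)]
y_dist = [(1, 0.25), (5, 0.75)]

ex, ex2 = moments(x_dist)
ey, ey2 = moments(y_dist)
var_x = ex2 - ex**2          # Property 5
var_y = ey2 - ey**2

# Distribution of x + y when x, y are independent: joint prob is the product
sum_dist = [(x + y, p * q) for x, p in x_dist for y, q in y_dist]
es, es2 = moments(sum_dist)
var_sum = es2 - es**2        # equals var_x + var_y by Property 6
```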

$$\mathrm{Var}(cx) = E\big((cx)^2\big) - E(cx)^2 = E(c^2 x^2) - \big(c\,E(x)\big)^2 = c^2 E(x^2) - c^2 E(x)^2 = c^2 \big(E(x^2) - E(x)^2\big) = c^2\,\mathrm{Var}(x).$$

5 The Normal Density

The normal distribution with mean $\mu$ and variance $\sigma^2$ is defined by its density

$$\varphi(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}. \tag{1}$$

The expression

$$x \sim N(\mu, \sigma^2) \tag{2}$$

means that the random variable $x$ has a normal distribution with mean $\mu$ and variance $\sigma^2$. Ask the instructor for references if you are interested in a justification that (1) implies (2). A modern justification that $E(x) = \mu$ and $\mathrm{Var}(x) = \sigma^2$ for a normal random variable $x$ uses the moment generating function, which is defined as $M_x(t) = E(e^{tx})$. For a $N(\mu, \sigma^2)$ random variable,

$$M_x(t) = e^{t\mu + \frac{1}{2}\sigma^2 t^2}.$$

6 The Unbiased Property of the Sample Mean

A statistic $t$ is unbiased for a parameter $\theta$ if $E(t) = \theta$.

Property 8: If $x_i \sim N(\mu, \sigma^2)$ for each observation $x_i$ in the random sample $x_1, \ldots, x_n$, then $\bar{x}$ is unbiased for $\mu$.

In this case $E(x_i) = \mu$ and $\mathrm{Var}(x_i) = \sigma^2$. Furthermore,

$$E(\bar{x}) = E\left(\frac{1}{n}\sum_{i=1}^n x_i\right) = \frac{1}{n}\,E\left(\sum_{i=1}^n x_i\right) = \frac{1}{n}\sum_{i=1}^n E(x_i) = \frac{1}{n}\sum_{i=1}^n \mu = \frac{1}{n}\,n\mu = \mu.$$
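A simulation sketch of Property 8: draw many samples, average their sample means, and check that the result sits near $\mu$ (the parameters and sample sizes below are arbitrary choices, not from the notes):

```python
import random
from math import sqrt, pi, exp

def normal_density(x, mu, sigma):
    """The normal density phi(x) from equation (1)."""
    return exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# Property 8 check: the average of many sample means should be close to mu
random.seed(0)
mu, sigma, n = 10.0, 2.0, 5
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
         for _ in range(20000)]
avg_of_means = sum(means) / len(means)
print(avg_of_means)  # close to mu = 10
```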

Property 9: If the observations in the random sample $x_1, x_2, \ldots, x_n$ are independent and $x_i \sim N(\mu, \sigma^2)$ for each observation $x_i$, then

$$\mathrm{Var}(\bar{x}) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^n x_i\right) = \frac{1}{n^2}\,\mathrm{Var}\left(\sum_{i=1}^n x_i\right) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(x_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.$$

7 The Unbiased Property of the Sample Variance

The sample standard deviation $s$ is defined as

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2}.$$

The $n - 1$ in the denominator is chosen so that $s^2$ will be unbiased for $\sigma^2$, as we see in this property:

Property 10: $s^2$ is unbiased for $\sigma^2$.

Assume that we have a random sample $x_1, \ldots, x_n$ in which all observations are independent, with $E(x_i) = \mu$ and $\mathrm{Var}(x_i) = \sigma^2$ for all observations. Since $\mathrm{Var}(x_i) = E(x_i^2) - E(x_i)^2$ by Property 5, $\sigma^2 = E(x_i^2) - \mu^2$, so $E(x_i^2) = \sigma^2 + \mu^2$. Also, if $i \ne j$, using the fact that $x_i$ and $x_j$ are independent, by Property 4, $E(x_i x_j) = E(x_i)\,E(x_j) = \mu \cdot \mu = \mu^2$.
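Property 9 can be checked the same way: simulate many sample means and compare their empirical variance with $\sigma^2 / n$ (parameters below are arbitrary, chosen so the predicted value is 1):

```python
import random

# Property 9 check: with sigma = 3 and n = 9, Var(xbar) should be 9/9 = 1
random.seed(1)
mu, sigma, n, trials = 0.0, 3.0, 9, 20000
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
         for _ in range(trials)]
m = sum(means) / trials
var_of_mean = sum((x - m)**2 for x in means) / trials
print(var_of_mean)  # close to sigma**2 / n = 1.0
```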

We then have, writing $\mathrm{SSE} = \sum_{i=1}^n (x_i - \bar{x})^2$,

$$\begin{aligned}
E(\mathrm{SSE}) &= E\left(\sum_{i=1}^n (x_i - \bar{x})^2\right) \\
&= E\left(\sum_{i=1}^n \left(x_i^2 - \frac{2}{n}\, x_i \sum_{j=1}^n x_j + \frac{1}{n^2}\sum_{j=1}^n \sum_{k=1}^n x_j x_k\right)\right) \\
&= \sum_{i=1}^n E(x_i^2) - \frac{2}{n}\sum_{i=1}^n \sum_{j=1}^n E(x_i x_j) + \frac{n}{n^2}\sum_{j=1}^n \sum_{k=1}^n E(x_j x_k) \\
&= \sum_{i=1}^n E(x_i^2) - \frac{1}{n}\sum_{i=1}^n \sum_{j=1}^n E(x_i x_j) \\
&= n(\sigma^2 + \mu^2) - \frac{1}{n}\big[n(\sigma^2 + \mu^2) + n(n-1)\mu^2\big] \\
&= n\sigma^2 + n\mu^2 - \big[\sigma^2 + \mu^2 + (n-1)\mu^2\big] \\
&= (n-1)\sigma^2.
\end{aligned}$$

(The double sum $\sum_i \sum_j E(x_i x_j)$ has $n$ diagonal terms equal to $\sigma^2 + \mu^2$ and $n(n-1)$ off-diagonal terms equal to $\mu^2$.)

Therefore,

$$E(s^2) = E\left(\frac{\mathrm{SSE}}{n-1}\right) = \frac{1}{n-1}\,E(\mathrm{SSE}) = \frac{1}{n-1}\,(n-1)\sigma^2 = \sigma^2.$$

8 Random Vectors

Property 11: $E(v + w) = E(v) + E(w)$.
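Property 10 is easy to see in simulation: dividing the sum of squared deviations by $n - 1$ averages out to $\sigma^2$, while dividing by $n$ systematically undershoots (parameters below are arbitrary):

```python
import random

def sample_var(xs, ddof):
    """Sum of squared deviations about the sample mean, divided by (len - ddof)."""
    m = sum(xs) / len(xs)
    return sum((x - m)**2 for x in xs) / (len(xs) - ddof)

random.seed(2)
mu, sigma, n, trials = 5.0, 2.0, 6, 20000
biased, unbiased = [], []
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    biased.append(sample_var(xs, 0))    # divide by n
    unbiased.append(sample_var(xs, 1))  # divide by n - 1, as in Property 10

print(sum(unbiased) / trials)  # close to sigma**2 = 4
print(sum(biased) / trials)    # close to (n - 1)/n * sigma**2 = 10/3
```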

$$E(v + w) = E\begin{pmatrix} v_1 + w_1 \\ \vdots \\ v_n + w_n \end{pmatrix} = \begin{pmatrix} E(v_1 + w_1) \\ \vdots \\ E(v_n + w_n) \end{pmatrix} = \begin{pmatrix} E(v_1) + E(w_1) \\ \vdots \\ E(v_n) + E(w_n) \end{pmatrix} = E(v) + E(w).$$

Property 12: If $b$ is a constant vector, $E(b) = b$.

$$E(b) = E\begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} = \begin{pmatrix} E(b_1) \\ \vdots \\ E(b_n) \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} = b.$$

Property 13: $E(Av) = A\,E(v)$.

Let $A$ be an $m \times n$ matrix and $v$ an $n \times 1$ random vector. Then

$$E(Av) = E\begin{pmatrix} \sum_{i=1}^n a_{1i} v_i \\ \vdots \\ \sum_{i=1}^n a_{mi} v_i \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^n a_{1i}\,E(v_i) \\ \vdots \\ \sum_{i=1}^n a_{mi}\,E(v_i) \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} E(v_1) \\ \vdots \\ E(v_n) \end{pmatrix} = A\,E(v).$$

Property 14: $\mathrm{Cov}(v + b) = \mathrm{Cov}(v)$.

$$\mathrm{Cov}(v + b) = \begin{pmatrix} \mathrm{Var}(v_1 + b_1) & \cdots & \mathrm{Cov}(v_1 + b_1,\, v_n + b_n) \\ \vdots & & \vdots \\ \mathrm{Cov}(v_n + b_n,\, v_1 + b_1) & \cdots & \mathrm{Var}(v_n + b_n) \end{pmatrix} = \begin{pmatrix} \mathrm{Var}(v_1) & \cdots & \mathrm{Cov}(v_1, v_n) \\ \vdots & & \vdots \\ \mathrm{Cov}(v_n, v_1) & \cdots & \mathrm{Var}(v_n) \end{pmatrix} = \mathrm{Cov}(v),$$

since adding a constant changes neither variances nor covariances.

Property 15: $\mathrm{Cov}(Av) = A\,\mathrm{Cov}(v)\,A^T$.
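For Property 13, note that the identity also holds exactly for empirical means: averaging the transformed observations gives $A$ times the average observation. A small pure-Python check (the data and matrix are made up for illustration):

```python
def mat_vec(A, v):
    """Multiply matrix A (a list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Hypothetical observations of a 2-dimensional random vector v
samples = [[1.0, 4.0], [2.0, 6.0], [3.0, 8.0]]
A = [[1.0, 2.0], [3.0, -1.0]]

n = len(samples)
mean_v = [sum(s[j] for s in samples) / n for j in range(2)]       # E(v)
transformed = [mat_vec(A, s) for s in samples]                    # Av, per sample
mean_Av = [sum(t[j] for t in transformed) / n for j in range(2)]  # E(Av)
print(mean_Av, mat_vec(A, mean_v))  # identical: [14.0, 0.0]
```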

Let $\mathrm{Cov}(v_i, v_j) = \sigma_{ij}$ and let $A$ be an $n \times n$ matrix, so that $Av$ has entries $\sum_{i=1}^n a_{1i} v_i, \ldots, \sum_{i=1}^n a_{ni} v_i$. The $(k, \ell)$ entry of $\mathrm{Cov}(Av)$ is

$$\mathrm{Cov}\left(\sum_{i=1}^n a_{ki} v_i,\; \sum_{j=1}^n a_{\ell j} v_j\right) = \sum_{i=1}^n \sum_{j=1}^n a_{ki}\,a_{\ell j}\,\sigma_{ij},$$

which is exactly the $(k, \ell)$ entry of the product

$$\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} \sigma_{11} & \cdots & \sigma_{1n} \\ \vdots & & \vdots \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{pmatrix} \begin{pmatrix} a_{11} & \cdots & a_{n1} \\ \vdots & & \vdots \\ a_{1n} & \cdots & a_{nn} \end{pmatrix} = A\,\mathrm{Cov}(v)\,A^T.$$

Therefore $\mathrm{Cov}(Av) = A\,\mathrm{Cov}(v)\,A^T$.
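Property 15 holds exactly for empirical covariance matrices as well, which gives a quick numeric check (the helper names and data below are ours, not from the notes):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def cov(data):
    """Empirical covariance matrix (dividing by n) of the rows of data."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    return [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in data) / n
             for j in range(d)] for i in range(d)]

# Hypothetical observations of a 2-dimensional random vector v (one per row)
data = [[1.0, 2.0], [3.0, 5.0], [2.0, 0.0], [4.0, 7.0]]
A = [[2.0, 1.0], [0.0, 3.0]]

# Transform each observation: w = A v
transformed = [[sum(a * x for a, x in zip(row_A, obs)) for row_A in A]
               for obs in data]

lhs = cov(transformed)                             # Cov(Av)
rhs = matmul(matmul(A, cov(data)), transpose(A))   # A Cov(v) A^T
```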