Mathematical Statistics 1 Math A 6330


Mathematical Statistics 1 (Math A 6330)
Chapter 2: Transformations and Expectations
Mohamed I. Riffi
Department of Mathematics, Islamic University of Gaza
September 14, 2015

Outline
1. Distributions of Functions of a Random Variable
2. Expected Values

Chapter 2: Transformations and Expectations

Often, if we are able to model a phenomenon in terms of a random variable $X$ with cdf $F_X(x)$, we will also be concerned with the behavior of functions of $X$. In this chapter, we study techniques that allow us to gain information about functions of $X$ that may be of interest, information that can range from very complete (the distributions of those functions) to more vague (the average behavior).

Tractability
Let $X$ be a r.v. and $Y = g(X)$. Then $P(Y \in A) = P(g(X) \in A)$, which depends on $F_X$ and $g$. If $y = g(x)$, then $g : \mathcal{X} \to \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ are the sample spaces of $X$ and $Y$, respectively. The inverse mapping $g^{-1}$, which maps subsets of $\mathcal{Y}$ to subsets of $\mathcal{X}$, is given by
$$g^{-1}(A) = \{x \in \mathcal{X} : g(x) \in A\}.$$
If $A = \{y\}$, then
$$g^{-1}(\{y\}) = g^{-1}(y) = \{x \in \mathcal{X} : g(x) = y\}.$$
If $g^{-1}(y) = \{x\}$, a single point, we simply write $g^{-1}(y) = x$.

Transformations
If $X$ is a random variable with cdf $F_X(x)$, then for a function of $X$, say $g(X)$, what is the cdf of the new random variable $Y = g(X)$? For any set $A$, we have
$$P(Y \in A) = P(g(X) \in A) = P(\{x \in \mathcal{X} : g(x) \in A\}) = P(X \in g^{-1}(A)),$$
where $\mathcal{X}$ is the sample space of $X$.

Remark
If $X$ is a discrete r.v., then $\mathcal{X}$ is countable. In this case the sample space of $Y = g(X)$, that is, $\mathcal{Y} = \{y : y = g(x),\ x \in \mathcal{X}\}$, is also countable. Hence $Y$ is also discrete. The pmf of $Y$ is
$$f_Y(y) = P(Y = y) = \sum_{x \in g^{-1}(y)} P(X = x) = \sum_{x \in g^{-1}(y)} f_X(x)$$
for $y \in \mathcal{Y}$, and $f_Y(y) = 0$ for $y \notin \mathcal{Y}$. In this case, finding the pmf of $Y$ involves simply identifying $g^{-1}(y)$ for each $y \in \mathcal{Y}$ and summing the probabilities, as in the sketch below.
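The following is a minimal sketch (not part of the lecture) of this recipe; `pmf_of_transform` is a hypothetical helper that accumulates $f_X(x)$ over the preimage $g^{-1}(y)$ to build the pmf of $Y = g(X)$.

```python
# Sketch: pmf of Y = g(X) for discrete X, by summing f_X(x) over g^{-1}(y).
from collections import defaultdict

def pmf_of_transform(pmf_x, g):
    """pmf_x: dict mapping x -> P(X = x); g: the transformation."""
    pmf_y = defaultdict(float)
    for x, p in pmf_x.items():
        pmf_y[g(x)] += p          # accumulate probability over the preimage of g(x)
    return dict(pmf_y)

# Example: X uniform on {-2, -1, 0, 1, 2}, Y = X**2.
pmf_x = {x: 0.2 for x in range(-2, 3)}
print(pmf_of_transform(pmf_x, lambda x: x**2))   # {4: 0.4, 1: 0.4, 0: 0.2}
```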

Definition
The set $\mathcal{X} = \{x : f_X(x) > 0\}$ is called the support of $X$.

Definition
A function $g$ is monotone if it is either increasing, that is, $u > v \Rightarrow g(u) > g(v)$, or decreasing, that is, $u < v \Rightarrow g(u) > g(v)$.

Remark
If $g : \mathcal{X} \to \mathcal{Y}$ is monotone, then $g^{-1}$ is single-valued; that is, $g^{-1}(y) = x$ if and only if $g(x) = y$. If $g$ is increasing, then
$$\{x \in \mathcal{X} : g(x) \le y\} = \{x \in \mathcal{X} : g^{-1}(g(x)) \le g^{-1}(y)\} = \{x \in \mathcal{X} : x \le g^{-1}(y)\}.$$
If $g$ is decreasing, then
$$\{x \in \mathcal{X} : g(x) \le y\} = \{x \in \mathcal{X} : g^{-1}(g(x)) \ge g^{-1}(y)\} = \{x \in \mathcal{X} : x \ge g^{-1}(y)\}.$$

Remark
If $g$ is an increasing function and $X$ a continuous r.v. with pdf $f_X(x)$, then
$$F_Y(y) = \int_{\{x \in \mathcal{X} : x \le g^{-1}(y)\}} f_X(x)\,dx = \int_{-\infty}^{g^{-1}(y)} f_X(x)\,dx = F_X(g^{-1}(y)).$$
If $g$ is a decreasing function and $X$ a continuous r.v. with pdf $f_X(x)$, then
$$F_Y(y) = \int_{g^{-1}(y)}^{\infty} f_X(x)\,dx = 1 - F_X(g^{-1}(y)).$$

Example (2.1.1)
A discrete random variable $X$ has a binomial distribution if its pmf is of the form
$$f_X(x) = P(X = x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, \ldots, n,$$
where $n$ is a positive integer and $0 \le p \le 1$. Now, if $Y = n - X$, then $Y$ has a binomial distribution with parameters $n$ and $1 - p$; that is,
$$f_Y(y) = \binom{n}{y} (1-p)^y p^{n-y}, \quad y = 0, 1, \ldots, n.$$
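As a quick numerical check (not in the slides), the identity $P(Y = y) = P(X = n - y)$ can be compared with the binomial$(n, 1-p)$ pmf; the values of $n$ and $p$ below are arbitrary.

```python
# Check of Example 2.1.1: if X ~ binomial(n, p) and Y = n - X,
# then Y ~ binomial(n, 1 - p).
import numpy as np
from scipy.stats import binom

n, p = 10, 0.3
y = np.arange(n + 1)

pmf_y_via_x = binom.pmf(n - y, n, p)      # P(Y = y) = P(X = n - y)
pmf_y_direct = binom.pmf(y, n, 1 - p)     # binomial(n, 1 - p) pmf

print(np.allclose(pmf_y_via_x, pmf_y_direct))   # True
```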

Remark
If $X$ and $Y = g(X)$ are continuous random variables, then
$$F_Y(y) = P(Y \le y) = P(g(X) \le y) = P(\{x \in \mathcal{X} : g(x) \le y\}) = \int_{\{x \in \mathcal{X} : g(x) \le y\}} f_X(x)\,dx.$$

Example (2.1.2)
Suppose $X$ has a uniform distribution on the interval $(0, 2\pi)$; that is,
$$f_X(x) = \begin{cases} 1/(2\pi) & \text{if } 0 < x < 2\pi \\ 0 & \text{otherwise.} \end{cases}$$
Consider $Y = \sin^2 X$. Then
$$F_Y(y) = P(Y \le y) = 2P(X \le x_1) + 2P(x_2 \le X \le \pi),$$
where $x_1$ and $x_2$ are the two solutions to $\sin^2 x = y$, $0 < x < \pi$.
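A short Monte Carlo check (not in the slides): substituting the two solutions $x_1 = \arcsin\sqrt{y}$ and $x_2 = \pi - \arcsin\sqrt{y}$ into the cdf expression gives $F_Y(y) = (2/\pi)\arcsin\sqrt{y}$, which can be compared with simulated values of $Y = \sin^2 X$.

```python
# Monte Carlo check of Example 2.1.2 against the closed form
# F_Y(y) = (2/pi) * arcsin(sqrt(y)), with X ~ uniform(0, 2*pi).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
y_samples = np.sin(x) ** 2

for y in (0.1, 0.5, 0.9):
    empirical = np.mean(y_samples <= y)
    theoretical = (2.0 / np.pi) * np.arcsin(np.sqrt(y))
    print(y, round(empirical, 4), round(theoretical, 4))
```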

Remark Thus, even though this example dealt with a seemingly simple situation, the resulting expression for the cdf of Y was not simple.

Theorem (2.1.3)
Let $X$ have cdf $F_X(x)$, let $Y = g(X)$, and let $\mathcal{X}$ and $\mathcal{Y}$ be the sample spaces of $X$ and $Y$.
a. If $g$ is an increasing function on $\mathcal{X}$, then $F_Y(y) = F_X(g^{-1}(y))$ for $y \in \mathcal{Y}$.
b. If $g$ is a decreasing function on $\mathcal{X}$ and $X$ is a continuous random variable, then $F_Y(y) = 1 - F_X(g^{-1}(y))$ for $y \in \mathcal{Y}$.

Example (2.1.4)
Suppose $X \sim \mathrm{uniform}(0, 1)$ and let $Y = g(X) = -\log X$. Since
$$\frac{d}{dx} g(x) = -\frac{1}{x} < 0 \quad \text{for } 0 < x < 1,$$
$g(x)$ is a decreasing function. Therefore, for $y > 0$,
$$F_Y(y) = 1 - F_X(g^{-1}(y)) = 1 - F_X(e^{-y}) = 1 - e^{-y}.$$
Of course, $F_Y(y) = 0$ for $y \le 0$.
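A minimal simulation sketch (not from the lecture) confirming that $Y = -\log X$ behaves like an exponential(1) random variable:

```python
# Simulation check of Example 2.1.4: if X ~ uniform(0, 1),
# then Y = -log(X) should have cdf 1 - exp(-y).
import numpy as np

rng = np.random.default_rng(1)
y_samples = -np.log(rng.uniform(size=1_000_000))

for y in (0.5, 1.0, 2.0):
    print(y, round(np.mean(y_samples <= y), 4), round(1 - np.exp(-y), 4))
```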

Theorem (2.1.5)
Let $X$ have pdf $f_X(x)$ and let $Y = g(X)$, where $g$ is a monotone function. Suppose that $f_X(x)$ is continuous on $\mathcal{X}$ and that $g^{-1}(y)$ has a continuous derivative on $\mathcal{Y}$. Then
$$f_Y(y) = \begin{cases} f_X(g^{-1}(y)) \left| \dfrac{d}{dy} g^{-1}(y) \right| & y \in \mathcal{Y} \\ 0 & \text{otherwise.} \end{cases}$$

Proof.
$$f_Y(y) = \begin{cases} f_X(g^{-1}(y)) \dfrac{d}{dy} g^{-1}(y) & \text{if } g \text{ is increasing} \\ -f_X(g^{-1}(y)) \dfrac{d}{dy} g^{-1}(y) & \text{if } g \text{ is decreasing.} \end{cases}$$

Example (2.1.6)
Let $f_X(x)$ be the gamma pdf
$$f_X(x) = \frac{1}{(n-1)!\,\beta^n}\, x^{n-1} e^{-x/\beta}, \quad 0 < x < \infty,$$
where $\beta \in \mathbb{R}^+$ and $n \in \mathbb{N}^+$. Let $Y = g(X) = 1/X$. The pdf of $Y$ is
$$f_Y(y) = f_X(g^{-1}(y)) \left| \frac{d}{dy} g^{-1}(y) \right| = \frac{1}{(n-1)!\,\beta^n} \left(\frac{1}{y}\right)^{n-1} e^{-1/(\beta y)}\, \frac{1}{y^2} = \frac{1}{(n-1)!\,\beta^n} \left(\frac{1}{y}\right)^{n+1} e^{-1/(\beta y)},$$
a special case of the inverted gamma pdf.
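A small numerical check (not in the slides) of this calculation: simulate $X$ from the gamma pdf above, set $Y = 1/X$, and compare Monte Carlo probabilities with the integral of the derived pdf (the helper `f_Y` below just codes that formula; $n$ and $\beta$ are arbitrary).

```python
# Check of Example 2.1.6: P(Y <= t) from the derived inverted gamma pdf
# versus a Monte Carlo estimate with Y = 1/X, X ~ gamma(shape=n, scale=beta).
import numpy as np
from math import factorial
from scipy.integrate import quad

n, beta = 3, 2.0

def f_Y(y):
    # pdf derived in Example 2.1.6
    return (1.0 / (factorial(n - 1) * beta**n)) * (1.0 / y) ** (n + 1) * np.exp(-1.0 / (beta * y))

rng = np.random.default_rng(2)
y_samples = 1.0 / rng.gamma(shape=n, scale=beta, size=1_000_000)

for t in (0.1, 0.2, 0.5):
    exact, _ = quad(f_Y, 0.0, t)
    print(t, round(exact, 4), round(np.mean(y_samples <= t), 4))
```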

Example (2.1.7)
Let $X$ be a continuous random variable and let $Y = X^2$. Then for $y > 0$,
$$F_Y(y) = P(Y \le y) = P(X^2 \le y) = P(-\sqrt{y} \le X \le \sqrt{y}) = P(X \le \sqrt{y}) - P(X \le -\sqrt{y}) = F_X(\sqrt{y}) - F_X(-\sqrt{y}).$$
Therefore
$$f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{1}{2\sqrt{y}} \left( f_X(\sqrt{y}) + f_X(-\sqrt{y}) \right).$$

Theorem (2.1.8)
Let $A_0, A_1, \ldots, A_k$ be a partition of $\mathcal{X}$ such that $P(X \in A_0) = 0$ and $f_X(x)$ is continuous on each $A_i$. Suppose there exist functions $g_1(x), \ldots, g_k(x)$, defined on $A_1, \ldots, A_k$, respectively, satisfying
i. $g(x) = g_i(x)$ for $x \in A_i$,
ii. $g_i(x)$ is monotone on $A_i$,
iii. the set $\mathcal{Y} = \{y : y = g_i(x) \text{ for some } x \in A_i\}$ is the same for each $i = 1, \ldots, k$, and
iv. $g_i^{-1}(y)$ has a continuous derivative on $\mathcal{Y}$, for each $i = 1, \ldots, k$.
Then
$$f_Y(y) = \begin{cases} \displaystyle\sum_{i=1}^{k} f_X(g_i^{-1}(y)) \left| \frac{d}{dy} g_i^{-1}(y) \right| & y \in \mathcal{Y} \\ 0 & \text{otherwise.} \end{cases}$$

Example (2.1.9)
Let $X$ have a standard normal distribution,
$$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}, \quad -\infty < x < \infty,$$
and let $Y = g(X) = X^2$. Take
$$A_0 = \{0\}; \quad A_1 = (-\infty, 0),\ g_1(x) = x^2,\ g_1^{-1}(y) = -\sqrt{y}; \quad A_2 = (0, \infty),\ g_2(x) = x^2,\ g_2^{-1}(y) = \sqrt{y}.$$
Then
$$f_Y(y) = \frac{1}{\sqrt{2\pi}} e^{-(-\sqrt{y})^2/2}\, \frac{1}{2\sqrt{y}} + \frac{1}{\sqrt{2\pi}} e^{-(\sqrt{y})^2/2}\, \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi}}\, \frac{1}{\sqrt{y}}\, e^{-y/2}, \quad 0 < y < \infty.$$
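A minimal Monte Carlo sketch (not from the lecture) of this example: simulated values of $Y = X^2$ are compared against probabilities obtained by integrating the density just derived.

```python
# Check of Example 2.1.9: P(Y <= t) from the derived density
# f_Y(y) = exp(-y/2) / sqrt(2*pi*y) versus simulation of Y = X**2.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(3)
y_samples = rng.standard_normal(1_000_000) ** 2

def f_Y(y):
    return np.exp(-y / 2.0) / np.sqrt(2.0 * np.pi * y)

for t in (0.5, 1.0, 2.0):
    exact, _ = quad(f_Y, 0.0, t)
    print(t, round(exact, 4), round(np.mean(y_samples <= t), 4))
```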

Remark
The pdf in Example 2.1.9 is the pdf of a chi squared random variable with 1 degree of freedom.

Theorem (2.1.10)
Let $X$ have continuous cdf $F_X(x)$ and define the random variable $Y$ as $Y = F_X(X)$. Then $Y \sim \mathrm{uniform}(0, 1)$.

Remark
If $F_X$ is strictly increasing, then $F_X^{-1}(y) = x \iff F_X(x) = y$. If $F_X$ is constant on some interval, say $(x_1, x_2)$, then we define
$$F_X^{-1}(y) = \inf\{x : F_X(x) \ge y\}, \quad 0 < y < 1.$$

In the case where $F_X$ is constant on $(x_1, x_2)$, this definition gives $F_X^{-1}(y) = x_1$. Also, $F_X^{-1}(1) = \infty$ if $F_X(x) < 1$ for all $x$, and, for any $F_X$, $F_X^{-1}(0) = -\infty$.

Proof of Theorem 2.1.10.
For $Y = F_X(X)$ we have, for $0 < y < 1$,
$$\begin{aligned}
P(Y \le y) &= P(F_X(X) \le y) \\
&= P(F_X^{-1}[F_X(X)] \le F_X^{-1}[y]) && (F_X^{-1} \text{ is increasing}) \\
&= P(X \le F_X^{-1}(y)) \\
&= F_X(F_X^{-1}(y)) && (\text{definition of } F_X) \\
&= y. && (\text{continuity of } F_X)
\end{aligned}$$
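A minimal simulation sketch (not from the lecture) of Theorem 2.1.10, using an exponential $X$ with scale 2 so that $F_X(x) = 1 - e^{-x/2}$; the probabilities $P(F_X(X) \le t)$ should be close to $t$.

```python
# Simulation check of Theorem 2.1.10: F_X(X) should be uniform(0, 1).
import numpy as np

rng = np.random.default_rng(8)
x = rng.exponential(scale=2.0, size=1_000_000)
u = 1.0 - np.exp(-x / 2.0)               # U = F_X(X)

for t in (0.25, 0.5, 0.75):
    print(t, round(np.mean(u <= t), 4))  # each close to t
```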

Application: generating a random number
If $Y$ is a continuous random variable with cdf $F_Y$, then the random variable $F_Y^{-1}(U)$, where $U \sim \mathrm{uniform}(0, 1)$, has distribution $F_Y$.
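A minimal sketch (not from the lecture) of this idea for the exponential($\lambda$) cdf $F_Y(y) = 1 - e^{-y/\lambda}$, whose inverse is $F_Y^{-1}(u) = -\lambda \log(1-u)$; the value $\lambda = 2$ is arbitrary.

```python
# Inverse transform sampling: apply F_Y^{-1} to U ~ uniform(0, 1)
# to generate exponential(lambda) draws.
import numpy as np

rng = np.random.default_rng(4)
lam = 2.0
u = rng.uniform(size=1_000_000)
y = -lam * np.log(1.0 - u)          # F_Y^{-1}(U)

print(round(y.mean(), 3))           # close to lam = 2.0
print(round(np.mean(y <= 2.0), 3))  # close to 1 - exp(-1) ~ 0.632
```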

Definition (2.2.1)
The expected value or mean of a random variable $g(X)$, denoted by $E\,g(X)$, is
$$E\,g(X) = \begin{cases} \displaystyle\int_{-\infty}^{\infty} g(x) f_X(x)\,dx & \text{if } X \text{ is continuous} \\ \displaystyle\sum_{x \in \mathcal{X}} g(x) f_X(x) & \text{if } X \text{ is discrete,} \end{cases}$$
provided that the integral or sum exists.

Remark
If $E|g(X)| = \infty$, we say that $E\,g(X)$ does not exist.

Example (2.2.2)
Suppose $X$ has an exponential($\lambda$) distribution, that is, its pdf is
$$f_X(x) = \frac{1}{\lambda} e^{-x/\lambda}, \quad 0 \le x < \infty,\ \lambda > 0.$$
Then, integrating by parts,
$$EX = \int_0^{\infty} \frac{x}{\lambda}\, e^{-x/\lambda}\,dx = \lambda.$$

Example (2.2.3)
If $X$ has a binomial distribution, then, as above, its pmf is given by
$$f_X(x) = P(X = x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, \ldots, n,$$
and
$$EX = \sum_{x=0}^{n} x \binom{n}{x} p^x (1-p)^{n-x} = np.$$
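A quick numerical check (not in the slides) of the two expectations above; the parameter values are arbitrary, and the exponential mean is checked by simulation.

```python
# Check of Examples 2.2.2 and 2.2.3.
import numpy as np
from scipy.stats import binom

# Example 2.2.3: EX = sum x * P(X = x) = n * p for X ~ binomial(n, p).
n, p = 12, 0.4
x = np.arange(n + 1)
print(np.sum(x * binom.pmf(x, n, p)), n * p)      # both ~ 4.8

# Example 2.2.2: EX = lambda for X ~ exponential(lambda), by simulation.
lam = 3.0
rng = np.random.default_rng(5)
print(round(rng.exponential(scale=lam, size=1_000_000).mean(), 3))  # ~ 3.0
```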

Example (2.2.4)
A classic example of a random variable whose expected value does not exist is a Cauchy random variable, that is, one with pdf
$$f_X(x) = \frac{1}{\pi}\, \frac{1}{1 + x^2}, \quad -\infty < x < \infty.$$
Here
$$E|X| = \int_{-\infty}^{\infty} \frac{|x|}{\pi}\, \frac{1}{1 + x^2}\,dx = \frac{2}{\pi} \int_0^{\infty} \frac{x}{1 + x^2}\,dx = \frac{1}{\pi} \lim_{M \to \infty} \log(1 + M^2) = \infty,$$
and so $EX$ does not exist.
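A small illustration (not from the lecture) of why this matters in practice: running sample means of Cauchy draws do not settle down as the sample size grows, unlike for a distribution with a finite mean.

```python
# Sample means of Cauchy draws keep fluctuating as the sample grows.
import numpy as np

rng = np.random.default_rng(6)
for size in (10**3, 10**5, 10**7):
    print(size, round(rng.standard_cauchy(size).mean(), 3))
# The printed means jump around rather than converging.
```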

Properties of expectation

Theorem (2.2.5)
Let $X$ be a random variable and let $a$, $b$, and $c$ be constants. Then for any functions $g_1(x)$ and $g_2(x)$ whose expectations exist,
a. $E(a\,g_1(X) + b\,g_2(X) + c) = a\,E g_1(X) + b\,E g_2(X) + c$;
b. if $g_1(x) \ge 0$ for all $x$, then $E g_1(X) \ge 0$;
c. if $g_1(x) \ge g_2(x)$ for all $x$, then $E g_1(X) \ge E g_2(X)$;
d. if $a \le g_1(x) \le b$ for all $x$, then $a \le E g_1(X) \le b$.
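A tiny numerical illustration (not in the slides) of part (a), linearity of expectation, for a discrete $X$ with $g_1(x) = x$ and $g_2(x) = x^2$; the pmf and constants below are arbitrary.

```python
# Linearity of expectation: E(a*g1(X) + b*g2(X) + c) = a*E g1(X) + b*E g2(X) + c.
import numpy as np

x = np.array([0, 1, 2, 3])
f = np.array([0.1, 0.2, 0.3, 0.4])        # pmf of X
a, b, c = 2.0, -1.0, 5.0

lhs = np.sum((a * x + b * x**2 + c) * f)  # expectation of the combination
rhs = a * np.sum(x * f) + b * np.sum(x**2 * f) + c
print(np.isclose(lhs, rhs))               # True
```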

Example (2.2.6)
If $X$ is a random variable, then
$$\min_b E(X - b)^2 = E(X - EX)^2.$$

Example (2.2.7)
Suppose that $X \sim \mathrm{uniform}(0, 1)$ and let $g(X) = -\log X$. Then
$$E\,g(X) = E(-\log X) = \int_0^1 (-\log x)\,dx = \bigl[x - x \log x\bigr]_0^1 = 1.$$
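A final quick check (not in the slides) of Example 2.2.7, consistent with Example 2.1.4, where $-\log X$ was shown to be exponential(1) and hence has mean 1:

```python
# Monte Carlo check of E(-log X) = 1 for X ~ uniform(0, 1).
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(size=1_000_000)
print(round(np.mean(-np.log(x)), 3))   # ~ 1.0
```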