Chapter 4 Multiple Random Variables

Review of the previous lecture: theorems and examples on how to obtain the pmf (pdf) of $U = g_1(X, Y)$ and $V = g_2(X, Y)$.

Chapter 4.3 Bivariate Transformations

Continuous case: Consider a continuous bivariate random vector $(X, Y)$ with joint pdf $f_{X,Y}(x, y)$, and define $U = g_1(X, Y)$ and $V = g_2(X, Y)$. Let $\mathcal{A} = \{(x, y) : f_{X,Y}(x, y) > 0\}$ and $\mathcal{B} = \{(u, v) : u = g_1(x, y) \text{ and } v = g_2(x, y) \text{ for some } (x, y) \in \mathcal{A}\}$. Assume that $(u, v) = (g_1(x, y), g_2(x, y))$ is a one-to-one transformation from $\mathcal{A}$ onto $\mathcal{B}$, and let $x = h_1(u, v)$ and $y = h_2(u, v)$ be the inverse transformations. The Jacobian $J$ of the transformation is defined as the determinant of the matrix of partial derivatives:
$$
J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[4pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix}
  = \frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial x}{\partial v}\frac{\partial y}{\partial u}.
$$
Therefore the joint pdf of $U$ and $V$ is given by
$$
f_{U,V}(u, v) = f_{X,Y}\big(h_1(u, v),\, h_2(u, v)\big)\,|J|,
$$
where $|J|$ is the absolute value of the Jacobian.
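A minimal symbolic sketch of the Jacobian computation (assuming sympy is available), applied to the inverse transformation $x = h_1(u,v) = v$, $y = h_2(u,v) = u/v$ that is used in Example 4.3.3 below:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)

# Inverse transformation for U = XY, V = X (see Example 4.3.3 below):
x = v        # x = h1(u, v)
y = u / v    # y = h2(u, v)

# Jacobian: determinant of the matrix of partials of (x, y) w.r.t. (u, v)
J = sp.Matrix([[sp.diff(x, u), sp.diff(x, v)],
               [sp.diff(y, u), sp.diff(y, v)]]).det()
print(sp.simplify(J))  # prints -1/v, so |J| = 1/v
```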

Example 4.3.3 (Distribution of the product of two independent beta variables): Let $X \sim \mathrm{beta}(\alpha, \beta)$ and $Y \sim \mathrm{beta}(\alpha + \beta, \gamma)$ be independent. Then the joint pdf of $(X, Y)$ is
$$
f_{X,Y}(x, y) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha - 1}(1 - x)^{\beta - 1}
  \,\frac{\Gamma(\alpha + \beta + \gamma)}{\Gamma(\alpha + \beta)\Gamma(\gamma)}\, y^{\alpha + \beta - 1}(1 - y)^{\gamma - 1},
  \qquad 0 < x < 1,\ 0 < y < 1.
$$
Consider $U = XY$ and $V = X$; this is a one-to-one transformation from $\{(x, y) : 0 < x < 1,\ 0 < y < 1\}$ onto $\{(u, v) : 0 < u < v < 1\}$. In addition, $X = V$, $Y = U/V$, and $|J| = 1/v$, therefore
$$
f_{U,V}(u, v) = f_{X,Y}(v, u/v)\,\frac{1}{v}
  = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, v^{\alpha - 1}(1 - v)^{\beta - 1}
    \,\frac{\Gamma(\alpha + \beta + \gamma)}{\Gamma(\alpha + \beta)\Gamma(\gamma)}\, (u/v)^{\alpha + \beta - 1}(1 - u/v)^{\gamma - 1}\,\frac{1}{v},
$$
and
$$
f_U(u) = \int_u^1 f_{U,V}(u, v)\,dv
  = \frac{\Gamma(\alpha + \beta + \gamma)}{\Gamma(\alpha)\Gamma(\beta)\Gamma(\gamma)}\, u^{\alpha + \beta - 1}
    \int_u^1 v^{-(\beta + \gamma)}(1 - v)^{\beta - 1}(v - u)^{\gamma - 1}\,dv
  = \frac{\Gamma(\alpha + \beta + \gamma)}{\Gamma(\alpha)\Gamma(\beta + \gamma)}\, u^{\alpha - 1}(1 - u)^{\beta + \gamma - 1},
  \qquad 0 < u < 1,
$$
that is, $U = XY \sim \mathrm{beta}(\alpha, \beta + \gamma)$.

Example 4.3.4 (Sum and difference of two independent normal random variables): Let $X \sim \mathrm{n}(\alpha, \sigma^2)$ and $Y \sim \mathrm{n}(\beta, \sigma^2)$ be independent. Consider $U = X + Y$ and $V = X - Y$; then $X = (U + V)/2$, $Y = (U - V)/2$, and $|J| = 1/2$. Therefore

$$
f_{U,V}(u, v) = \frac{1}{2}\, f_{X,Y}\!\left(\frac{u + v}{2}, \frac{u - v}{2}\right)
  = \frac{1}{2}\,\frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{\big((u + v)/2 - \alpha\big)^2 + \big((u - v)/2 - \beta\big)^2}{2\sigma^2}\right)
$$
$$
  = \frac{1}{4\pi\sigma^2}\exp\!\left(-\frac{u^2 - 2(\alpha + \beta)u + v^2 - 2(\alpha - \beta)v + 2\alpha^2 + 2\beta^2}{4\sigma^2}\right)
$$
$$
  = \frac{1}{\sqrt{2\pi}\,\sqrt{2}\,\sigma}\exp\!\left(-\frac{\big(u - (\alpha + \beta)\big)^2}{4\sigma^2}\right)
    \cdot\frac{1}{\sqrt{2\pi}\,\sqrt{2}\,\sigma}\exp\!\left(-\frac{\big(v - (\alpha - \beta)\big)^2}{4\sigma^2}\right),
$$
so $U$ and $V$ are independent, with $U \sim \mathrm{n}(\alpha + \beta, 2\sigma^2)$ and $V \sim \mathrm{n}(\alpha - \beta, 2\sigma^2)$.

Theorem 4.3.5: Let $X$ and $Y$ be independent random variables. Let $g(x)$ be a function only of $x$ and $h(y)$ be a function only of $y$. Then the random variables $U = g(X)$ and $V = h(Y)$ are independent.

Proof: We only prove it for the continuous case. For any $u \in \mathbb{R}$ and $v \in \mathbb{R}$, define $A_u = \{x : g(x) \le u\}$ and $B_v = \{y : h(y) \le v\}$. Then the joint cdf of $(U, V)$ is
$$
F_{U,V}(u, v) = P(U \le u, V \le v) = P(X \in A_u, Y \in B_v) = P(X \in A_u)\,P(Y \in B_v).
$$
The joint pdf of $(U, V)$ is
$$
f_{U,V}(u, v) = \frac{\partial^2}{\partial u\,\partial v} F_{U,V}(u, v)
  = \left(\frac{d}{du} P(X \in A_u)\right)\left(\frac{d}{dv} P(Y \in B_v)\right),
$$
which factors into a function of $u$ alone times a function of $v$ alone, so $U$ and $V$ are independent.

Question: Why can't we use the transformation technique, i.e.
$$
f_{U,V}(u, v) = f_{X,Y}\big(g^{-1}(u), h^{-1}(v)\big)\,|J|,
$$
to prove this? (Hint: $g$ and $h$ need not be one-to-one, so $g^{-1}$ and $h^{-1}$ may not exist.)
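As an informal check of Example 4.3.3, the following sketch (assuming numpy and scipy are available; the parameter values and sample size are arbitrary) compares the simulated distribution of $U = XY$ with $\mathrm{beta}(\alpha, \beta + \gamma)$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, beta, gamma = 2.0, 3.0, 1.5          # arbitrary illustrative values
n = 200_000

x = rng.beta(alpha, beta, size=n)           # X ~ beta(alpha, beta)
y = rng.beta(alpha + beta, gamma, size=n)   # Y ~ beta(alpha+beta, gamma), independent of X
u = x * y

# Kolmogorov-Smirnov comparison with the claimed beta(alpha, beta+gamma) law;
# a KS statistic near 0 is consistent with Example 4.3.3.
print(stats.kstest(u, stats.beta(alpha, beta + gamma).cdf))
```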

Notes:

1. If only one function is of interest, say $U = g_1(X, Y)$, choose a convenient $V = g_2(X, Y)$ so that $(U, V)$ is a one-to-one transformation from $\mathcal{A}$ onto $\mathcal{B}$, and obtain the joint pdf $f_{U,V}(u, v)$ as above. Finally, one can obtain the marginal of $U$ by integrating over all values of $v$.

2. What if the transformation is not one-to-one? We use a generalized version of Theorem 2.1.8, extended from univariate to bivariate many-to-one functions: find a partition $A_0, A_1, \ldots, A_n$ of $\mathcal{A}$ such that $P\big((X, Y) \in A_0\big) = 0$ and the transformation $u = g_1(x, y)$, $v = g_2(x, y)$ is one-to-one from each $A_i$ ($i = 1, \ldots, n$) onto $\mathcal{B}$, with inverses $x = h_{1i}(u, v)$, $y = h_{2i}(u, v)$ and Jacobians $J_i$. Then the joint pdf of $U$ and $V$ is given by
$$
f_{U,V}(u, v) = \sum_{i=1}^n f_{X,Y}\big(h_{1i}(u, v),\, h_{2i}(u, v)\big)\,|J_i|.
$$

3. Even when the transformation is not one-to-one, we can always use the method in the proof of Theorem 4.3.5 to calculate the joint cdf and then the joint pdf of $U$ and $V$.

Example 4.3.6: Show that the distribution of the ratio of two independent standard normal variables is a Cauchy random variable. That is, if $X \sim \mathrm{n}(0, 1)$ and $Y \sim \mathrm{n}(0, 1)$ are independent, then $U = X/Y \sim \mathrm{Cauchy}(0, 1)$.

Solution 1: Let $V = |Y|$; the transformation is two-to-one, so take $A_1 = \{(x, y) : y > 0\}$, $A_2 = \{(x, y) : y < 0\}$, and $A_0 = \{(x, y) : y = 0\}$, with $P(A_0) = 0$. The joint pdf is
$$
f_{X,Y}(x, y) = \frac{1}{2\pi}\exp\!\left(-\frac{x^2 + y^2}{2}\right).
$$
On $A_1$ the inverse is $x = uv$, $y = v$; on $A_2$ it is $x = -uv$, $y = -v$; in both cases $|J| = v$. Therefore
$$
f_{U,V}(u, v) = f_{X,Y}(uv, v)\,v + f_{X,Y}(-uv, -v)\,v
  = \frac{v}{\pi}\exp\!\left(-\frac{(u^2 + 1)v^2}{2}\right),
  \qquad -\infty < u < \infty,\ 0 < v < \infty,
$$
and
$$
f_U(u) = \int_0^\infty \frac{v}{\pi}\exp\!\left(-\frac{(u^2 + 1)v^2}{2}\right) dv
  = \left.-\frac{1}{\pi(1 + u^2)}\exp\!\left(-\frac{(u^2 + 1)v^2}{2}\right)\right|_{v=0}^{\infty}
  = \frac{1}{\pi(1 + u^2)},
  \qquad -\infty < u < \infty.
$$
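A quick simulation check of Example 4.3.6 (again a sketch assuming numpy and scipy; the sample size is arbitrary): the ratio of two independent standard normals should be indistinguishable from a standard Cauchy sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000
x = rng.standard_normal(n)   # X ~ n(0, 1)
y = rng.standard_normal(n)   # Y ~ n(0, 1), independent of X
u = x / y                    # ratio of independent standard normals

# Kolmogorov-Smirnov comparison with the standard Cauchy cdf;
# a KS statistic near 0 is consistent with U ~ Cauchy(0, 1).
print(stats.kstest(u, stats.cauchy.cdf))
```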

Solution 2: For $u < 0$ we have
$$
F_U(u) = P(U \le u) = P(X/Y \le u)
  = \int_{-\infty}^0 f_X(x)\int_0^{x/u} f_Y(y)\,dy\,dx + \int_0^\infty f_X(x)\int_{x/u}^0 f_Y(y)\,dy\,dx.
$$
Taking the derivative with respect to $u$, we have
$$
f_U(u) = \int_{-\infty}^0 \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{x^2}{2}\right)\left(-\frac{x}{u^2}\right)\frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{x^2}{2u^2}\right) dx
       + \int_0^\infty \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{x^2}{2}\right)\frac{x}{u^2}\,\frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{x^2}{2u^2}\right) dx
$$
$$
  = \frac{1}{\pi u^2}\int_0^\infty x\exp\!\left(-\frac{(u^2 + 1)x^2}{2u^2}\right) dx
  = \frac{1}{\pi u^2}\cdot\frac{u^2}{u^2 + 1}
  = \frac{1}{\pi(1 + u^2)}.
$$
Similarly you can get $f_U(u)$ for $u \ge 0$. You also need to verify the conditions under which the integration and differentiation can be exchanged.

Chapter 4.4 Hierarchical Models and Mixture Distributions

Examples:

1. Binomial-Poisson hierarchy: Let $X$ = number survived and $Y$ = number of eggs laid. Then we can use the model $X \mid Y \sim \mathrm{binomial}(Y, p)$ and $Y \sim \mathrm{Poisson}(\lambda)$. In this case $X \sim \mathrm{Poisson}(\lambda p)$ (a simulation sketch follows this list).

2. Poisson-Exponential hierarchy: Let $Y$ = number of eggs laid and $\Lambda$ = variability across different mothers. $Y \mid \Lambda \sim \mathrm{Poisson}(\Lambda)$ and $\Lambda \sim \mathrm{exponential}(\beta)$. In this case $Y \sim$ negative binomial$(r = 1,\ p = 1/(1 + \beta))$.

3. Binomial-Poisson-Exponential hierarchy (three-stage hierarchy): Let $X$ = number survived, $Y$ = number of eggs laid, and $\Lambda$ = variability across different mothers. $X \mid Y \sim \mathrm{binomial}(Y, p)$, $Y \mid \Lambda \sim \mathrm{Poisson}(\Lambda)$, and $\Lambda \sim \mathrm{exponential}(\beta)$. Equivalently, we can look at this as a two-stage hierarchy where $X \mid Y \sim \mathrm{binomial}(Y, p)$ and $Y \sim$ negative binomial$(r = 1,\ p = 1/(1 + \beta))$.

4. Beta-Binomial hierarchy: $X \mid P \sim \mathrm{binomial}(n, P)$ and $P \sim \mathrm{beta}(\alpha, \beta)$.
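A minimal simulation sketch of hierarchy 1 (assuming numpy and scipy; $\lambda$, $p$, and the sample size are arbitrary illustrative values), checking the claim that the marginal distribution of $X$ is Poisson$(\lambda p)$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam, p, n = 4.0, 0.3, 200_000        # arbitrary illustrative values

y = rng.poisson(lam, size=n)         # Y ~ Poisson(lambda): eggs laid
x = rng.binomial(y, p)               # X | Y ~ binomial(Y, p): number surviving

# Compare the simulated pmf of X with Poisson(lambda * p) at small counts.
k = np.arange(6)
empirical = np.array([(x == v).mean() for v in k])
print(np.round(empirical, 4))
print(np.round(stats.poisson(lam * p).pmf(k), 4))
```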

Definition 4.4.4: A random variable $X$ is said to have a mixture distribution if the distribution of $X$ depends on a quantity that also has a distribution.

IMPORTANT RESULTS: These results are useful when one is interested only in the expectation and variance of a random variable.

Theorem 4.4.3: If $X$ and $Y$ are any two random variables, then
$$
EX = E\big(E(X \mid Y)\big),
$$
provided that the expectations exist.

Proof: We prove this theorem in the continuous case. Let $f_{X,Y}(x, y)$ denote the joint pdf of $X$ and $Y$. Then we have
$$
EX = \iint x\, f_{X,Y}(x, y)\,dx\,dy
   = \int\!\left[\int x\, f(x \mid y)\,dx\right] f_Y(y)\,dy
   = \int E(X \mid y)\, f_Y(y)\,dy
   = E\big(E(X \mid Y)\big).
$$
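As a numerical illustration of Theorem 4.4.3 (a sketch assuming numpy; $\beta$ and the sample size are arbitrary), take the Poisson-Exponential hierarchy from example 2 above: since $E(Y \mid \Lambda) = \Lambda$, the theorem gives $EY = E(E(Y \mid \Lambda)) = E\Lambda = \beta$.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, n = 2.5, 500_000                       # arbitrary illustrative values

lam = rng.exponential(scale=beta, size=n)    # Lambda ~ exponential(beta), mean beta
y = rng.poisson(lam)                         # Y | Lambda ~ Poisson(Lambda)

# Theorem 4.4.3: EY should agree with E(E(Y|Lambda)) = E(Lambda) = beta.
print(y.mean(), lam.mean(), beta)            # all three approximately equal
```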

Theorem 4.4.7: For any two random variables $X$ and $Y$,
$$
\mathrm{Var}\,X = E\big(\mathrm{Var}(X \mid Y)\big) + \mathrm{Var}\big(E(X \mid Y)\big),
$$
provided that the expectations exist.

Proof: First we have
$$
\mathrm{Var}\,X = E(X - EX)^2 = E\big(X - E(X \mid Y) + E(X \mid Y) - EX\big)^2
$$
$$
  = E\big(X - E(X \mid Y)\big)^2 + E\big(E(X \mid Y) - EX\big)^2 + 2E\big([X - E(X \mid Y)][E(X \mid Y) - EX]\big).
$$
In addition,
$$
E\big([X - E(X \mid Y)][E(X \mid Y) - EX]\big) = E\Big( E\big\{[X - E(X \mid Y)][E(X \mid Y) - EX] \,\big|\, Y\big\}\Big)
$$
and
$$
E\big\{[X - E(X \mid Y)][E(X \mid Y) - EX] \,\big|\, Y\big\}
  = \big(E(X \mid Y) - EX\big)\, E\big\{X - E(X \mid Y) \,\big|\, Y\big\}
  = \big(E(X \mid Y) - EX\big)\big(E(X \mid Y) - E(X \mid Y)\big) = 0,
$$
$$
E\big(X - E(X \mid Y)\big)^2 = E\Big( E\big\{[X - E(X \mid Y)]^2 \,\big|\, Y\big\}\Big) = E\big(\mathrm{Var}(X \mid Y)\big),
$$
$$
E\big(E(X \mid Y) - EX\big)^2 = \mathrm{Var}\big(E(X \mid Y)\big).
$$
Finally, we have
$$
\mathrm{Var}\,X = E\big(\mathrm{Var}(X \mid Y)\big) + \mathrm{Var}\big(E(X \mid Y)\big).
$$

Illustrations:

1. Binomial-Poisson hierarchy: $X \mid Y \sim \mathrm{binomial}(Y, p)$ and $Y \sim \mathrm{Poisson}(\lambda)$. (Recall that $X \sim \mathrm{Poisson}(\lambda p)$.) Using Theorems 4.4.3 and 4.4.7, and noting that $E(X \mid Y) = pY$ and $\mathrm{Var}(X \mid Y) = Yp(1 - p)$, we get
$$
EX = E\big(E(X \mid Y)\big) = E(pY) = \lambda p
$$
and
$$
\mathrm{Var}\,X = E\big(\mathrm{Var}(X \mid Y)\big) + \mathrm{Var}\big(E(X \mid Y)\big)
  = E\big(Yp(1 - p)\big) + \mathrm{Var}(pY) = \lambda p(1 - p) + \lambda p^2 = \lambda p.
$$

2. Beta-Binomial hierarchy: $X \mid P \sim \mathrm{binomial}(n, P)$ and $P \sim \mathrm{beta}(\alpha, \beta)$. Find $EX$ and $\mathrm{Var}\,X$ (a simulation check of the variance decomposition is sketched after illustration 3).

3. Noncentral chi-squared distribution with degrees of freedom $p$ and noncentrality parameter $\lambda$:
$$
f(x \mid \lambda, p) = \sum_{k=0}^{\infty} \frac{x^{p/2 + k - 1}\, e^{-x/2}}{\Gamma(p/2 + k)\, 2^{p/2 + k}}\,\frac{\lambda^k e^{-\lambda}}{k!}.
$$
Note that $X \mid K \sim \chi^2_{p + 2K}$ and $K \sim \mathrm{Poisson}(\lambda)$. Therefore
$$
EX = E\big(E(X \mid K)\big) = E(p + 2K) = p + 2\lambda.
$$
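For illustration 2, here is a minimal simulation sketch of Theorem 4.4.7 (assuming numpy; $n$, $\alpha$, $\beta$, and the number of draws are arbitrary): the directly estimated $\mathrm{Var}\,X$ should match $E(\mathrm{Var}(X \mid P)) + \mathrm{Var}(E(X \mid P))$ estimated from the same draws, using $E(X \mid P) = nP$ and $\mathrm{Var}(X \mid P) = nP(1 - P)$.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, alpha, beta = 20, 2.0, 5.0        # arbitrary illustrative values
m = 500_000

p = rng.beta(alpha, beta, size=m)           # P ~ beta(alpha, beta)
x = rng.binomial(n_trials, p)               # X | P ~ binomial(n, P)

var_direct = x.var()                        # Var X estimated directly
# Theorem 4.4.7: E(Var(X|P)) + Var(E(X|P)) should agree with Var X.
var_decomp = (n_trials * p * (1 - p)).mean() + (n_trials * p).var()
print(var_direct, var_decomp)               # approximately equal
```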