Chapter 12: Bivariate & Conditional Distributions


James B. Ramsey, March 2007.

Introduction

Key relationships between joint, conditional, and marginal distributions.

Joint density:

$$f_J(X_1, X_2) = f_{2|1}(X_2 \mid X_1)\, f_1(X_1) = f_{1|2}(X_1 \mid X_2)\, f_2(X_2)$$

Marginal density:

$$f_i(X_i) = \int f_J(X_1, X_2)\, dX_j, \qquad j \neq i$$

Probability distributions:

$$\Pr(X_1 \le x_0) = \int_{-\infty}^{x_0} f_1(X_1)\, dX_1 = \int_{-\infty}^{x_0} \left[ \int f_J(X_1, X_2)\, dX_2 \right] dX_1$$

Conditional probability distributions:

$$\Pr(X_1 \le x_0 \mid Y = y_0) = \int_{-\infty}^{x_0} f_{1|2}(X_1 \mid Y = y_0)\, dX_1$$

Joint probabilities:

$$\Pr(X_1 \le x_{1,0},\ X_2 \le x_{2,0}) = \int_{-\infty}^{x_{1,0}} \int_{-\infty}^{x_{2,0}} f_J(X_1, X_2)\, dX_2\, dX_1$$

Marginal from conditionals:

$$f_1(X_1) = \int f_J(X_1, X_2)\, dX_2 = \int f_{1|2}(X_1 \mid X_2)\, f_2(X_2)\, dX_2$$

In other words, the marginal distribution is the weighted sum of the conditional distributions, with weights $f_2(X_2)$.
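As a numerical check, here is a minimal sketch of that identity, assuming a standardized bivariate normal with correlation $\rho$, so that $f_{1|2}(X_1 \mid X_2 = x_2)$ is the $N(\rho x_2,\, 1-\rho^2)$ density and $f_2$ is $N(0,1)$; the values of rho and x1 below are illustrative, not from the text. Integrating the conditional density against the weights should recover the standard normal marginal.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rho = 0.6
x1 = 1.3                                  # point at which to evaluate f_1
x2 = np.linspace(-8, 8, 2001)             # integration grid over X2

cond = stats.norm.pdf(x1, loc=rho * x2, scale=np.sqrt(1 - rho**2))  # f_{1|2}
weights = stats.norm.pdf(x2)                                        # f_2
print(trapezoid(cond * weights, x2))      # ~0.1714
print(stats.norm.pdf(x1))                 # ~0.1714, the N(0,1) marginal
```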

Association is not Causality

This follows clearly from the fact that:

$$f_J(X_1, X_2) = f_{2|1}(X_2 \mid X_1)\, f_1(X_1) = f_{1|2}(X_1 \mid X_2)\, f_2(X_2)$$

Sources of "association": measuring a flat & a curved plate; visitors to a real estate agent: buyers and sellers; individual consumption & income; individual height & weight; breakdown of two components of a machine; compare I.Q. & height; health & wearing of a top hat.

Buyers & Renters Entering a Real Estate Office

Arrivals: the distribution of the number of arrivals $N$ is Poisson,

$$\frac{e^{-\lambda} \lambda^N}{N!};$$

recall the conditions for a Poisson distribution.

Given $N$ arrivals, the distribution of the number of buyers $B$ is Binomial:

$$\binom{N}{B} \pi^B (1 - \pi)^{N - B}; \qquad N = B + R$$

Replace $N$ by $R + B$ to obtain the joint distribution:

$$\frac{e^{-\lambda} \lambda^{R+B}}{(R+B)!} \left[ \binom{R+B}{B} \pi^B (1 - \pi)^R \right] = \frac{e^{-\lambda} \lambda^{R+B}}{R!\, B!}\, \pi^B (1 - \pi)^R$$

This is a two-parameter distribution in $\lambda$ and $\pi$: $\lambda$ = mean arrivals per hour; $\pi$ = proportion of buyers.
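A small simulation sketch of this result (the values of lambda and pi are illustrative, not from the text): draw $N$ from a Poisson, split the arrivals into buyers and renters, and compare the empirical frequency of one $(B, R)$ cell with the formula above.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
lam, pi = 4.0, 0.3                       # mean arrivals; proportion of buyers
n_draws = 1_000_000

N = rng.poisson(lam, n_draws)            # total arrivals
B = rng.binomial(N, pi)                  # buyers among the N arrivals
R = N - B                                # renters

b, r = 2, 3                              # an arbitrary (B, R) cell to check
empirical = np.mean((B == b) & (R == r))
formula = exp(-lam) * lam**(b + r) * pi**b * (1 - pi)**r / (factorial(b) * factorial(r))
print(empirical, formula)                # should agree to ~3 decimals
```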

Derivation of the Bivariate Normal Distribution

Recall the bivariate Gaussian distribution formed from a pair of independent Gaussian distributions:

$$\phi(X, Y) = \phi(X)\,\phi(Y) = \frac{\exp\left\{-\frac{1}{2}\left(\frac{X - \eta_x}{\sigma_x}\right)^2\right\}}{\sqrt{2\pi}\,\sigma_x} \cdot \frac{\exp\left\{-\frac{1}{2}\left(\frac{Y - \eta_y}{\sigma_y}\right)^2\right\}}{\sqrt{2\pi}\,\sigma_y}$$

Let $\eta_x, \eta_y$ both equal zero and $\sigma_x = \sigma_y = 1$ in order to simplify the algebra:

$$\phi(X, Y) = \frac{\exp\{-\frac{1}{2}X^2\}}{\sqrt{2\pi}} \cdot \frac{\exp\{-\frac{1}{2}Y^2\}}{\sqrt{2\pi}} = \frac{\exp\{-\frac{1}{2}[X^2 + Y^2]\}}{2\pi}$$

But if $X$ and $Y$ are associated, we speculate from the calculation of "r" that for some parameter $\rho$ the quadratic above contains a term like:

$$\exp\left\{-\tfrac{1}{2}\left[X^2 + Y^2 - 2\rho XY\right]\right\}$$

If correct, requiring the density to integrate to 1 implies that:

$$\phi(X, Y) = \frac{\exp\left\{-\frac{1}{2(1-\rho^2)}\left[X^2 + Y^2 - 2\rho XY\right]\right\}}{2\pi\sqrt{1 - \rho^2}}$$

And in general, for non-zero means and non-unit variances, one has:

$$\phi(X, Y) = \frac{\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{X-\eta_x}{\sigma_x}\right)^2 + \left(\frac{Y-\eta_y}{\sigma_y}\right)^2 - 2\rho\left(\frac{X-\eta_x}{\sigma_x}\right)\left(\frac{Y-\eta_y}{\sigma_y}\right)\right]\right\}}{2\pi\sqrt{1-\rho^2}\,\sigma_x \sigma_y}$$

If $\rho$ equals zero, we recover the joint distribution of a pair of independent variables.

The $\sigma_x \sigma_y$ in the denominator results from transforming from standardized to non-standardized variables, e.g. $U, V$ defined by the transformations:

$U = \frac{X - \eta_x}{\sigma_x}$ implies $dU = \frac{1}{\sigma_x}\, dX$; and $V = \frac{Y - \eta_y}{\sigma_y}$ implies $dV = \frac{1}{\sigma_y}\, dY$;

so that the change in "scale" is allowed for, in that $dU = \left(\frac{dU}{dX}\right) dX$ and $dV = \left(\frac{dV}{dY}\right) dY$.

We multiply the density in $(U, V)$ by the re-scaling factor $\frac{1}{\sigma_x}\frac{1}{\sigma_y}$ so that the density integrates to one. (See overheads.)
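A sketch checking the general density above (the parameter values are illustrative): written exactly as on the slide, it should coincide with scipy's multivariate normal pdf and integrate to approximately one over a wide grid.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

eta_x, eta_y, sig_x, sig_y, rho = 1.0, -0.5, 2.0, 0.8, 0.6

def phi(x, y):
    # The general bivariate normal density from the slide.
    u = (x - eta_x) / sig_x               # standardized U
    v = (y - eta_y) / sig_y               # standardized V
    q = (u**2 + v**2 - 2 * rho * u * v) / (1 - rho**2)
    return np.exp(-q / 2) / (2 * np.pi * np.sqrt(1 - rho**2) * sig_x * sig_y)

mvn = stats.multivariate_normal(
    mean=[eta_x, eta_y],
    cov=[[sig_x**2, rho * sig_x * sig_y],
         [rho * sig_x * sig_y, sig_y**2]])
print(phi(1.5, 0.0), mvn.pdf([1.5, 0.0]))  # same density value

xs, ys = np.linspace(-11, 13, 401), np.linspace(-5.3, 4.3, 401)
X, Y = np.meshgrid(xs, ys)
print(trapezoid(trapezoid(phi(X, Y), xs, axis=1), ys))  # ~1.0
```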

The Conditional Normal Density Function

For simplicity let the means equal 0 and the variances equal 1. The joint distribution is:

$$\phi(X, Y) = \frac{\exp\left\{-\frac{1}{2(1-\rho^2)}\left[X^2 + Y^2 - 2\rho XY\right]\right\}}{2\pi\sqrt{1 - \rho^2}}$$

which can be rewritten as the product of a conditional & a marginal distribution:

$$\phi(X, Y) = \phi(Y \mid X)\,\phi(X) = \frac{\exp\left\{-\frac{X^2}{2(1-\rho^2)}\right\}}{\sqrt{2\pi}} \cdot \frac{\exp\left\{-\frac{1}{2(1-\rho^2)}\left[Y^2 - 2\rho XY\right]\right\}}{\sqrt{2\pi}\sqrt{1 - \rho^2}}$$

In $[Y^2 - 2\rho XY]$, complete the square by adding and subtracting $\rho^2 X^2$:

$$[Y^2 - 2\rho XY + \rho^2 X^2] = [Y - \rho X]^2$$

and

$$\exp\left\{-\frac{X^2}{2(1-\rho^2)}\right\} \exp\left\{\frac{\rho^2 X^2}{2(1-\rho^2)}\right\} = \exp\left\{-\frac{X^2 - \rho^2 X^2}{2(1-\rho^2)}\right\} = \exp\left\{-\frac{X^2}{2}\right\}$$

This yields:

$$\phi(X, Y) = \left\{\frac{\exp\left\{-\frac{[Y - \rho X]^2}{2(1-\rho^2)}\right\}}{\sqrt{2\pi}\sqrt{1 - \rho^2}}\right\} \left\{\frac{\exp\left\{-\frac{X^2}{2}\right\}}{\sqrt{2\pi}}\right\}$$

The conditional distribution is therefore Gaussian with conditional mean $\rho X$ and variance $(1 - \rho^2)$.
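A quick simulation sketch of this result (the values of rho and x0 are illustrative): among draws with $X$ near a fixed $x_0$, $Y$ should average about $\rho x_0$ with variance about $1 - \rho^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, x0, n = 0.7, 1.0, 2_000_000

X = rng.standard_normal(n)
Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # corr(X, Y) = rho

sel = np.abs(X - x0) < 0.01              # condition on X close to x0
print(Y[sel].mean(), rho * x0)           # conditional mean ~0.7
print(Y[sel].var(), 1 - rho**2)          # conditional variance ~0.51
```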

The General Conditional Distribution

If $\eta_x, \eta_y$ are non-zero and $\sigma_x, \sigma_y$ are non-unit, then one can show:

$$\sigma^2_{Y \mid x_0} = (1 - \rho^2)\,\sigma^2_y; \qquad \sigma^2_{X \mid y_0} = (1 - \rho^2)\,\sigma^2_x$$

Most important is the conditional mean:

$$E\{Y \mid X = x_0\} = \eta_y + \rho\frac{\sigma_y}{\sigma_x}(x_0 - \eta_x) = \left[\eta_y - \rho\frac{\sigma_y}{\sigma_x}\eta_x\right] + \rho\frac{\sigma_y}{\sigma_x}x_0 = \alpha + \beta x_0,$$

which is a linear relationship between $Y$ and $X$; cf. Chapter 5.

Recall that the conditional mean of $Y \mid X$ is the mean of $Y$ with respect to the conditional distribution $f(Y \mid X)$.
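A sketch connecting the conditional mean to regression (all parameter values are illustrative): the population $\alpha$ and $\beta$ above should match an OLS line fit of $Y$ on $X$ in a large simulated sample.

```python
import numpy as np

rng = np.random.default_rng(2)
eta_x, eta_y, sig_x, sig_y, rho, n = 2.0, 5.0, 1.5, 3.0, 0.6, 500_000

X = eta_x + sig_x * rng.standard_normal(n)
Y = (eta_y + rho * (sig_y / sig_x) * (X - eta_x)          # conditional mean
     + np.sqrt(1 - rho**2) * sig_y * rng.standard_normal(n))

beta = rho * sig_y / sig_x               # population slope
alpha = eta_y - beta * eta_x             # population intercept
b_hat, a_hat = np.polyfit(X, Y, 1)       # OLS slope and intercept
print((alpha, beta), (a_hat, b_hat))     # (2.6, 1.2) vs OLS estimates
```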

Moments of Bivariate Distributions

Because the marginal densities are $f_x(X) = \int f_J(X, Y)\, dY$ and $f_y(Y) = \int f_J(X, Y)\, dX$, the univariate moments are all defined as before.

The theoretical analogue to the sample covariance is:

$$\mu_{1,1}(X, Y) = E\{(X - \eta_x)(Y - \eta_y)\} = \int\!\!\int (X - \eta_x)(Y - \eta_y)\, f(X, Y)\, dX\, dY = \int\!\!\int XY\, f(X, Y)\, dX\, dY - \eta_x \eta_y$$

If $X, Y$ are independently distributed, $E\{(X - \eta_x)(Y - \eta_y)\} = 0$. In general,

$$\mu_{1,1}(X, Y) = \sigma_{x,y} = E\{(X - \eta_x)(Y - \eta_y)\}$$

is the theoretical covariance.

If $X$ and $Y$ are not independent but jointly Gaussian, with $\eta_x = \eta_y = 0$ and $\sigma_x = \sigma_y = 1$ for convenience, then:

$$E\{XY\} = \int_x \int_y XY\, \phi(X, Y)\, dX\, dY = \int_x \int_y XY\, \phi(Y \mid X)\, \phi(X)\, dX\, dY$$

$$= \int_x X \left[\int_{y \mid x} Y\, \frac{\exp\left\{-\frac{[Y - \rho X]^2}{2(1-\rho^2)}\right\}}{\sqrt{2\pi}\sqrt{1 - \rho^2}}\, dY\right] \frac{\exp\left\{-\frac{X^2}{2}\right\}}{\sqrt{2\pi}}\, dX = \int_x X\,\{\rho X\}\, \frac{\exp\left\{-\frac{X^2}{2}\right\}}{\sqrt{2\pi}}\, dX = E\{\rho X^2\} = \rho,$$

where the bracketed inner integral is the conditional mean $\rho X$.

The final equality holds as the variance of $X$ is unity by assumption.

$\rho$ measures the degree of linear association. If $\sigma_x$ and $\sigma_y$ are non-unit,

$$E\{XY\} = \rho\, \sigma_x \sigma_y$$

is the covariance (in units of $X$ times units of $Y$), and $\rho$, the correlation coefficient, is dimensionless.
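A simulation sketch of these moment results (all parameter values are illustrative): the sample average of $(X - \eta_x)(Y - \eta_y)$ should approach $\rho\sigma_x\sigma_y$, while the sample correlation approaches the dimensionless $\rho$.

```python
import numpy as np

rng = np.random.default_rng(3)
eta_x, eta_y, sig_x, sig_y, rho, n = 1.0, -2.0, 2.0, 0.5, 0.4, 5_000_000

X = eta_x + sig_x * rng.standard_normal(n)
Y = (eta_y + rho * (sig_y / sig_x) * (X - eta_x)
     + np.sqrt(1 - rho**2) * sig_y * rng.standard_normal(n))

print(((X - eta_x) * (Y - eta_y)).mean(), rho * sig_x * sig_y)  # ~0.4
print(np.corrcoef(X, Y)[0, 1], rho)                             # ~0.4
```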

The Sampling of Joint & Conditional Distributions

Chapter 9 discussed the sampling of univariate distributions and explored the use of simple random sampling at length.

Sampling for a joint distribution is similar: collect a random sample of individuals & measure the joint observations; e.g. sample individuals and measure income & consumption, or height & weight.

Sampling for conditional distributions is different. One can sample for height given weight, or weight given height. This can be achieved by sampling heights & for each height sampling weights, or by sampling weights & for each weight sampling heights. If using natural experiments, be sure which conditional distribution is being sampled.
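A sketch of the two schemes, assuming a bivariate normal for height $H$ (cm) and weight $W$ (kg) with all parameter values illustrative: the two conditionals are different objects, since the mean of $W$ given $H$ lies on one regression line and the mean of $H$ given $W$ on another.

```python
import numpy as np

rng = np.random.default_rng(4)
eta_h, eta_w, sig_h, sig_w, rho, m = 175.0, 75.0, 7.0, 12.0, 0.5, 200_000

# Scheme 1: fix a height, then sample weights from f(W | H = h0).
h0 = 185.0
w_draws = (eta_w + rho * (sig_w / sig_h) * (h0 - eta_h)
           + np.sqrt(1 - rho**2) * sig_w * rng.standard_normal(m))
print(w_draws.mean())   # ~75 + 0.5*(12/7)*10 = 83.6 kg

# Scheme 2: fix the "matching" weight, then sample heights from f(H | W = w0).
w0 = w_draws.mean()
h_draws = (eta_h + rho * (sig_h / sig_w) * (w0 - eta_w)
           + np.sqrt(1 - rho**2) * sig_h * rng.standard_normal(m))
print(h_draws.mean())   # ~177.5 cm, not back to 185: a different conditional
```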

Consider some examples: sampling I.Q. and heights; sampling electrical output & fuel inputs; incomes & consumption.

End of Chapter 12.