Week 8: Jointly Distributed Random Variables, Part II
Week 8 Objectives

1. The connection between the covariance of two variables and the nature of their dependence is given.
2. Pearson's correlation coefficient is introduced as a quantification of linear dependence.
3. A hierarchical approach to building joint distribution models is presented. The bivariate normal distribution and the class of regression models are introduced in this context.
4. The relationship between covariance/correlation and the slope of the simple linear regression (SLR) model is given. The formulas for the variance of sums, and the connection between covariance/correlation and the slope of the SLR model, are demonstrated in R. The R command lm is introduced.
Outline

1. Properties of Covariance
2. Quantifying Dependence
   - Monotone Dependence and Covariance
   - Correlation as a measure of (linear) dependence
3. Models for Joint Distributions
   - Hierarchical Models
   - Regression Models
4. Lab 4
   - Properties of the Variance via Simulations
   - Fitting Regression Models
Properties of Covariance

Proposition.
1. Cov(X, Y) = Cov(Y, X).
2. Cov(X, X) = Var(X).
3. If X, Y are independent, then Cov(X, Y) = 0.
4. Cov(aX + b, cY + d) = ac Cov(X, Y), for any real numbers a, b, c and d.
5. Cov(Σ_{i=1}^m X_i, Σ_{j=1}^n Y_j) = Σ_{i=1}^m Σ_{j=1}^n Cov(X_i, Y_j).
Example. Let X denote the number of short fiction books sold at the State College, PA, location of Barnes and Noble in a given week, and let Y denote the corresponding number sold online to State College residents. It is given that Cov(X, Y) = 0.0575. If (X_i, Y_i), i = 1, ..., 10, denote the sale numbers in 10 weeks, find Cov(Σ_{i=1}^{10} X_i, Σ_{j=1}^{10} Y_j).

Solution: Assuming sales in different weeks are independent, Cov(X_i, Y_j) = 0 for i ≠ j. Using parts 3 and 5 of the proposition,

    Cov(Σ_{i=1}^{10} X_i, Σ_{j=1}^{10} Y_j) = Σ_{i=1}^{10} Σ_{j=1}^{10} Cov(X_i, Y_j) = Σ_{i=1}^{10} Cov(X_i, Y_i) = 10(0.0575) = 0.575.
Zero Covariance Does Not Imply Independence

Example. Find Cov(X, Y), where X, Y have the joint pmf given by

                 y
                 0      1
    x   -1     1/3      0     1/3
         0      0      1/3    1/3
         1     1/3      0     1/3
               2/3     1/3    1.0

Are X and Y independent? Is Cov(X, Y) = 0?

Solution: Since E(X) = 0, the computational formula gives Cov(X, Y) = E(XY). But E(XY) = 0, because xy = 0 in every cell that has positive probability. Thus, Cov(X, Y) = 0, yet X and Y are not independent: for example, p(0, 1) = 1/3, while p_X(0) p_Y(1) = (1/3)(1/3) = 1/9.
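This can be checked numerically. The sketch below assumes the classic form of this pmf, with X taking the values -1, 0, 1 (as reconstructed from the table); it computes Cov(X, Y) directly from the table and verifies that the independence condition p(x, y) = p_X(x) p_Y(y) fails:

```r
# Joint pmf of (X, Y): rows are x = -1, 0, 1; columns are y = 0, 1
p <- matrix(c(1/3, 0,
              0,   1/3,
              1/3, 0), nrow = 3, byrow = TRUE)
xvals <- c(-1, 0, 1)
yvals <- c(0, 1)

# Marginal pmfs
px <- rowSums(p)
py <- colSums(p)

# E(X), E(Y), E(XY) computed from the table
EX  <- sum(xvals * px)
EY  <- sum(yvals * py)
EXY <- sum(outer(xvals, yvals) * p)

covXY <- EXY - EX * EY                        # computational formula
indep <- all(abs(p - outer(px, py)) < 1e-12)  # TRUE only if X, Y independent

covXY   # 0
indep   # FALSE: p(0,1) = 1/3, but pX(0)*pY(1) = 1/9
```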
Example. Consider the example with the car and home insurance deductibles, but suppose that the deductible amounts are given in cents. Find the covariance of the two deductibles.

Solution: Let X′, Y′ denote the deductibles of a randomly chosen home and car owner in cents. If X, Y denote the deductibles in dollars, then X′ = 100X, Y′ = 100Y. According to part 4 of the proposition, σ_{X′Y′} = (100)(100) σ_XY = 10,000 × 1875 = 18,750,000.
Monotone Dependence and Covariance
Positive and Negative Dependence

When two variables are not independent, it is of interest to qualify and quantify their dependence. X, Y are positively dependent, or positively correlated, if large values of X are associated with large values of Y, and small values of X with small values of Y. In the opposite case, X, Y are negatively dependent or negatively correlated. If the dependence is either positive or negative, it is called monotone.
Monotone Dependence and Covariance

The monotone dependence is positive or negative if and only if the covariance is positive or negative. To gain some intuition for this, consider a population of N units, let (x_1, y_1), (x_2, y_2), ..., (x_N, y_N) denote the values of the bivariate characteristic for each of the N units, and let (X, Y) denote the bivariate characteristic of a randomly selected unit. Then

    σ_XY = (1/N) Σ_{i=1}^N (x_i − µ_X)(y_i − µ_Y),

where µ_X = (1/N) Σ_{i=1}^N x_i and µ_Y = (1/N) Σ_{i=1}^N y_i are the marginal expected values of X and Y. Thus:
Intuition for Covariance, Continued

If X, Y are positively dependent (think of X = height and Y = weight), then the products (x_i − µ_X)(y_i − µ_Y) will be mostly positive. If X, Y are negatively dependent (think of X = stress and Y = time to failure), then the products (x_i − µ_X)(y_i − µ_Y) will be mostly negative. Therefore, σ_XY will be positive or negative according to whether the dependence of X and Y is positive or negative.
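A quick numerical illustration of this sign argument, with made-up monotone data (the particular means, slopes, and noise levels below are illustration values, not from the text):

```r
set.seed(8)
# Positively dependent pair: "heights" and "weights"
x <- rnorm(1000, mean = 1.75, sd = 0.1)          # heights (m)
w <- 60 + 40 * (x - 1.6) + rnorm(1000, sd = 5)   # weights increase with height
# Negatively dependent pair: "stress" and "time to failure"
s <- runif(1000, 0, 10)                          # applied stress
t <- 100 - 8 * s + rnorm(1000, sd = 3)           # failure time decreases with stress

# Most centered products share the sign of the dependence
mean((x - mean(x)) * (w - mean(w)) > 0)  # well above 1/2
cov(x, w) > 0   # TRUE
cov(s, t) < 0   # TRUE
```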
Caveat: The statement "the monotone dependence is positive if and only if the covariance is positive" SHOULD NOT be interpreted as "if the covariance is positive then the dependence is positive." Positive covariance implies positive dependence only if we already know that the dependence is monotone.
Monotone Dependence and the Regression Function

The dependence is monotone (positive or negative) if and only if the regression function, µ_{Y|X}(x) = E(Y | X = x), is monotone (increasing or decreasing) in x.

Example
1. X = height and Y = weight of a randomly selected adult male are positively dependent. It is also true that µ_{Y|X}(1.82) < µ_{Y|X}(1.90).
2. X = stress applied and Y = time to failure are negatively dependent. It is also true that µ_{Y|X}(10) > µ_{Y|X}(20).
Correlation as a Measure of (Linear) Dependence
Quantification of Dependence Should Be Unit-Free

A quantification of dependence should not depend on the units of measurement. For example, a quantification of the dependence between height and weight should not depend on whether the units are meters and kilograms or feet and pounds. The scale-dependence of the covariance, implied by the property Cov(aX, bY) = ab Cov(X, Y), makes it unsuitable as a measure of dependence.
Pearson's Linear Correlation Coefficient

Pearson's linear correlation coefficient, or simply correlation coefficient, between X and Y is defined as

    ρ_XY = Corr(X, Y) = σ_XY / (σ_X σ_Y).

Corr(X, Y) can also be thought of as the covariance between the scaled versions of X and Y, i.e.,

    Corr(X, Y) = Cov(X/σ_X, Y/σ_Y).

Since scaled versions of r.v.'s are scale-free, so is ρ_XY.
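A numerical check of the scaled-version identity, using simulated data in place of population moments (so cov and cor below are the sample versions):

```r
set.seed(88)
x <- rnorm(10000, mean = 5, sd = 2)
y <- 3 * x + rnorm(10000, sd = 4)   # positively correlated with x

r1 <- cor(x, y)
r2 <- cov(x / sd(x), y / sd(y))     # covariance of the scaled versions
all.equal(r1, r2)                   # TRUE

# Unit changes with ac > 0 leave the correlation unchanged:
all.equal(cor(100 * x + 7, 100 * y - 3), r1)  # TRUE
```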
Proposition (Properties of Correlation)
1. If ac > 0, then Corr(aX + b, cY + d) = Corr(X, Y).
2. −1 ≤ ρ_XY ≤ 1.
3. If X, Y are independent, then ρ_XY = 0.
4. ρ_XY = 1 or −1 if and only if Y = aX + b, for some constants a, b.

Assuming there is a linear trend, it is possible to associate values of ρ_XY with the degree of concentration of the (X, Y)-values around a line, as illustrated by the following figure.
Figure: Scatterplots with correlations 0.2, 0.45, 0.65, and 0.9, illustrating increasing concentration of the (X, Y)-values around a line.
Example. Find the correlation coefficient of the deductibles in car and home insurance of a randomly chosen car and home owner, when the deductibles are expressed a) in dollars, and b) in cents.

Solution: If X, Y denote the deductibles in dollars, we saw that σ_XY = 1875. Omitting the details, it can be found that σ_X = 75; with σ_Y computed in the same way from the marginal pmf of Y, the correlation is ρ_XY = σ_XY/(σ_X σ_Y). Next, the deductibles expressed in cents are (X′, Y′) = (100X, 100Y). According to the proposition (here a = c = 100, so ac > 0), ρ_{X′Y′} = ρ_XY: the correlation coefficient is unit-free.
Linear vs Nonlinear Dependence

Corr(X, Y) quantifies the dependence between X and Y accurately provided there is a linear trend. If all (X, Y)-values fall on a straight line (perfect linear dependence), then ρ_XY = ±1, depending on whether the line has positive or negative slope. As |ρ_XY| gets smaller, the concentration of the (X, Y)-values around the line decreases. However, the quantification of dependence is not accurate if the trend is nonlinear. It is possible to have perfect dependence (i.e., knowing one amounts to knowing the other) but ρ_XY ≠ ±1.
Example. Let X ~ U(0, 1), and Y = X². Find ρ_XY.

Solution: We have

    σ_XY = E(XY) − E(X)E(Y) = E(X³) − E(X)E(X²) = 1/4 − (1/2)(1/3) = 1/12.

Omitting calculations, σ_X = 1/√12 and σ_Y = 2/(3√5). Thus,

    ρ_XY = (1/12) / [(1/√12)(2/(3√5))] = √15/4 ≈ 0.968.

A similar set of calculations reveals that, with X as before and Y = X⁴, ρ_XY = √3/2 ≈ 0.866.
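These correlations, ρ = √15/4 ≈ 0.968 for Y = X² and ρ = √3/2 ≈ 0.866 for Y = X⁴, can be confirmed by Monte Carlo simulation:

```r
set.seed(123)
x <- runif(200000)        # X ~ U(0, 1)

cor(x, x^2)   # close to sqrt(15)/4 = 0.968...
cor(x, x^4)   # close to sqrt(3)/2  = 0.866...

# Perfect but non-monotone dependence with zero correlation
# (the example on the next slide):
u <- runif(200000, -1, 1) # X ~ U(-1, 1)
cor(u, u^2)   # close to 0
```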
Example. Let X ~ U(−1, 1), and Y = X². Are X and Y independent? Is the dependence of X and Y monotone? Is ρ_XY = 0?

Solution: X and Y are not independent. In fact, they are quite strongly dependent, since knowing X implies that Y is also known. Moreover, their dependence is not monotone: Y decreases in X on (−1, 0) and increases on (0, 1). Since f_X(x) = 0.5 for −1 < x < 1, and 0 otherwise, E(X) = 0 and E(XY) = E(X³) = 0. Thus, Cov(X, Y) = E(XY) − E(X)E(Y) = 0. Hence, ρ_XY = 0.
Uncorrelated Variables

Two variables having zero correlation are called uncorrelated. Independent variables are uncorrelated, but uncorrelated variables are not necessarily independent.
Sample Versions of Covariance and Correlation

If (X_1, Y_1), ..., (X_n, Y_n) is a sample from the bivariate distribution of (X, Y), the sample covariance, S_{X,Y}, and sample correlation coefficient, r_{X,Y}, are defined as

    S_{X,Y} = (1/(n − 1)) Σ_{i=1}^n (X_i − X̄)(Y_i − Ȳ),    r_{X,Y} = S_{X,Y} / (S_X S_Y).
Computational formula:

    S_{X,Y} = (1/(n − 1)) [ Σ_{i=1}^n X_i Y_i − (1/n)(Σ_{i=1}^n X_i)(Σ_{i=1}^n Y_i) ].

R commands:

    cov(x,y)   # gives S_{X,Y}
    cor(x,y)   # gives r_{X,Y}
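A small check that the definition, the computational formula, and R's built-in cov and cor agree (on simulated data, not data from the text):

```r
set.seed(1)
n <- 200
x <- rnorm(n)
y <- 2 * x + rnorm(n)

s_def  <- sum((x - mean(x)) * (y - mean(y))) / (n - 1)   # definition
s_comp <- (sum(x * y) - sum(x) * sum(y) / n) / (n - 1)   # computational formula

all.equal(s_def, s_comp)      # TRUE
all.equal(s_def, cov(x, y))   # TRUE
all.equal(cor(x, y), cov(x, y) / (sd(x) * sd(y)))  # TRUE
```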
Hierarchical Models
Hierarchical models are based on the multiplication rules

    p(x, y) = p_{Y|X=x}(y) p_X(x)    for the joint pmf,
    f(x, y) = f_{Y|X=x}(y) f_X(x)    for the joint pdf.

They specify the joint distribution of X, Y by first specifying the conditional distribution of Y given X = x, and then specifying the marginal distribution of X. The hierarchical method of modeling yields a very rich and flexible class of joint distributions.
Example. Let X be the number of eggs an insect lays and Y the number of eggs that survive. Model the joint pmf of X and Y.

Solution: Following the principle of hierarchical modeling, we need to specify the conditional pmf of Y given X = x, and the marginal pmf of X. One possible specification is

    Y | X = x ~ Bin(x, p),    X ~ Poisson(λ),

which leads to the following joint pmf of X and Y:

    p(x, y) = p_{Y|X=x}(y) p_X(x) = (x choose y) p^y (1 − p)^{x−y} · e^{−λ} λ^x / x!,  for y = 0, 1, ..., x.
In hierarchical models that specify p_{Y|X=x}(y) and p_X(x), the marginal expected value of Y is most easily found through the Law of Total Expectation (LTE).

Example. Find E(Y) if X and Y have the joint distribution of the previous example, i.e., Y | X = x ~ Bin(x, p), X ~ Poisson(λ).

Solution: Since Y | X ~ Bin(X, p), the regression function of Y on X is E(Y | X) = Xp. Thus, according to the LTE,

    E(Y) = E[E(Y | X)] = E[Xp] = E[X] p = λp.
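The LTE result E(Y) = λp can be checked by simulating the hierarchical model directly; λ = 6 and p = 0.4 below are arbitrary illustration values:

```r
set.seed(42)
lambda <- 6; p <- 0.4; n <- 100000

x <- rpois(n, lambda)               # X ~ Poisson(lambda): eggs laid
y <- rbinom(n, size = x, prob = p)  # Y | X = x ~ Bin(x, p): eggs surviving

mean(y)   # close to lambda * p = 2.4
```

(As an aside, the marginal distribution of Y in this model is itself Poisson(λp), by Poisson thinning; the LTE gives the mean without needing that fact.)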
The Bivariate Normal Distribution

The bivariate normal distribution arises as a hierarchical model.

Definition. X and Y are said to have a bivariate normal distribution if

    Y | X = x ~ N(α_1 + β_1 x, σ_ε²)    and    X ~ N(µ_X, σ_X²).

Commentaries:
1. The regression function of Y on X is linear in x: µ_{Y|X}(x) = α_1 + β_1 x.
Commentaries (continued):
2. The marginal distribution of Y is normal with E(Y) = E[E(Y | X)] = E[α_1 + β_1 X] = α_1 + β_1 µ_X.
3. σ_ε² is the conditional variance of Y given X and is called the error variance.
4. A more common form of the bivariate pdf involves µ_X, µ_Y, σ_X², σ_Y² and ρ_XY; see relation (4.6.13), p. 205, in the book.
5. Any linear combination, a_1 X + a_2 Y, of X and Y has a normal distribution.

Plots of bivariate normal pdfs are given in the next slides.
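A sketch of the hierarchical construction (all parameter values below are arbitrary choices for illustration): simulate X, then Y given X, and check commentary 2 and the normality of the marginal of Y.

```r
set.seed(7)
muX <- 2; sdX <- 1.5; a1 <- 1; b1 <- 0.8; sdEps <- 1
n <- 100000

x <- rnorm(n, muX, sdX)            # X ~ N(muX, sdX^2)
y <- rnorm(n, a1 + b1 * x, sdEps)  # Y | X = x ~ N(a1 + b1*x, sdEps^2)

mean(y)   # close to a1 + b1 * muX = 2.6 (commentary 2)

# The marginal of Y is itself normal with variance sdEps^2 + b1^2*sdX^2 = 2.44;
# compare an empirical tail probability with the N(2.6, 2.44) value:
mean(y > 4)
1 - pnorm(4, 2.6, sqrt(2.44))   # the two numbers should nearly agree
```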
Figure: Joint pdf of two independent N(0, 1) random variables.
Figure: Joint pdf of two N(0, 1) random variables with positive correlation ρ.
Regression Models
Regression models focus primarily on the regression function of Y on X, and are commonly written as

    Y = µ_{Y|X}(X) + ε,

where ε is called the (intrinsic) error variable. Y is called the response variable, and X is interchangeably referred to as the covariate, independent variable, predictor, or explanatory variable. The marginal distribution of X, which is of little interest in such studies, is left unspecified. (Thus regression models do not specify the joint distribution of X and Y.)
The Simple Linear Regression Model

The simple linear regression model specifies that the regression function is linear in x, i.e.,

    µ_{Y|X}(x) = α_1 + β_1 x,

and that Var(Y | X = x) is the same for all values x. An alternative expression for the regression line is

    µ_{Y|X}(x) = β_0 + β_1 (x − µ_X).

With this expression, E(Y) = β_0.
The following figure illustrates the meaning of the slope of the regression line.

Figure: Illustration of the regression parameters.
Proposition. In the simple linear regression model:
1. The intrinsic error variable, ε, has zero mean and is uncorrelated with the explanatory variable, X.
2. The slope is related to the correlation by β_1 = σ_XY/σ_X² = ρ_XY σ_Y/σ_X.
3. An additional relationship is σ_Y² = σ_ε² + β_1² σ_X².
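These relations can be verified on simulated SLR data; the parameter values below match the example that follows (Y = 5 − 2X + ε, σ_ε = 4, µ_X = 7, σ_X = 3):

```r
set.seed(31)
n <- 100000
b1 <- -2; sdX <- 3; sdEps <- 4
x   <- rnorm(n, 7, sdX)
eps <- rnorm(n, 0, sdEps)
y   <- 5 + b1 * x + eps

cov(x, y) / var(x)          # close to b1 = -2 (relation 2)
cor(x, y) * sd(y) / sd(x)   # same quantity, via rho * sigmaY / sigmaX
var(y)                      # close to sdEps^2 + b1^2*sdX^2 = 16 + 36 = 52 (relation 3)
cor(x, eps)                 # close to 0 (relation 1)
```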
Example. Suppose Y = 5 − 2X + ε, and let σ_ε = 4, µ_X = 7 and σ_X = 3. a) Find σ_Y² and ρ_XY. b) Find µ_Y.

Solution: For a), use the formulas in the above proposition:

    σ_Y² = σ_ε² + β_1² σ_X² = 4² + (−2)² 3² = 52,
    ρ_XY = β_1 σ_X/σ_Y = −2 · 3/√52 ≈ −0.832.

For b), use the Law of Total Expectation:

    E(Y) = E[E(Y | X)] = E[5 − 2X] = 5 − 2E(X) = −9.
The normal simple linear regression model specifies, in addition, that the intrinsic error variable is normally distributed, i.e.,

    Y = β_0 + β_1 (X − µ_X) + ε,    ε ~ N(0, σ_ε²).

An alternative expression of the normal simple linear regression model is

    Y | X = x ~ N(β_0 + β_1 (x − µ_X), σ_ε²).

In the above, β_0 + β_1 (x − µ_X) can also be replaced by α_1 + β_1 x.
Figure: Illustration of the Normal Simple Linear Regression Model.
Example. Suppose Y = 5 − 2X + ε, and let ε ~ N(0, σ_ε²), where σ_ε = 4. Let Y_1, Y_2 be observations taken at X = 1 and X = 2, respectively. a) Find the 95th percentiles of Y_1 and Y_2. b) Find P(Y_1 > Y_2).

Solution: a) Y_1 ~ N(3, 4²), so its 95th percentile is 3 + 1.645 · 4 = 9.58. Y_2 ~ N(1, 4²), so its 95th percentile is 1 + 1.645 · 4 = 7.58. b) Y_1 − Y_2 ~ N(2, 32), since the means are 3 and 1, and the variances of the two independent observations add. Thus,

    P(Y_1 > Y_2) = P(Y_1 − Y_2 > 0) = 1 − Φ((0 − 2)/√32) = Φ(0.354) = 0.638.
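The numbers in this example can be checked in R, where qnorm and pnorm give normal percentiles and probabilities directly:

```r
qnorm(0.95, mean = 3, sd = 4)          # 95th percentile of Y1: about 9.58
qnorm(0.95, mean = 1, sd = 4)          # 95th percentile of Y2: about 7.58
1 - pnorm(0, mean = 2, sd = sqrt(32))  # P(Y1 > Y2): about 0.638
```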
Properties of the Variance via Simulations
Proof of σ²_{X+Y} = σ²_{X−Y} = σ_X² + σ_Y² for X, Y independent

Using normal samples, try:

    x=rnorm(50000); y=rnorm(50000,10,2); var(x+y); var(x-y)

Can you guess the output from these commands? X and Y can have very different distributions. Try:

    z=rpois(50000,1); var(x+z); var(x-z)

Can you guess the output from these commands? Guess the output from the commands:

    cov(x,y); cov(x,z); cov(y,z)

Hint: The x, y and z samples are independently generated.
Proof of σ²_{X+Y} = σ_X² + σ_Y² + 2σ_{X,Y}

We will generate a sample (x_i, y_i), i = 1, ..., 50000, from a bivariate population with Var(X) = 1, Var(Y) = 2 and Cov(X, Y) = 1, and confirm the formula for Var(X + Y) through the sample variance of x_i + y_i, i = 1, ..., 50000. Let X, V be iid N(0, 1), and set Y = X + V. Then Var(X) = 1, Var(Y) = 2, and Cov(X, Y) = Cov(X, X + V) = Var(X) = 1. Generate the sample from the bivariate population of (X, Y) as follows:

    x=rnorm(50000); v=rnorm(50000); y=x+v

Try:

    var(x); var(y); cov(x,y); var(x+y); var(x)+var(y)+2*cov(x,y)
Fitting Regression Models
If y and x contain the response and predictor values, the model Y = α_1 + β_1 X + ε is fitted (i.e., the coefficients α_1 and β_1 are estimated) by

    lm(y ~ x)

Can also do

    out=lm(y~x); out    or    out=lm(y~x); coef(out)

Alternatively, since β_1 = Cov(X, Y)/Var(X), β_1 and α_1 can be estimated by

    cov(x,y)/var(x); mean(y)-(cov(x,y)/var(x))*mean(x)
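A self-contained check that lm reproduces the moment formulas exactly (on simulated data; in an interactive session you would use your own x and y):

```r
set.seed(5)
x <- runif(100, 0, 10)
y <- 25 - 3.4 * x + rnorm(100, 0, 5)

out <- lm(y ~ x)                     # least-squares fit
b1_hat <- cov(x, y) / var(x)         # slope from the moment formula
a1_hat <- mean(y) - b1_hat * mean(x) # intercept from the moment formula

# lm's (intercept, slope) agree with the formulas:
all.equal(unname(coef(out)), c(a1_hat, b1_hat))  # TRUE
```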
Illustration with Simulated Data

    e=rnorm(50,0,5); x=runif(50,0,10); y=25-3.4*x+e
    beta1=cov(x,y)/var(x); alpha1=mean(y)-beta1*mean(x)

Do a scatterplot and find the estimated regression line:

    plot(x,y); out=lm(y~x); out

Try

    plot(x,y); abline(coef(out),col=2)

The true regression line can be added to the plot by

    lines(x,25-3.4*x,type="l",col=4)
Ramifications of lm

To fit the model Y = β_0 + β_1 (X − µ_X) + ε, do

    xc=x-mean(x); lm(y~xc)

[Note: now the estimate of β_0 is Ȳ.]

lm(y~x) is equivalent to lm(y~1+x). lm(y~1) fits the model Y = β_0 + ε and returns the estimate β̂_0 = Ȳ. [However, it contains more information than mean(y)!] Polynomial models can also be fitted. For example,

    x2=x^2; lm(y~x+x2)    (or lm(y~1+x+x2))

fits the model Y = α_1 + β_1 X + β_2 X² + ε.
The Use of Transformations

Do a scatterplot for the data, given in Problem 13, Section 6.3, on the diameter and age of sugar maple trees:

    da=read.table("TreeAgeDiamSugarMaple.txt", header=T)
    x=da$diamet; y=da$age; plot(x,y)

Are there any apparent violations of the SLR model assumptions? Take logarithms of the data and do a scatterplot for the transformed data:

    x1=log(x); y1=log(y); plot(x1,y1)

Are the apparent violations less apparent?
More informationMultiple Random Variables
Multiple Random Variables This Version: July 30, 2015 Multiple Random Variables 2 Now we consider models with more than one r.v. These are called multivariate models For instance: height and weight An
More informationECE302 Exam 2 Version A April 21, You must show ALL of your work for full credit. Please leave fractions as fractions, but simplify them, etc.
ECE32 Exam 2 Version A April 21, 214 1 Name: Solution Score: /1 This exam is closed-book. You must show ALL of your work for full credit. Please read the questions carefully. Please check your answers
More informationSTAT 430/510: Lecture 16
STAT 430/510: Lecture 16 James Piette June 24, 2010 Updates HW4 is up on my website. It is due next Mon. (June 28th). Starting today back at section 6.7 and will begin Ch. 7. Joint Distribution of Functions
More informationBivariate Distributions
Bivariate Distributions EGR 260 R. Van Til Industrial & Systems Engineering Dept. Copyright 2013. Robert P. Van Til. All rights reserved. 1 What s It All About? Many random processes produce Examples.»
More informationStatistics STAT:5100 (22S:193), Fall Sample Final Exam B
Statistics STAT:5 (22S:93), Fall 25 Sample Final Exam B Please write your answers in the exam books provided.. Let X, Y, and Y 2 be independent random variables with X N(µ X, σ 2 X ) and Y i N(µ Y, σ 2
More informationNotes for Math 324, Part 20
7 Notes for Math 34, Part Chapter Conditional epectations, variances, etc.. Conditional probability Given two events, the conditional probability of A given B is defined by P[A B] = P[A B]. P[B] P[A B]
More informationMA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems
MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems Review of Basic Probability The fundamentals, random variables, probability distributions Probability mass/density functions
More informationBivariate Paired Numerical Data
Bivariate Paired Numerical Data Pearson s correlation, Spearman s ρ and Kendall s τ, tests of independence University of California, San Diego Instructor: Ery Arias-Castro http://math.ucsd.edu/~eariasca/teaching.html
More informationGaussian random variables inr n
Gaussian vectors Lecture 5 Gaussian random variables inr n One-dimensional case One-dimensional Gaussian density with mean and standard deviation (called N, ): fx x exp. Proposition If X N,, then ax b
More informationReview: mostly probability and some statistics
Review: mostly probability and some statistics C2 1 Content robability (should know already) Axioms and properties Conditional probability and independence Law of Total probability and Bayes theorem Random
More informationRandom Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R
In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample
More information2 (Statistics) Random variables
2 (Statistics) Random variables References: DeGroot and Schervish, chapters 3, 4 and 5; Stirzaker, chapters 4, 5 and 6 We will now study the main tools use for modeling experiments with unknown outcomes
More informationVariance reduction. Michel Bierlaire. Transport and Mobility Laboratory. Variance reduction p. 1/18
Variance reduction p. 1/18 Variance reduction Michel Bierlaire michel.bierlaire@epfl.ch Transport and Mobility Laboratory Variance reduction p. 2/18 Example Use simulation to compute I = 1 0 e x dx We
More informationRandom Variables. Cumulative Distribution Function (CDF) Amappingthattransformstheeventstotherealline.
Random Variables Amappingthattransformstheeventstotherealline. Example 1. Toss a fair coin. Define a random variable X where X is 1 if head appears and X is if tail appears. P (X =)=1/2 P (X =1)=1/2 Example
More informationMATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation)
MATHEMATICS 154, SPRING 2009 PROBABILITY THEORY Outline #11 (Tail-Sum Theorem, Conditional distribution and expectation) Last modified: March 7, 2009 Reference: PRP, Sections 3.6 and 3.7. 1. Tail-Sum Theorem
More informationACM 116: Lectures 3 4
1 ACM 116: Lectures 3 4 Joint distributions The multivariate normal distribution Conditional distributions Independent random variables Conditional distributions and Monte Carlo: Rejection sampling Variance
More information1 Exercises for lecture 1
1 Exercises for lecture 1 Exercise 1 a) Show that if F is symmetric with respect to µ, and E( X )
More informationRecall that if X 1,...,X n are random variables with finite expectations, then. The X i can be continuous or discrete or of any other type.
Expectations of Sums of Random Variables STAT/MTHE 353: 4 - More on Expectations and Variances T. Linder Queen s University Winter 017 Recall that if X 1,...,X n are random variables with finite expectations,
More informationAppendix A : Introduction to Probability and stochastic processes
A-1 Mathematical methods in communication July 5th, 2009 Appendix A : Introduction to Probability and stochastic processes Lecturer: Haim Permuter Scribe: Shai Shapira and Uri Livnat The probability of
More informationLecture 1: August 28
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random
More informationProbability and Distributions
Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated
More information1 Presessional Probability
1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional
More informationApplied Quantitative Methods II
Applied Quantitative Methods II Lecture 4: OLS and Statistics revision Klára Kaĺıšková Klára Kaĺıšková AQM II - Lecture 4 VŠE, SS 2016/17 1 / 68 Outline 1 Econometric analysis Properties of an estimator
More informationChapter 4 Multiple Random Variables
Review for the previous lecture Theorems and Examples: How to obtain the pmf (pdf) of U = g ( X Y 1 ) and V = g ( X Y) Chapter 4 Multiple Random Variables Chapter 43 Bivariate Transformations Continuous
More informationStatistics 351 Probability I Fall 2006 (200630) Final Exam Solutions. θ α β Γ(α)Γ(β) (uv)α 1 (v uv) β 1 exp v }
Statistics 35 Probability I Fall 6 (63 Final Exam Solutions Instructor: Michael Kozdron (a Solving for X and Y gives X UV and Y V UV, so that the Jacobian of this transformation is x x u v J y y v u v
More informationHW5 Solutions. (a) (8 pts.) Show that if two random variables X and Y are independent, then E[XY ] = E[X]E[Y ] xy p X,Y (x, y)
HW5 Solutions 1. (50 pts.) Random homeworks again (a) (8 pts.) Show that if two random variables X and Y are independent, then E[XY ] = E[X]E[Y ] Answer: Applying the definition of expectation we have
More informationSTAT Chapter 5 Continuous Distributions
STAT 270 - Chapter 5 Continuous Distributions June 27, 2012 Shirin Golchi () STAT270 June 27, 2012 1 / 59 Continuous rv s Definition: X is a continuous rv if it takes values in an interval, i.e., range
More informationTom Salisbury
MATH 2030 3.00MW Elementary Probability Course Notes Part V: Independence of Random Variables, Law of Large Numbers, Central Limit Theorem, Poisson distribution Geometric & Exponential distributions Tom
More informationConditional distributions (discrete case)
Conditional distributions (discrete case) The basic idea behind conditional distributions is simple: Suppose (XY) is a jointly-distributed random vector with a discrete joint distribution. Then we can
More informationUCSD ECE153 Handout #34 Prof. Young-Han Kim Tuesday, May 27, Solutions to Homework Set #6 (Prepared by TA Fatemeh Arbabjolfaei)
UCSD ECE53 Handout #34 Prof Young-Han Kim Tuesday, May 7, 04 Solutions to Homework Set #6 (Prepared by TA Fatemeh Arbabjolfaei) Linear estimator Consider a channel with the observation Y XZ, where the
More informationProperties of Summation Operator
Econ 325 Section 003/004 Notes on Variance, Covariance, and Summation Operator By Hiro Kasahara Properties of Summation Operator For a sequence of the values {x 1, x 2,..., x n, we write the sum of x 1,
More informationExpectation of Random Variables
1 / 19 Expectation of Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 13, 2015 2 / 19 Expectation of Discrete
More informationECON 3150/4150, Spring term Lecture 6
ECON 3150/4150, Spring term 2013. Lecture 6 Review of theoretical statistics for econometric modelling (II) Ragnar Nymoen University of Oslo 31 January 2013 1 / 25 References to Lecture 3 and 6 Lecture
More informationCHAPTER 4 MATHEMATICAL EXPECTATION. 4.1 Mean of a Random Variable
CHAPTER 4 MATHEMATICAL EXPECTATION 4.1 Mean of a Random Variable The expected value, or mathematical expectation E(X) of a random variable X is the long-run average value of X that would emerge after a
More informationSTOR Lecture 16. Properties of Expectation - I
STOR 435.001 Lecture 16 Properties of Expectation - I Jan Hannig UNC Chapel Hill 1 / 22 Motivation Recall we found joint distributions to be pretty complicated objects. Need various tools from combinatorics
More informationChapter 4. Multivariate Distributions. Obviously, the marginal distributions may be obtained easily from the joint distribution:
4.1 Bivariate Distributions. Chapter 4. Multivariate Distributions For a pair r.v.s (X,Y ), the Joint CDF is defined as F X,Y (x, y ) = P (X x,y y ). Obviously, the marginal distributions may be obtained
More informationLecture 16 - Correlation and Regression
Lecture 16 - Correlation and Regression Statistics 102 Colin Rundel April 1, 2013 Modeling numerical variables Modeling numerical variables So far we have worked with single numerical and categorical variables,
More informationChapter 1: Linear Regression with One Predictor Variable also known as: Simple Linear Regression Bivariate Linear Regression
BSTT523: Kutner et al., Chapter 1 1 Chapter 1: Linear Regression with One Predictor Variable also known as: Simple Linear Regression Bivariate Linear Regression Introduction: Functional relation between
More informationLecture Note 1: Probability Theory and Statistics
Univ. of Michigan - NAME 568/EECS 568/ROB 530 Winter 2018 Lecture Note 1: Probability Theory and Statistics Lecturer: Maani Ghaffari Jadidi Date: April 6, 2018 For this and all future notes, if you would
More informationUniversity of Illinois ECE 313: Final Exam Fall 2014
University of Illinois ECE 313: Final Exam Fall 2014 Monday, December 15, 2014, 7:00 p.m. 10:00 p.m. Sect. B, names A-O, 1013 ECE, names P-Z, 1015 ECE; Section C, names A-L, 1015 ECE; all others 112 Gregory
More informationEC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)
1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For
More informationIAM 530 ELEMENTS OF PROBABILITY AND STATISTICS LECTURE 3-RANDOM VARIABLES
IAM 530 ELEMENTS OF PROBABILITY AND STATISTICS LECTURE 3-RANDOM VARIABLES VARIABLE Studying the behavior of random variables, and more importantly functions of random variables is essential for both the
More informationLecture 11. Probability Theory: an Overveiw
Math 408 - Mathematical Statistics Lecture 11. Probability Theory: an Overveiw February 11, 2013 Konstantin Zuev (USC) Math 408, Lecture 11 February 11, 2013 1 / 24 The starting point in developing the
More information1. Simple Linear Regression
1. Simple Linear Regression Suppose that we are interested in the average height of male undergrads at UF. We put each male student s name (population) in a hat and randomly select 100 (sample). Then their
More informationPROBABILITY THEORY REVIEW
PROBABILITY THEORY REVIEW CMPUT 466/551 Martha White Fall, 2017 REMINDERS Assignment 1 is due on September 28 Thought questions 1 are due on September 21 Chapters 1-4, about 40 pages If you are printing,
More informationJoint Gaussian Graphical Model Review Series I
Joint Gaussian Graphical Model Review Series I Probability Foundations Beilun Wang Advisor: Yanjun Qi 1 Department of Computer Science, University of Virginia http://jointggm.org/ June 23rd, 2017 Beilun
More informationChapter 5. Chapter 5 sections
1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationReview of Statistics
Review of Statistics Topics Descriptive Statistics Mean, Variance Probability Union event, joint event Random Variables Discrete and Continuous Distributions, Moments Two Random Variables Covariance and
More informationEXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY GRADUATE DIPLOMA, Statistical Theory and Methods I. Time Allowed: Three Hours
EXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY GRADUATE DIPLOMA, 008 Statistical Theory and Methods I Time Allowed: Three Hours Candidates should answer FIVE questions. All questions carry equal marks.
More information