Introduction to Computational Finance and Financial Econometrics Probability Review - Part 2


Introduction to Computational Finance and Financial Econometrics
Probability Review - Part 2
Eric Zivot
Spring 2015
Eric Zivot (Copyright 2015) Probability Review - Part 2 1 / 46

Outline

1 Bivariate Probability Distribution - Discrete rv's
  - Marginal distributions
  - Conditional distributions
2 Bivariate Probability Distribution - Continuous rv's
  - Marginal and conditional distributions
3 Independence
4 Covariance and Correlation
  - Bivariate normal distribution
  - Linear Combination of rv's

Example

Example: Two discrete rv's X and Y

Bivariate pdf p(x, y) = Pr(X = x, Y = y) = values in the table:

                 Y
             0      1    Pr(X)
  X    0    1/8     0     1/8
       1    2/8    1/8    3/8
       2    1/8    2/8    3/8
       3     0     1/8    1/8
  Pr(Y)     4/8    4/8     1

e.g., p(0, 0) = Pr(X = 0, Y = 0) = 1/8

Properties of joint pdf p(x, y)

S_XY = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1), (3, 0), (3, 1)}

p(x, y) ≥ 0 for (x, y) ∈ S_XY

∑_{(x,y) ∈ S_XY} p(x, y) = 1

Marginal pdfs

p(x) = Pr(X = x) = ∑_{y ∈ S_Y} p(x, y) = sum over columns in joint table

p(y) = Pr(Y = y) = ∑_{x ∈ S_X} p(x, y) = sum over rows in joint table
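A quick numeric check of these formulas, sketched in Python using the joint table from the example above (exact fractions so the sums come out exactly):

```python
# Joint pmf from the example table: rows index x = 0,1,2,3; columns index y = 0,1.
from fractions import Fraction as F

p = [
    [F(1, 8), F(0)],      # x = 0
    [F(2, 8), F(1, 8)],   # x = 1
    [F(1, 8), F(2, 8)],   # x = 2
    [F(0),    F(1, 8)],   # x = 3
]

# Joint pmf properties: non-negative and sums to 1.
assert all(pxy >= 0 for row in p for pxy in row)
assert sum(pxy for row in p for pxy in row) == 1

# Marginals: p(x) sums over y (across a row); p(y) sums over x (down a column).
p_x = [sum(row) for row in p]
p_y = [sum(p[i][j] for i in range(4)) for j in range(2)]

print(p_x)  # marginal of X: 1/8, 3/8, 3/8, 1/8
print(p_y)  # marginal of Y: 1/2, 1/2
```

This reproduces the margins of the table: Pr(X) = (1/8, 3/8, 3/8, 1/8) and Pr(Y) = (4/8, 4/8).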

Conditional Probability

Suppose we know Y = 0. How does this knowledge affect the probability that X = 0, 1, 2, or 3? The answer involves conditional probability.

Example:

Pr(X = 0 | Y = 0) = Pr(X = 0, Y = 0) / Pr(Y = 0) = joint probability / marginal probability = (1/8)/(4/8) = 1/4

Remark: Pr(X = 0 | Y = 0) = 1/4 ≠ Pr(X = 0) = 1/8 ⟹ X depends on Y

The marginal probability, Pr(X = 0), ignores information about Y.

Definition - Conditional Probability

The conditional pdf of X given Y = y is, for all x ∈ S_X,

p(x|y) = Pr(X = x | Y = y) = Pr(X = x, Y = y) / Pr(Y = y)

The conditional pdf of Y given X = x is, for all y ∈ S_Y,

p(y|x) = Pr(Y = y | X = x) = Pr(X = x, Y = y) / Pr(X = x)

Conditional Mean and Variance

µ_{X|Y=y} = E[X|Y = y] = ∑_{x ∈ S_X} x · Pr(X = x | Y = y),
µ_{Y|X=x} = E[Y|X = x] = ∑_{y ∈ S_Y} y · Pr(Y = y | X = x).

σ²_{X|Y=y} = var(X|Y = y) = ∑_{x ∈ S_X} (x − µ_{X|Y=y})² · Pr(X = x | Y = y),
σ²_{Y|X=x} = var(Y|X = x) = ∑_{y ∈ S_Y} (y − µ_{Y|X=x})² · Pr(Y = y | X = x).

Example

E[X] = 0·1/8 + 1·3/8 + 2·3/8 + 3·1/8 = 3/2

E[X|Y = 0] = 0·1/4 + 1·1/2 + 2·1/4 + 3·0 = 1,
E[X|Y = 1] = 0·0 + 1·1/4 + 2·1/2 + 3·1/4 = 2,

var(X) = (0 − 3/2)²·1/8 + (1 − 3/2)²·3/8 + (2 − 3/2)²·3/8 + (3 − 3/2)²·1/8 = 3/4,

var(X|Y = 0) = (0 − 1)²·1/4 + (1 − 1)²·1/2 + (2 − 1)²·1/4 + (3 − 1)²·0 = 1/2,
var(X|Y = 1) = (0 − 2)²·0 + (1 − 2)²·1/4 + (2 − 2)²·1/2 + (3 − 2)²·1/4 = 1/2.
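The conditional moments above can be verified mechanically from the joint table. A sketch in Python with exact fractions:

```python
# Conditional mean and variance of X given Y = y, computed from the joint
# table in the example (rows index x = 0..3, columns index y = 0, 1).
from fractions import Fraction as F

p = [[F(1, 8), F(0)], [F(2, 8), F(1, 8)], [F(1, 8), F(2, 8)], [F(0), F(1, 8)]]

def cond_mean_var_x(y):
    p_y = sum(p[x][y] for x in range(4))             # marginal Pr(Y = y)
    cond = [p[x][y] / p_y for x in range(4)]         # Pr(X = x | Y = y)
    mu = sum(x * cond[x] for x in range(4))          # E[X | Y = y]
    var = sum((x - mu) ** 2 * cond[x] for x in range(4))  # var(X | Y = y)
    return mu, var

print(cond_mean_var_x(0))  # (1, 1/2), matching the slide
print(cond_mean_var_x(1))  # (2, 1/2), matching the slide
```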

Independence

Let X and Y be discrete rvs with pdfs p(x), p(y), sample spaces S_X, S_Y and joint pdf p(x, y). Then X and Y are independent rv's if and only if:

p(x, y) = p(x) · p(y) for all values of x ∈ S_X and y ∈ S_Y

Result: If X and Y are independent rv's then,

p(x|y) = p(x) for all x ∈ S_X, y ∈ S_Y
p(y|x) = p(y) for all x ∈ S_X, y ∈ S_Y

Intuition:
- Knowledge of X does not influence probabilities associated with Y
- Knowledge of Y does not influence probabilities associated with X
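For the example table, independence fails: p(0, 0) = 1/8, while p_X(0)·p_Y(0) = (1/8)·(1/2) = 1/16. A quick factorization check in Python:

```python
# Test the factorization p(x, y) = p(x) * p(y) on the example joint table.
from fractions import Fraction as F

p = [[F(1, 8), F(0)], [F(2, 8), F(1, 8)], [F(1, 8), F(2, 8)], [F(0), F(1, 8)]]
p_x = [sum(row) for row in p]
p_y = [sum(p[i][j] for i in range(4)) for j in range(2)]

independent = all(p[x][y] == p_x[x] * p_y[y]
                  for x in range(4) for y in range(2))
print(independent)  # False: X and Y in this table are dependent
```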

Bivariate Distributions - Continuous rv's

The joint pdf of X and Y is a non-negative function f(x, y) such that:

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1

Let [x₁, x₂] and [y₁, y₂] be intervals on the real line. Then,

Pr(x₁ ≤ X ≤ x₂, y₁ ≤ Y ≤ y₂) = ∫_{x₁}^{x₂} ∫_{y₁}^{y₂} f(x, y) dy dx
= volume under the probability surface over the intersection of the intervals [x₁, x₂] and [y₁, y₂]

Marginal and conditional distributions

The marginal pdf of X is found by integrating y out of the joint pdf f(x, y), and the marginal pdf of Y is found by integrating x out of the joint pdf:

f(x) = ∫_{−∞}^{∞} f(x, y) dy,
f(y) = ∫_{−∞}^{∞} f(x, y) dx.

The conditional pdf of X given that Y = y, denoted f(x|y), is computed as,

f(x|y) = f(x, y) / f(y),

and the conditional pdf of Y given that X = x is computed as,

f(y|x) = f(x, y) / f(x).

Conditional mean and variance

The conditional means are computed as:

µ_{X|Y=y} = E[X|Y = y] = ∫ x · f(x|y) dx,
µ_{Y|X=x} = E[Y|X = x] = ∫ y · f(y|x) dy,

and the conditional variances are computed as,

σ²_{X|Y=y} = var(X|Y = y) = ∫ (x − µ_{X|Y=y})² f(x|y) dx,
σ²_{Y|X=x} = var(Y|X = x) = ∫ (y − µ_{Y|X=x})² f(y|x) dy.

Independence

Let X and Y be continuous random variables. X and Y are independent iff:

f(x|y) = f(x) for −∞ < x, y < ∞,
f(y|x) = f(y) for −∞ < x, y < ∞.

Result: Let X and Y be continuous random variables. X and Y are independent iff:

f(x, y) = f(x) · f(y)

The result in the above proposition is extremely useful in practice because it gives us an easy way to compute the joint pdf for two independent random variables: we simply compute the product of the marginal distributions.

Example

Example: Bivariate standard normal distribution

Let X ~ N(0, 1), Y ~ N(0, 1), and let X and Y be independent. Then,

f(x, y) = f(x) · f(y) = (1/√(2π)) e^{−x²/2} · (1/√(2π)) e^{−y²/2} = (1/(2π)) e^{−(x² + y²)/2}.

To find Pr(−1 < X < 1, −1 < Y < 1) we must solve,

∫_{−1}^{1} ∫_{−1}^{1} (1/(2π)) e^{−(x² + y²)/2} dx dy,

which, unfortunately, does not have an analytical solution. Numerical approximation methods are required to evaluate the above integral. See the R package mvtnorm.
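Because X and Y are independent here, the double integral factors into (Pr(−1 < X < 1))², which can be evaluated with the univariate normal cdf Φ. A sketch in Python using only the standard library, via the error function (Φ(z) = (1 + erf(z/√2))/2):

```python
import math

def std_normal_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Pr(-1 < X < 1) for X ~ N(0, 1)
p_one_dim = std_normal_cdf(1.0) - std_normal_cdf(-1.0)

# By independence, the bivariate probability over the square factors:
p_square = p_one_dim ** 2
print(round(p_square, 4))  # approximately 0.4661
```

For the general (correlated) bivariate normal the integral no longer factors, which is where numerical routines such as mvtnorm's pmvnorm come in.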

Independence cont.

Result: If the random variables X and Y (discrete or continuous) are independent, then the random variables g(X) and h(Y) are independent for any functions g(·) and h(·).

For example, if X and Y are independent, then X² and Y² are also independent.

Covariance and Correlation - Measuring linear dependence between two rv's

Covariance: Measures direction but not strength of linear relationship between 2 rv's

σ_XY = E[(X − µ_X)(Y − µ_Y)]
= ∑_{(x,y) ∈ S_XY} (x − µ_X)(y − µ_Y) · p(x, y) (discrete)
= ∫∫ (x − µ_X)(y − µ_Y) f(x, y) dx dy (cts)

Example

Example: For the data in Table 2, we have

σ_XY = Cov(X, Y) = (0 − 3/2)(0 − 1/2)·1/8 + (0 − 3/2)(1 − 1/2)·0 + ... + (3 − 3/2)(1 − 1/2)·1/8 = 1/4
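The same sum can be carried out in code directly from the joint table. A sketch in Python:

```python
# Covariance of X and Y computed directly from the joint table in the example.
from fractions import Fraction as F

p = [[F(1, 8), F(0)], [F(2, 8), F(1, 8)], [F(1, 8), F(2, 8)], [F(0), F(1, 8)]]

mu_x = sum(x * p[x][y] for x in range(4) for y in range(2))  # E[X] = 3/2
mu_y = sum(y * p[x][y] for x in range(4) for y in range(2))  # E[Y] = 1/2

cov_xy = sum((x - mu_x) * (y - mu_y) * p[x][y]
             for x in range(4) for y in range(2))
print(cov_xy)  # 1/4
```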

Properties of Covariance

Cov(X, Y) = Cov(Y, X)
Cov(aX, bY) = a·b·Cov(X, Y) = a·b·σ_XY
Cov(X, X) = Var(X)
X, Y independent ⟹ Cov(X, Y) = 0
Cov(X, Y) = 0 does not imply that X and Y are independent
Cov(X, Y) = E[XY] − E[X]·E[Y]

Correlation

Correlation: Measures direction and strength of linear relationship between 2 rv's

ρ_XY = Cor(X, Y) = Cov(X, Y) / (SD(X) · SD(Y)) = σ_XY / (σ_X · σ_Y) = scaled covariance

Example

Example: For the data in Table 2,

ρ_XY = Cor(X, Y) = (1/4) / (√(3/4) · √(1/4)) = 0.577

Properties of Correlation

−1 ≤ ρ_XY ≤ 1
ρ_XY = 1 if Y = aX + b and a > 0
ρ_XY = −1 if Y = aX + b and a < 0
ρ_XY = 0 if and only if σ_XY = 0
ρ_XY = 0 does not imply that X and Y are independent in general
ρ_XY = 0 ⟹ independence if X and Y are normal

Bivariate normal distribution

Let X and Y be distributed bivariate normal. The joint pdf is given by:

f(x, y) = 1 / (2π σ_X σ_Y √(1 − ρ²)) · exp{ −1/(2(1 − ρ²)) · [ ((x − µ_X)/σ_X)² + ((y − µ_Y)/σ_Y)² − 2ρ(x − µ_X)(y − µ_Y)/(σ_X σ_Y) ] }

where E[X] = µ_X, E[Y] = µ_Y, SD(X) = σ_X, SD(Y) = σ_Y, and ρ = cor(X, Y).
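The density above transcribes directly into code. A minimal sketch in Python (standard library only); note that at the mean point the exponential term equals 1, so the density reduces to the normalizing constant 1/(2π σ_X σ_Y √(1 − ρ²)):

```python
import math

def bivariate_normal_pdf(x, y, mu_x, mu_y, sigma_x, sigma_y, rho):
    """Joint pdf of a bivariate normal, transcribed from the formula above."""
    zx = (x - mu_x) / sigma_x
    zy = (y - mu_y) / sigma_y
    norm_const = 1.0 / (2.0 * math.pi * sigma_x * sigma_y * math.sqrt(1.0 - rho**2))
    quad = zx**2 + zy**2 - 2.0 * rho * zx * zy
    return norm_const * math.exp(-quad / (2.0 * (1.0 - rho**2)))

# At (mu_x, mu_y) the quadratic form is zero, so the pdf equals the constant.
val = bivariate_normal_pdf(0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0)
print(val)  # 1/(2*pi) for the standard case with rho = 0
```

With ρ = 0 this factors into the product of two univariate N(0, 1) densities, matching the independence example earlier.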

Linear Combination of 2 rv's

Let X and Y be rv's. Define a new rv Z that is a linear combination of X and Y:

Z = aX + bY

where a and b are constants. Then,

µ_Z = E[Z] = E[aX + bY] = a·E[X] + b·E[Y] = a·µ_X + b·µ_Y

Linear Combination of 2 rv's cont.

and,

σ²_Z = Var(Z) = Var(aX + bY)
= a²·Var(X) + b²·Var(Y) + 2ab·Cov(X, Y)
= a²σ²_X + b²σ²_Y + 2ab·σ_XY

If X ~ N(µ_X, σ²_X) and Y ~ N(µ_Y, σ²_Y), then Z ~ N(µ_Z, σ²_Z).
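The variance formula can be checked exactly against the discrete example table: with a = b = 1, Var(X + Y) should equal σ²_X + σ²_Y + 2σ_XY = 3/4 + 1/4 + 2·(1/4) = 3/2. A sketch in Python:

```python
# Check Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y) exactly,
# using the joint table from the discrete example with a = b = 1.
from fractions import Fraction as F

p = [[F(1, 8), F(0)], [F(2, 8), F(1, 8)], [F(1, 8), F(2, 8)], [F(0), F(1, 8)]]

def E(g):
    """Expectation of g(x, y) under the joint pmf."""
    return sum(g(x, y) * p[x][y] for x in range(4) for y in range(2))

var_x = E(lambda x, y: x**2) - E(lambda x, y: x)**2    # 3/4
var_y = E(lambda x, y: y**2) - E(lambda x, y: y)**2    # 1/4
cov_xy = E(lambda x, y: x*y) - E(lambda x, y: x) * E(lambda x, y: y)  # 1/4

# Direct variance of Z = X + Y versus the formula:
var_z_direct = E(lambda x, y: (x + y)**2) - E(lambda x, y: x + y)**2
var_z_formula = var_x + var_y + 2 * cov_xy
print(var_z_direct, var_z_formula)  # both equal 3/2
```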

Example

Example: Portfolio returns

R_A = return on asset A with E[R_A] = µ_A and Var(R_A) = σ²_A
R_B = return on asset B with E[R_B] = µ_B and Var(R_B) = σ²_B
Cov(R_A, R_B) = σ_AB and Cor(R_A, R_B) = ρ_AB = σ_AB / (σ_A · σ_B)

Portfolio:
x_A = share of wealth invested in asset A
x_B = share of wealth invested in asset B
x_A + x_B = 1 (exhaust all wealth in 2 assets)

R_P = x_A·R_A + x_B·R_B = portfolio return

Problem

Portfolio Problem: How much wealth should be invested in assets A and B?

Portfolio expected return (gain from investing):

E[R_P] = µ_P = E[x_A·R_A + x_B·R_B] = x_A·E[R_A] + x_B·E[R_B] = x_A·µ_A + x_B·µ_B

Problem cont.

Portfolio variance (risk from investing):

Var(R_P) = σ²_P = Var(x_A·R_A + x_B·R_B)
= x²_A·Var(R_A) + x²_B·Var(R_B) + 2·x_A·x_B·Cov(R_A, R_B)
= x²_A·σ²_A + x²_B·σ²_B + 2·x_A·x_B·σ_AB

SD(R_P) = σ_P = √(Var(R_P)) = (x²_A·σ²_A + x²_B·σ²_B + 2·x_A·x_B·σ_AB)^{1/2}
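These formulas translate directly into code. A sketch in Python; the input numbers (µ_A = 0.10, µ_B = 0.05, σ_A = 0.20, σ_B = 0.10, ρ_AB = 0.3, x_A = 0.5) are hypothetical, chosen only for illustration, not taken from the slides:

```python
import math

def portfolio_mean_sd(x_a, mu_a, mu_b, sigma_a, sigma_b, rho_ab):
    """Expected return and standard deviation of a two-asset portfolio."""
    x_b = 1.0 - x_a                        # shares sum to 1
    sigma_ab = rho_ab * sigma_a * sigma_b  # covariance from correlation
    mu_p = x_a * mu_a + x_b * mu_b
    var_p = x_a**2 * sigma_a**2 + x_b**2 * sigma_b**2 + 2 * x_a * x_b * sigma_ab
    return mu_p, math.sqrt(var_p)

# Hypothetical inputs for illustration only.
mu_p, sd_p = portfolio_mean_sd(0.5, 0.10, 0.05, 0.20, 0.10, 0.3)
print(mu_p, sd_p)
```

Note that because ρ_AB < 1, the portfolio standard deviation is less than the share-weighted average of σ_A and σ_B, which is the diversification effect.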

Linear Combination of N rv's

Let X_1, X_2, ..., X_N be rvs and let a_1, a_2, ..., a_N be constants. Define,

Z = a_1·X_1 + a_2·X_2 + ... + a_N·X_N = ∑_{i=1}^{N} a_i·X_i

Then,

µ_Z = E[Z] = a_1·E[X_1] + a_2·E[X_2] + ... + a_N·E[X_N] = ∑_{i=1}^{N} a_i·E[X_i] = ∑_{i=1}^{N} a_i·µ_i

Linear Combination of N rv's cont.

For the variance,

σ²_Z = Var(Z) = a²_1·Var(X_1) + ... + a²_N·Var(X_N)
+ 2a_1a_2·Cov(X_1, X_2) + 2a_1a_3·Cov(X_1, X_3) + ...
+ 2a_2a_3·Cov(X_2, X_3) + 2a_2a_4·Cov(X_2, X_4) + ...
+ 2a_{N−1}a_N·Cov(X_{N−1}, X_N)

Note: N variance terms and N(N − 1) = N² − N covariance terms. If N = 100, there are 100·99 = 9900 covariance terms!

Result: If X_1, X_2, ..., X_N are each normally distributed random variables then,

Z = ∑_{i=1}^{N} a_i·X_i ~ N(µ_Z, σ²_Z)

Example

Example: Portfolio variance with three assets

R_A, R_B, R_C are simple returns on assets A, B and C
x_A, x_B, x_C are portfolio shares such that x_A + x_B + x_C = 1
R_P = x_A·R_A + x_B·R_B + x_C·R_C

Portfolio variance,

σ²_P = x²_A·σ²_A + x²_B·σ²_B + x²_C·σ²_C + 2x_Ax_B·σ_AB + 2x_Ax_C·σ_AC + 2x_Bx_C·σ_BC

Note: The portfolio variance calculation may be simplified using a matrix layout, σ²_P = x'Σx, where x = (x_A, x_B, x_C)' and

Σ = [ σ²_A  σ_AB  σ_AC ]
    [ σ_AB  σ²_B  σ_BC ]
    [ σ_AC  σ_BC  σ²_C ]
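The matrix layout is convenient in code. A sketch in Python with NumPy; the covariance numbers are hypothetical, chosen only to illustrate that x'Σx matches the expanded scalar formula:

```python
import numpy as np

# Hypothetical portfolio shares and covariance matrix (symmetric, for illustration).
x = np.array([0.4, 0.4, 0.2])                 # x_A, x_B, x_C sum to 1
Sigma = np.array([[0.0400, 0.0060, 0.0040],   # variances on the diagonal,
                  [0.0060, 0.0100, 0.0020],   # covariances off the diagonal
                  [0.0040, 0.0020, 0.0225]])

# Matrix form: sigma_P^2 = x' Sigma x
var_matrix = x @ Sigma @ x

# Scalar expansion from the slide, term by term:
var_scalar = (x[0]**2 * Sigma[0, 0] + x[1]**2 * Sigma[1, 1] + x[2]**2 * Sigma[2, 2]
              + 2 * x[0] * x[1] * Sigma[0, 1]
              + 2 * x[0] * x[2] * Sigma[0, 2]
              + 2 * x[1] * x[2] * Sigma[1, 2])

print(var_matrix, var_scalar)  # identical up to floating-point rounding
```

The matrix form scales to any N without writing out the N² − N covariance terms by hand.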

Example

Example: Multi-period continuously compounded returns and the square-root-of-time rule

r_t = ln(1 + R_t) = monthly cc return
r_t ~ N(µ, σ²) for all t
Cov(r_t, r_s) = 0 for all t ≠ s

Annual return,

r_t(12) = ∑_{j=0}^{11} r_{t−j} = r_t + r_{t−1} + ... + r_{t−11}

Example cont.

Then,

E[r_t(12)] = ∑_{j=0}^{11} E[r_{t−j}] = ∑_{j=0}^{11} µ (E[r_t] = µ for all t)
= 12µ (µ = mean of monthly return)

Example cont.

And,

Var(r_t(12)) = Var( ∑_{j=0}^{11} r_{t−j} ) = ∑_{j=0}^{11} Var(r_{t−j}) = ∑_{j=0}^{11} σ² = 12σ² (σ² = monthly variance)

SD(r_t(12)) = √12·σ (square root of time rule)

Then,

r_t(12) ~ N(12µ, 12σ²)

Example cont.

For example, suppose:

r_t ~ N(0.01, (0.10)²)

Then,

E[r_t(12)] = 12·(0.01) = 0.12
Var(r_t(12)) = 12·(0.10)² = 0.12
SD(r_t(12)) = √0.12 = 0.346
r_t(12) ~ N(0.12, (0.346)²)

Example cont.

And,

(q^r_α)_A = 12µ + √12·σ·z_α = 0.12 + 0.346·z_α

(q^R_α)_A = e^{(q^r_α)_A} − 1 = e^{0.12 + 0.346·z_α} − 1
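These annual quantiles can be evaluated with the standard library's normal distribution. A sketch in Python for the numeric example above (monthly µ = 0.01, σ = 0.10); the 5% level is my choice for illustration:

```python
from statistics import NormalDist
import math

mu_annual = 12 * 0.01              # 12*mu = 0.12
sd_annual = math.sqrt(12) * 0.10   # sqrt(12)*sigma, approximately 0.346

# 5% quantile of the annual cc return r_t(12) ~ N(0.12, 0.346^2)
q_r = NormalDist(mu_annual, sd_annual).inv_cdf(0.05)

# Convert the cc-return quantile to a simple-return quantile:
q_R = math.exp(q_r) - 1.0
print(round(q_r, 3), round(q_R, 3))
```

The second line implements the e^{q^r} − 1 conversion from the slide: a cc-return quantile maps monotonically to a simple-return quantile.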

Covariance between two linear combinations of random variables

Consider two linear combinations of two random variables:

X = X_1 + X_2
Y = Y_1 + Y_2

Then,

cov(X, Y) = cov(X_1 + X_2, Y_1 + Y_2)
= cov(X_1, Y_1) + cov(X_1, Y_2) + cov(X_2, Y_1) + cov(X_2, Y_2)

The result generalizes to linear combinations of N random variables in the obvious way.

faculty.washington.edu/ezivot/