
Short Guides to Microeconometrics, Fall 2018
Kurt Schmidheiny, Universität Basel
Version: 17-9-2018, 21:19

Elements of Probability Theory (matrix-free)

Contents

1 Random Variables and Distributions
  1.1 Univariate Random Variables and Distributions
  1.2 Bivariate Random Variables and Distributions
2 Moments
  2.1 Expected Value or Mean
  2.2 Variance and Standard Deviation
  2.3 Higher Order Moments
  2.4 Covariance and Correlation
  2.5 Conditional Expectation and Variance
3 Random Vectors and Random Matrices
4 Important Distributions
  4.1 Univariate Normal Distribution
  4.2 Bivariate Normal Distribution
  4.3 Multivariate Normal Distribution

1 Random Variables and Distributions

A random variable is a variable whose values are determined by a probability distribution. This is a casual way of defining random variables, but it is sufficient for our level of analysis. In more advanced probability theory, a random variable is defined as a real-valued function over some probability space. In sections 1 to 3, a random variable is denoted by a capital letter, e.g. X, whereas its realizations are denoted by small letters, e.g. x.

1.1 Univariate Random Variables and Distributions

A univariate discrete random variable is a variable that takes a countable number K of real values with certain probabilities. The probability that the random variable X takes the value x_k among the K possible realizations is given by the probability distribution

    P(X = x_k) = P(x_k) = p_k,   k = 1, 2, ..., K.

K may be infinite in some cases. This can also be written as

    P(x_k) = { p_1  if X = x_1
             { p_2  if X = x_2
             {  ...
             { p_K  if X = x_K

Note that ∑_k p_k = 1.
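
A quick numerical sketch of such a discrete distribution (Python with NumPy; the support points and probabilities are hypothetical values chosen for illustration, not from the text):

```python
import numpy as np

# A hypothetical discrete random variable with K = 3 possible realizations.
x_k = np.array([0.0, 1.0, 2.5])    # realizations x_1, x_2, x_3
p_k = np.array([0.2, 0.5, 0.3])    # probabilities p_1, p_2, p_3

# The probabilities must sum to one: sum_k p_k = 1.
assert np.isclose(p_k.sum(), 1.0)

# Simulate draws and compare the relative frequency of x = 1.0 with p_2 = 0.5.
rng = np.random.default_rng(0)
draws = rng.choice(x_k, size=100_000, p=p_k)
print((draws == 1.0).mean())       # -> approximately 0.5
```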

A univariate continuous random variable is a variable that takes a continuum of values on the real line. The distribution of a continuous random variable X can be characterized by a density function or probability density function (pdf) f(x). The nonnegative function f(x) is such that

    P(x_1 ≤ X ≤ x_2) = ∫_{x_1}^{x_2} f(x) dx

defines the probability that X takes a value in the interval [x_1, x_2]. Note that there is no chance that X takes exactly the value x: P(X = x) = 0. The probability that X takes any value on the real line is

    ∫_{−∞}^{∞} f(x) dx = 1.

The distribution of a univariate random variable X is alternatively described by the cumulative distribution function (cdf)

    F(x) = P(X < x).

The cdf of a discrete random variable X is

    F(x) = ∑_{x_k < x} P(x_k) = ∑_{x_k < x} p_k

and of a continuous random variable X

    F(x) = ∫_{−∞}^{x} f(t) dt.

F(x) has the following properties:

- F(x) is monotonically nondecreasing
- F(−∞) = 0 and F(∞) = 1
- F(x) is continuous to the left

1.2 Bivariate Random Variables and Distributions

A bivariate continuous random variable is a variable that takes a continuum of values in the plane. The distribution of a bivariate continuous random variable (X, Y) can be characterized by a joint density function or joint probability density function f(x, y). The nonnegative function f(x, y) is such that

    P(x_1 ≤ X ≤ x_2, y_1 ≤ Y ≤ y_2) = ∫_{x_1}^{x_2} ∫_{y_1}^{y_2} f(x, y) dy dx

defines the probability that X and Y take values in the intervals [x_1, x_2] and [y_1, y_2], respectively. Note that

    ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dy dx = 1.

The marginal density function or marginal probability density function is given by

    f(x) = ∫_{−∞}^{∞} f(x, y) dy

such that

    P(x_1 ≤ X ≤ x_2) = P(x_1 ≤ X ≤ x_2, −∞ < Y < ∞) = ∫_{x_1}^{x_2} f(x) dx.

The conditional density function or conditional probability density function with respect to the event {X = x} is given by

    f(y|x) = f(x, y) / f(x),

provided that f(x) > 0. Note that ∫_{−∞}^{∞} f(y|x) dy = 1.

Two random variables X and Y are called independent if and only if

    f(x, y) = f(x) · f(y)

or, equivalently, f(y|x) = f(y). If X and Y are independent, then

    P(x_1 ≤ X ≤ x_2, y_1 ≤ Y ≤ y_2) = P(x_1 ≤ X ≤ x_2) · P(y_1 ≤ Y ≤ y_2).

More generally, if a finite set of n continuous random variables X_1, X_2, X_3, ..., X_n are mutually independent, then

    f(x_1, x_2, x_3, ..., x_n) = f(x_1) · f(x_2) · f(x_3) · ... · f(x_n).
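
These relations between joint, marginal, and conditional densities can be verified numerically. A minimal sketch (Python with SciPy; the joint pdf f(x, y) = x + y on the unit square is a hypothetical example, not from the text):

```python
import numpy as np
from scipy import integrate

# Hypothetical joint pdf on the unit square: f(x, y) = x + y for 0 <= x, y <= 1.
f = lambda y, x: x + y                     # dblquad expects func(y, x)

# Total probability integrates to 1.
total, _ = integrate.dblquad(f, 0, 1, lambda x: 0.0, lambda x: 1.0)
print(total)                               # -> 1.0

# Marginal density: f(x) = int_0^1 (x + y) dy = x + 1/2, evaluated at x = 0.3.
x0 = 0.3
fx0, _ = integrate.quad(lambda y: x0 + y, 0, 1)
print(fx0)                                 # -> 0.8

# The conditional density f(y | x) = f(x, y) / f(x) integrates to 1 over y.
cond, _ = integrate.quad(lambda y: (x0 + y) / fx0, 0, 1)
print(cond)                                # -> 1.0
```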

2 Moments

2.1 Expected Value or Mean

The expected value or mean of a discrete random variable with probability distribution P(x_k), k = 1, 2, ..., K, is defined as

    E[X] = ∑_k x_k P(x_k)

if the series converges absolutely. The expected value or mean of a continuous univariate random variable with density function f(x) is defined as

    E[X] = ∫_{−∞}^{∞} x f(x) dx

if the integral exists.

For a random variable Z which is a continuous function φ of a discrete random variable X, we have

    E[Z] = E[φ(X)] = ∑_k φ(x_k) P(x_k).

For a random variable Z which is a continuous function φ of a continuous random variable X, we have

    E[Z] = E[φ(X)] = ∫_{−∞}^{∞} φ(x) f(x) dx

and for a continuous function φ of the continuous random variables X and Y,

    E[Z] = E[φ(X, Y)] = ∫∫ φ(x, y) f(x, y) dx dy.

The following rules hold in general, i.e. for discrete, continuous, and mixed types of random variables:

    E[α] = α
    E[αX + βY] = α E[X] + β E[Y]
    E[∑_{i=1}^n X_i] = ∑_{i=1}^n E[X_i]
    E[XY] = E[X] · E[Y]   if X and Y are independent

where α ∈ R and β ∈ R are constants.

2.2 Variance and Standard Deviation

The variance of a univariate random variable X is defined as

    V[X] = E[(X − E[X])²] = E[X²] − (E[X])².

The variance has the following properties:

- V[X] ≥ 0
- V[X] = 0 if and only if X = E[X]

The following rules hold in general, i.e. for discrete, continuous, and mixed types of random variables:

    V[αX + β] = α² V[X]
    V[X + Y] = V[X] + V[Y] + 2 Cov[X, Y]
    V[X − Y] = V[X] + V[Y] − 2 Cov[X, Y]
    V[∑_{i=1}^n X_i] = ∑_{i=1}^n V[X_i]   if X_i and X_j are independent for all i ≠ j

where α ∈ R and β ∈ R are constants. Instead of the variance, one often considers the standard deviation σ_X = √V[X].
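
The expectation and variance rules above are easy to check by simulation. A minimal sketch (Python with NumPy; the particular distributions of X and Y are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
x = rng.exponential(scale=2.0, size=n)     # hypothetical X
y = 0.5 * x + rng.normal(size=n)           # hypothetical Y, correlated with X
a, b = 2.0, 5.0

# Linearity: E[aX + bY] = a E[X] + b E[Y]  (no independence needed)
print((a * x + b * y).mean(), a * x.mean() + b * y.mean())

# V[aX + b] = a^2 V[X]  (the additive constant drops out)
print(np.var(a * x + b), a**2 * np.var(x))

# V[X + Y] = V[X] + V[Y] + 2 Cov[X, Y]
cov_xy = np.cov(x, y, ddof=0)[0, 1]
print(np.var(x + y), np.var(x) + np.var(y) + 2 * cov_xy)
```

Using ddof=0 in np.cov matches the population convention of np.var, so the variance identity holds exactly in the sample, not just approximately.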

2.3 Higher Order Moments

The j-th moment around the mean (the j-th central moment) is defined as

    E[(X − E[X])^j].

2.4 Covariance and Correlation

The covariance between two random variables X and Y is defined as

    Cov[X, Y] = E[(X − E[X])(Y − E[Y])]
              = E[XY] − E[X] E[Y]
              = E[(X − E[X]) Y]
              = E[X (Y − E[Y])].

The following rules hold in general, i.e. for discrete, continuous, and mixed types of random variables:

    Cov[αX + γ, βY + µ] = αβ Cov[X, Y]
    Cov[X_1 + X_2, Y_1 + Y_2] = Cov[X_1, Y_1] + Cov[X_1, Y_2] + Cov[X_2, Y_1] + Cov[X_2, Y_2]
    Cov[X, Y] = 0   if X and Y are independent

where α ∈ R, β ∈ R, γ ∈ R and µ ∈ R are constants.

The correlation coefficient between two random variables X and Y is defined as

    ρ_{X,Y} = Cov[X, Y] / (σ_X σ_Y)

where σ_X and σ_Y denote the corresponding standard deviations. The correlation coefficient is bounded: −1 ≤ ρ_{X,Y} ≤ 1. The following rules hold:

    ρ_{αX+γ, βY+µ} = ρ_{X,Y}   if αβ > 0
    ρ_{X,Y} = 0   if X and Y are independent

where α ∈ R, β ∈ R, γ ∈ R and µ ∈ R are constants. We say that

- X and Y are uncorrelated if ρ = 0
- X and Y are positively correlated if ρ > 0
- X and Y are negatively correlated if ρ < 0

2.5 Conditional Expectation and Variance

Let (X, Y) be a bivariate discrete random variable and P(y_k | X) the conditional probability of Y = y_k given X. Then the conditional expected value or conditional mean of Y given X is

    E[Y|X] = E_{Y|X}[Y] = ∑_k y_k P(y_k | X).

Let (X, Y) be a bivariate continuous random variable and f(y|x) the conditional density of Y given X. Then the conditional expected value or conditional mean of Y given X is

    E[Y|X] = E_{Y|X}[Y] = ∫_{−∞}^{∞} y f(y|X) dy.

The law of iterated means or law of iterated expectations holds in general, i.e. for discrete, continuous, or mixed random variables:

    E_X[E[Y|X]] = E[Y].

The conditional variance of Y given X is given by

    V[Y|X] = E[(Y − E[Y|X])² | X] = E_{Y|X}[Y²] − (E[Y|X])².

The law of total variance is

    V[Y] = E_X[V[Y|X]] + V_X[E[Y|X]].
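
The law of iterated expectations and the law of total variance can likewise be checked by simulation. A minimal sketch (Python with NumPy; the two-stage model for (X, Y) is a hypothetical example):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Hypothetical hierarchy: X ~ Uniform(0, 1), then Y | X ~ Normal(2X, 1),
# so E[Y|X] = 2X and V[Y|X] = 1.
x = rng.uniform(0.0, 1.0, size=n)
y = rng.normal(loc=2.0 * x, scale=1.0)

# Law of iterated expectations: E_X[E[Y|X]] = E[Y].
print((2.0 * x).mean(), y.mean())          # both -> approximately 1.0

# Law of total variance: V[Y] = E_X[V[Y|X]] + V_X[E[Y|X]]
# Here E_X[V[Y|X]] = 1 and V_X[E[Y|X]] = V[2X] = 4/12 = 1/3.
print(np.var(y), 1.0 + np.var(2.0 * x))    # both -> approximately 1.333
```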

3 Random Vectors and Random Matrices

In the matrix version of this guide, this section generalizes the rules about moments to random vectors and random matrices.

4 Important Distributions

4.1 Univariate Normal Distribution

The density of the univariate normal distribution is given by

    f(x) = (1 / (σ √(2π))) e^{−(1/2)((x − µ)/σ)²}.

The normal distribution is characterized by the two parameters µ and σ. The mean of the normal distribution is E[X] = µ and the variance is V[X] = σ². We write X ∼ N(µ, σ²). The univariate normal distribution with mean µ = 0 and variance σ² = 1 is called the standard normal distribution, N(0, 1).

4.2 Bivariate Normal Distribution

The density of the bivariate normal distribution is

    f(x, y) = (1 / (2π σ_X σ_Y √(1 − ρ²))) ·
              exp{ −1/(2(1 − ρ²)) [ ((x − µ_X)/σ_X)² + ((y − µ_Y)/σ_Y)² − 2ρ ((x − µ_X)/σ_X)((y − µ_Y)/σ_Y) ] }.

If (X, Y) follows a bivariate normal distribution, then:

- The marginal densities f(x) and f(y) are univariate normal.
- The conditional densities f(x|y) and f(y|x) are univariate normal.
- E[X] = µ_X, V[X] = σ_X², E[Y] = µ_Y, V[Y] = σ_Y².
- The correlation coefficient between X and Y is ρ_{X,Y} = ρ.
- E[Y|X] = µ_Y + ρ (σ_Y/σ_X)(X − µ_X) and V[Y|X] = σ_Y² (1 − ρ²).

The above properties characterize the normal distribution: it is the only distribution with all these properties.

Further important properties:

- If (X, Y) follows a bivariate normal distribution, then any linear combination aX + bY is also normally distributed:

      aX + bY ∼ N(a µ_X + b µ_Y, a² σ_X² + b² σ_Y² + 2ab ρ σ_X σ_Y).

  The reverse implication is not true.

- If X and Y are bivariate normally distributed with Cov[X, Y] = 0, then X and Y are independent.
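
The conditional-moment formulas E[Y|X] and V[Y|X] of the bivariate normal can be checked by simulation. A minimal sketch (Python with NumPy; all parameter values are hypothetical):

```python
import numpy as np

# Hypothetical parameters (mu_X, mu_Y, sigma_X, sigma_Y, rho).
mu_x, mu_y, s_x, s_y, rho = 1.0, -2.0, 1.5, 0.5, 0.6
cov = np.array([[s_x**2,          rho * s_x * s_y],
                [rho * s_x * s_y, s_y**2        ]])

rng = np.random.default_rng(0)
xy = rng.multivariate_normal([mu_x, mu_y], cov, size=1_000_000)
x, y = xy[:, 0], xy[:, 1]

# Compare the simulated conditional moments near x0 = 2.0 with
# E[Y|X] = mu_Y + rho * (s_y/s_x) * (x0 - mu_X) and V[Y|X] = s_y^2 * (1 - rho^2).
x0 = 2.0
sel = np.abs(x - x0) < 0.05                # crude conditioning window around x0
print(y[sel].mean(), mu_y + rho * (s_y / s_x) * (x0 - mu_x))   # both -> -1.8
print(y[sel].var(),  s_y**2 * (1 - rho**2))                    # both -> 0.16
```

The narrow window around x0 approximates conditioning on the event {X = x0}; shrinking it tightens the match at the cost of fewer retained draws.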

4.3 Multivariate Normal Distribution

If the n variables (X_1, X_2, ..., X_n) follow a multivariate normal distribution, then:

- The marginal densities f(x_i) for i = 1, 2, ..., n are univariate normal.
- The conditional densities f(x_i | ·), conditional on one or a subset of the other n − 1 variables, are univariate normal.
- Any linear combination α_1 X_1 + α_2 X_2 + ... + α_n X_n, where α_1, ..., α_n are constants, is univariate normal.

References

Amemiya, Takeshi (1994), Introduction to Statistics and Econometrics, Cambridge: Harvard University Press.

Hayashi, Fumio (2000), Econometrics, Princeton: Princeton University Press. Appendix A.

Stock, James H. and Mark W. Watson (2012), Introduction to Econometrics, 3rd ed., Pearson Addison-Wesley. Chapter 2.