EXPECTED VALUE of a RV


EXPECTED VALUE of a RV corresponds to the average value one would get for the RV when repeating the experiment, independently, infinitely many times. More accurately, consider a Random Independent Sample (RIS) of n values of X (e.g. 0, 2, 1, 0, 1, 1, 2, 1, 1, 0) and the corresponding Sample Mean

X̄ = (Σ_{j=1}^n X_j)/n = Σ_{all i} i·f_i = 0.9

where the f_i are the observed relative frequencies (of each possible value). We know that, as n (the sample size) increases, each f_i tends to the corresponding Pr(X = i). This implies that, in the same limit, X̄ tends to Σ_{all i} i·Pr(X = i), which we denote E(X) and call the expected (mean) value of X (so, in the end, the expected value is computed from the theoretical probabilities, not by doing the experiment). Note that Σ_{all i} i·Pr(X = i) is a weighted average of all possible values of X, each weighted by its probability. And, as we have two names for it, we also have two alternate notations: E(X) is sometimes denoted µ_x (the Greek "mu").

EXAMPLES:

When X is the number of dots in a single roll of a die, we get E(X) = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5 (for a symmetric distribution, the mean is at the centre of symmetry; for any other distribution, it is at the "center of mass"). Note that the name "expected value" is somewhat misleading.

Let Y be the larger of the two numbers when rolling two dice. Then E(Y) = 1×(1/36) + 2×(3/36) + 3×(5/36) + 4×(7/36) + 5×(9/36) + 6×(11/36) = 161/36 = 4.47.

Let U have the following (arbitrarily chosen) distribution:

U =     0    1    2    4
Prob:  0.3  0.2  0.4  0.1

Then E(U) = 0×0.3 + 1×0.2 + 2×0.4 + 4×0.1 = 1.4.
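The "long-run average" interpretation is easy to verify numerically. Below is a minimal simulation sketch (the sample sizes are arbitrary illustrative choices, not from the notes) that estimates E(X) for a single die and E(U) for the distribution above by brute-force sampling.

```python
import random

def sample_mean_die(n):
    """Average of n independent die rolls; tends to E(X) = 3.5 as n grows."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

def sample_mean_U(n):
    """Average of n independent draws from U (values 0,1,2,4, probs .3,.2,.4,.1)."""
    draws = random.choices([0, 1, 2, 4], weights=[0.3, 0.2, 0.4, 0.1], k=n)
    return sum(draws) / n

for n in (100, 10_000, 1_000_000):
    print(n, sample_mean_die(n), sample_mean_U(n))   # approaches 3.5 and 1.4
```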

What happens to the expected value of X when we transform it, i.e. define a new RV by U ≡ X² (or g(X) in general)? The main thing to remember is that, in general, the mean does not transform accordingly, i.e. µ_u ≠ g(µ_x). This is also true for a transformation of X and Y, i.e. E[h(X,Y)] ≠ h(µ_x, µ_y), etc. But (at least one piece of good news): to find the expected value of g(X), h(X,Y), ..., we can bypass constructing the new distribution (which was a tedious process) and use

E[g(X)] = Σ_{all i} g(i)·Pr(X = i)
E[h(X,Y)] = Σ_{all i,j} h(i,j)·Pr(X = i ∩ Y = j)

EXAMPLE: Based on U, we define W = U³ and compute E(W) = 0³×0.3 + 1³×0.2 + 2³×0.4 + 4³×0.1 = 9.8. Clearly, it would have been a mistake to use µ_u³ = 1.4³ = 2.744 (totally wrong). We can also get the correct answer the long way (just to verify the short answer), by first finding the distribution of W:

W = U³ =   0    1    8    64
Prob:     0.3  0.2  0.4  0.1

Based on this, E(W) = 0×0.3 + 1×0.2 + 8×0.4 + 64×0.1 = 0.2 + 3.2 + 6.4 = 9.8 (check).

Exception: Linear Transformations

Only for these can we find the mean of the new RV by simply replacing X by µ_x, thus:

E(aX + c) = a·µ_x + c
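Both the shortcut formula and the linear-transformation rule are one-liners to check numerically; here is a sketch, assuming the U distribution above (stored as a dictionary of value-probability pairs).

```python
def expect(pmf, g=lambda i: i):
    """E[g(X)] = sum of g(i) * Pr(X = i) over the support of the pmf."""
    return sum(g(i) * p for i, p in pmf.items())

U = {0: 0.3, 1: 0.2, 2: 0.4, 4: 0.1}

print(expect(U))                       # E(U)   = 1.4
print(expect(U, lambda u: u**3))       # E(U^3) = 9.8, NOT 1.4**3 = 2.744
print(expect(U, lambda u: 2*u - 3))    # linear case: 2*1.4 - 3 = -0.2
```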

Proof: E(aX + c) = Σ_{all i} (a·i + c)·f_x(i) = a·Σ_{all i} i·f_x(i) + c·Σ_{all i} f_x(i) = a·µ_x + c

EXAMPLE: E(2U − 3) = 2×1.4 − 3 = −0.2.

Expected values related to a bivariate distribution

When a bivariate distribution is given, the easiest way to compute the individual expected values (of X and Y) is through the marginals.

EXAMPLE: For a joint distribution of X (taking the values 1, 2, 3) and Y (taking the values 0, 1, 2), the marginal distributions are

X =     1    2    3        Y =     0     1     2
Prob:  0.4  0.1  0.5       Prob:  0.68  0.24  0.08

and we compute E(X) = 1×0.4 + 2×0.1 + 3×0.5 = 2.1 and E(Y) = 1×0.24 + 2×0.08 = 0.4.

This is also how we deal with an expected value of g₁(X) and/or g₂(Y). Only when the new variable is defined by h(X,Y) do we have to weight-average over the whole table. For example:

E(X·Y) = Σ_{all i,j} i·j·Pr(X = i ∩ Y = j) = 0.64

More EXAMPLES: E[(X − 1)²] = 0×0.4 + 1×0.1 + 4×0.5 = 2.1 and E[1/(1+Y)] = 0.68 + 0.24/2 + 0.08/3 ≈ 0.83. For E[(X − 1)²/(1+Y)], multiplying the last two results would be wrong, since the two factors are not independent. Here it may help to first build the corresponding table of the (X − 1)²/(1+Y) values, and then weight-average it by the joint probabilities.
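In code, "weight-averaging the whole table" is the same idea as before, just with a two-dimensional pmf. The joint table below is an illustrative stand-in: the notes pin down the marginals and a few expected values, and these cells were chosen to be consistent with them, not copied from the source.

```python
def expect_joint(joint, h):
    """E[h(X,Y)] = sum of h(i,j) * Pr(X = i and Y = j) over all cells."""
    return sum(h(i, j) * p for (i, j), p in joint.items())

# Illustrative joint pmf, keyed by (i, j); its columns sum to the X marginal
# and its rows to the Y marginal used in the text.
joint = {(1, 0): 0.22, (2, 0): 0.04, (3, 0): 0.42,
         (1, 1): 0.12, (2, 1): 0.04, (3, 1): 0.08,
         (1, 2): 0.06, (2, 2): 0.02, (3, 2): 0.00}

print(expect_joint(joint, lambda i, j: i))       # E(X)   = 2.1
print(expect_joint(joint, lambda i, j: j))       # E(Y)   = 0.4
print(expect_joint(joint, lambda i, j: i * j))   # E(X*Y) = 0.64
print(expect_joint(joint, lambda i, j: (i - 1)**2 / (1 + j)))   # E[(X-1)^2/(1+Y)]
```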

For a Linear Transformation of two RVs we get:

E(aX + bY + c) = a·µ_x + b·µ_y + c

Proof: E(aX + bY + c) = Σ_i Σ_j (a·i + b·j + c)·f_xy(i,j) = a·Σ_i i·f_x(i) + b·Σ_j j·f_y(j) + c = a·E(X) + b·E(Y) + c

Note that X and Y need not be independent!

EXAMPLE: Using the previous bivariate distribution, E(X − 3Y + 4) is simply 2.1 − 3×0.4 + 4 = 4.9.

The previous formula easily extends to any number of RVs (again, not necessarily independent!):

E(a₁X₁ + a₂X₂ + ... + a_kX_k + c) = a₁E(X₁) + a₂E(X₂) + ... + a_kE(X_k) + c

Can Independence help (in other cases of expected value)? Yes, the expected value of a product of RVs equals the product of the individual expected values, but ONLY when these RVs are independent:

E(X·Y) = E(X)·E(Y)

Proof: E(X·Y) = Σ_i Σ_j i·j·f_x(i)·f_y(j) = (Σ_i i·f_x(i))·(Σ_j j·f_y(j)) = E(X)·E(Y)

The statement can actually be made more general: when X and Y are independent,

E[g₁(X)·g₂(Y)] = E[g₁(X)]·E[g₂(Y)]
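A quick numerical check of the product rule, assuming the marginals above: build the joint table of an independent X and Y as the product of the marginals and compare E(X·Y) with E(X)·E(Y). (The dependent table of the earlier example gave E(X·Y) = 0.64, not 0.84, which is the point of the "ONLY when independent" warning.)

```python
def product_joint(pmf_x, pmf_y):
    """Joint pmf of independent X, Y: Pr(X=i and Y=j) = Pr(X=i) * Pr(Y=j)."""
    return {(i, j): px * py for i, px in pmf_x.items() for j, py in pmf_y.items()}

X = {1: 0.4, 2: 0.1, 3: 0.5}
Y = {0: 0.68, 1: 0.24, 2: 0.08}

joint = product_joint(X, Y)
E_XY = sum(i * j * p for (i, j), p in joint.items())
E_X = sum(i * p for i, p in X.items())
E_Y = sum(j * p for j, p in Y.items())
print(E_XY, E_X * E_Y)   # both 0.84; equal only because of independence
```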

Moments of a RV

There are two types of moments: Simple Moments E(X^k) and Central Moments E[(X − µ_x)^k], where k is an integer.

Special cases: the 0th moment is identically equal to 1. The first simple moment is µ_x (yet another name for it!). The second simple moment is E(X²), etc. The first central moment is identically equal to 0. The second central moment E[(X − µ_x)²] must be ≥ 0 (averaging non-negative quantities cannot result in a negative number). It is of such importance that it gets its own name: the variance of X, denoted Var(X). When doing the computation by hand, it helps to realize that

E[(X − µ_x)²] = E(X²) − µ_x²

As a measure of the spread of the distribution of X values, we take σ_x ≡ √Var(X) and call it the standard deviation of X. For all distributions, the (µ − σ, µ + σ) interval should contain the bulk of the distribution, i.e. anywhere from 50 to 90% of it (in terms of the corresponding histogram).

Finally, skewness is defined as E[(X − µ_x)³]/σ_x³ (it measures to what extent the distribution is non-symmetric, or "skewed"), and kurtosis as E[(X − µ_x)⁴]/σ_x⁴ (it measures the degree of "flatness", 3 being its typical value, higher for peaked, smaller for flat distributions). These are a lot less important than the mean and variance (later, we will understand why).

EXAMPLES:

X is the number of dots when rolling one die: µ_x = 7/2 and Var(X) = 91/6 − (7/2)² = 35/12, implying σ_x = √(35/12) = 1.708. Note that 3.5 ± 1.708 contains 66.7% of the distribution. Skewness, for a symmetric distribution, is always equal to 0.
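All four summaries are direct sums over the pmf; here is a small sketch that reproduces the die values above (and the U values computed next).

```python
def moments(pmf):
    """Mean, variance, skewness and kurtosis of a discrete distribution."""
    mu = sum(i * p for i, p in pmf.items())
    central = lambda k: sum((i - mu)**k * p for i, p in pmf.items())
    var = central(2)
    sd = var ** 0.5
    return mu, var, central(3) / sd**3, central(4) / sd**4

die = {i: 1/6 for i in range(1, 7)}
print(moments(die))   # (3.5, 2.9166..., 0.0, 1.7314...)

U = {0: 0.3, 1: 0.2, 2: 0.4, 4: 0.1}
print(moments(U))     # (1.4, 1.44, 0.5833..., 2.7870...)
```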

The kurtosis of the die distribution can be computed from E[(X − µ)⁴] = [(2.5)⁴ + (1.5)⁴ + (0.5)⁴]×(2/6) = 14.729, giving kurtosis = 14.729/(35/12)² = 1.731 ("flat").

For U (values 0, 1, 2, 4 with probabilities 0.3, 0.2, 0.4, 0.1): µ_u = 1.4 and Var(U) = 0²×0.3 + 1²×0.2 + 2²×0.4 + 4²×0.1 − 1.4² = 3.4 − 1.96 = 1.44, so σ_u = √1.44 = 1.2. From E[(U − µ_u)³] = (−1.4)³×0.3 + (−0.4)³×0.2 + 0.6³×0.4 + 2.6³×0.1 = 1.008, the skewness is 1.008/1.2³ = 0.583 (long right tail), and from E[(U − µ_u)⁴] = (−1.4)⁴×0.3 + (−0.4)⁴×0.2 + 0.6⁴×0.4 + 2.6⁴×0.1 = 5.779, the kurtosis equals 5.779/1.2⁴ = 2.787.

When X is transformed to Y = g(X), we already know that there is no general shortcut for computing E(Y). This (even more so) applies to the variance of Y, which also needs to be computed from scratch. But we did manage to simplify the expected value of a linear transformation of X (i.e., of Y = aX + c). Is there any direct conversion of Var(X) into Var(Y) in this (linear) case? The answer is yes, and we can easily derive the corresponding formula:

Var(aX + c) = E[(aX + c)²] − (a·µ_x + c)² = E[a²X² + 2acX + c²] − (a·µ_x + c)² = a²E(X²) − a²µ_x² = a²Var(X)

This implies that σ_{aX+c} = |a|·σ_X.

Moments - the bivariate case

Firstly, there will be the individual moments of X (and, separately, Y), which can be established based on the corresponding marginal distribution. Are there any other (joint) moments? Yes, and again we have the simple moments E(X^k·Y^m) and the central moments E[(X − µ_x)^k·(Y − µ_y)^m].
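A sketch checking Var(aX + c) = a²Var(X) on the U distribution, using the "long way" of first building the distribution of the transformed variable (a = −3 and c = 7 are arbitrary illustrative choices).

```python
def transform(pmf, g):
    """Distribution of W = g(X), merging values that g maps to the same point."""
    out = {}
    for i, p in pmf.items():
        out[g(i)] = out.get(g(i), 0.0) + p
    return out

def var(pmf):
    mu = sum(i * p for i, p in pmf.items())
    return sum((i - mu)**2 * p for i, p in pmf.items())

U = {0: 0.3, 1: 0.2, 2: 0.4, 4: 0.1}
a, c = -3, 7
print(var(transform(U, lambda u: a * u + c)))   # 12.96
print(a**2 * var(U))                            # 12.96 = 9 * 1.44
```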

The most important of these is the first, first central moment, called the covariance of X and Y:

Cov(X,Y) ≡ E[(X − µ_x)·(Y − µ_y)] = E(X·Y) − µ_x·µ_y

It is obviously symmetric, i.e. Cov(X,Y) = Cov(Y,X), and it becomes zero when X and Y are independent (but not necessarily the other way round). Based on Cov(X,Y), one can define the Correlation Coefficient between X and Y by:

ρ_xy = Cov(X,Y)/(σ_x·σ_y)

(Greek letter "rho"). Its value must always be between −1 and 1.

Proof: Obviously, Var(X − bY) = Var(X) − 2b·Cov(X,Y) + b²·Var(Y) ≥ 0 for any value of b, including b = Cov(X,Y)/Var(Y). This implies that

Var(X) − 2·Cov(X,Y)²/Var(Y) + Cov(X,Y)²/Var(Y) = Var(X) − Cov(X,Y)²/Var(Y) ≥ 0

which, after dividing by Var(X), yields 1 − Cov(X,Y)²/(Var(X)·Var(Y)) ≥ 0, i.e. −1 ≤ ρ ≤ 1.
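Covariance and correlation are again plain sums over the joint pmf; a sketch, reusing the same illustrative joint table as before (its cells are a consistent stand-in, not taken from the notes).

```python
def cov_corr(joint):
    """Covariance and correlation of (X, Y) given their joint pmf."""
    mx = sum(i * p for (i, j), p in joint.items())
    my = sum(j * p for (i, j), p in joint.items())
    cov = sum((i - mx) * (j - my) * p for (i, j), p in joint.items())
    vx = sum((i - mx)**2 * p for (i, j), p in joint.items())
    vy = sum((j - my)**2 * p for (i, j), p in joint.items())
    return cov, cov / (vx * vy)**0.5

joint = {(1, 0): 0.22, (2, 0): 0.04, (3, 0): 0.42,
         (1, 1): 0.12, (2, 1): 0.04, (3, 1): 0.08,
         (1, 2): 0.06, (2, 2): 0.02, (3, 2): 0.00}
print(cov_corr(joint))   # about (-0.2, -0.335)
```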

EXAMPLE: For one of our previous distributions (X taking the values 1, 2, 3 and Y the values 0, 1, 2) we get µ_x = 2.1, µ_y = 0.4, Var(X) = 5.3 − 2.1² = 0.89, Var(Y) = 0.56 − 0.4² = 0.4, Cov(X,Y) = E(X·Y) − µ_x·µ_y = 0.64 − 0.84 = −0.2 (a covariance may be negative), and ρ_xy = −0.2/√(0.89×0.4) = −0.34.

One can also show that

ρ_{aX+c, bY+d} = Cov(aX+c, bY+d)/(σ_{aX+c}·σ_{bY+d}) = a·b·Cov(X,Y)/(|a|·|b|·σ_X·σ_Y) = ±ρ_xy

(a linear transformation does not change the value of ρ, but it may change its sign - can you tell when?).

Linear Combination of RVs

Starting with X and Y, let's see whether we can simplify the expression for Var(aX + bY + c):

Var(aX + bY + c) = E[(aX + bY + c)²] − (a·µ_x + b·µ_y + c)²
= a²E(X²) + b²E(Y²) + 2ab·E(X·Y) − a²µ_x² − b²µ_y² − 2ab·µ_x·µ_y
= a²Var(X) + b²Var(Y) + 2ab·Cov(X,Y)

Independence eliminates the last term. This result can be easily extended to a linear combination of any number of random variables:

Var(a₁X₁ + a₂X₂ + ... + a_kX_k + c) = a₁²Var(X₁) + a₂²Var(X₂) + ... + a_k²Var(X_k) + 2a₁a₂Cov(X₁,X₂) + 2a₁a₃Cov(X₁,X₃) + ... + 2a_{k−1}a_k·Cov(X_{k−1},X_k)

Mutual independence (if present) eliminates the last row of k(k−1)/2 covariances.
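The general formula translates directly into a small function (coefficients, variances, and the upper triangle of covariances); here it is checked against the two-variable case above with a = 1, b = −3.

```python
def var_lincomb(a, variances, cov):
    """Var(a1*X1 + ... + ak*Xk): cov[(i, j)] holds Cov(Xi, Xj) for i < j."""
    k = len(a)
    total = sum(a[i]**2 * variances[i] for i in range(k))
    total += 2 * sum(a[i] * a[j] * cov.get((i, j), 0.0)
                     for i in range(k) for j in range(i + 1, k))
    return total

# Var(X - 3Y + c): 0.89 + 9*0.4 + 2*(1)*(-3)*(-0.2) = 5.69 (c plays no role)
print(var_lincomb([1, -3], [0.89, 0.4], {(0, 1): -0.2}))
```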

And finally, a formula for the covariance of one linear combination of RVs against another:

Cov(a₁X₁ + a₂X₂ + ..., b₁Y₁ + b₂Y₂ + ...) = a₁b₁Cov(X₁,Y₁) + a₁b₂Cov(X₁,Y₂) + a₂b₁Cov(X₂,Y₁) + a₂b₂Cov(X₂,Y₂) + ...

(I will call this the "distributive law" of covariance).

Sample Mean and its distribution

Consider a random independent sample of size n from an arbitrary distribution. We know that the sample mean X̄ = (Σ_{i=1}^n X_i)/n is itself a RV with its own distribution. Regardless of how that distribution looks, its expected value must be the same as the expected value of the distribution from which we sample. Proof: E(X̄) = (1/n)E(X₁) + (1/n)E(X₂) + ... + (1/n)E(X_n) = µ/n + µ/n + ... + µ/n = µ. That's why X̄ is often used as an estimator of µ when its value is unknown.

Similarly, Var(X̄) = (1/n)²Var(X₁) + (1/n)²Var(X₂) + ... + (1/n)²Var(X_n) = (1/n)²σ² + ... + (1/n)²σ² = σ²/n, where σ is the standard deviation of the original distribution. This implies that

σ_X̄ = σ/√n

The standard deviation of X̄ (sometimes called the standard error of X̄) is √n times smaller than that of the original distribution. Note that the standard error tends to zero as the sample size increases.

Now, how about the shape of the X̄-distribution; how does it relate to the shape of the sampled distribution? The surprising answer is: it doesn't. For n bigger than, say, 5, the distribution of X̄ quickly approaches the same regular shape, regardless of how the original distribution looked.
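A simulation sketch of the σ/√n law for die rolls (the values of n and the number of repetitions are arbitrary illustrative choices): the observed standard deviation of the sample means closely tracks σ/√n.

```python
import random
from statistics import mean, stdev

def sample_means(n, reps=10_000):
    """reps independent sample means, each based on n die rolls."""
    return [mean(random.randint(1, 6) for _ in range(n)) for _ in range(reps)]

sigma = (35 / 12) ** 0.5                 # sd of a single roll, about 1.708
for n in (4, 16, 64):
    print(n, round(stdev(sample_means(n)), 3), round(sigma / n**0.5, 3))
```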

Now, consider a random independent sample of size n from a bivariate distribution, where (X₁,Y₁), (X₂,Y₂), ..., (X_n,Y_n) are the individual pairs of observations. Then Var(X₁ + X₂ + ... + X_n) = Var(X₁) + Var(X₂) + ... + Var(X_n) = n·Var(X) and, similarly, Var(Σ_{i=1}^n Y_i) = n·Var(Y). Similarly, Cov(Σ_{i=1}^n X_i, Σ_{i=1}^n Y_i) = Cov(X₁,Y₁) + Cov(X₂,Y₂) + ... + Cov(X_n,Y_n) = n·Cov(X,Y).

All this implies that the correlation coefficient between Σ_{i=1}^n X_i and Σ_{i=1}^n Y_i equals n·Cov(X,Y)/(√(n·Var(X))·√(n·Var(Y))) = ρ_xy (the correlation between a single X, Y pair). The same is true for the corresponding sample means X̄ = (Σ_{i=1}^n X_i)/n and Ȳ = (Σ_{i=1}^n Y_i)/n. Why?

Conditional expected value is, simply put, an expected value computed using the corresponding conditional distribution, e.g.

E(X | Y = 1) = Σ_{all i} i·f_x(i | Y = 1)

etc.

EXAMPLE: Using our old bivariate distribution of X and Y, E(X | Y = 1) is computed from the corresponding conditional distribution

X | Y = 1:   1    2    3
Prob:       1/2  1/6  1/3

by the usual process: E(X | Y = 1) = 1×(1/2) + 2×(1/6) + 3×(1/3) = 11/6 ≈ 1.83 (note that this is different from E(X) = 2.1 calculated previously). Similarly, E(X² | Y = 1) = 1×(1/2) + 4×(1/6) + 9×(1/3) = 25/6 ≈ 4.17. These imply that Var(X | Y = 1) = 25/6 − (11/6)² = 29/36.
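Extracting a conditional pmf in code amounts to taking one row of the joint table and renormalizing it; a sketch, again using the illustrative joint table from earlier (whose Y = 1 row reproduces the 1/2, 1/6, 1/3 conditional above).

```python
def conditional_pmf(joint, y):
    """Pmf of X given Y = y: take the row for y, divide by Pr(Y = y)."""
    row = {i: p for (i, j), p in joint.items() if j == y}
    pr_y = sum(row.values())
    return {i: p / pr_y for i, p in row.items()}

joint = {(1, 0): 0.22, (2, 0): 0.04, (3, 0): 0.42,
         (1, 1): 0.12, (2, 1): 0.04, (3, 1): 0.08,
         (1, 2): 0.06, (2, 2): 0.02, (3, 2): 0.00}

cond = conditional_pmf(joint, 1)
print(cond)                                  # {1: 0.5, 2: 0.1666..., 3: 0.3333...}
print(sum(i * p for i, p in cond.items()))   # E(X | Y=1) = 1.8333...
```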

When the values of a RV are only integers (the case of most of our examples so far), we define its probability generating function (PGF) by

P(z) = Σ_{all i} z^i·Pr(X = i)

where z is a parameter. Differentiating k times and substituting z = 1 yields

d^k P(z)/dz^k |_{z=1} = E[X·(X − 1)·...·(X − k + 1)]

(the so-called k-th factorial moment). For mutually independent RVs, we also get

P_{X+Y+Z}(z) = P_x(z)·P_y(z)·P_z(z)

Furthermore, given P(z), we can recover the distribution by Taylor-expanding P(z): the coefficient of z^i is Pr(X = i).

EXAMPLES:

For a distribution with Pr(X = i) = (1/2)^i, i = 1, 2, 3, ... (the number of flips needed to get the first head, as in the example below), we get

P(z) = Σ_{i=1}^∞ z^i·(1/2)^i = (z/2)/(1 − z/2) = z/(2 − z)

(note that this can also be obtained from the MGF by replacing e^t with z). Then

P'(z) = 2/(2 − z)² |_{z=1} = 2
P''(z) = 4/(2 − z)³ |_{z=1} = 4

giving us the mean of 2 (check) and the variance of 4 + 2 − 2² = 2 (check).
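Factorial moments can be checked directly from the pmf, without symbolic differentiation; a sketch using a truncated version of the distribution above (the truncation point is an arbitrary choice, far enough out that the tail is negligible).

```python
def factorial_moment(pmf, k):
    """E[X(X-1)...(X-k+1)], i.e. the k-th derivative of the PGF at z = 1."""
    total = 0.0
    for i, p in pmf.items():
        term = 1
        for r in range(k):
            term *= i - r
        total += term * p
    return total

geom = {i: 0.5**i for i in range(1, 60)}   # Pr(X = i) = (1/2)^i, truncated
m1 = factorial_moment(geom, 1)             # P'(1)  = 2
m2 = factorial_moment(geom, 2)             # P''(1) = 4
print(m1, m2, m2 + m1 - m1**2)             # mean 2, variance 4 + 2 - 4 = 2
```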

What is the distribution of the total number of dots when rolling 3 dice? Well, the PGF of X₁, X₂ and X₃ (each) is (z + z² + z³ + z⁴ + z⁵ + z⁶)/6; since they are independent RVs, the PGF of their sum is simply

[(z + z² + z³ + z⁴ + z⁵ + z⁶)/6]³

Expanding (easy in Maple), we get

(z³ + 3z⁴ + 6z⁵ + 10z⁶ + 15z⁷ + 21z⁸ + 25z⁹ + 27z¹⁰ + 27z¹¹ + 25z¹² + 21z¹³ + 15z¹⁴ + 10z¹⁵ + 6z¹⁶ + 3z¹⁷ + z¹⁸)/216

where the coefficient of z^i is the probability of rolling a total of i.

Note that a RV can have an infinite expected value (not too common, but it can happen). EXAMPLE: Suppose you flip a coin till the first head appears, and you win $2 if this takes only one flip, $4 if it takes two flips, $8 if it takes 3 flips, etc. (the amount always doubles with each extra flip). What is the expected value of your win? Clearly, you win Y = 2^X dollars, where X is the number of flips, so

E(Y) = Σ_{i=1}^∞ 2^i·(1/2)^i = 1 + 1 + 1 + ... = ∞
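The Maple expansion is just repeated polynomial multiplication, i.e. two convolutions of the single-die coefficient list; a sketch that reproduces the counts out of 216:

```python
def convolve(p, q):
    """Coefficient list of the product of two polynomials in z."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

one_die = [0] + [1 / 6] * 6              # coefficients of (z + ... + z^6)/6
three_dice = convolve(convolve(one_die, one_die), one_die)
for total, prob in enumerate(three_dice):
    if prob > 0:
        print(total, round(prob * 216))  # 1, 3, 6, 10, 15, 21, 25, 27, 27, ...
```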
