
EE385 Class Notes (John Stensby)

Chapter 3: Multiple Random Variables

Let X and Y denote two random variables. The joint distribution of these random variables is defined as

    F_{XY}(x,y) = P[X \le x, \, Y \le y].    (3-1)

This is the probability that (X,Y) lies in the shaded region (below y and to the left of x) depicted in Figure 3-1.

Elementary Properties of the Joint Distribution

As x and/or y approaches minus infinity, the distribution approaches zero; that is,

    F_{XY}(-\infty, y) = 0  and  F_{XY}(x, -\infty) = 0.    (3-2)

To show this, note that \{X = -\infty, Y \le y\} \subset \{X = -\infty\}, but P[X = -\infty] = 0, so F_{XY}(-\infty, y) = 0. Similar reasoning can be given to show that F_{XY}(x, -\infty) = 0.

As x and y both approach infinity (simultaneously, and in any order), the distribution approaches unity; that is,

    F_{XY}(\infty, \infty) = 1.    (3-3)

This follows easily by noting that \{X \le \infty, Y \le \infty\} = S and P(S) = 1.

[Figure 3-1: Region included in the definition of F_{XY}(x,y): the quadrant below y and to the left of x, extending to x = -\infty and y = -\infty.]
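The notes contain no code; the following is a minimal numerical sketch of definition (3-1), assuming NumPy/SciPy and an arbitrary bivariate normal example. It estimates P[X \le x, Y \le y] by Monte Carlo and compares the result against SciPy's joint CDF.

```python
# Hypothetical check of (3-1), not from the notes: estimate
# F_XY(x, y) = P[X <= x, Y <= y] by Monte Carlo and compare it with
# scipy's bivariate-normal CDF. Means, covariance, and the evaluation
# point are arbitrary illustrative choices.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
mean = [0.0, 1.0]
cov = [[1.0, 0.5],
       [0.5, 2.0]]

samples = rng.multivariate_normal(mean, cov, size=200_000)
x, y = 0.5, 1.5

# Empirical probability of the event {X <= x, Y <= y}
F_mc = np.mean((samples[:, 0] <= x) & (samples[:, 1] <= y))

# Reference value from the joint CDF
F_exact = multivariate_normal(mean, cov).cdf([x, y])
print(f"Monte Carlo: {F_mc:.4f}, scipy CDF: {F_exact:.4f}")
```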

[Figure 3-2: Region \{x_1 < X \le x_2, Y \le y\} on the plane.]

In many applications, the identities

    P[x_1 < X \le x_2, \, Y \le y] = F_{XY}(x_2, y) - F_{XY}(x_1, y)    (3-4)
    P[X \le x, \, y_1 < Y \le y_2] = F_{XY}(x, y_2) - F_{XY}(x, y_1)    (3-5)

are useful. To show (3-4), note that P[x_1 < X \le x_2, Y \le y] is the probability that the pair (X,Y) lies in the shaded region D depicted by Figure 3-2. Now, it is easily seen that

    P[X \le x_2, Y \le y] = P[X \le x_1, Y \le y] + P[x_1 < X \le x_2, Y \le y],

which is equivalent to F_{XY}(x_2, y) = F_{XY}(x_1, y) + P[x_1 < X \le x_2, Y \le y]. This leads to (3-4). A similar development leads to (3-5).

Joint Density

The joint density of X and Y is defined as the function

    f_{XY}(x,y) = \frac{\partial^2 F_{XY}(x,y)}{\partial x \, \partial y}.    (3-6)

Integrate this result to obtain

    F_{XY}(x,y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f_{XY}(u,v) \, dv \, du.    (3-7)

Density f_{XY}(x,y) and distribution F_{XY}(x,y) describe the joint statistics of the random variables X and Y. On the other hand, f_X(x) and F_X(x) describe the marginal statistics of X.

Let D denote a region of the x-y plane such that \{(X,Y) \in D\} = \{\rho \in S : (X(\rho), Y(\rho)) \in D\} is an event (see Figure 3-3). The probability of this event is

    P[(X,Y) \in D] = \iint_{D} f_{XY}(x,y) \, dx \, dy.    (3-8)

Note that

    P[-\infty < X, Y < \infty] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x,y) \, dx \, dy = 1.    (3-9)

That is, there is one unit of volume under the joint density surface.

[Figure 3-3: Region D used in the development of P[(X,Y) \in D].]
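As a numerical companion to (3-8) (an editorial addition, not from the notes), the sketch below integrates an illustrative bivariate normal density over a rectangular region D and cross-checks the answer with the corner identity implied by (3-4) and (3-5). All distribution parameters and limits are assumptions for the demo.

```python
# Hypothetical illustration of (3-8): P[(X,Y) in D] as a double integral
# of the joint density over D, here the rectangle (0,1] x (0,2].
from scipy import integrate
from scipy.stats import multivariate_normal

mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 1.0]])

# dblquad integrates func(y, x): x on the outer limits, y on the inner
p_integral, _ = integrate.dblquad(
    lambda y, x: mvn.pdf([x, y]),   # joint density f_XY(x, y)
    0.0, 1.0,                       # x-limits of D
    0.0, 2.0)                       # y-limits of D

# Same probability from corner values of the joint CDF
F = lambda x, y: mvn.cdf([x, y])
p_cdf = F(1, 2) - F(0, 2) - F(1, 0) + F(0, 0)
print(f"double integral: {p_integral:.5f}, CDF corners: {p_cdf:.5f}")
```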

Marginal descriptions can be obtained from joint descriptions. We claim that

    F_X(x) = F_{XY}(x, \infty)  and  F_Y(y) = F_{XY}(\infty, y).    (3-10)

To see this, note that \{X \le x\} = \{X \le x, Y \le \infty\} and \{Y \le y\} = \{X \le \infty, Y \le y\}. Take the probability of these events to obtain the desired results.

Other relationships are important as well. For example, marginal density f_X(x) can be obtained from the joint density f_{XY}(x,y) by using

    f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y) \, dy.    (3-11)

To see this, use Leibnitz's rule to take the partial derivative, with respect to x, of the distribution

    F_{XY}(x,y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f_{XY}(u,v) \, dv \, du

to obtain

    \frac{\partial}{\partial x} F_{XY}(x,y) = \int_{-\infty}^{y} f_{XY}(x,v) \, dv.    (3-12)

Now, let y go to infinity, and use F_X(x) = F_{XY}(x, \infty) to get the desired result

    f_X(x) = \frac{\partial}{\partial x} F_{XY}(x, \infty) = \int_{-\infty}^{\infty} f_{XY}(x,v) \, dv.    (3-13)

A similar development leads to the conclusion that

    f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x,y) \, dx.    (3-14)
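A numerical sketch of the marginal formula (3-11), added editorially: integrate an illustrative bivariate normal joint density over y and compare with the known Gaussian marginal of X. The parameter values are assumptions for the demo.

```python
# Hypothetical check of the marginal-density formula (3-11): integrate
# the joint density f_XY(x, y) over y and compare with the known Gaussian
# marginal of X. Parameters are illustrative, not from the notes.
from scipy import integrate
from scipy.stats import multivariate_normal, norm

mvn = multivariate_normal(mean=[1.0, -2.0], cov=[[4.0, 1.0], [1.0, 9.0]])

x = 0.5
# f_X(x) = integral over y of f_XY(x, y); wide finite limits stand in
# for (-infinity, infinity)
fx_num, _ = integrate.quad(lambda y: mvn.pdf([x, y]), -50.0, 50.0)

# For this bivariate normal, the X-marginal is N(eta_x = 1, sigma_x^2 = 4)
fx_exact = norm(loc=1.0, scale=2.0).pdf(x)
print(f"integral: {fx_num:.6f}, exact marginal: {fx_exact:.6f}")
```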

Special Case: Jointly Gaussian Random Variables

Random variables X and Y are jointly Gaussian (a.k.a. jointly normal) if their joint density has the form

    f_{XY}(x,y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1-r^2}} \exp\left\{ -\frac{1}{2(1-r^2)} \left[ \frac{(x-\eta_x)^2}{\sigma_x^2} - 2r \frac{(x-\eta_x)(y-\eta_y)}{\sigma_x \sigma_y} + \frac{(y-\eta_y)^2}{\sigma_y^2} \right] \right\},    (3-15)

where \eta_x = E[X], \eta_y = E[Y], \sigma_y^2 = Var[Y], \sigma_x^2 = Var[X], and r is a parameter known as the correlation coefficient (r lies in the range -1 \le r \le 1). The marginal densities for X and Y are given by

    f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x} \exp\left[ -(x-\eta_x)^2 / 2\sigma_x^2 \right]  and  f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_y} \exp\left[ -(y-\eta_y)^2 / 2\sigma_y^2 \right].

Independence

Random variables X and Y are said to be independent if all events of the form \{X \in A\} and \{Y \in B\}, where A and B are sets of real numbers, are independent. Apply this to the events \{X \le x\} and \{Y \le y\} to see that if X and Y are independent then

    F_{XY}(x,y) = P[X \le x, Y \le y] = P[X \le x] \, P[Y \le y] = F_X(x) F_Y(y)

    f_{XY}(x,y) = \frac{\partial^2}{\partial x \, \partial y} F_{XY}(x,y) = \frac{\partial F_X(x)}{\partial x} \frac{\partial F_Y(y)}{\partial y} = f_X(x) f_Y(y).    (3-16)

The converse of this can be shown as well. Hence, X and Y are independent if, and only if, their joint density factors into a product of marginal densities; a similar statement can be made for distribution functions. This result generalizes to more than two random variables: n random variables are independent if, and only if, their joint density (alternatively, joint distribution) factors into a product of marginal densities (alternatively, marginal distributions).
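A quick numerical sketch of the factorization in (3-16), added editorially: for an illustrative jointly Gaussian pair with r = 0 (anticipating Example 3-1 below), the joint density equals the product of the marginals at every test point. Means, standard deviations, and test points are arbitrary assumptions.

```python
# Hypothetical check of (3-16) for a jointly Gaussian pair with r = 0:
# the joint density (3-15) should equal f_X(x) * f_Y(y).
import numpy as np
from scipy.stats import multivariate_normal, norm

eta_x, eta_y = 1.0, -0.5
sig_x, sig_y = 2.0, 0.7
r = 0.0  # zero correlation coefficient

cov = [[sig_x**2, r * sig_x * sig_y],
       [r * sig_x * sig_y, sig_y**2]]
joint = multivariate_normal(mean=[eta_x, eta_y], cov=cov)

for x, y in [(0.0, 0.0), (1.5, -1.0), (-2.0, 0.3)]:
    lhs = joint.pdf([x, y])                                      # f_XY(x, y)
    rhs = norm(eta_x, sig_x).pdf(x) * norm(eta_y, sig_y).pdf(y)  # f_X * f_Y
    assert np.isclose(lhs, rhs)
print("joint density factors into the product of marginals when r = 0")
```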

Example 3-1: Consider X and Y as jointly Gaussian. The only way to get the equality

    \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1-r^2}} \exp\left\{ -\frac{1}{2(1-r^2)} \left[ \frac{(x-\eta_x)^2}{\sigma_x^2} - 2r \frac{(x-\eta_x)(y-\eta_y)}{\sigma_x \sigma_y} + \frac{(y-\eta_y)^2}{\sigma_y^2} \right] \right\}
        = \frac{1}{\sqrt{2\pi}\,\sigma_x} \exp\left[ -(x-\eta_x)^2 / 2\sigma_x^2 \right] \cdot \frac{1}{\sqrt{2\pi}\,\sigma_y} \exp\left[ -(y-\eta_y)^2 / 2\sigma_y^2 \right]    (3-17)

is to have the correlation coefficient r = 0. Hence, Gaussian X and Y are independent if and only if r = 0.

Many problems become simpler if their random variables are (or can be assumed to be) independent. For example, when dealing with independent random variables, the expected value of a product (of independent random variables) can be expressed as a product of expected values. Also, the variance of a sum of independent random variables is the sum of the variances. These two simplifications are discussed next.

Expectation of a Product of Independent Random Variables

Independence of random variables can simplify many calculations. As an example, let X and Y be random variables. Clearly, the product Z = XY is a random variable (review the definition of a random variable given in an earlier chapter). As we will discuss in Chapter 5, the expected value of Z = XY can be computed as

    E[XY] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} xy \, f_{XY}(x,y) \, dx \, dy,    (3-18)

where f_{XY}(x,y) is the joint density of X and Y. Suppose X and Y are independent random variables. Then (3-16) and (3-18) yield

    E[XY] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} xy \, f_X(x) f_Y(y) \, dx \, dy = \left( \int_{-\infty}^{\infty} x f_X(x) \, dx \right) \left( \int_{-\infty}^{\infty} y f_Y(y) \, dy \right) = E[X] \, E[Y].    (3-19)

This result generalizes to more than two random variables; for n independent random variables, the expected value of a product is the product of the expected values.

The converse of (3-19) is not true, in general. That is, if E[XY] = E[X]E[Y], it does not necessarily follow that X and Y are independent.

Variance of a Sum of Independent Random Variables

For a second example where independence simplifies a calculation, let X and Y be independent random variables, and compute the variance of their sum. The variance of X + Y is given by

    Var[X+Y] = E\left[ \{(X+Y) - (E[X]+E[Y])\}^2 \right] = E\left[ \{(X-E[X]) + (Y-E[Y])\}^2 \right]
             = E\left[ \{X-E[X]\}^2 \right] + 2 E\left[ \{X-E[X]\}\{Y-E[Y]\} \right] + E\left[ \{Y-E[Y]\}^2 \right].    (3-20)

Since X and Y are independent, we have E[\{X-E[X]\}\{Y-E[Y]\}] = E[X-E[X]] \, E[Y-E[Y]] = 0, and (3-20) becomes

    Var[X+Y] = E\left[ \{X-E[X]\}^2 \right] + E\left[ \{Y-E[Y]\}^2 \right] = Var[X] + Var[Y].    (3-21)

That is, for independent random variables, the variance of the sum is the sum of the variances (this applies to two or more random variables). In general, if random variables X and Y are dependent, then (3-21) is not true.
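The two simplifications above are easy to confirm by simulation. The sketch below is an editorial addition, assuming NumPy and arbitrary illustrative distributions; it checks (3-19) and (3-21) by Monte Carlo for independent X and Y.

```python
# Hypothetical Monte Carlo check of (3-19) and (3-21): for independent
# X and Y, E[XY] ~ E[X]E[Y] and Var[X+Y] ~ Var[X] + Var[Y]. The
# exponential and uniform distributions are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.exponential(scale=2.0, size=n)  # E[X] = 2, Var[X] = 4
Y = rng.uniform(-1.0, 3.0, size=n)      # E[Y] = 1, Var[Y] = 4/3

print(f"E[XY]      = {np.mean(X * Y):.4f};  E[X]E[Y]        = {np.mean(X) * np.mean(Y):.4f}")
print(f"Var[X + Y] = {np.var(X + Y):.4f};  Var[X] + Var[Y] = {np.var(X) + np.var(Y):.4f}")
```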

Example 3-2: The result just given can be used to simplify the calculation of variance in some cases. Consider the binomial random variable X, the number of successes out of n independent trials. As in an example discussed in an earlier chapter (where we showed that E[X] = np), we can express binomial X as

    X = X_1 + X_2 + \cdots + X_n,    (3-22)

where the X_i, 1 \le i \le n, are random variables defined by

    X_i = 1, if the i-th trial is a "success"
        = 0, otherwise,    1 \le i \le n.    (3-23)

Note that all n of the X_i are independent, they have identical mean p, and they have identical variance

    Var[X_i] = E[X_i^2] - (E[X_i])^2 = p - p^2 = pq,    (3-24)

where p is the probability of success on any trial, and q = 1 - p. Hence, we can express the variance of binomial X as

    Var[X] = Var[X_1 + X_2 + \cdots + X_n] = Var[X_1] + Var[X_2] + \cdots + Var[X_n] = npq.    (3-25)

Hence, for the binomial random variable X, we know that E[X] = np and Var[X] = npq.
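A simulation sketch of Example 3-2 (an editorial addition; n and p are arbitrary): draw rows of n independent Bernoulli(p) indicators, sum each row to get a binomial X, and compare the sample mean and variance against np and npq.

```python
# Hypothetical simulation of Example 3-2: a Binomial(n, p) variable is a
# sum of n independent Bernoulli(p) indicators, so Var[X] = npq.
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 0.3
q = 1 - p

trials = rng.random((500_000, n)) < p  # each row: n Bernoulli(p) indicators
X = trials.sum(axis=1)                 # binomial X = X_1 + ... + X_n

print(f"sample mean {X.mean():.3f} vs np  = {n * p:.3f}")
print(f"sample var  {X.var():.3f} vs npq = {n * p * q:.3f}")
```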

Random Vectors: Vector-Valued Mean and Covariance Matrix

Let X_1, X_2, ..., X_n denote a set of n random variables. In this subsection, we use vector and matrix techniques to simplify working with multiple random variables. Denote the vector-valued random variable

    X = [X_1 \; X_2 \; X_3 \; \cdots \; X_n]^T.    (3-26)

Clearly, using vector notation is helpful; writing X is much easier than writing out the n random variables X_1, X_2, ..., X_n. The mean of X is a constant vector \eta = E[X] with components equal to the means of the X_i. We write

    \eta = E[X] = E\left[ [X_1 \; X_2 \; X_3 \; \cdots \; X_n]^T \right] = [E[X_1] \; E[X_2] \; E[X_3] \; \cdots \; E[X_n]]^T = [\eta_1 \; \eta_2 \; \eta_3 \; \cdots \; \eta_n]^T,    (3-27)

where \eta_i = E[X_i], 1 \le i \le n. The covariance of X_i and X_j is defined as

    \sigma_{ij} = E[(X_i - \eta_i)(X_j - \eta_j)],   1 \le i, j \le n.    (3-28)

Note that \sigma_{ij} = \sigma_{ji}. Use these n^2 covariance values to form the covariance matrix

    \Lambda = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_{nn} \end{bmatrix}.    (3-29)

Note that this matrix is symmetric; that is, \Lambda = \Lambda^T. Finally, we can write

    \Lambda = E\left[ (X - \eta)(X - \eta)^T \right].    (3-30)

Equation (3-30) provides a compact, simple definition for \Lambda.

Symmetric Positive Semi-Definite and Positive Definite Matrices

A real-valued, symmetric matrix Q is positive semi-definite (sometimes called nonnegative definite) if U^T Q U \ge 0 for all real-valued vectors U. A real-valued, symmetric matrix Q is positive definite if U^T Q U > 0 for all real-valued vectors U \ne 0. A real-valued, symmetric, positive semi-definite matrix may (or may not) be singular. However, a positive definite symmetric matrix is always nonsingular.

Theorem 3-1: The covariance matrix \Lambda is positive semi-definite.

Proof: Let U = [u_1 \; u_2 \; \cdots \; u_n]^T be an arbitrary, real-valued vector. Now, define the scalar

    Y = \sum_{j=1}^{n} u_j (X_j - \eta_j).    (3-31)

Clearly, E[Y^2] \ge 0. However,

    E[Y^2] = E\left[ \left( \sum_{j=1}^{n} u_j (X_j - \eta_j) \right) \left( \sum_{k=1}^{n} u_k (X_k - \eta_k) \right) \right] = \sum_{j=1}^{n} \sum_{k=1}^{n} u_j \, E[(X_j - \eta_j)(X_k - \eta_k)] \, u_k
           = \sum_{j=1}^{n} \sum_{k=1}^{n} u_j \, \sigma_{jk} \, u_k = U^T \Lambda U \ge 0.    (3-32)

Hence U^T \Lambda U \ge 0 for all U, and \Lambda is positive semi-definite.

Matrix \Lambda is positive definite in almost all practical applications. That is, U^T \Lambda U > 0 for all U \ne 0. If \Lambda is not positive definite, then at least one of the X_i can be expressed as a linear combination of the remaining n-1 random variables, and the problem can be simplified by reducing the number of random variables. If \Lambda is positive definite, then |\Lambda| > 0, \Lambda is nonsingular, and \Lambda^{-1} exists (|\Lambda| denotes the determinant of the covariance matrix).
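The following sketch is an editorial addition, assuming NumPy and an arbitrary generating distribution: it builds a sample covariance matrix via (3-30) and checks the conclusion of Theorem 3-1 numerically.

```python
# Hypothetical illustration of (3-30) and Theorem 3-1: form a sample
# covariance matrix Lambda = E[(X - eta)(X - eta)^T] from draws of a
# random vector, then confirm it is symmetric and positive semi-definite.
import numpy as np

rng = np.random.default_rng(3)
true_cov = np.array([[2.0, 0.8, 0.0],
                     [0.8, 1.0, -0.3],
                     [0.0, -0.3, 0.5]])
X = rng.multivariate_normal(mean=[1.0, 0.0, -2.0], cov=true_cov, size=100_000)

eta = X.mean(axis=0)                  # sample mean vector, cf. (3-27)
centered = X - eta
Lam = centered.T @ centered / len(X)  # sample version of (3-30)

print("Lambda symmetric:", np.allclose(Lam, Lam.T))
print("eigenvalues:", np.linalg.eigvalsh(Lam))  # all >= 0 for a PSD matrix

U = rng.standard_normal(3)            # arbitrary real vector
print("U^T Lambda U =", U @ Lam @ U, "(nonnegative)")
```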

Uncorrelated Random Variables

Suppose we are given n random variables X_i, 1 \le i \le n. We say that the X_i are uncorrelated if, for 1 \le i, j \le n,

    E[X_i X_j] = E[X_i] \, E[X_j],   i \ne j.    (3-33)

For uncorrelated X_i, 1 \le i \le n, we have, for i \ne j,

    \sigma_{ij} = E[(X_i - \eta_i)(X_j - \eta_j)] = E[X_i X_j] - \eta_i E[X_j] - \eta_j E[X_i] + \eta_i \eta_j
                = E[X_i] E[X_j] - \eta_i \eta_j - \eta_j \eta_i + \eta_i \eta_j = 0,    (3-34)

and this leads to

    \sigma_{ij} = \sigma_i^2,  i = j
                = 0,           i \ne j.    (3-35)

Hence, for uncorrelated random variables, matrix \Lambda is diagonal and of the form (the variances are on the diagonal)

    \Lambda = \begin{bmatrix} \sigma_1^2 & & & \\ & \sigma_2^2 & & \\ & & \ddots & \\ & & & \sigma_n^2 \end{bmatrix}.    (3-36)

If X_i and X_j are independent, they are also uncorrelated, a conclusion that follows from (3-19). However, the converse is not true, in general. Uncorrelated random variables may be dependent.
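A standard counterexample makes the last remark concrete (this example is an editorial addition, not from the notes): with X ~ N(0,1) and Y = X^2, we get E[XY] = E[X^3] = 0 = E[X]E[Y], so X and Y are uncorrelated, yet Y is a deterministic function of X.

```python
# Uncorrelated but dependent: X ~ N(0,1) and Y = X^2. Here
# E[XY] = E[X^3] = 0 and E[X]E[Y] = 0, so the pair is uncorrelated,
# even though Y is completely determined by X.
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal(1_000_000)
Y = X**2  # a deterministic function of X, hence dependent on X

print(f"E[XY]    = {np.mean(X * Y):+.4f}")
print(f"E[X]E[Y] = {np.mean(X) * np.mean(Y):+.4f}")  # both near 0
```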

Multivariable Gaussian Density

Let X_1, X_2, ..., X_n be jointly Gaussian random variables. Let \Lambda denote the covariance matrix for the random vector X = [X_1 \; X_2 \; \cdots \; X_n]^T, and denote \eta = E[X]. The joint density of X_1, X_2, ..., X_n can be expressed as a density for the vector X. This density is denoted as f(X), and it is given by

    f(X) = \frac{1}{(2\pi)^{n/2} \, |\Lambda|^{1/2}} \exp\left[ -\tfrac{1}{2} (X - \eta)^T \Lambda^{-1} (X - \eta) \right].    (3-37)

When n = 1 (or 2), this result yields the expressions given in class for the first (second) order case.

With (3-37), we have perpetuated a common abuse of notation. We have used X = [X_1 \; X_2 \; \cdots \; X_n]^T to denote a vector of random variables. However, in (3-37), X is a vector of algebraic variables. This unfortunate dual use of a symbol is common in the literature. This ambiguity should present no real problem; from context, the exact interpretation of X should be clear.
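As a numerical sanity check of (3-37) (an editorial addition; the mean vector, covariance matrix, and evaluation point are arbitrary), the sketch below evaluates the formula directly and compares it against SciPy's multivariate normal pdf.

```python
# Hypothetical check of the multivariate Gaussian density (3-37):
# evaluate the formula with numpy and compare with scipy's pdf.
import numpy as np
from scipy.stats import multivariate_normal

eta = np.array([1.0, -1.0, 0.5])
Lam = np.array([[2.0, 0.5, 0.0],
                [0.5, 1.0, 0.2],
                [0.0, 0.2, 0.8]])
x = np.array([0.7, -0.2, 0.0])
n = len(eta)

d = x - eta
quad = d @ np.linalg.inv(Lam) @ d  # (X - eta)^T Lambda^{-1} (X - eta)
f_manual = np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2)
                                  * np.sqrt(np.linalg.det(Lam)))

f_scipy = multivariate_normal(mean=eta, cov=Lam).pdf(x)
print(f"formula (3-37): {f_manual:.6e}, scipy: {f_scipy:.6e}")
```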

Example 3-3: Let X = [X_1 \; X_2]^T be a Gaussian random vector. Then X_1 and X_2 are Gaussian random variables with joint density of the form (3-15). Let \eta_1 = E[X_1] and \eta_2 = E[X_2] denote the means of X_1 and X_2, respectively; likewise, let \sigma_1^2 = Var[X_1] and \sigma_2^2 = Var[X_2]. Finally, let r denote the correlation coefficient in the joint density. Find an expression for covariance matrix \Lambda in terms of these quantities.

This can be accomplished by comparing the exponents of (3-37) and (3-15). For the exponent of (3-37), let Q = \Lambda^{-1} and write

    (X - \eta)^T Q (X - \eta) = [X_1 - \eta_1 \;\; X_2 - \eta_2] \begin{bmatrix} q_{11} & q_{12} \\ q_{12} & q_{22} \end{bmatrix} \begin{bmatrix} X_1 - \eta_1 \\ X_2 - \eta_2 \end{bmatrix}
        = q_{11}(X_1 - \eta_1)^2 + 2 q_{12}(X_1 - \eta_1)(X_2 - \eta_2) + q_{22}(X_2 - \eta_2)^2,    (3-38)

where we have used the fact that Q is symmetric (Q^T = Q). Now, compare (3-38) with the exponent of (3-15) (where X_1 and X_2 are used instead of x and y) and write

    q_{11}(X_1 - \eta_1)^2 + 2 q_{12}(X_1 - \eta_1)(X_2 - \eta_2) + q_{22}(X_2 - \eta_2)^2
        = \frac{1}{1 - r^2} \left[ \frac{(X_1 - \eta_1)^2}{\sigma_1^2} - 2r \frac{(X_1 - \eta_1)(X_2 - \eta_2)}{\sigma_1 \sigma_2} + \frac{(X_2 - \eta_2)^2}{\sigma_2^2} \right].    (3-39)

Equate like terms on both sides of (3-39) and obtain

    q_{11} = \frac{1}{\sigma_1^2 (1 - r^2)},   q_{12} = \frac{-r}{\sigma_1 \sigma_2 (1 - r^2)},   q_{22} = \frac{1}{\sigma_2^2 (1 - r^2)}.    (3-40)

Finally, take the inverse of matrix Q and obtain

    \Lambda = Q^{-1} = \begin{bmatrix} \sigma_1^2 & r \sigma_1 \sigma_2 \\ r \sigma_1 \sigma_2 & \sigma_2^2 \end{bmatrix}.    (3-41)

The matrix on the right-hand side of (3-41) shows the general form of the covariance matrix for a two-dimensional Gaussian random vector. From (3-41) and the discussion before (3-29), we note that E[(X_1 - \eta_1)(X_2 - \eta_2)] = r \sigma_1 \sigma_2. Hence, the correlation coefficient r can be written as

    r = \frac{E[(X_1 - \eta_1)(X_2 - \eta_2)]}{\sigma_1 \sigma_2} = \frac{E[X_1 X_2] - \eta_1 \eta_2}{\sigma_1 \sigma_2},    (3-42)

the covariance normalized by the product \sigma_1 \sigma_2. When working problems, Equation (3-42) is an important, very useful formula for the correlation coefficient r.
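A closing numerical sketch of (3-42), added editorially with illustrative parameters: compute the covariance normalized by \sigma_1 \sigma_2 from simulated data and compare with NumPy's built-in correlation estimate.

```python
# Hypothetical check of (3-42): the correlation coefficient equals the
# covariance divided by sigma_1 * sigma_2. Compare a direct computation
# with numpy's corrcoef on simulated Gaussian data.
import numpy as np

rng = np.random.default_rng(5)
r_true, s1, s2 = 0.6, 2.0, 0.5
Lam = [[s1**2, r_true * s1 * s2],
       [r_true * s1 * s2, s2**2]]
X1, X2 = rng.multivariate_normal([0.0, 0.0], Lam, size=500_000).T

cov12 = np.mean((X1 - X1.mean()) * (X2 - X2.mean()))
r_hat = cov12 / (X1.std() * X2.std())  # equation (3-42)

print(f"r from (3-42): {r_hat:.4f}")
print(f"np.corrcoef  : {np.corrcoef(X1, X2)[0, 1]:.4f}")
```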
