ECE 650 Lecture 4: Intro to Estimation Theory; Random Vectors (ECE 650, D. Van Alphen)


1 ECE 650 Lecture 4: Intro to Estimation Theory; Random Vectors (ECE 650, D. Van Alphen)

2 Lecture Overview: Random Variables & Estimation Theory
- Functions of RVs (5.9)
- Introduction to Estimation Theory
  - MMSE Estimation Example + Handout Intro/Reference
  - Orthogonality Principle, with Example
- Random Vectors & Transformations of Random Vectors
- Independent Experiments & Repeated Trials
- Complex Random Variables: pdf, covariance, variance (Ch. 6)
- Correlation Matrix, Covariance Matrix for R. Vectors
- Conditional Densities and Distributions
- Conditional Expected Values
- Characteristic Functions for R. Vectors

3 Functions of RVs

Start with RVs X and Y that are statistically known. Consider these as inputs to systems, say g and h, yielding new RVs Z and W (diagram: X and Y, the old RVs with known joint pdf, feed systems g and h, producing the new RVs Z and W).

Goal: Find the statistical description (i.e., the joint pdf) for Z and W; use f_ZW(z, w) to find f_Z(z), f_W(w).

4 Functions of RVs

z = g(x, y); w = h(x, y)

Claim: f_ZW(z, w) = f_XY(x_1, y_1)/|J(x_1, y_1)| + ... + f_XY(x_n, y_n)/|J(x_n, y_n)|, summing over all roots (x_1, y_1), ..., (x_n, y_n) of the system z = g(x, y), w = h(x, y),

where J(x, y) = det [ ∂z/∂x  ∂z/∂y ; ∂w/∂x  ∂w/∂y ]

is the Jacobian of the transformation ("d(new)/d(old)"). Finally, express each root (x_i, y_i) in terms of z and w, to get rid of x, y in the answer.

5 Functions of RVs: An Example

Consider the linear transformation: z = ax + by, w = cx + dy.

J(x, y) = det [ a  b ; c  d ] = ad − bc

Solve the original system backwards for x and y:

x = (dz − bw)/k,  y = (aw − cz)/k,  where k = ad − bc

Then, substituting to get rid of x, y:

f_ZW(z, w) = (1/|k|) f_XY( (dz − bw)/k, (aw − cz)/k )

Note: if X, Y are jointly normal, then so are W, Z.
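
A quick numeric sanity check of this result can be done by Monte Carlo. The sketch below is not from the lecture; the coefficients a, b, c, d and the independent standard-normal inputs are arbitrary assumptions for illustration. It compares the formula against an empirical 2-D histogram estimate:

```python
# Sketch: check f_ZW(z, w) = f_XY(x, y) / |ad - bc|, with (x, y) solved
# from (z, w). Coefficients are arbitrary illustrative choices.
import numpy as np

a, b, c, d = 2.0, 1.0, 1.0, 3.0
k = a * d - b * c                      # Jacobian determinant (constant here)

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)       # X ~ N(0, 1)
y = rng.standard_normal(200_000)       # Y ~ N(0, 1), independent of X
z, w = a * x + b * y, c * x + d * y

def f_zw(z0, w0):
    # Invert the transform, evaluate f_XY there, divide by |k|
    x0 = (d * z0 - b * w0) / k
    y0 = (a * w0 - c * z0) / k
    f_xy = np.exp(-(x0**2 + y0**2) / 2) / (2 * np.pi)
    return f_xy / abs(k)

# Compare against a 2-D histogram estimate near (z0, w0) = (1, 1)
z0, w0, h = 1.0, 1.0, 0.1
in_cell = (np.abs(z - z0) < h / 2) & (np.abs(w - w0) < h / 2)
print("formula:  ", f_zw(z0, w0))
print("empirical:", in_cell.mean() / h**2)
```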

6 Linear Transformation: Example, continued

Special case: z = x cos φ + y sin φ, w = −x sin φ + y cos φ (a = cos φ, b = sin φ, c = −sin φ, d = cos φ)

This is a rotation of the RVs (x, y) by angle φ; the sketch shows the point (x_0, y_0) mapped to (z_0, w_0). Here k = ad − bc = cos²φ + sin²φ = 1, so |J| = 1.

7 Introduction to Estimation Theory (not from Miller & Childers)

Tx: Y (RV of interest) → Random Disturbance → Rcv: X (observable, data)

Goal: get an estimate of Y in terms of the observation X (i.e., as a function of X: f(X)).

Best estimate (one possibility): minimize the mean square value of the estimation error (MS estimate, or MMSE). Choose f(X) to minimize: E{[Y − f(X)]²}

8 Case 1: Estimation of RV Y by a Constant c: f(x) = c

Notation: Let e = E{[Y − c]²} = ∫ (y − c)² f_Y(y) dy

Choose c to minimize e, the mean-squared error:

de/dc = −2 ∫ (y − c) f_Y(y) dy = 0  ⟹  ∫ y f_Y(y) dy = c ∫ f_Y(y) dy = c

⟹ c = η_Y
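
The result c = η_Y is easy to confirm numerically. A minimal sketch, assuming an arbitrary exponential distribution for Y purely for illustration:

```python
# Sketch: the constant c minimizing E[(Y - c)^2] is the mean of Y.
import numpy as np

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=100_000)   # any distribution works

cs = np.linspace(0.0, 5.0, 501)
mse = [np.mean((y - c)**2) for c in cs]
print("argmin c: ", cs[int(np.argmin(mse))])   # ~= 2.0
print("mean of Y:", y.mean())                  # ~= 2.0
```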

9 Case 2: Linear MS Estimation of RV Y

Goal: get an estimate of Y as a linear function of the observation X: f(X) = AX + B.

Choose A, B to minimize e = E{[Y − (AX + B)]²}  (*)

First fix A; the equivalent requirement (to *) is: choose the constant B to minimize e = E{[(Y − AX) − B]²}  (**)

Here (Y − AX) plays the role of the quantity to be estimated and B is the constant estimate, so by Case 1 we want: B = E{Y − AX} = η_Y − A η_X

Thus, (**) becomes (plugging in for B): e = E{[(Y − AX) − (η_Y − A η_X)]²} = E{[(Y − η_Y) − A(X − η_X)]²}

10 Case 2: Linear MS Estimation of RV Y

Continuing with:

e = E{[(Y − η_Y) − A(X − η_X)]²}
  = E{(Y − η_Y)²} − 2A E{(X − η_X)(Y − η_Y)} + A² E{(X − η_X)²}
  = σ_Y² − 2A r σ_X σ_Y + A² σ_X²    (using Cov(X, Y) = r σ_X σ_Y)

Now set de/dA = 0 to minimize e by our choice of A:

2 r σ_X σ_Y = 2A σ_X²  ⟹  A = r σ_Y / σ_X

Also: A = r σ_Y / σ_X = [cov(X, Y)/(σ_X σ_Y)] (σ_Y / σ_X) = cov(X, Y) / σ_X²
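
A hedged numeric check of these formulas: the sketch below builds a correlated pair (the model for Y is an arbitrary assumption), computes A and B from sample statistics, and compares them with the degree-1 least-squares fit from np.polyfit, which minimizes the same squared-error criterion over the data:

```python
# Sketch: A = rho * sigma_Y / sigma_X and B = eta_Y - A * eta_X from
# samples, compared with a least-squares line fit.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)
y = 3.0 * x + rng.standard_normal(100_000)     # Y correlated with X

rho = np.corrcoef(x, y)[0, 1]
A = rho * y.std() / x.std()
B = y.mean() - A * x.mean()
print("A, B from formulas:     ", A, B)
print("A, B from least squares:", np.polyfit(x, y, 1))  # [slope, intercept]
```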

11 Vocabulary for Case 2: Linear MS Estimation of RV Y

- The linear estimate: AX + B
  - Non-homogeneous linear estimate: AX + B
  - Homogeneous linear estimate: AX (B = 0)
- The data or observable of the estimate: RV X
- The error of the estimate: E = Y − (AX + B)
- The mean-squared error of the estimate: e = E{E²}

12 Case 3: Non-linear MS Estimate of Y by Some Function c(X)

(No constraints: an arbitrary function is the best possible choice for minimizing the MS error.)

Goal: find c(x) to minimize:

e = E{[Y − c(X)]²} = ∫∫ [y − c(x)]² f(x, y) dx dy = ∫ f(x) { ∫ [y − c(x)]² f(y|x) dy } dx

Since f(x) ≥ 0, minimize the inner integral for each fixed x. But note: c(x) is a constant for each fixed x, and f(y|x) is just some density in y for each fixed x. So, from Case 1:

c(x) = E{Y | X = x} = ∫ y f(y|x) dy

13 Case 3, Special Cases: (1) Y = g(X); (2) X, Y Independent

1. Here Y is a deterministic, known function of X, so: c(x) = E{Y | X = x} = g(x), and e = E{E²} = E{[Y − c(X)]²} = E{[Y − g(X)]²} = 0

2. Here knowing X tells us nothing about Y: c(x) = E{Y | X = x} = E{Y}, a constant, independent of the observation

14 Notes on Estimation Theory

In general, the non-linear MS estimate c(x) = E{Y | X = x} is not a straight line, and will yield a smaller e than the linear estimate AX + B (the "hard" estimate beats the "easy" one).

But if X and Y are jointly normal, the non-linear MS estimate and the linear MS estimate are identical: E{Y | X} = AX + B. (The hard conditional-mean computation reduces to the easy linear one.)
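
As an illustration of this jointly-Gaussian fact, the sketch below (distribution parameters are arbitrary choices, not from the lecture) compares binned empirical conditional means with the linear MMSE line; for Gaussian data the two should agree:

```python
# Sketch: for jointly Gaussian X, Y, the conditional mean E[Y | X = x]
# falls on the linear MMSE line A x + B.
import numpy as np

rng = np.random.default_rng(9)
x = rng.standard_normal(500_000)
y = 0.8 * x + 0.6 * rng.standard_normal(500_000)   # jointly Gaussian pair

A = np.corrcoef(x, y)[0, 1] * y.std() / x.std()
B = y.mean() - A * x.mean()

for x0 in (-1.0, 0.0, 1.0):
    near = np.abs(x - x0) < 0.05                   # thin slice around x0
    print(f"x0 = {x0:+.1f}: E[Y|X=x0] ~ {y[near].mean():+.3f}, "
          f"line gives {A * x0 + B:+.3f}")
```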

15 Summary: MMSE Estimation

Tx: Y (RV of interest) → Random Disturbance → Rcv: X (observable)

- Case 1: Estimating Y by a constant c: c = η_Y = E{Y}
- Case 2: Linear estimate of Y [Ŷ = AX + B]: B = η_Y − A η_X, A = r σ_Y / σ_X
- Case 3: Arbitrary estimate of Y [Ŷ = c(X)]: c(x) = E{Y | X = x} = ∫ y f_Y(y|x) dy (reduces to the linear estimate if X, Y are jointly Gaussian)

Recall: RVs X and Y are orthogonal iff E{XY} = 0.

16 MMSE Estimation Example

Tx: Y (RV of interest) → Random Disturbance → Rcv: X (observable)

Assume X ~ U(0, 1), and Y ~ U(0, x) given X = x. Find the (unconstrained) MMSE estimate of Y, given X = x.

Solution: ŷ_MMSE = E{Y | X = x} = ∫ y f_Y(y|x) dy, with f_Y(y|x) = 1/x on (0, x):

ŷ_MMSE = ∫₀ˣ y (1/x) dy = (1/x)(x²/2) = x/2
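
A Monte Carlo sketch of this example (sample size and slice width are arbitrary choices) checks that the empirical conditional mean tracks x/2:

```python
# Sketch of the lecture example: X ~ U(0, 1) and, given X = x, Y ~ U(0, x).
# The unconstrained MMSE estimate is E[Y | X = x] = x/2.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 500_000)
y = rng.uniform(0.0, x)                  # Y | X = x is uniform on (0, x)

# Empirical E[Y | X = x] near a few values of x0
for x0 in (0.2, 0.5, 0.9):
    near = np.abs(x - x0) < 0.01
    print(f"x0 = {x0}: E[Y|X=x0] ~ {y[near].mean():.3f}  (theory {x0/2:.3f})")
```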

17 Cond. Prob. & Estimation Example (See separate handout on web page for solution.) (Scheaffer & McClave)

A soft-drink machine has a random amount Y₂ in supply at the beginning of a given day, and dispenses a random amount Y₁ during the day (say in gallons). It is not re-supplied during the day; hence, Y₁ ≤ Y₂. The joint density for Y₁ and Y₂ is:

f(y₁, y₂) = 1/2 for 0 ≤ y₁ ≤ y₂, 0 ≤ y₂ ≤ 2; 0 else

(That is, the points (y₁, y₂) are uniformly distributed over the triangle shown in the look-down sketch, with Y₂ available on one axis and Y₁ dispensed on the other.)

Find the conditional probability density of Y₁, given that Y₂ = y₂. Also evaluate the probability that less than 1/2 gallon is sold, given that the machine contains 1 gallon at the start of the day.

18 Orthogonality Principle

Consider the linear MMSE estimate AX + B of RV Y. The MS error e is a function of A and B, and is minimized when: ∂e/∂A = 0, ∂e/∂B = 0.

∂e/∂A = ∂/∂A E[(Y − (AX + B))²] = E[2(Y − (AX + B))(−X)] = 0

⟹ E[(Y − (AX + B)) X] = 0,  i.e.,  (error RV) ⟂ (data, or observation)

The linear MMSE estimate AX + B of Y is the one that makes the error orthogonal to the data.

19 Orthogonality Principle: Intuitive Sketch

Sketch for the case of the homogeneous linear MMSE estimate (B = 0): the vector y, its estimate Ax lying along x, and the error y − Ax drawn perpendicular to x.

Note that B = 0 means the estimate is ŷ = Ax, in the same direction as x.

20 Example: Finding a Homogeneous Linear MMSE Estimate

Find a such that e = E{[Y − aX]²} is minimum (Y − aX is the error).

Applying the orthogonality principle, we need: E{[Y − aX] X} = 0 (error orthogonal to data)

E{YX} − a E{X²} = 0  ⟹  E{XY} = a E{X²}  ⟹  a = E{XY} / E{X²}
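
The sketch below (the particular X and Y are illustrative assumptions) computes a from sample moments and confirms that the resulting error is orthogonal to the data:

```python
# Sketch: homogeneous linear MMSE coefficient a = E[XY] / E[X^2], and a
# check that the resulting error Y - aX is orthogonal to the data X.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1.0, 2.0, 100_000)       # arbitrary data RV (nonzero mean)
y = 2.0 * x + rng.standard_normal(100_000)

a = np.mean(x * y) / np.mean(x**2)
err = y - a * x
print("a:", a)
print("E[err * X] (should be ~0):", np.mean(err * x))
```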

21 Random Vectors

A random vector is a column vector X = [X₁, X₂, ..., X_n]^T whose components X_i are RVs, where T denotes the transpose.

To find the probability that random vector X is in region D, we do an n-dimensional integral of the pdf over region D:

Pr{X ∈ D} = ∫_D f_X(x₁, x₂, ..., x_n) dx₁ dx₂ ... dx_n

where the joint density for the RVs is

f_X(x) = f_X(x₁, x₂, ..., x_n) = ∂ⁿ F(x₁, x₂, ..., x_n) / (∂x₁ ... ∂x_n)

and the joint cdf for the RVs is: F_X(x₁, x₂, ..., x_n) = Pr{X₁ ≤ x₁, ..., X_n ≤ x_n}
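
In practice such n-dimensional integrals are often approximated by Monte Carlo. A minimal sketch, assuming a 2-D standard-normal vector and a disk-shaped region D purely for illustration:

```python
# Sketch: Pr{X in D} as an n-dimensional integral of the joint pdf over D,
# approximated by Monte Carlo, with D = {x in R^2 : ||x|| < 1} and
# independent N(0, 1) components (both choices are illustrative).
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((1_000_000, 2))          # rows are samples of the vector
p_hat = np.mean(np.linalg.norm(X, axis=1) < 1.0)
print("Pr{||X|| < 1} ~", p_hat)                  # exact value is 1 - exp(-1/2)
```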

22 Mean Vectors

The random (column) vector X = [X₁, X₂, ..., X_n]^T has mean (vector)

E(X) = [η_X1, η_X2, ..., η_Xn]^T

where each entry in the vector is the mean of the corresponding RV.

Example: Consider the random vector [X₁, X₂, X₃, X₄]^T where the component RVs X_k are independent Gaussians, with X_k ~ N(η_Xk = k, σ²_Xk = k²). Then the mean vector is:

E(X) = [η_X1, η_X2, η_X3, η_X4]^T = [1, 2, 3, 4]^T

23 Random Vectors, continued

In the over-all joint cdf, F_X(x₁, ..., x_n): replace some of the arguments by ∞ to obtain the joint cdf for the other RVs; e.g., F(x₁, ∞, x₃, ∞) = F(x₁, x₃)

Integrate the over-all joint pdf, f_X(x₁, ..., x_n), over some of the arguments to obtain the joint pdf for the other RVs; e.g., ∫∫ f_X(x₁, x₂, x₃, x₄) dx₂ dx₄ = f(x₁, x₃)

24 Transformations of Random Vectors (6.4.1)

Given n functions g₁(X), ..., g_n(X), where X = [X₁, ..., X_n]^T, consider the RVs: Y₁ = g₁(X), ..., Y_n = g_n(X). Then solve the system backwards for the x_i's in terms of the y_i's.

1. If the system of equations has no roots, then f_Y(y₁, ..., y_n) = 0.
2. If the system of equations has a single root, then:

f_Y(y₁, ..., y_n) = f_X(x₁, ..., x_n) / |J(x₁, ..., x_n)|   (*)

where (continued on the next slide)

25 Transformations of R. Vectors, continued

J(x₁, ..., x_n) = det [ ∂g₁/∂x₁ ... ∂g₁/∂x_n ; ... ; ∂g_n/∂x₁ ... ∂g_n/∂x_n ]

is the Jacobian of the transformation ("d(new)/d(old)").

3. If the system of equations has multiple roots, then add corresponding terms (one for each root) to equation (*), summing over all roots.
4. Replace the x_i's in the final equation by the y_i's obtained from the solve-backwards step.

26 Independence of RVs

The RVs X₁, ..., X_n are (mutually) independent iff:

F(x₁, ..., x_n) = F(x₁) ⋯ F(x_n)  ⟺  f(x₁, ..., x_n) = f(x₁) ⋯ f(x_n)

If the RVs X₁, ..., X_n are independent, then so are the RVs Y₁ = g₁(X₁), ..., Y_n = g_n(X_n). (Functions of independent RVs are themselves independent.)

27 Independent Experiments & Repeated Trials

Let S^n = S₁ × S₂ × ⋯ × S_n be the sample space of a combined experiment where each RV X_i depends only on outcome ζ_i of S_i; i.e., X_i(ζ₁, ζ₂, ..., ζ_i, ..., ζ_n) = X_i(ζ_i)

Special case: repeat the same experiment n times; then each of the repetitions is independent of the others, and the RVs X_i are independent and identically distributed (iid).

Example: Toss a coin 100 times; let X_i = 1 if the i-th toss is heads, 0 if tails. Each X_i has the pmf f_Xi(x_i) with mass 1/2 at x_i = 0 and mass 1/2 at x_i = 1. (A simulation sketch follows.)
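
A minimal simulation of the coin-toss example (the seed is an arbitrary choice):

```python
# Sketch of the repeated-trials example: 100 independent tosses of a fair
# coin, with X_i = 1 for heads and 0 for tails (iid Bernoulli(1/2)).
import numpy as np

rng = np.random.default_rng(6)
tosses = rng.integers(0, 2, size=100)    # each X_i is 0 or 1 w.p. 1/2
print("number of heads:", tosses.sum())
```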

28 Correlation Matrices for R. Vectors

Multiple RVs {X_i} are uncorrelated if C_ij = cov(X_i, X_j) = 0 for all i ≠ j.

Define the correlation matrix for the random vector X = [X₁ ... X_n]^T:

R_X = R_XX = E[X X^T] =
[ R₁₁  R₁₂  ...  R₁n ]
[ R₂₁  R₂₂  ...  R₂n ]
[  ⋮    ⋮         ⋮  ]
[ Rn₁  Rn₂  ...  Rnn ]

where R_ij = E{X_i X_j} = R_ji is the correlation of RVs X_i and X_j. (Note that the matrix is symmetric.)

29 Correlation Matrices & Covariance Matrices

Define the covariance matrix for the random vector X = [X₁ ... X_n]^T:

C_X = C_XX =
[ C₁₁  C₁₂  ...  C₁n ]
[ C₂₁  C₂₂  ...  C₂n ]
[  ⋮    ⋮         ⋮  ]
[ Cn₁  Cn₂  ...  Cnn ]

where C_ij = E{X_i X_j} − η_i η_j = R_ij − η_i η_j = C_ji is the covariance of RVs X_i and X_j.

Note that R_X = E{X X^T} = E{ [X₁ ... X_n]^T [X₁ ... X_n] }; the size of the product is (n, 1)(1, n) = (n, n).

Recall RVs X_i and X_j are said to be orthogonal if E{X_i X_j} = 0.

30 Correlation Matrices & Covariance Matrices - An Example

Find the covariance matrix for the random vector X = [X₁, X₂, X₃, X₄]^T where the component RVs X_k are independent Gaussians, each X_k ~ N(η = k, σ² = k²).

Note 1: The diagonal entries are just the variances: C_kk = k²
Note 2: The off-diagonal entries are the covariances; independent ⟹ uncorrelated ⟹ C_ij = 0 (i ≠ j)

Thus, C_X = diag(1, 4, 9, 16).
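
A numeric sketch of this example, assuming (as reconstructed above, since the superscripts were lost in the source text) component variances σ_k² = k²:

```python
# Sketch: for independent X_k ~ N(k, k^2), k = 1..4, the covariance
# matrix is diag(1, 4, 9, 16).
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
X = np.column_stack([rng.normal(k, k, size=n) for k in (1, 2, 3, 4)])
print(np.cov(X, rowvar=False).round(2))  # ~ diag(1, 4, 9, 16)
```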

31 Review of Facts from Linear Algebra

Definition: A square real matrix Z of size (n, n) is non-negative definite if

Q = A Z A^T ≥ 0   (*)

for any real (row) vector A = [a₁, ..., a_n].

Non-negative definite (nnd) matrices have all eigenvalues ≥ 0. If Q in equation (*) is strictly > 0, then Z is positive definite, and all of the eigenvalues of Z will be positive.

32 Special Properties of Correlation Matrices

Let D_n be the determinant of the correlation matrix R_X of the RVs {X_i}.

1. R_X is non-negative definite.
2. D_n is real and non-negative: D_n ≥ 0
3. D_n ≤ R₁₁ R₂₂ ⋯ R_nn, with equality iff the RVs {X_i} are mutually orthogonal, i.e., matrix R_X is a diagonal matrix.

Note that the covariance matrix C_X will have properties similar to the 3 above, because it is the correlation matrix for the centered RVs {X_i − η_i}.
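
Property 1 is easy to probe numerically: the eigenvalues of a sample correlation matrix should all be non-negative. A minimal sketch with arbitrary illustrative data:

```python
# Sketch: a correlation matrix R_X = E[X X^T] is non-negative definite,
# so its eigenvalues are all >= 0. Quick numeric check on sample data.
import numpy as np

rng = np.random.default_rng(8)
X = rng.standard_normal((10_000, 3))
R = (X.T @ X) / X.shape[0]              # sample estimate of E[X X^T]
print(np.linalg.eigvalsh(R))            # all eigenvalues non-negative
```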

33 Conditional Densities & Distributions

Recall the conditional pdf for RVs X and Y: f(y|x) = f(x, y) / f(x)

Similarly, the conditional pdf for RVs X_n, ..., X_{k+1}, given X_k, ..., X_1:

f(x_n, ..., x_{k+1} | x_k, ..., x_1) = f(x_1, ..., x_k, ..., x_n) / f(x_1, ..., x_k)

Example: f(x₁ | x₂, x₃) = f(x₁, x₂, x₃) / f(x₂, x₃) = (d/dx₁) F(x₁ | x₂, x₃)

Chain Rule, with 4 RVs:

f(x₁, x₂, x₃, x₄) = f(x₄ | x₃, x₂, x₁) f(x₃ | x₂, x₁) f(x₂ | x₁) f(x₁)
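
A small numeric demonstration of the chain rule on a discrete joint pmf (the pmf is randomly generated, purely for illustration). Since the chain rule is an identity, the factors telescope, but the sketch shows how marginals and conditionals are computed from a joint pmf:

```python
# Sketch: check f(x1,x2,x3,x4) = f(x4|x3,x2,x1) f(x3|x2,x1) f(x2|x1) f(x1)
# on a random discrete joint pmf over {0,1}^4.
import numpy as np

rng = np.random.default_rng(10)
p = rng.random((2, 2, 2, 2))
p /= p.sum()                                   # joint pmf on {0,1}^4

x1, x2, x3, x4 = 1, 0, 1, 1                    # arbitrary point
f1   = p.sum(axis=(1, 2, 3))[x1]               # marginal f(x1)
f12  = p.sum(axis=(2, 3))[x1, x2]              # marginal f(x1, x2)
f123 = p.sum(axis=3)[x1, x2, x3]               # marginal f(x1, x2, x3)
lhs  = p[x1, x2, x3, x4]
rhs  = (lhs / f123) * (f123 / f12) * (f12 / f1) * f1
print(lhs, rhs)                                # equal: the factors telescope
```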
