Solutions to Homework Set #5 (Prepared by Lele Wang)

1. Neural net. Let Y = X + Z, where the signal X ~ U[−1, 1] and the noise Z ~ N(0, 1) are independent.

(a) Find the function g(y) that minimizes

    MSE = E[(sgn(X) − g(Y))²],   where sgn(x) = −1 for x ≤ 0 and +1 for x > 0.

(b) Plot g(y) vs. y.

(a) The minimum MSE is achieved when g(y) = E(sgn(X) | Y). We have

    g(y) = E(sgn(X) | Y = y) = ∫ sgn(x) f_{X|Y}(x|y) dx.

To find the conditional pdf of X given Y, we use

    f_{X|Y}(x|y) = f_{Y|X}(y|x) f_X(x) / f_Y(y),   where f_X(x) = 1/2 for −1 ≤ x ≤ 1 and 0 otherwise.

Since X and Z are independent,

    f_{Y|X}(y|x) = f_Z(y − x),   i.e., Y | {X = x} ~ N(x, 1).

To find f_Y(y) we integrate f_{Y|X}(y|x) f_X(x) over x:

    f_Y(y) = ∫ f_{Y|X}(y|x) f_X(x) dx = (1/2) ∫_{−1}^{1} (1/√(2π)) e^{−(x−y)²/2} dx = (1/2)(Q(y−1) − Q(y+1)),

where Q denotes the Gaussian tail probability. Combining the above results, we get

    g(y) = (1/f_Y(y)) ∫_{−1}^{1} sgn(x) (1/2)(1/√(2π)) e^{−(y−x)²/2} dx
         = [ ∫_0^1 (1/√(2π)) e^{−(y−x)²/2} dx − ∫_{−1}^0 (1/√(2π)) e^{−(y−x)²/2} dx ] / [ ∫_{−1}^1 (1/√(2π)) e^{−(y−x)²/2} dx ]
         = (Q(y−1) − 2Q(y) + Q(y+1)) / (Q(y−1) − Q(y+1)).

(b) The plot is shown below. Note the sigmoidal shape corresponding to the common neural network activation function.
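As a quick sanity check on the closed form for g(y) (an addition to the original solution, not part of it), the following minimal sketch, assuming numpy and scipy are available, compares the formula against a Monte Carlo estimate of E(sgn(X) | Y ≈ y0):

```python
import numpy as np
from scipy.stats import norm

def g_closed_form(y):
    # g(y) = (Q(y-1) - 2Q(y) + Q(y+1)) / (Q(y-1) - Q(y+1)), Q = Gaussian tail
    Q = norm.sf
    return (Q(y - 1) - 2 * Q(y) + Q(y + 1)) / (Q(y - 1) - Q(y + 1))

rng = np.random.default_rng(0)
n = 2_000_000
x = rng.uniform(-1, 1, n)            # X ~ U[-1, 1]
y = x + rng.standard_normal(n)       # Y = X + Z with Z ~ N(0, 1)

for y0 in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    mask = np.abs(y - y0) < 0.05     # condition on Y falling in a thin bin
    print(y0, np.sign(x[mask]).mean().round(3), g_closed_form(y0).round(3))
```

The two columns should agree to within Monte Carlo error, and sweeping y0 over a grid traces out the sigmoidal curve of part (b).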

[Figure: g(y) versus y, a smooth sigmoidal curve increasing from −1 to +1.]

2. Additive shot noise channel. Consider an additive noise channel Y = X + Z, where the signal X ~ N(0, 1) and the noise Z | {X = x} ~ N(0, x²), i.e., the noise power increases linearly with the signal squared.

(a) Find E(Z²).

(b) Find the best linear MSE estimate of X given Y.

(a) Since Z | {X = x} ~ N(0, x²),

    E(Z | X = x) = 0,   Var(Z | X = x) = E(Z² | X = x) = x².

Therefore,

    E(Z²) = E(E(Z² | X)) = E(X²) = 1.

(b) From the best linear estimate formula,

    X̂ = (Cov(X,Y)/σ_Y²)(Y − E(Y)) + E(X).

Here we have E(X) = 0,

    E(Y) = E(X + Z) = E(X) + E(Z) = E(X) + E(E(Z | X)) = 0 + 0 = 0,
    E(XZ) = E(E(XZ | X)) = E(X E(Z | X)) = E(X · 0) = 0,
    σ_Y² = E(Y²) − (E(Y))² = E((X + Z)²) = E(X²) + E(Z²) + 2E(XZ) = 1 + 1 + 0 = 2,
    Cov(X,Y) = E((X − E(X))(Y − E(Y))) = E(X(X + Z)) = E(X²) + E(XZ) = 1.

Using all of the above, we get

    X̂ = Y/2.
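The moments above are easy to confirm numerically. A minimal Monte Carlo sketch (my addition, assuming numpy), which also checks that X̂ = Y/2 attains the predicted linear MMSE of σ_X² − Cov²(X,Y)/σ_Y² = 1 − 1/2 = 1/2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
x = rng.standard_normal(n)               # X ~ N(0, 1)
z = np.abs(x) * rng.standard_normal(n)   # Z | {X = x} ~ N(0, x^2)
y = x + z

print(np.mean(z**2))             # E(Z^2), should be close to 1
print(np.var(y))                 # sigma_Y^2, should be close to 2
print(np.mean((x - y/2)**2))     # MSE of Y/2, should be close to 1/2
```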

3. Estimation vs. detection. Let the signal

    X = +1 with probability 1/2,   X = −1 with probability 1/2,

and the noise Z ~ Unif[−2, 2] be independent random variables. Their sum Y = X + Z is observed.

(a) Find the best MSE estimate of X given Y and its MSE.

(b) Now suppose we use a decoder to decide whether X = +1 or X = −1 so that the probability of error is minimized. Find the optimal decoder and its probability of error. Compare the optimal decoder's MSE to the minimum MSE.

(a) We can easily find the piecewise constant density of Y:

    f_Y(y) = 1/8 for −3 ≤ y < −1,   1/4 for −1 ≤ y ≤ 1,   1/8 for 1 < y ≤ 3,   and 0 otherwise.

The conditional probabilities of X given Y are

    P{X = +1 | Y = y} = 0 for −3 ≤ y < −1,   1/2 for −1 ≤ y ≤ 1,   1 for 1 < y ≤ 3,
    P{X = −1 | Y = y} = 1 for −3 ≤ y < −1,   1/2 for −1 ≤ y ≤ 1,   0 for 1 < y ≤ 3.

Thus the best MSE estimate is

    g(Y) = E(X | Y) = −1 for −3 ≤ Y < −1,   0 for −1 ≤ Y ≤ 1,   +1 for 1 < Y ≤ 3.

The minimum mean square error is

    E(Var(X | Y)) = E(E(X² | Y) − (E(X | Y))²) = E(1 − g(Y)²) = 1 − ∫ g(y)² f_Y(y) dy
                  = 1 − ( ∫_{−3}^{−1} (1/8) dy + ∫_{1}^{3} (1/8) dy ) = 1 − (1/4 + 1/4) = 1/2.
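A short simulation (an addition to the solution, assuming numpy) confirms that the piecewise constant estimator above achieves a mean square error of 1/2:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000_000
x = rng.choice([-1.0, 1.0], n)        # X = +/-1 with probability 1/2 each
y = x + rng.uniform(-2, 2, n)         # Z ~ Unif[-2, 2]

g = np.where(y < -1, -1.0, np.where(y > 1, 1.0, 0.0))  # E(X | Y) from part (a)
print(np.mean((x - g)**2))            # should be close to 1/2
```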

(b) The optimal decoder is given by the MAP rule. The a posteriori pmf of X was found in part (a). Thus the MAP rule reduces to

    D(y) = −1 for −3 ≤ y < −1,   either ±1 for −1 ≤ y ≤ 1,   +1 for 1 < y ≤ 3.

Since either value can be chosen for D(y) in the center range of Y, a symmetric decoder suffices, i.e.,

    D(y) = −1 for y < 0,   +1 for y ≥ 0.

The probability of decoding error is

    P{D(Y) ≠ X} = P{X = +1, Y < 0} + P{X = −1, Y ≥ 0}
                = P{Y < 0 | X = +1} P{X = +1} + P{Y ≥ 0 | X = −1} P{X = −1}
                = (1/4)(1/2) + (1/4)(1/2) = 1/4.

If we use the decoder (detector) as an estimator, its MSE is

    E((D(Y) − X)²) = 4 · P{D(Y) ≠ X} = 1,

since (D(Y) − X)² = 4 whenever D(Y) ≠ X and 0 otherwise. This MSE is twice that of the minimum mean square error estimator.

4. Linear estimator. Consider a channel with the observation Y = XZ, where the signal X and the noise Z are uncorrelated Gaussian random variables. Let E[X] = 1, E[Z] = 2, σ_X² = 5, and σ_Z² = 8.

(a) Find the best MSE linear estimate of X given Y.

(b) Suppose your friend from Caltech tells you that he was able to derive an estimator with a lower MSE. Your friend from UCLA disagrees, saying that this is not possible because the signal and the noise are Gaussian, and hence the best linear MSE estimator will also be the best MSE estimator. Could your UCLA friend be wrong?

(a) We know that the best linear estimate is given by the formula

    X̂ = (Cov(X,Y)/σ_Y²)(Y − E(Y)) + E(X).

Note that X and Z Gaussian and uncorrelated implies they are independent. Therefore,

    E(Y) = E(XZ) = E(X)E(Z) = 2,
    E(XY) = E(X²Z) = E(X²)E(Z) = (σ_X² + E²(X)) E(Z) = 6 · 2 = 12,
    E(Y²) = E(X²Z²) = E(X²)E(Z²) = (σ_X² + E²(X))(σ_Z² + E²(Z)) = 6 · 12 = 72,
    σ_Y² = E(Y²) − E²(Y) = 72 − 4 = 68,
    Cov(X,Y) = E(XY) − E(X)E(Y) = 12 − 2 = 10.

Using all of the above, we get

    X̂ = (10/68)(Y − 2) + 1 = (5/34)Y + 12/17.
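The following sketch (an addition, assuming numpy) verifies these moments by simulation and evaluates the MSE of the linear estimate, which should come out near σ_X² − Cov²(X,Y)/σ_Y² = 5 − 100/68 ≈ 3.53:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000
x = 1 + np.sqrt(5) * rng.standard_normal(n)   # X ~ N(1, 5)
z = 2 + np.sqrt(8) * rng.standard_normal(n)   # Z ~ N(2, 8), independent of X
y = x * z

print(np.mean(y), np.var(y))                  # ~2 and ~68
print(np.cov(x, y)[0, 1])                     # ~10
xhat = (5/34) * y + 12/17                     # best linear estimate from (a)
print(np.mean((x - xhat)**2))                 # ~5 - 100/68 = 3.53
```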

(b) The fact that the best linear estimate equals the best MMSE estimate when the input and the noise are independent Gaussians is only known to be true for additive channels. For multiplicative channels this need not be the case in general. In the following, we prove that Y is not Gaussian, by contradiction. Suppose Y is Gaussian; then Y ~ N(2, 68), so

    f_Y(y) = (1/√(2π · 68)) e^{−(y−2)²/(2 · 68)}.

On the other hand, as a function of the two random variables X and Z, Y = XZ has pdf

    f_Y(y) = ∫ f_X(x) f_Z(y/x) (1/|x|) dx.

But these two expressions are not consistent. Evaluating the second one at y = 0,

    f_Y(0) = ∫ f_X(x) f_Z(0) (1/|x|) dx = f_Z(0) ∫ f_X(x) (1/|x|) dx ≥ f_Z(0) ∫ f_X(x) dx
           = f_Z(0) = (1/√(2π · 8)) e^{−(0−2)²/(2 · 8)} > (1/√(2π · 68)) e^{−(0−2)²/(2 · 68)} = f_Y(0),

which is a contradiction. (The inequality in the first line holds because ∫ f_X(x)(1/|x|) dx in fact diverges: f_X is bounded away from zero near the origin, where 1/|x| is not integrable.) Hence X and Y are not jointly Gaussian, and we might be able to derive an estimator with a lower MSE, so the UCLA friend could indeed be wrong.

5. Additive-noise channel with path gain. Consider the additive noise channel shown in the figure below, where X and Z are zero mean and uncorrelated, and a and b are constants.

    [Figure: X is scaled by a, the noise Z is added, and the sum is scaled by b, so that Y = b(aX + Z).]

Find the MMSE linear estimate of X given Y and its MSE in terms only of σ_X², σ_Z², a, and b.

By the theorem on the MMSE linear estimate, we have

    X̂ = (Cov(X,Y)/σ_Y²)(Y − E(Y)) + E(X).

Since X and Z are zero mean and uncorrelated, we have

    E(X) = 0,   E(Y) = b(aE(X) + E(Z)) = 0,
    Cov(X,Y) = E(XY) − E(X)E(Y) = E(X b(aX + Z)) = a b σ_X²,
    σ_Y² = E(Y²) − (E(Y))² = E(b²(aX + Z)²) = b²a²σ_X² + b²σ_Z².

Hence, the best linear MSE estimate of X given Y is given by

    X̂ = (a b σ_X² / (b²a²σ_X² + b²σ_Z²)) Y = (a σ_X² / (b(a²σ_X² + σ_Z²))) Y,

and its MSE is

    MSE = σ_X² − Cov²(X,Y)/σ_Y² = σ_X² − a²σ_X⁴/(a²σ_X² + σ_Z²) = σ_X² σ_Z² / (a²σ_X² + σ_Z²).
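A numerical spot check of the two formulas just derived (an addition, assuming numpy; the constants a, b and the variances below are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2_000_000
sx2, sz2, a, b = 4.0, 1.0, 0.7, 2.0            # arbitrary example parameters
x = np.sqrt(sx2) * rng.standard_normal(n)
z = np.sqrt(sz2) * rng.standard_normal(n)
y = b * (a * x + z)

c = a * sx2 / (b * (a**2 * sx2 + sz2))         # derived estimator coefficient
print(np.mean((x - c * y)**2))                 # empirical MSE
print(sx2 * sz2 / (a**2 * sx2 + sz2))          # predicted MSE
```

Note that b drops out of the MSE, as it should: scaling the observation by a known nonzero constant neither helps nor hurts estimation.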

6. Worst noise distribution. Consider an additive noise channel Y = X + Z, where the signal X ~ N(0, P) and the noise Z has zero mean and variance N. Assume X and Z are independent. Find a distribution of Z that maximizes the minimum MSE of estimating X given Y, i.e., the distribution of the worst noise Z that has the given mean and variance. You need to justify your answer.

The worst noise has the Gaussian distribution, i.e., Z ~ N(0, N). To prove this statement, we show that the MSE corresponding to any other distribution of Z is less than or equal to the MSE for Gaussian noise, i.e., MSE_NonG ≤ MSE_G. We know that for any noise, MMSE estimation is no worse than linear MMSE estimation, so MSE_NonG ≤ LMSE. The linear MMSE estimate of X given Y is given by

    X̂ = (Cov(X,Y)/σ_Y²)(Y − E(Y)) + E(X) = (P/(P + N)) Y,

    LMSE = σ_X² − Cov²(X,Y)/σ_Y² = P − P²/(P + N) = NP/(P + N).

Note that the LMSE depends only on the second moments of X and Z, so the MSE corresponding to any distribution of Z is upper bounded by the same LMSE, i.e., MSE_NonG ≤ NP/(P + N). When Z is Gaussian and independent of X, (X, Y) are jointly Gaussian. Then MSE_G is equal to the LMSE, i.e., MSE_G = NP/(P + N). Hence,

    MSE_NonG ≤ NP/(P + N) = MSE_G,

which shows that the Gaussian noise is the worst.
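To see the gap concretely, the sketch below (my addition, assuming numpy) uses a non-Gaussian noise with the same mean and variance, namely Z = ±√N with probability 1/2 each. For this Z the conditional mean E(X | Y) has a simple closed form, since given Y = y the signal X must be either y − √N or y + √N, and its MSE comes out strictly below the Gaussian-noise MMSE of NP/(P + N):

```python
import numpy as np

rng = np.random.default_rng(5)
P, N, n = 1.0, 1.0, 2_000_000
s = np.sqrt(N)
x = np.sqrt(P) * rng.standard_normal(n)      # X ~ N(0, P)
z = rng.choice([-s, s], n)                   # non-Gaussian: mean 0, variance N
y = x + z

fx = lambda t: np.exp(-t**2 / (2 * P))       # N(0, P) density up to a constant
w1, w2 = fx(y - s), fx(y + s)                # posterior weights of X = y - s, y + s
xhat = ((y - s) * w1 + (y + s) * w2) / (w1 + w2)   # E(X | Y) for this noise

print(np.mean((x - xhat)**2))                # MMSE under binary noise (< 0.5)
print(N * P / (P + N))                       # Gaussian-noise MMSE = 0.5
```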

7. Image processing. A pixel signal X ~ U[−k, k] is digitized to obtain

    X̃ = i + 1/2,   if i < X ≤ i + 1,   i = −k, −k+1, ..., k−2, k−1.

To improve the visual appearance, the digitized value X̃ is dithered by adding an independent noise Z with mean E(Z) = 0 and variance Var(Z) = N, to obtain Y = X̃ + Z.

(a) Find the correlation of X and Y.

(b) Find the best linear MSE estimate of X given Y. Your answer should be in terms only of k, N, and Y.

(a) From the definition of X̃, we know P{X̃ = i + 1/2} = P{i < X ≤ i + 1} = 1/(2k). By the law of total expectation, we have

    E(XY) = E(X(X̃ + Z)) = E(X X̃)
          = Σ_{i=−k}^{k−1} E[X X̃ | i < X ≤ i + 1] P{i < X ≤ i + 1}
          = Σ_{i=−k}^{k−1} (i + 1/2) E[X | i < X ≤ i + 1] (1/(2k))
          = (1/(2k)) Σ_{i=−k}^{k−1} (i + 1/2)²
          = (1/(2k)) · k(4k² − 1)/6 = (4k² − 1)/12,

where the sum is evaluated using Σ_{i=1}^{k} i² = k(k+1)(2k+1)/6. Since E(X) = 0, this is also Cov(X,Y).

(b) We have E(X) = 0, E(Y) = E(X̃) + E(Z) = 0, and

    σ_Y² = Var(X̃) + Var(Z) = Σ_{i=−k}^{k−1} (i + 1/2)² (1/(2k)) + N = (4k² − 1)/12 + N.

Then the best linear MMSE estimate of X given Y is given by

    X̂ = (Cov(X,Y)/σ_Y²)(Y − E(Y)) + E(X) = [ (4k² − 1)/12 ] / [ (4k² − 1)/12 + N ] · Y = ((4k² − 1)/(4k² − 1 + 12N)) Y.

8. Covariance matrices. Which of the following matrices can be a covariance matrix? Justify your answer either by constructing a random vector X, as a function of the i.i.d. zero mean unit variance random variables Z₁, Z₂, and Z₃, with the given covariance matrix, or by establishing a contradiction.

    [The entries of the four matrices (a)–(d) did not survive transcription. From the answers below, (b) must be [2 1; 1 2] and (c) must be [1 1 1; 1 2 2; 1 2 3].]

(a) This cannot be a covariance matrix because it is not symmetric.

(b) This is a covariance matrix for X₁ = Z₁ + Z₂ and X₂ = Z₂ + Z₃.

(c) This is a covariance matrix for X₁ = Z₁, X₂ = Z₁ + Z₂, and X₃ = Z₁ + Z₂ + Z₃.

(d) This cannot be a covariance matrix. Suppose it were; then σ₁₃² = 9 > σ₁₁σ₃₃ = 6, which contradicts the Cauchy–Schwarz inequality. You can also verify this by showing that the matrix is not positive semidefinite: its determinant is negative, one of its eigenvalues is negative (λ ≈ −0.856), and one can exhibit a vector x with xᵀΣx < 0, directly violating the definition of positive semidefiniteness.
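For parts (b) and (c) the verification can also be done mechanically: if X = AZ with Cov(Z) = I, then Cov(X) = AAᵀ, which is automatically symmetric and positive semidefinite. A small sketch (my addition, assuming numpy) rebuilds both matrices this way and checks their eigenvalues:

```python
import numpy as np

# Constructions from parts (b) and (c): rows of A give X = A Z.
A_b = np.array([[1, 1, 0],             # X1 = Z1 + Z2
                [0, 1, 1]])            # X2 = Z2 + Z3
A_c = np.array([[1, 0, 0],             # X1 = Z1
                [1, 1, 0],             # X2 = Z1 + Z2
                [1, 1, 1]])            # X3 = Z1 + Z2 + Z3

for A in (A_b, A_c):
    S = A @ A.T                        # Cov(AZ) = A A^T when Cov(Z) = I
    print(S)
    print(np.all(np.linalg.eigvalsh(S) >= -1e-12))   # PSD check
```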

9. Iocane or Sennari: Return of the chemistry professor. An absent-minded chemistry professor forgets to label two identically looking bottles. One contains a chemical named Iocane and the other contains a chemical named Sennari. It is well known that the radioactivity level of Iocane has the Unif[0, 1] distribution, while the radioactivity level of Sennari has the Exp(1) distribution. In the previous homework, we found the optimal rule to decide which bottle is which by measuring the radioactivity level of one of the bottles. The chemistry professor got smarter this time; she now measures both bottles.

(a) Let X be the radioactivity level measured from one bottle, and let Y be the radioactivity level measured from the other bottle. What is the optimal decision rule (based on the measurement (X, Y)) that maximizes the chance of correctly identifying the contents? Assume that the radioactivity level of one chemical is independent of the level of the other bottle (conditioned on which bottle contains which).

(b) What is the associated probability of error?

Let Θ = 1 denote the case in which the first bottle (measurement X) is Iocane and the second bottle (measurement Y) is Sennari. Let Θ = 2 denote the other case.

(a) Since the two cases are equally likely a priori, the optimal MAP rule is equivalent to the ML rule

    D(x,y) = 1 if f_{X,Y|Θ}(x,y | 1) > f_{X,Y|Θ}(x,y | 2),   and 2 otherwise.

Since f_{X,Y|Θ}(x,y | 1) = 1_{{0 ≤ x ≤ 1}} e^{−y} and f_{X,Y|Θ}(x,y | 2) = e^{−x} 1_{{0 ≤ y ≤ 1}},

    D(x,y) = 1 if (x ≤ 1, y > 1) or (0 ≤ y < x ≤ 1),   and 2 otherwise.

(b) The probability of error is given by

    P(Θ ≠ D(X,Y)) = (1/2) P(X ≤ Y ≤ 1 | Θ = 1) + (1/2) P(Y < X ≤ 1 | Θ = 2)
                  = P(X ≤ Y ≤ 1 | Θ = 1)   (by symmetry)
                  = ∫_0^1 ∫_x^1 e^{−y} dy dx = ∫_0^1 (e^{−x} − e^{−1}) dx = 1 − 2e^{−1} ≈ 0.264,

which is less than the error probability (1 − e^{−1})/2 ≈ 0.316 from the single measurement.
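A direct simulation of the decision rule (an addition, assuming numpy) reproduces this error probability:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2_000_000
theta = rng.integers(1, 3, n)                   # 1 or 2, equally likely
iocane = rng.uniform(0, 1, n)                   # Unif[0, 1] level
sennari = rng.exponential(1.0, n)               # Exp(1) level
x = np.where(theta == 1, iocane, sennari)       # first bottle's measurement
y = np.where(theta == 1, sennari, iocane)       # second bottle's measurement

decide1 = ((x <= 1) & (y > 1)) | ((y < x) & (x <= 1))
dhat = np.where(decide1, 1, 2)
print(np.mean(dhat != theta), 1 - 2/np.e)       # both close to 0.2642
```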

Solutions to Additional Exercises

1. Orthogonality. Let X̂ be the minimum MSE estimate of X given Y.

(a) Show that for any function g(y), E((X − X̂)g(Y)) = 0, i.e., the error (X − X̂) and g(Y) are orthogonal.

(b) Show that Var(X) = E(Var(X | Y)) + Var(X̂). Provide a geometric interpretation for this result.

(a) We use iterated expectation and the fact that, given Y, the factor g(Y) can be pulled out of the conditional expectation:

    E((X − X̂)g(Y)) = E[E((X − X̂)g(Y) | Y)]
                    = E[E((X − E(X|Y))g(Y) | Y)]
                    = E[g(Y) E(X − E(X|Y) | Y)]
                    = E[g(Y)(E(X|Y) − E(X|Y))] = 0.

(b) First we write

    E(Var(X|Y)) = E(X²) − E((E(X|Y))²)

and

    Var(E(X|Y)) = E((E(X|Y))²) − (E(E(X|Y)))² = E((E(X|Y))²) − (E(X))².

Adding the two terms completes the proof.

Interpretation: If we view X, E(X|Y), and X − E(X|Y) as vectors with norms √Var(X), √Var(E(X|Y)), and √E(Var(X|Y)), respectively, then this result provides a Pythagorean theorem, where the signal, the error, and the estimate are the sides of a right triangle (the estimate and the error being orthogonal).
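Both identities are easy to see numerically in a concrete example. Here is a minimal sketch (my addition, assuming numpy) using a jointly Gaussian pair for which E(X | Y) = Y/2 is known exactly:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2_000_000
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)           # here E(X | Y) = Y/2
xhat = y / 2

print(np.mean((x - xhat) * np.sin(y)))   # orthogonality: ~0 for any g(Y)
# Var(X) = E(Var(X|Y)) + Var(Xhat); note E((X - Xhat)^2) = E(Var(X|Y))
print(np.var(x), np.mean((x - xhat)**2) + np.var(xhat))
```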

2. Jointly Gaussian random variables. Let X and Y be jointly Gaussian random variables with pdf

    f_{X,Y}(x,y) = (1/(π√(3/4))) e^{−(4x²/3 + 16y²/3 + 8xy/3 − 8x − 16y + 16)/2}.

(a) Find E(X), E(Y), Var(X), Var(Y), and Cov(X,Y).

(b) Find the minimum MSE estimate of X given Y and its MSE.

(a) We can write the joint pdf for X and Y jointly Gaussian as

    f_{X,Y}(x,y) = (1/(2πσ_Xσ_Y√(1 − ρ²))) exp(−[a(x − μ_X)² + b(y − μ_Y)² + c(x − μ_X)(y − μ_Y)]),

where ρ = ρ_{X,Y} and

    a = 1/(2(1 − ρ²)σ_X²),   b = 1/(2(1 − ρ²)σ_Y²),   c = −ρ/((1 − ρ²)σ_Xσ_Y).

By inspection of the given f_{X,Y}(x,y) we find that

    a = 2/3,   b = 8/3,   c = 4/3,

and we get three equations in three unknowns:

    ρ² = c²/(4ab) = 1/4,   so ρ = −1/2 (negative, since c > 0),
    σ_X² = 1/(2(1 − ρ²)a) = 1,   σ_Y² = 1/(2(1 − ρ²)b) = 1/4.

To find μ_X and μ_Y, we match the linear terms of the exponent and solve the equations

    2aμ_X + cμ_Y = 4,   2bμ_Y + cμ_X = 8,

and find that μ_X = 2 and μ_Y = 1. Finally,

    Cov(X,Y) = ρσ_Xσ_Y = −1/4.

(b) X and Y are jointly Gaussian random variables. Thus, the minimum MSE estimate of X given Y is linear:

    E(X|Y) = (Cov(X,Y)/σ_Y²)(Y − μ_Y) + μ_X = −(Y − 1) + 2 = 3 − Y,

    MMSE = E(Var(X|Y)) = (1 − ρ²_{XY})σ_X² = 3/4.
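These parameters can be checked by sampling from N(μ, Σ) with μ = (2, 1) and Σ = [1, −1/4; −1/4, 1/4] and conditioning on Y (my addition, assuming numpy):

```python
import numpy as np

mu = np.array([2.0, 1.0])
Sigma = np.array([[1.0, -0.25],
                  [-0.25, 0.25]])       # parameters found in part (a)

rng = np.random.default_rng(8)
x, y = rng.multivariate_normal(mu, Sigma, 2_000_000).T

for y0 in (0.5, 1.0, 1.5):              # E(X | Y = y0) should equal 3 - y0
    mask = np.abs(y - y0) < 0.01
    print(y0, x[mask].mean().round(3), 3 - y0)

print(np.mean((x - (3 - y))**2))        # MMSE, should be close to 3/4
```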

3. Let X and Y be two random variables. Let Z = X + Y and let W = X − Y. Find the best linear estimate of W given Z as a function of E(X), E(Y), σ_X, σ_Y, ρ_{XY}, and Z.

By the theorem on the MMSE linear estimate, we have

    Ŵ = (Cov(W,Z)/σ_Z²)(Z − E(Z)) + E(W).

Here we have

    E(W) = E(X) − E(Y),   E(Z) = E(X) + E(Y),
    σ_Z² = σ_X² + σ_Y² + 2ρ_{XY}σ_Xσ_Y,
    Cov(W,Z) = E(WZ) − E(W)E(Z)
             = E((X − Y)(X + Y)) − (E(X) − E(Y))(E(X) + E(Y))
             = E(X²) − E(Y²) − (E(X))² + (E(Y))² = σ_X² − σ_Y².

So the best linear estimate of W given Z is

    Ŵ = ((σ_X² − σ_Y²)/(σ_X² + σ_Y² + 2ρ_{XY}σ_Xσ_Y))(Z − E(X) − E(Y)) + E(X) − E(Y).
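One way to gain confidence in the formula is to compare it against a least-squares line fitted to samples (my addition, assuming numpy; the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
mx, my, sx, sy, rho, n = 1.0, -2.0, 2.0, 1.0, 0.3, 2_000_000
x = mx + sx * rng.standard_normal(n)
e = rng.standard_normal(n)
y = my + sy * (rho * (x - mx)/sx + np.sqrt(1 - rho**2) * e)  # Corr(X,Y) = rho

z, w = x + y, x - y
coef = (sx**2 - sy**2) / (sx**2 + sy**2 + 2 * rho * sx * sy)
what = coef * (z - mx - my) + (mx - my)        # formula derived above

slope, intercept = np.polyfit(z, w, 1)         # empirical best linear fit
print(np.mean((w - what)**2), np.mean((w - (slope * z + intercept))**2))
```

The two mean square errors should agree to within sampling noise.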

4. Let X and Y be two random variables with joint pdf

    f(x,y) = x + y for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and 0 otherwise.

(a) Find the MMSE estimator of X given Y.
(b) Find the corresponding MSE.
(c) Find the pdf of Z = E(X | Y).
(d) Find the linear MMSE estimator of X given Y.
(e) Find the corresponding MSE.

(a) We first calculate the marginal pdf of Y by direct integration. For y < 0 or y > 1, f_Y(y) = 0. For 0 ≤ y ≤ 1,

    f_Y(y) = ∫_0^1 (x + y) dx = y + 1/2.

Note that the limits of the integration are derived from the definition of the joint pdf. Thus,

    f_Y(y) = y + 1/2 for 0 ≤ y ≤ 1, and 0 otherwise.

Now we can calculate the conditional pdf

    f_{X|Y}(x|y) = f_{XY}(x,y)/f_Y(y) = (x + y)/(1/2 + y)

for 0 ≤ x, y ≤ 1. Therefore, for 0 ≤ y ≤ 1,

    E[X | Y = y] = ∫_0^1 x f_{X|Y}(x|y) dx = ∫_0^1 (x² + yx)/(y + 1/2) dx = (1/3 + y/2)/(1/2 + y);

hence

    E[X | Y] = (1/3 + Y/2)/(1/2 + Y).

(b) The MSE is given by

    E(Var(X|Y)) = E(X²) − E((E(X|Y))²)
                = ∫_0^1 x²(x + 1/2) dx − ∫_0^1 ((1/3 + y/2)/(1/2 + y))² (y + 1/2) dy
                = 5/12 − (1/3 + ln(3)/144)
                = 1/12 − ln(3)/144 = (12 − ln 3)/144 ≈ 0.0757.

(c) Since

    E[X | Y = y] = (1/3 + y/2)/(1/2 + y) = 1/2 + 1/(6(1 + 2y)),

we have 5/9 ≤ E[X | Y = y] ≤ 2/3 for 0 ≤ y ≤ 1. From part (a), we know Z = E[X | Y] = 1/2 + 1/(6(1 + 2Y)); thus 5/9 ≤ Z ≤ 2/3. We first find the cdf F_Z(z) of Z and then differentiate it to get the pdf. Consider

    F_Z(z) = P{Z ≤ z} = P{1/2 + 1/(6(1 + 2Y)) ≤ z} = P{1 + 2Y ≥ 1/(6z − 3)} = P{Y ≥ (2 − 3z)/(6z − 3)}.

For 0 ≤ (2 − 3z)/(6z − 3) ≤ 1, i.e., 5/9 ≤ z ≤ 2/3, we have

    F_Z(z) = ∫_{(2−3z)/(6z−3)}^{1} (y + 1/2) dy.

By differentiating with respect to z, we get

    f_Z(z) = ((2 − 3z)/(6z − 3) + 1/2) · 3/(6z − 3)² = 1/(18(2z − 1)³)

for 5/9 ≤ z ≤ 2/3, and f_Z(z) = 0 otherwise.
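A Monte Carlo check of parts (a) and (b) (my addition, assuming numpy), sampling from f(x, y) = x + y by acceptance-rejection:

```python
import numpy as np

rng = np.random.default_rng(10)
m = 6_000_000
x = rng.uniform(0, 1, m)
y = rng.uniform(0, 1, m)
keep = rng.uniform(0, 2, m) < (x + y)     # accept (x, y) w.p. (x + y)/2
x, y = x[keep], y[keep]

g = (1/3 + y/2) / (1/2 + y)               # E(X | Y) from part (a)
print(np.mean((x - g)**2))                # MSE, should be ~(12 - ln 3)/144
print((12 - np.log(3)) / 144)             # = 0.07570...
```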

(d) The best linear MMSE estimate is

    X̂ = (Cov(X,Y)/σ_Y²)(Y − E(Y)) + E(X).

Here we have

    E(X) = ∫_0^1 ∫_0^1 x(x + y) dy dx = ∫_0^1 x(x + 1/2) dx = 7/12,
    E(Y) = ∫_0^1 y(y + 1/2) dy = 7/12,
    E(XY) = ∫_0^1 ∫_0^1 xy(x + y) dy dx = 1/3,
    Cov(X,Y) = E(XY) − E(X)E(Y) = 1/3 − 49/144 = −1/144,
    σ_Y² = E(Y²) − (E(Y))² = ∫_0^1 y²(y + 1/2) dy − (7/12)² = 5/12 − 49/144 = 11/144.

So the best linear MMSE estimate is

    X̂ = −(1/11)(Y − 7/12) + 7/12 = (7 − Y)/11.

(e) The MSE of the linear estimate is

    LMSE = σ_X² − Cov²(X,Y)/σ_Y².

Here, by symmetry, σ_X² = σ_Y² = 11/144. Thus,

    LMSE = 11/144 − (1/144)²/(11/144) = 11/144 − 1/(11 · 144) = 120/1584 = 5/66 ≈ 0.0758.

We can check that LMSE > MSE, since 5/66 ≈ 0.07576 while (12 − ln 3)/144 ≈ 0.07570.

5. Additive-noise channel with signal dependent noise. Consider the channel with correlated signal X and noise Z and observation Y = 2X + Z, where X and Z have means μ_X and μ_Z, σ_X² = 4, σ_Z² = 9, and ρ_{X,Z} = −3/8. Find the best MSE linear estimate of X given Y.

The best linear MMSE estimate is given by the formula

    X̂ = (Cov(X,Y)/σ_Y²)(Y − E(Y)) + E(X).

Here we have

    E(Y) = 2E(X) + E(Z) = 2μ_X + μ_Z,
    σ_Y² = 4σ_X² + σ_Z² + 4ρ_{XZ}σ_Xσ_Z = 16 + 9 + 4(−3/8)(2)(3) = 16,
    Cov(X,Y) = E(XY) − E(X)E(Y) = E(2X² + XZ) − μ_X(2μ_X + μ_Z)
             = 2(σ_X² + μ_X²) + (ρ_{XZ}σ_Xσ_Z + μ_Xμ_Z) − 2μ_X² − μ_Xμ_Z
             = 2σ_X² + ρ_{XZ}σ_Xσ_Z = 8 − 9/4 = 23/4.

So the best linear MMSE estimate is

    X̂ = (23/64)(Y − 2μ_X − μ_Z) + μ_X.
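A final numerical check (my addition, assuming numpy; the means mux and muz below are illustrative placeholder values, since the answer above holds for any means):

```python
import numpy as np

rng = np.random.default_rng(11)
n, sx, sz, rho = 2_000_000, 2.0, 3.0, -3/8
mux, muz = 1.0, 0.0                    # placeholder means for illustration
u = rng.standard_normal(n)
v = rng.standard_normal(n)
x = mux + sx * u
z = muz + sz * (rho * u + np.sqrt(1 - rho**2) * v)   # Corr(X, Z) = -3/8
y = 2 * x + z

print(np.var(y), np.cov(x, y)[0, 1])                 # ~16 and ~23/4 = 5.75
xhat = (23/64) * (y - 2*mux - muz) + mux
slope, intercept = np.polyfit(y, x, 1)               # empirical best line
print(np.mean((x - xhat)**2), np.mean((x - (slope*y + intercept))**2))
```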
