ECE 541 Stochastic Signals and Systems Problem Set 9 Solutions
Problem Solutions: Yates and Goodman

Problem Solution
The joint PDF of X and Y is

    f_{X,Y}(x, y) = { 6(y − x)   0 ≤ x ≤ y ≤ 1
                    { 0          otherwise                         (1)

(a) The conditional PDF of X given Y is found by dividing the joint PDF by the marginal with respect to Y. For y < 0 or y > 1, f_Y(y) = 0. For 0 ≤ y ≤ 1,

    f_Y(y) = ∫_0^y 6(y − x) dx = [6xy − 3x²]_{x=0}^{x=y} = 3y²    (2)

The complete expression for the marginal PDF of Y is

    f_Y(y) = { 3y²   0 ≤ y ≤ 1
             { 0     otherwise                                     (3)

Thus for 0 < y ≤ 1,

    f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) = { 6(y − x)/(3y²)   0 ≤ x ≤ y
                                          { 0                otherwise    (4)

(b) The minimum mean square estimator of X given Y = y is

    X̂_M(y) = E[X | Y = y] = ∫ x f_{X|Y}(x|y) dx = ∫_0^y 6x(y − x)/(3y²) dx    (5)
            = [3x²y − 2x³]_{x=0}^{x=y} / (3y²) = y/3               (6)

(c) First we must find the marginal PDF of X. For 0 ≤ x ≤ 1,

    f_X(x) = ∫ f_{X,Y}(x, y) dy = ∫_x^1 6(y − x) dy = [3y² − 6xy]_{y=x}^{y=1}    (7)
           = 3(1 − 2x + x²) = 3(1 − x)²                            (8)

The conditional PDF of Y given X is

    f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) = { 2(y − x)/(1 − 2x + x²)   x ≤ y ≤ 1
                                          { 0                        otherwise    (9)
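As a quick sanity check (not part of the original solution), the two conditional means can be verified numerically by integrating the joint PDF along a slice; the slice points y0 = 0.8 and x0 = 0.4 below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the conditional-mean estimators derived above:
#   E[X | Y = y] = y/3   and   E[Y | X = x] = (2 + x)/3.
y0, x0 = 0.8, 0.4

x = np.linspace(0.0, y0, 200_001)      # support of X given Y = y0
wx = 6.0 * (y0 - x)                    # joint PDF along the slice (normalization cancels)
ex = (x * wx).sum() / wx.sum()
print(ex)                              # ≈ y0/3 ≈ 0.2667

y = np.linspace(x0, 1.0, 200_001)      # support of Y given X = x0
wy = 6.0 * (y - x0)
ey = (y * wy).sum() / wy.sum()
print(ey)                              # ≈ (2 + x0)/3 = 0.8
```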
(d) The minimum mean square estimator of Y given X = x is

    Ŷ_M(x) = E[Y | X = x] = ∫ y f_{Y|X}(y|x) dy                   (10)
            = ∫_x^1 2y(y − x)/(1 − 2x + x²) dy                    (11)
            = [(2/3)y³ − xy²]_{y=x}^{y=1} / (1 − 2x + x²)
            = (2 − 3x + x³)/(3(1 − x)²)                           (12)

Perhaps surprisingly, since 2 − 3x + x³ = (1 − x)²(x + 2), this result can be simplified to

    Ŷ_M(x) = (x + 2)/3                                            (13)

Problem Solution
The problem statement tells us that

    f_V(v) = { 1/12   −6 ≤ v ≤ 6
             { 0      otherwise                                    (1)

Furthermore, we are also told that R = V + X, where X is a Gaussian (0, √3) random variable, i.e., with zero expected value and variance 3.

(a) The expected value of R is the expected value of V plus the expected value of X. We already know that X has zero expected value, and that V is uniformly distributed between −6 and 6 volts and therefore also has zero expected value. So

    E[R] = E[V + X] = E[V] + E[X] = 0                              (2)

(b) Because X and V are independent random variables, the variance of R is the sum of the variance of V and the variance of X:

    Var[R] = Var[V] + Var[X] = 12 + 3 = 15                         (3)

(c) Since E[R] = E[V] = 0,

    Cov[V, R] = E[VR] = E[V(V + X)] = E[V²] = Var[V] = 12          (4)

(d) The correlation coefficient of V and R is

    ρ_{V,R} = Cov[V, R]/√(Var[V] Var[R]) = Var[V]/√(Var[V] Var[R]) = σ_V/σ_R    (5)

The LMSE estimate of V given R is

    V̂(R) = ρ_{V,R} (σ_V/σ_R)(R − E[R]) + E[V] = (σ_V²/σ_R²) R = (12/15) R    (6)

Therefore a = 12/15 = 4/5 and b = 0.

(e) The minimum mean square error in the estimate is

    e* = Var[V](1 − ρ²_{V,R}) = 12(1 − 12/15) = 12/5               (7)
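The constants a = 4/5 and e* = 12/5 can be checked by simulation (a sketch, not from the text); X is drawn with variance 3, consistent with Var[R] = 12 + 3 = 15 above.

```python
import numpy as np

# Monte Carlo check of the LMSE slope and its mean square error.
rng = np.random.default_rng(17)
n = 1_000_000
V = rng.uniform(-6.0, 6.0, n)
X = rng.normal(0.0, np.sqrt(3.0), n)   # zero mean, variance 3
R = V + X

a = np.cov(V, R)[0, 1] / R.var()       # sample Cov[V,R] / Var[R]
e = ((V - a * R) ** 2).mean()          # empirical mean square error
print(a, e)                            # ≈ 0.8, ≈ 2.4
```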
Problem Solution
The linear mean square estimator of X given Y is

    X̂_L(Y) = [(E[XY] − μ_X μ_Y)/Var[Y]] (Y − μ_Y) + μ_X           (1)

To find the parameters of this estimator, we calculate the marginal PDFs

    f_Y(y) = ∫_0^y 6(y − x) dx = [6xy − 3x²]_{x=0}^{x=y} = 3y²   (0 ≤ y ≤ 1)    (2)

    f_X(x) = ∫_x^1 6(y − x) dy = { 3(1 − 2x + x²)   0 ≤ x ≤ 1
                                 { 0                otherwise      (3)

The moments of X and Y are

    E[Y] = ∫_0^1 3y³ dy = 3/4,    E[Y²] = ∫_0^1 3y⁴ dy = 3/5      (4)
    E[X] = ∫_0^1 3x(1 − 2x + x²) dx = 1/4,    E[X²] = ∫_0^1 3x²(1 − 2x + x²) dx = 1/10    (5)

The correlation between X and Y is

    E[XY] = 6 ∫_0^1 ∫_x^1 xy(y − x) dy dx = 1/5                    (6)

Putting these pieces together, the optimal linear estimate of X given Y is

    X̂_L(Y) = [(1/5 − 3/16)/(3/5 − (3/4)²)] (Y − 3/4) + 1/4 = Y/3   (7)

Problem Solution
From the problem statement we know that R is an exponential random variable with expected value 1/μ. Therefore it has the following probability density function:

    f_R(r) = { μe^{−μr}   r ≥ 0
             { 0          otherwise                                (1)

It is also known that, given R = r, the number of phone calls arriving at a telephone switch, N, is a Poisson (α = rT) random variable. So we can write the following conditional probability mass function of N given R:

    P_{N|R}(n|r) = { (rT)ⁿ e^{−rT}/n!   n = 0, 1, ...
                   { 0                  otherwise                  (2)

(a) The minimum mean square error estimate of N given R is the conditional expected value of N given R = r. Since N given R = r is Poisson (rT), this is simply

    N̂_M(r) = E[N | R = r] = rT                                     (3)
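Exact rational arithmetic (not in the text) confirms that the moments above give slope 1/3 and intercept 0, so the linear estimate reduces to Y/3:

```python
from fractions import Fraction as F

# Plug the computed moments into the LMSE formula and simplify exactly.
EX, EY = F(1, 4), F(3, 4)
EY2, EXY = F(3, 5), F(1, 5)

a = (EXY - EX * EY) / (EY2 - EY ** 2)  # Cov[X,Y]/Var[Y] = (1/80)/(3/80)
b = EX - a * EY                        # intercept of the linear estimator
print(a, b)                            # 1/3 0, i.e. X̂_L(Y) = Y/3
```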
(b) The maximum a posteriori estimate of N given R is simply the value of n that maximizes P_{N|R}(n|r). That is,

    n̂_MAP(r) = arg max_n P_{N|R}(n|r) = arg max_n (rT)ⁿ e^{−rT}/n!    (4)

Usually, we set a derivative to zero to solve for the maximizing value. In this case, that technique doesn't work because n is discrete. Since e^{−rT} is a common factor in the maximization, we can define g(n) = (rT)ⁿ/n! so that n̂_MAP = arg max_n g(n). We observe that

    g(n) = (rT/n) g(n − 1)                                         (5)

This implies that for n ≤ rT, g(n) ≥ g(n − 1). Hence the maximizing value of n is the largest n such that n ≤ rT. That is, n̂_MAP = ⌊rT⌋.

(c) The maximum likelihood estimate of N given R selects the value of n that maximizes f_{R|N=n}(r), the conditional PDF of R given N. When dealing with situations in which we mix continuous and discrete random variables, it's often helpful to start from first principles. In this case,

    f_{R|N}(r|n) dr = P[r < R ≤ r + dr | N = n]                    (6)
                    = P[r < R ≤ r + dr, N = n] / P[N = n]          (7)
                    = P[N = n | R = r] P[r < R ≤ r + dr] / P[N = n]    (8)

In terms of PDFs and PMFs, we have

    f_{R|N}(r|n) = P_{N|R}(n|r) f_R(r) / P_N(n)                    (9)

To find the value of n that maximizes f_{R|N}(r|n), we need to find the denominator P_N(n):

    P_N(n) = ∫ P_{N|R}(n|r) f_R(r) dr                              (10)
           = ∫_0^∞ [(rT)ⁿ e^{−rT}/n!] μ e^{−μr} dr                 (11)
           = [μTⁿ/(n!(μ + T))] ∫_0^∞ rⁿ (μ + T) e^{−(μ+T)r} dr     (12)
           = [μTⁿ/(n!(μ + T))] E[Xⁿ]                               (13)

where X is an exponential random variable with expected value 1/(μ + T). There are several ways to derive the nth moment of an exponential random variable, including integration by parts. In Example 6.5, the MGF is used to show that E[Xⁿ] = n!/(μ + T)ⁿ. Hence, for n ≥ 0,

    P_N(n) = μTⁿ/(μ + T)^{n+1}                                     (14)
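The closed form for P_N(n) can be checked by direct numerical integration (not in the text); the parameter values μ = 2, T = 3, n = 4 are arbitrary illustrative choices.

```python
import math
import numpy as np

# Numerically evaluate P_N(n) = ∫ P_{N|R}(n|r) f_R(r) dr and compare
# with the closed form μTⁿ/(μ+T)^{n+1}.
mu, T, n = 2.0, 3.0, 4
r = np.linspace(0.0, 40.0, 1_000_001)   # the integrand is negligible beyond r = 40
integrand = (r * T) ** n * np.exp(-r * T) / math.factorial(n) * mu * np.exp(-mu * r)
PN = integrand.sum() * (r[1] - r[0])
print(PN)                               # ≈ 2·3⁴/5⁵ = 0.05184
```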
Finally, the conditional PDF of R given N is

    f_{R|N}(r|n) = P_{N|R}(n|r) f_R(r) / P_N(n)
                 = [(rT)ⁿ e^{−rT}/n!] μe^{−μr} / [μTⁿ/(μ + T)^{n+1}]    (15)
                 = (μ + T) [(μ + T)r]ⁿ e^{−(μ+T)r} / n!            (16)

The ML estimate of N given R is

    n̂_ML(r) = arg max_n f_{R|N}(r|n) = arg max_n (μ + T)[(μ + T)r]ⁿ e^{−(μ+T)r}/n!    (17)

This maximization is exactly the same as in the previous part except rT is replaced by (μ + T)r. The maximizing value of n is n̂_ML = ⌊(μ + T)r⌋.

Problem Solution
From the problem statement, we learn for vectors X = [X₁ X₂ X₃]′ and W = [W₁ W₂]′ that

    E[X] = 0,    R_X = [ 1    3/4  1/2 ]
                       [ 3/4  1    3/4 ]
                       [ 1/2  3/4  1   ],

    E[W] = 0,    R_W = [ 0.1  0   ]
                       [ 0    0.1 ]                                (1)

In addition,

    Y = [Y₁ Y₂]′ = AX + W,    A = [ 1  1  0 ]
                                  [ 0  1  1 ]                      (2)

(a) Since E[Y] = AE[X] = 0, we can apply Theorem 9.7(a), which states that the minimum mean square error estimate of X₁ is X̂₁(Y) = â′Y where â = R_Y⁻¹ R_{YX₁}. First we find R_Y:

    R_Y = E[YY′] = E[(AX + W)(AX + W)′]                            (3)
        = E[(AX + W)(X′A′ + W′)]                                   (4)
        = E[AXX′A′] + E[WX′A′] + E[AXW′] + E[WW′]                  (5)

Since X and W are independent, E[WX′] = 0 and E[XW′] = 0. This implies

    R_Y = AE[XX′]A′ + E[WW′]                                       (6)
        = A R_X A′ + R_W                                           (7)
        = [ 3.6  3   ]
          [ 3    3.6 ]                                             (8)

Once again, independence of W and X₁ yields

    R_{YX₁} = E[YX₁] = E[(AX + W)X₁] = AE[XX₁]                     (9)
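Both ⌊rT⌋ in part (b) and ⌊(μ+T)r⌋ here are instances of the same fact: the Poisson (λ) PMF is maximized at ⌊λ⌋ when λ is not an integer (for integer λ the maximum is tied between λ and λ − 1). A brute-force check (not in the text):

```python
import math

# Brute-force argmax of the Poisson PMF; comparing log g(n) = n log λ − log n!
# avoids overflow of λⁿ and n!.
def poisson_argmax(lam, nmax=50):
    return max(range(nmax + 1), key=lambda n: n * math.log(lam) - math.lgamma(n + 1))

for lam in [0.3, 2.7, 5.4, 9.99]:       # non-integer λ, so the maximizer is unique
    print(lam, poisson_argmax(lam), math.floor(lam))
```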
This implies

    R_{YX₁} = A [E[X₁²]; E[X₂X₁]; E[X₃X₁]] = A [R_X(1,1); R_X(2,1); R_X(3,1)]
            = A [1; 3/4; 1/2] = [7/4; 5/4]                         (10)

Putting these facts together, we find that

    â = R_Y⁻¹ R_{YX₁} = [ 10/11   −25/33 ] [7/4]   = (1/132) [ 85  ]
                        [ −25/33   10/11 ] [5/4]             [ −25 ]    (11)

Thus the linear MMSE estimator of X₁ given Y is

    X̂₁(Y) = â′Y = (85/132) Y₁ − (25/132) Y₂ ≈ 0.6439 Y₁ − 0.1894 Y₂    (12)

(b) By Theorem 9.7(c), the mean squared error of the optimal estimator is

    e*_L = Var[X₁] − â′R_{YX₁}                                     (13)
         = R_X(1,1) − R′_{YX₁} R_Y⁻¹ R_{YX₁}                       (14)
         = 1 − [7/4 5/4] [ 10/11   −25/33 ] [7/4]  = 29/264 ≈ 0.1098    (15)
                         [ −25/33   10/11 ] [5/4]

In Problem 9.4.1, we solved essentially the same problem but the observations Y were not subjected to the additive noise W. In comparing the estimators, we see that the additive noise perturbs the estimator somewhat but not dramatically because the correlation structure of X and the mapping A from X to Y remain unchanged. On the other hand, in the noiseless case, the resulting mean square error was about half as much: 3/52 ≈ 0.0577 versus 0.1098.

(c) We can estimate random variable X₁ based on the observation of random variable Y₁ alone using Theorem 9.4. Note that Theorem 9.4 is a special case of Theorem 9.8 in which the observation is a random vector. In any case, from Theorem 9.4, the optimum linear estimate is X̂₁(Y₁) = a*Y₁ + b* where

    a* = Cov[X₁, Y₁]/Var[Y₁],    b* = μ_{X₁} − a*μ_{Y₁}            (16)

Since E[X_i] = μ_{X_i} = 0 and Y₁ = X₁ + X₂ + W₁, we see that

    μ_{Y₁} = E[Y₁] = E[X₁] + E[X₂] + E[W₁] = 0                     (17)

These facts, along with independence of X₁ and W₁, imply

    Cov[X₁, Y₁] = E[X₁Y₁] = E[X₁(X₁ + X₂ + W₁)]                    (18)
                = R_X(1,1) + R_X(1,2) = 7/4                        (19)

In addition, using R_Y from part (a), we see that

    Var[Y₁] = E[Y₁²] = R_Y(1,1) = 3.6                              (20)
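The matrix arithmetic in parts (a) and (b) can be reproduced in a few lines (a sketch, not the book's code):

```python
import numpy as np

# Reproduce â = R_Y⁻¹ R_YX1 and e*_L = R_X(1,1) − â′R_YX1 numerically.
RX = np.array([[1.0, 0.75, 0.5],
               [0.75, 1.0, 0.75],
               [0.5, 0.75, 1.0]])
RW = 0.1 * np.eye(2)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

RY = A @ RX @ A.T + RW                 # [[3.6, 3.0], [3.0, 3.6]]
RYX1 = A @ RX[:, 0]                    # [7/4, 5/4]
a_hat = np.linalg.solve(RY, RYX1)
eL = RX[0, 0] - a_hat @ RYX1
print(a_hat)                           # ≈ [85/132, -25/132] ≈ [0.6439, -0.1894]
print(eL)                              # ≈ 29/264 ≈ 0.1098
```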
Thus

    a* = Cov[X₁, Y₁]/Var[Y₁] = (7/4)/3.6 = 35/72,    b* = μ_{X₁} − a*μ_{Y₁} = 0    (21)

Thus the optimum linear estimate of X₁ given Y₁ is

    X̂₁(Y₁) = (35/72) Y₁                                            (22)

From Theorem 9.4(b), the mean square error of this estimator is

    e*_L = σ²_{X₁}(1 − ρ²_{X₁,Y₁})                                 (23)

Since X₁ and Y₁ have zero expected value, σ²_{X₁} = R_X(1,1) = 1 and σ²_{Y₁} = R_Y(1,1) = 3.6. Also, since Cov[X₁, Y₁] = 7/4, we see that

    ρ²_{X₁,Y₁} = Cov[X₁, Y₁]² / (σ²_{X₁} σ²_{Y₁}) = (7/4)²/3.6 = 49/57.6 ≈ 0.851    (24)

Thus e*_L = 1 − 49/57.6 ≈ 0.149. As we would expect, the estimate of X₁ based on just Y₁ has larger mean square error than the estimate based on both Y₁ and Y₂.

Problem Solution
For this problem, let Y = [X₁ X₂ ⋯ X_{n−1}]′ and let X = X_n. Since E[Y] = 0 and E[X] = 0, Theorem 9.7(a) tells us that the minimum mean square linear estimate of X given Y is X̂_n(Y) = â′Y, where â is the solution to

    R_Y â = R_{YX}                                                 (1)

Note that

    R_Y = E[YY′] = [ 1        c      ⋯   c^{n−2} ]
                   [ c        1      ⋱   ⋮       ]
                   [ ⋮        ⋱      ⋱   c       ]
                   [ c^{n−2}  ⋯      c   1       ],

    R_{YX} = E[ [X₁; X₂; …; X_{n−1}] X_n ] = [c^{n−1}; c^{n−2}; …; c]    (2)

We see that c times the last column of R_Y equals R_{YX}. Equivalently, if â = [0 ⋯ 0 c]′, then R_Y â = R_{YX}. It follows that the optimal linear estimator of X_n given Y is

    X̂_n(Y) = â′Y = cX_{n−1}                                        (3)

which completes the proof of the claim. The mean square error of this estimate is

    e*_L = E[(X_n − cX_{n−1})²]                                    (4)
         = R_X(n, n) − cR_X(n, n−1) − cR_X(n−1, n) + c²R_X(n−1, n−1)    (5)
         = 1 − 2c² + c² = 1 − c²                                   (6)

When c is close to 1, X_{n−1} and X_n are highly correlated and the estimation error will be small.

Comment: We will see in Chapter 11 that correlation matrices with this structure arise frequently in the study of wide sense stationary random sequences. In fact, if you read ahead, you will find that the claim we just proved is essentially the same as one made by a theorem there.
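The claim that â = [0, …, 0, c]′ solves R_Y â = R_{YX} can be confirmed numerically (not in the text); n = 6 and c = 0.9 are illustrative values.

```python
import numpy as np

# Solve R_Y â = R_YX for the Toeplitz correlation matrix with entries c^{|i-j|}
# and confirm that only the last coefficient is nonzero (and equals c).
n, c = 6, 0.9
idx = np.arange(n - 1)
RY = c ** np.abs(np.subtract.outer(idx, idx))   # (n-1) x (n-1) Toeplitz matrix
RYX = c ** np.arange(n - 1, 0, -1)              # [c^{n-1}, …, c², c]

a_hat = np.linalg.solve(RY, RYX)
print(np.round(a_hat, 10))                      # [0, 0, 0, 0, 0.9]
print(1 - a_hat @ RYX)                          # ≈ 1 − c² = 0.19
```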
Problem Solution
(a) In this case, we use the observation Y to estimate each X_i. Since E[X_i] = 0,

    E[Y] = Σ_{j=1}^k E[X_j] √p_j S_j + E[N] = 0                    (1)

Thus, Theorem 9.7(a) tells us that the MMSE linear estimate of X_i is X̂_i(Y) = â′Y where â = R_Y⁻¹ R_{YX_i}. First we note that

    R_{YX_i} = E[YX_i] = E[ (Σ_{j=1}^k X_j √p_j S_j + N) X_i ]     (2)

Since N and X_i are independent, E[NX_i] = E[N]E[X_i] = 0. Because X_i and X_j are independent for i ≠ j, E[X_iX_j] = E[X_i]E[X_j] = 0 for i ≠ j. In addition, E[X_i²] = 1, and it follows that

    R_{YX_i} = Σ_{j=1}^k E[X_jX_i] √p_j S_j + E[NX_i] = √p_i S_i   (3)

For the same reasons,

    R_Y = E[YY′] = E[ (Σ_{j=1}^k √p_j X_j S_j + N)(Σ_{l=1}^k √p_l X_l S_l′ + N′) ]    (4)
        = Σ_{j=1}^k Σ_{l=1}^k √(p_j p_l) E[X_jX_l] S_j S_l′
          + Σ_{j=1}^k √p_j E[X_jN′] S_j + Σ_{l=1}^k √p_l E[NX_l] S_l′ + E[NN′]    (5)

where the two middle sums vanish, yielding

    R_Y = Σ_{j=1}^k p_j S_j S_j′ + σ² I                            (6)

Now we use a linear algebra identity. For a matrix S with columns S₁, S₂, …, S_k, and a diagonal matrix P = diag[p₁, p₂, …, p_k],

    Σ_{j=1}^k p_j S_j S_j′ = SPS′                                  (7)

Although this identity may be unfamiliar, it is handy in manipulating correlation matrices. (Also, if this is unfamiliar, you may wish to work out an example with k = 2 vectors of length 2 or 3.) Thus,

    R_Y = SPS′ + σ² I                                              (8)
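Following the suggestion to try a small example, here is one numerical instance of the identity (with random columns, not data from the problem):

```python
import numpy as np

# Verify Σ_j p_j S_j S_j′ = S P S′ for k = 3 random columns of length 5.
rng = np.random.default_rng(0)
k, m = 3, 5
S = rng.standard_normal((m, k))                # columns S_1, …, S_k
p = np.array([1.0, 2.0, 0.5])
P = np.diag(p)

lhs = sum(p[j] * np.outer(S[:, j], S[:, j]) for j in range(k))
print(np.allclose(lhs, S @ P @ S.T))           # True
```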
and

    â = R_Y⁻¹ R_{YX_i} = (SPS′ + σ²I)⁻¹ √p_i S_i                   (9)

Recall that if C is symmetric, then C⁻¹ is also symmetric. This implies the MMSE estimate of X_i given Y is

    X̂_i(Y) = â′Y = √p_i S_i′ (SPS′ + σ²I)⁻¹ Y                      (10)

(b) We observe that V = (SPS′ + σ²I)⁻¹ Y is a vector that does not depend on which bit X_i we want to estimate. Since X̂_i = √p_i S_i′V, we can form the vector of estimates

    X̂ = [X̂₁; …; X̂_k] = [√p₁ S₁′V; …; √p_k S_k′V]                   (11)
      = P^{1/2} S′ V                                               (12)
      = P^{1/2} S′ (SPS′ + σ²I)⁻¹ Y                                (13)

Problem Solution
The solution to this problem is almost the same as the solution to Example 9.10, except perhaps the Matlab code is somewhat simpler. As in the example, let W⁽ⁿ⁾, X⁽ⁿ⁾, and Y⁽ⁿ⁾ denote the vectors consisting of the first n components of W, X, and Y. Just as in Examples 9.8 and 9.10, independence of X⁽ⁿ⁾ and W⁽ⁿ⁾ implies that the correlation matrix of Y⁽ⁿ⁾ is

    R_{Y⁽ⁿ⁾} = E[(X⁽ⁿ⁾ + W⁽ⁿ⁾)(X⁽ⁿ⁾ + W⁽ⁿ⁾)′] = R_{X⁽ⁿ⁾} + R_{W⁽ⁿ⁾}    (1)

Note that R_{X⁽ⁿ⁾} and R_{W⁽ⁿ⁾} are the n × n upper-left submatrices of R_X and R_W. In addition,

    R_{Y⁽ⁿ⁾X} = E[ [X₁ + W₁; …; X_n + W_n] X₁ ] = [r₀; r₁; …; r_{n−1}]    (2)

Compared to the solution of Example 9.10, the only difference in the solution is in the reversal of the vector R_{Y⁽ⁿ⁾X}. The optimal filter based on the first n observations is â⁽ⁿ⁾ = R_{Y⁽ⁿ⁾}⁻¹ R_{Y⁽ⁿ⁾X}, and the mean square error is

    e*_L = Var[X₁] − (â⁽ⁿ⁾)′ R_{Y⁽ⁿ⁾X}                             (3)
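A quick numerical check (hypothetical signatures and observation, not from the text) that the stacked estimator in (13) reproduces the per-bit estimates in (10):

```python
import numpy as np

# Compare X̂ = P^{1/2} S′ (SPS′ + σ²I)⁻¹ Y against the k individual
# estimates X̂_i = √p_i S_i′ (SPS′ + σ²I)⁻¹ Y.
rng = np.random.default_rng(1)
k, m, sigma2 = 3, 5, 0.5
S = rng.standard_normal((m, k))
p = np.array([1.0, 2.0, 0.5])
P = np.diag(p)

RY_inv = np.linalg.inv(S @ P @ S.T + sigma2 * np.eye(m))
Y = rng.standard_normal(m)                       # a stand-in observation vector
stacked = np.sqrt(P) @ S.T @ RY_inv @ Y
per_bit = np.array([np.sqrt(p[i]) * S[:, i] @ RY_inv @ Y for i in range(k)])
print(np.allclose(stacked, per_bit))             # True
```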
    function e=mse953(r)
    N=length(r);
    e=[];
    for n=1:N,
      RYX=r(1:n)';
      RY=toeplitz(r(1:n))+0.1*eye(n);
      a=RY\RYX;
      en=r(1)-(a')*RYX;
      e=[e;en];
    end
    plot(1:N,e);

The program mse953.m simply calculates the mean square error e*_L. The input is the vector r corresponding to the vector [r₀ r₁ ⋯ r₂₀], which holds the first row of the Toeplitz correlation matrix R_X. Note that R_{X⁽ⁿ⁾} is the Toeplitz matrix whose first row is the first n elements of r. To plot the mean square error as a function of the number of observations, n, we generate the vector r and then run mse953(r). For the requested cases (a) and (b), the necessary Matlab commands are

    (a) ra=sinc(0.1*pi*(0:20)); mse953(ra)
    (b) rb=cos(0.5*pi*(0:20)); mse953(rb)

(Figures: mean square estimation error e*_L plotted against the number of observations n for cases (a) and (b).)

In comparing the results of cases (a) and (b), we see that the mean square estimation error depends strongly on the correlation structure given by r_{|i−j|}. For case (a), Y₁ is a noisy observation of X₁ and is highly correlated with X₁. The MSE at n = 1 is just the variance of W₁. Additional samples of Y_n mostly help to average the additive noise. Also, samples X_n for n > 1 have very little correlation with X₁. Thus for n > 1, the samples of Y_n result in almost no improvement in the estimate of X₁. In case (b), Y₁ = X₁ + W₁, just as in case (a), is simply a noisy copy of X₁ and the estimation error is due to the variance of W₁. On the other hand, for case (b), X₅, X₉, X₁₃, X₁₇, and X₂₁ are completely correlated with X₁. Other samples also have significant correlation with X₁. As a result, the MSE continues to go down with increasing n.
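For readers without Matlab, the program can be ported to Python (a sketch under the same assumptions; `mse953` and `noise_var` mirror the Matlab names, and `np.sinc`, like Matlab's `sinc`, is sin(πx)/(πx)):

```python
import numpy as np

# Python port of mse953.m: e[n-1] is the LMSE error for estimating X1
# from the noisy observations Y1, …, Yn.
def mse953(r, noise_var=0.1):
    r = np.asarray(r, dtype=float)
    e = []
    for n in range(1, len(r) + 1):
        idx = np.arange(n)
        RY = r[np.abs(np.subtract.outer(idx, idx))] + noise_var * np.eye(n)  # Toeplitz + R_W
        RYX = r[:n]
        a = np.linalg.solve(RY, RYX)
        e.append(r[0] - a @ RYX)
    return np.array(e)

ra = np.sinc(0.1 * np.pi * np.arange(21))      # case (a)
rb = np.cos(0.5 * np.pi * np.arange(21))       # case (b)
ea, eb = mse953(ra), mse953(rb)
print(ea[0], ea[-1])                           # e at n = 1 is 1 − 1/1.1 ≈ 0.0909
print(eb[0], eb[-1])                           # case (b) keeps improving with n
```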
Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed
More informationENGG2430A-Homework 2
ENGG3A-Homework Due on Feb 9th,. Independence vs correlation a For each of the following cases, compute the marginal pmfs from the joint pmfs. Explain whether the random variables X and Y are independent,
More informationWe introduce methods that are useful in:
Instructor: Shengyu Zhang Content Derived Distributions Covariance and Correlation Conditional Expectation and Variance Revisited Transforms Sum of a Random Number of Independent Random Variables more
More informationBASICS OF PROBABILITY
October 10, 2018 BASICS OF PROBABILITY Randomness, sample space and probability Probability is concerned with random experiments. That is, an experiment, the outcome of which cannot be predicted with certainty,
More informationUC Berkeley Department of Electrical Engineering and Computer Sciences. EECS 126: Probability and Random Processes
UC Berkeley Department of Electrical Engineering and Computer Sciences EECS 6: Probability and Random Processes Problem Set 3 Spring 9 Self-Graded Scores Due: February 8, 9 Submit your self-graded scores
More informationMAS223 Statistical Inference and Modelling Exercises
MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,
More information18.440: Lecture 28 Lectures Review
18.440: Lecture 28 Lectures 18-27 Review Scott Sheffield MIT Outline Outline It s the coins, stupid Much of what we have done in this course can be motivated by the i.i.d. sequence X i where each X i is
More informationEE482: Digital Signal Processing Applications
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 11 Adaptive Filtering 14/03/04 http://www.ee.unlv.edu/~b1morris/ee482/
More informationParameter Estimation
1 / 44 Parameter Estimation Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay October 25, 2012 Motivation System Model used to Derive
More informationTwo hours. To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER. 21 June :45 11:45
Two hours MATH20802 To be supplied by the Examinations Office: Mathematical Formula Tables THE UNIVERSITY OF MANCHESTER STATISTICAL METHODS 21 June 2010 9:45 11:45 Answer any FOUR of the questions. University-approved
More informationMA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems
MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems Review of Basic Probability The fundamentals, random variables, probability distributions Probability mass/density functions
More informationIntroduction to Computational Finance and Financial Econometrics Probability Review - Part 2
You can t see this text! Introduction to Computational Finance and Financial Econometrics Probability Review - Part 2 Eric Zivot Spring 2015 Eric Zivot (Copyright 2015) Probability Review - Part 2 1 /
More informationStat 206: Sampling theory, sample moments, mahalanobis
Stat 206: Sampling theory, sample moments, mahalanobis topology James Johndrow (adapted from Iain Johnstone s notes) 2016-11-02 Notation My notation is different from the book s. This is partly because
More informationME 597: AUTONOMOUS MOBILE ROBOTICS SECTION 2 PROBABILITY. Prof. Steven Waslander
ME 597: AUTONOMOUS MOBILE ROBOTICS SECTION 2 Prof. Steven Waslander p(a): Probability that A is true 0 pa ( ) 1 p( True) 1, p( False) 0 p( A B) p( A) p( B) p( A B) A A B B 2 Discrete Random Variable X
More informationECE Homework Set 3
ECE 450 1 Homework Set 3 0. Consider the random variables X and Y, whose values are a function of the number showing when a single die is tossed, as show below: Exp. Outcome 1 3 4 5 6 X 3 3 4 4 Y 0 1 3
More informationSTAT 512 sp 2018 Summary Sheet
STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}
More informationACM 116: Lectures 3 4
1 ACM 116: Lectures 3 4 Joint distributions The multivariate normal distribution Conditional distributions Independent random variables Conditional distributions and Monte Carlo: Rejection sampling Variance
More informationExpectation. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda
Expectation DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean, variance,
More informationLecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable
Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed
More informationChapter 2. Discrete Distributions
Chapter. Discrete Distributions Objectives ˆ Basic Concepts & Epectations ˆ Binomial, Poisson, Geometric, Negative Binomial, and Hypergeometric Distributions ˆ Introduction to the Maimum Likelihood Estimation
More informationChapter 5,6 Multiple RandomVariables
Chapter 5,6 Multiple RandomVariables ENCS66 - Probabilityand Stochastic Processes Concordia University Vector RandomVariables A vector r.v. is a function where is the sample space of a random experiment.
More informationNotes on Random Vectors and Multivariate Normal
MATH 590 Spring 06 Notes on Random Vectors and Multivariate Normal Properties of Random Vectors If X,, X n are random variables, then X = X,, X n ) is a random vector, with the cumulative distribution
More informationStatistics for scientists and engineers
Statistics for scientists and engineers February 0, 006 Contents Introduction. Motivation - why study statistics?................................... Examples..................................................3
More informationStatistics 351 Probability I Fall 2006 (200630) Final Exam Solutions. θ α β Γ(α)Γ(β) (uv)α 1 (v uv) β 1 exp v }
Statistics 35 Probability I Fall 6 (63 Final Exam Solutions Instructor: Michael Kozdron (a Solving for X and Y gives X UV and Y V UV, so that the Jacobian of this transformation is x x u v J y y v u v
More informationCS281A/Stat241A Lecture 17
CS281A/Stat241A Lecture 17 p. 1/4 CS281A/Stat241A Lecture 17 Factor Analysis and State Space Models Peter Bartlett CS281A/Stat241A Lecture 17 p. 2/4 Key ideas of this lecture Factor Analysis. Recall: Gaussian
More informationEC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)
1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For
More informationECE-340, Spring 2015 Review Questions
ECE-340, Spring 2015 Review Questions 1. Suppose that there are two categories of eggs: large eggs and small eggs, occurring with probabilities 0.7 and 0.3, respectively. For a large egg, the probabilities
More informationTwo hours. Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER. 14 January :45 11:45
Two hours Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER PROBABILITY 2 14 January 2015 09:45 11:45 Answer ALL four questions in Section A (40 marks in total) and TWO of the THREE questions
More informationChapter 4 Multiple Random Variables
Review for the previous lecture Definition: n-dimensional random vector, joint pmf (pdf), marginal pmf (pdf) Theorem: How to calculate marginal pmf (pdf) given joint pmf (pdf) Example: How to calculate
More informationIntroduction to Simple Linear Regression
Introduction to Simple Linear Regression Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Introduction to Simple Linear Regression 1 / 68 About me Faculty in the Department
More informationJoint Probability Distributions, Correlations
Joint Probability Distributions, Correlations What we learned so far Events: Working with events as sets: union, intersection, etc. Some events are simple: Head vs Tails, Cancer vs Healthy Some are more
More information