Chapter 4: Expectation and Moments


ECE 511: Analysis of Random Signals, Fall 2016

Chapter 4: Expectation and Moments

Dr. Salim El Rouayheb. Scribes: Serge Kas Hanna, Lu Liu

1 Expected Value of a Random Variable

Definition 1. The expected or average value of a random variable X is defined by:

1. E[X] = µ_X = Σ_i x_i P_X(x_i), if X is discrete.
2. E[X] = ∫ x f_X(x) dx (over the whole real line), if X is continuous.

Example 1. Let X ~ Poisson(λ). What is the expected value of X?
The PMF of X is given by
$$\Pr(X = k) = e^{-\lambda}\frac{\lambda^k}{k!}, \quad k = 0, 1, 2, \ldots$$
$$E[X] = \sum_{k=0}^{\infty} k\, e^{-\lambda}\frac{\lambda^k}{k!} = \sum_{k=1}^{\infty} k\, e^{-\lambda}\frac{\lambda^k}{k!} = \lambda e^{-\lambda}\sum_{k=1}^{\infty}\frac{\lambda^{k-1}}{(k-1)!} = \lambda e^{-\lambda} e^{\lambda} = \lambda.$$

Theorem 1 (Linearity of Expectation). Let X and Y be any two random variables and let a and b be constants. Then
$$E[aX + bY] = aE[X] + bE[Y].$$

Example 2. Let X ~ Binomial(n, p). What is the expected value of X?
The PMF of X is given by
$$\Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \ldots, n.$$
$$E[X] = \sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k}.$$

Rather than evaluating this sum, an easier way to calculate E[X] is to express X as the sum of n independent Bernoulli random variables and apply Theorem 1. In fact,
$$X = X_1 + X_2 + \cdots + X_n,$$
where X_i ~ Bernoulli(p) for all i = 1, ..., n. Hence,
$$X_i = \begin{cases} 1 & \text{with probability } p, \\ 0 & \text{with probability } 1-p, \end{cases}
\qquad E[X_i] = 1 \cdot p + 0 \cdot (1-p) = p.$$
By linearity of expectation (Theorem 1),
$$E[X] = E[X_1] + \cdots + E[X_n] = np.$$

Theorem 2 (Expected value of a function of a RV). Let X be a RV. For a function of a RV, that is, Y = g(X), the expected value of Y can be computed from
$$E[Y] = \int_{-\infty}^{+\infty} g(x) f_X(x)\,dx.$$

Example 3. Let X ~ N(µ, σ²) and Y = X². What is the expected value of Y?
Rather than calculating the pdf of Y and afterwards computing E[Y], we apply Theorem 2:
$$E[Y] = \int_{-\infty}^{+\infty} x^2 \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \mu^2 + \sigma^2.$$
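As a quick numerical sanity check on Examples 1-3 (a sketch, not part of the original notes), the following MATLAB snippet compares sample means against the computed expectations. The parameters are arbitrary, and poissrnd/binornd assume the Statistics Toolbox is available.

% Monte Carlo sanity check for Examples 1-3 (illustrative sketch only).
lambda = 3; n = 20; p = 0.3; mu = 1; sig = 2; M = 1e6;  % arbitrary parameters
Xp = poissrnd(lambda, M, 1);      % Example 1: X ~ Poisson(lambda), E[X] = lambda
Xb = binornd(n, p, M, 1);         % Example 2: X ~ Binomial(n,p), E[X] = np
Y  = (mu + sig*randn(M, 1)).^2;   % Example 3: Y = X^2 with X ~ N(mu, sig^2)
fprintf('Poisson mean : %.4f (theory %.4f)\n', mean(Xp), lambda);
fprintf('Binomial mean: %.4f (theory %.4f)\n', mean(Xb), n*p);
fprintf('E[X^2]       : %.4f (theory %.4f)\n', mean(Y), mu^2 + sig^2);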

2 Conditional Expectations

Definition 2. The conditional expectation of X given that the event B was observed is given by:

1. E[X | B] = Σ_i x_i P_{X|B}(x_i | B), if X is discrete.
2. E[X | B] = ∫ x f_{X|B}(x | B) dx, if X is continuous.

Intuition: Think of E[X | Y = y] as the best estimate (guess) of X given that you observed Y = y.

Example 4. A fair die is tossed twice. Let X be the number observed after the first toss, and Y be the number observed after the second toss. Let Z = X + Y.

1. Calculate E[X].
$$E[X] = \frac{1+2+3+4+5+6}{6} = \frac{7}{2} = 3.5.$$

2. Calculate E[X | Y = 3]. Since the two tosses are independent,
$$E[X \mid Y = 3] = E[X] = 3.5.$$

3. Calculate E[X | Z = 5]. Given Z = 5, X is uniform over {1, 2, 3, 4}, so
$$E[X \mid Z = 5] = \frac{1+2+3+4}{4} = 2.5.$$

Remark 1. E[X | Z] is a random variable.

Theorem 3 (Towering Property of Conditional Expectation). Let X and Y be two random variables. Then,
$$E\big[E[X \mid Y]\big] = E[X].$$

Example 5. Let X and Y be two zero-mean jointly Gaussian random variables, that is,
$$f_{X,Y}(x, y) = \frac{1}{2\pi\sigma^2\sqrt{1-\rho^2}} \exp\left[-\frac{x^2 + y^2 - 2\rho x y}{2\sigma^2(1-\rho^2)}\right],$$
where |ρ| < 1. Calculate E[X | Y = y].
$$E[X \mid Y = y] = \int_{-\infty}^{+\infty} x f_{X|Y}(x \mid Y = y)\,dx.$$
$$f_{X|Y}(x \mid Y = y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}
= \frac{\frac{1}{2\pi\sigma^2\sqrt{1-\rho^2}} \exp\left[-\frac{x^2 + y^2 - 2\rho x y}{2\sigma^2(1-\rho^2)}\right]}{\frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{y^2}{2\sigma^2}\right]}
= \frac{1}{\sqrt{2\pi\sigma^2(1-\rho^2)}} \exp\left[-\frac{x^2 + y^2 - 2\rho x y - (1-\rho^2)y^2}{2\sigma^2(1-\rho^2)}\right]
= \frac{1}{\sqrt{2\pi\sigma^2(1-\rho^2)}} \exp\left[-\frac{(x - \rho y)^2}{2\sigma^2(1-\rho^2)}\right].$$
Hence,
$$E[X \mid Y = y] = \int_{-\infty}^{+\infty} x\, \frac{1}{\sqrt{2\pi\sigma^2(1-\rho^2)}} \exp\left[-\frac{(x - \rho y)^2}{2\sigma^2(1-\rho^2)}\right] dx = \rho y.$$

Remark 2. If ρ = 0, then X and Y are independent, and E[X | Y = y] = E[X] = 0.

Remark 3. For these Gaussian random variables, E[X | Y] = ρY and E[Y | X] = ρX.

Remark 4. Let X̂ = E[X | Y]. X̂ is the estimate of X given that Y was observed. The MSE (mean-square error) of this estimate is given by E[(X̂ − X)²].
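The conditional expectations of Example 4 are easy to confirm numerically. A minimal MATLAB sketch (not part of the original notes) that estimates them by conditioning on the simulated outcomes:

% Monte Carlo check of Example 4 (illustrative sketch only).
M = 1e6;                          % number of simulated double tosses
X = randi(6, M, 1);               % first toss
Y = randi(6, M, 1);               % second toss
Z = X + Y;
fprintf('E[X]         : %.4f (theory 3.5)\n', mean(X));
fprintf('E[X | Y = 3] : %.4f (theory 3.5)\n', mean(X(Y == 3)));
fprintf('E[X | Z = 5] : %.4f (theory 2.5)\n', mean(X(Z == 5)));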

Example 6. A movie of size N bits, where N ~ Poisson(λ), is downloaded through a binary erasure channel with erasure probability ε. Let K be the number of received (unerased) bits.

[Figure 1: Binary erasure channel. Each input bit (0 or 1) is received correctly with probability 1 − ε and erased to the symbol E with probability ε.]

1. Calculate E[K]. Intuitively, E[K] = λ(1 − ε). Now we will prove it mathematically by conditioning on N as a first step. For a given number of bits N = n, K is a binomial random variable, K ~ Binomial(n, 1 − ε), so
$$E[K \mid N = n] = n(1-\varepsilon), \qquad E[K \mid N] = N(1-\varepsilon).$$
Applying the towering property of conditional expectation,
$$E[K] = E\big[E[K \mid N]\big] = E[N(1-\varepsilon)] = (1-\varepsilon)E[N] = \lambda(1-\varepsilon).$$

2. Calculate E[N | K]. Intuitively, E[N | K = k] = k + λε. Now we will prove it mathematically:
$$E[N \mid K = k] = \sum_{n=0}^{\infty} n \Pr(N = n \mid K = k).$$
$$\Pr(N = n \mid K = k) = \frac{\Pr(N = n, K = k)}{\Pr(K = k)} = \frac{\Pr(N = n)\Pr(K = k \mid N = n)}{\Pr(K = k)}.$$
$$\Pr(N = n) = \frac{\lambda^n e^{-\lambda}}{n!}, \qquad \Pr(K = k \mid N = n) = \binom{n}{k} (1-\varepsilon)^k \varepsilon^{n-k}.$$
$$\Pr(K = k) = \sum_{n=k}^{\infty} \Pr(N = n)\Pr(K = k \mid N = n) = \sum_{n=k}^{\infty} \frac{e^{-\lambda}\lambda^n}{n!} \binom{n}{k} (1-\varepsilon)^k \varepsilon^{n-k}.$$

5 P r(n n K k) e λ λ n n! + nk e λ λ n n! λ n ɛ n (n k)! λɛ + (λɛ) n k nk (n k)! (λɛ)n (n k)! (λɛ) k e λɛ (λɛ)n k e λɛ. (n k)! n! k!(n k)! ( ɛ)k ɛ n k n! k!(n k)! ( ɛ)k ɛ n k E[N K k] nk nk n (λɛ)n k e λɛ (n k)! (n k + k) (λɛ)n k e λɛ (n k)! (n k)(λɛ) n k e λɛ +k (n k)! nk }{{} λɛ k + λɛ. nk (λɛ) n k e λɛ (n k)! } {{ } 3 Moments of Random Variables Definition 3. The r th moment, r 0,,..., of a RV X is defined by,. E[X r ] m r i xr i P X(x i ), if X is discrete.. E[X r ] m r xr f X (x)dx, if X is continuous. Remark 5. Note that m 0 for any X, and m E[X] µ (the mean). Definition 4. The r th central moment, r 0,,..., of a RV X is defined as,. E[(X µ) r ] c r i (x i µ) r P X (x i ), if X is discrete.. E[(X µ) r ] c r (x µ)r f X (x)dx, if X is continuous. Remark 6. Note that for any RV X,. c 0.. c E[X µ] E[X] µ µ µ 0. 5

3 Moments of Random Variables

Definition 3. The r-th moment, r = 0, 1, ..., of a RV X is defined by:

1. m_r = E[X^r] = Σ_i x_i^r P_X(x_i), if X is discrete.
2. m_r = E[X^r] = ∫ x^r f_X(x) dx, if X is continuous.

Remark 5. Note that m_0 = 1 for any X, and m_1 = E[X] = µ (the mean).

Definition 4. The r-th central moment, r = 0, 1, ..., of a RV X is defined as:

1. c_r = E[(X − µ)^r] = Σ_i (x_i − µ)^r P_X(x_i), if X is discrete.
2. c_r = E[(X − µ)^r] = ∫ (x − µ)^r f_X(x) dx, if X is continuous.

Remark 6. Note that for any RV X:

1. c_0 = 1.
2. c_1 = E[X − µ] = E[X] − µ = µ − µ = 0.
3. c_2 = E[(X − µ)²] = σ² = Var[X] (the variance); σ is called the standard deviation. In fact,
$$\sigma^2 = E[(X-\mu)^2] = E[X^2] - \mu^2.$$

Example 7. Let X ~ Binomial(n, p). What is the variance of X?
The PMF of X is given by
$$\Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \ldots, n,$$
and E[X] = np (from Example 2).
$$\sigma^2 = E[X^2] - E[X]^2 = \sum_{k=0}^{n} k^2 \binom{n}{k} p^k (1-p)^{n-k} - n^2 p^2.$$
Check the textbook for the calculation of this sum. Here we will calculate σ² using the same idea as in Example 2, i.e., expressing X as the sum of n independent Bernoulli random variables. In fact, X = X_1 + X_2 + ... + X_n, where X_i ~ Bernoulli(p) for all i = 1, ..., n. Hence,
$$E[X^2] = E\big[(X_1 + \cdots + X_n)^2\big] = E\Big[\sum_{i=1}^{n} X_i^2 + \sum_{i=1}^{n}\sum_{j \neq i} X_i X_j\Big] = nE[X_i^2] + n(n-1)E[X_i X_j] = nE[X_i^2] + n(n-1)E[X_i]E[X_j],$$
where the last step uses the independence of X_i and X_j for i ≠ j, and
$$E[X_i] = p, \qquad E[X_i^2] = 1^2 \cdot p + 0^2 \cdot (1-p) = p.$$
Hence,
$$E[X^2] = np + n(n-1)p^2 = n^2p^2 - np^2 + np,$$
$$\sigma^2 = E[X^2] - E[X]^2 = n^2p^2 - np^2 + np - n^2p^2 = np(1-p).$$

Example 8. Let X ~ Geometric(p). What is the variance of X?
The PMF of X is given by
$$\Pr(X = k) = (1-p)^{k-1} p, \quad k = 1, 2, \ldots$$
The variance of X is given by σ² = E[X²] − E[X]².

$$E[X] = \sum_{k=1}^{\infty} k(1-p)^{k-1}p, \qquad E[X^2] = \sum_{k=1}^{\infty} k^2(1-p)^{k-1}p.$$
To calculate these sums we use the following facts. For |x| < 1,
$$\sum_{i=0}^{\infty} x^i = \frac{1}{1-x}.$$
Differentiating both sides with respect to x,
$$\sum_{i=1}^{\infty} i x^{i-1} = \frac{1}{(1-x)^2}.$$
Differentiating both sides with respect to x again,
$$\sum_{i=2}^{\infty} i(i-1) x^{i-2} = \frac{2}{(1-x)^3}.$$
Hence,
$$\sum_{i=1}^{\infty} i^2 x^{i-1} = \sum_{i=2}^{\infty} i(i-1) x^{i-1} + \sum_{i=1}^{\infty} i x^{i-1} = \frac{2x}{(1-x)^3} + \frac{1}{(1-x)^2} = \frac{2x + (1-x)}{(1-x)^3} = \frac{1+x}{(1-x)^3}.$$
Applying these with x = 1 − p:
$$E[X] = \frac{p}{(1-(1-p))^2} = \frac{1}{p}, \qquad E[X^2] = \frac{p\,(1 + (1-p))}{(1-(1-p))^3} = \frac{2-p}{p^2}.$$
$$\sigma^2 = E[X^2] - E[X]^2 = \frac{2-p}{p^2} - \frac{1}{p^2} = \frac{1-p}{p^2}.$$
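A numerical check of Example 8 (not in the original notes). Note that MATLAB's geornd counts the failures before the first success (support 0, 1, ...), so adding 1 matches the PMF used here.

% Monte Carlo check of the geometric mean and variance (illustrative sketch).
p = 0.25; M = 1e6;
X = geornd(p, M, 1) + 1;                   % shift support from {0,1,...} to {1,2,...}
fprintf('mean    : %.4f (theory 1/p       = %.4f)\n', mean(X), 1/p);
fprintf('variance: %.4f (theory (1-p)/p^2 = %.4f)\n', var(X), (1 - p)/p^2);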

Definition 5. The covariance of two random variables X and Y is defined by
$$\operatorname{cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - \mu_X \mu_Y.$$

Definition 6. The correlation coefficient of two random variables X and Y is defined by
$$\rho_{X,Y} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y}.$$
If ρ_{X,Y} = 0, then cov(X, Y) = 0 and X and Y are said to be uncorrelated.

Lemma 1. If X and Y are independent, then X and Y are uncorrelated.

Remark 7. There can exist two RVs that are uncorrelated but dependent.

Example 9 (discrete case: uncorrelated but dependent). Consider two random variables X and Y with joint PMF P_{X,Y}(x_i, y_j) as shown in Figure 2.

[Figure 2: Values of P_{X,Y}(x_i, y_j).]

Here µ_X = µ_Y = 0, and
$$XY = \begin{cases} -1 & \text{with probability } 1/3, \\ 0 & \text{with probability } 1/3, \\ 1 & \text{with probability } 1/3. \end{cases}$$
Hence E[XY] = 0 and
$$\rho_{X,Y} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y} = \frac{E[XY] - \mu_X \mu_Y}{\sigma_X \sigma_Y} = \frac{0}{\sigma_X \sigma_Y} = 0,$$
so X and Y are uncorrelated. However, X and Y are dependent: for example, P(X = 1 | Y = 0) = 0 ≠ P(X = 1) = 1/3.

Example 10 (continuous case: uncorrelated but dependent). Consider a RV Θ uniformly distributed on [0, 2π]. Let X = cos Θ and Y = sin Θ. X and Y are obviously dependent; in fact,
$$X^2 + Y^2 = 1.$$
Hence,
$$E[X] = E[\cos\Theta] = \frac{1}{2\pi}\int_0^{2\pi} \cos\theta\,d\theta = 0, \qquad E[Y] = E[\sin\Theta] = \frac{1}{2\pi}\int_0^{2\pi} \sin\theta\,d\theta = 0,$$
$$E[XY] = E[\sin\Theta\cos\Theta] = \frac{1}{2\pi}\int_0^{2\pi} \sin\theta\cos\theta\,d\theta = \frac{1}{4\pi}\int_0^{2\pi} \sin 2\theta\,d\theta = 0.$$
Therefore cov(X, Y) = 0 and ρ_{X,Y} = 0: X and Y are uncorrelated although they are dependent.
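Example 10 is also easy to see numerically; a minimal MATLAB sketch (not in the original notes):

% X = cos(Theta), Y = sin(Theta): dependent but uncorrelated (illustrative sketch).
M = 1e6;
Theta = 2*pi*rand(M, 1);                   % Theta ~ Uniform[0, 2*pi]
X = cos(Theta); Y = sin(Theta);
c = mean(X.*Y) - mean(X)*mean(Y);          % sample covariance
fprintf('sample cov(X,Y): %.5f (theory 0)\n', c);
fprintf('mean of X^2+Y^2: %.5f (identically 1, so X and Y are dependent)\n', mean(X.^2 + Y.^2));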

Theorem 4. Given two random variables X and Y, |ρ_{X,Y}| ≤ 1.

Proof. We will prove this theorem using the idea of the Cauchy-Schwarz inequality, by showing that
$$|\operatorname{cov}(X, Y)| \leq \sigma_X \sigma_Y.$$
For two vectors u and v with angle θ between them,
$$\cos\theta = \frac{\langle u, v\rangle}{\|u\|\cdot\|v\|} \quad\Rightarrow\quad |\langle u, v\rangle| \leq \|u\|\cdot\|v\|.$$
Using the same idea on random variables, let Z = Y − aX, where a ∈ R, and assume that X and Y have zero mean. Consider the following cases:

1. Z ≠ 0 for all a ∈ R. Hence,
$$E[Z^2] = E\big[(Y - aX)^2\big] = E[Y^2 - 2aXY + a^2X^2] = E[X^2]\,a^2 - 2E[XY]\,a + E[Y^2] > 0.$$
Since this quadratic in a is strictly positive for every a, it has no real root, so its discriminant is negative:
$$4E[XY]^2 - 4E[X^2]E[Y^2] < 0.$$
Hence E[XY]² < E[X²]E[Y²] = σ_X²σ_Y², so
$$|E[XY]| < \sigma_X \sigma_Y, \qquad |\operatorname{cov}(X, Y)| < \sigma_X \sigma_Y.$$

2. There exists a_0 ∈ R such that Z = Y − a_0X = 0. Then Y = a_0X, hence
$$E[XY] = E[a_0 X^2] = a_0 E[X^2] = a_0 \sigma_X^2,$$
$$\operatorname{Var}[Y] = \operatorname{Var}[a_0 X] = a_0^2 \operatorname{Var}[X] = a_0^2 \sigma_X^2 = \sigma_Y^2,$$
$$|E[XY]| = |a_0| \sigma_X \cdot \sigma_X = \sigma_X \sigma_Y.$$
Therefore there is equality in this case: |cov(X, Y)| = σ_X σ_Y.

Combining the results of the two cases, |cov(X, Y)| ≤ σ_X σ_Y, and therefore |ρ_{X,Y}| ≤ 1.

Lemma 2. Var[X + Y] = Var[X] + Var[Y] + 2 cov(X, Y).

Proof.
$$\operatorname{Var}[X + Y] = E[(X + Y)^2] - E[X + Y]^2 = E[(X + Y)^2] - (E[X] + E[Y])^2$$
$$= E[X^2] + 2E[XY] + E[Y^2] - E[X]^2 - 2E[X]E[Y] - E[Y]^2 = \operatorname{Var}[X] + \operatorname{Var}[Y] + 2\operatorname{cov}(X, Y).$$

Lemma 3. If X and Y are uncorrelated, then Var[X + Y] = Var[X] + Var[Y].

Proof. X and Y uncorrelated means cov(X, Y) = 0. The result then follows from Lemma 2.
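Lemma 2 can likewise be checked on a simple correlated pair; a sketch with the arbitrary construction Y = X + 0.5 Z (not in the original notes):

% Numerical check of Var[X+Y] = Var[X] + Var[Y] + 2 cov(X,Y) (illustrative sketch).
M = 1e6;
X = randn(M, 1); Z = randn(M, 1);
Y = X + 0.5*Z;                             % correlated with X by construction
c = mean(X.*Y) - mean(X)*mean(Y);          % sample covariance
fprintf('Var[X+Y]           : %.4f\n', var(X + Y));
fprintf('Var[X]+Var[Y]+2cov : %.4f\n', var(X) + var(Y) + 2*c);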

4 Estimation

An application of the previously mentioned definitions and theorems is estimation. Suppose we wish to estimate the value of a RV Y by observing the value of another random variable X. This estimate is represented by Ŷ, and the estimation error is given by ε = Ŷ − Y. A popular approach for determining Ŷ, the estimate of Y given X, is to minimize the MSE (mean squared error):
$$\text{MSE} = E[\varepsilon^2] = E[(\hat{Y} - Y)^2].$$

Theorem 5. The MMSE (minimum mean squared error) estimate of Y given X is Ŷ_MMSE = E[Y | X].

In other words, the theorem states that Ŷ_MMSE = E[Y | X] minimizes the MSE. Sometimes E[Y | X] is difficult to find, and a linear MMSE estimate is used instead, i.e., Ŷ_LMMSE = αX + β.

Proof (sketch). Assume WLOG that the random variables X and Y are zero mean, and denote by Ŷ the estimate of Y given X. In order to minimize E[ε²] = E[(Y − Ŷ)²], the error ε should be orthogonal to the observation X, as shown in the figure below.

[Figure: the estimate Ŷ lies along the direction of X, and the error ε = Ŷ − Y is orthogonal to X.]

From ε ⊥ X,
$$E[(\hat{Y} - Y)X] = 0 \quad\Rightarrow\quad E[(Y - \alpha X)X] = 0 \quad\Rightarrow\quad E[XY] - \alpha E[X^2] = 0.$$
Hence,
$$\alpha = \frac{E[XY]}{E[X^2]} = \frac{\operatorname{cov}(X, Y)}{\sigma_X^2} = \frac{\sigma_Y \rho_{X,Y}}{\sigma_X}, \qquad \hat{Y}_{LMMSE} = \frac{\sigma_Y \rho_{X,Y}}{\sigma_X}\, X.$$

The result above is for any two zero-mean random variables. The general result, i.e., when µ_X, µ_Y ≠ 0, can be obtained by the same reasoning and is given by:

Theorem 6. The LMMSE (linear minimum mean squared error) estimate of Y given X that minimizes the MSE is given by
$$\hat{Y}_{LMMSE} = \frac{\sigma_Y \rho_{X,Y}}{\sigma_X}(X - \mu_X) + \mu_Y.$$

Remark (orthogonality principle): Let Ŷ be the estimate of Y given X, and ε the estimation error. Then the Ŷ that minimizes the MSE E[ε²] = E[(Y − Ŷ)²] satisfies Ŷ ⊥ ε, i.e., Ŷ ⊥ (Y − Ŷ).

12 Example. Suppose that in a room the temperature is given by a RV Y N(µ, σ ). A sensor in this room observes X Y + W, where W is the additive noise given by N(0, σw ). Assume Y and W are independent.. Find the MMSE of Y given X. Ŷ MMSE E[Y X]. f Y X (Y X x) f X,Y (x, y) f X (x) f Y (y)f X Y (x, y). f X (x) Since X is the sum of two independent gaussian RVs Y and W, we know from homework 3 that, X N(µ, σ + σ W ). Furthermore, E[X Y y] E[Y + W Y y] y + E[W ] y. V ar[x Y y] E[X Y y] E[X Y y] E[Y + Y W + W Y y] y E[Y Y y] + E[Y W Y y] + E[W Y y] y y σ W y σ W. f X Y (X Y y) πσw exp [ ] (x y) σw. f Y X (Y X x) π σ σ W σ +σ W ] (y µ) (x y) (x µ) exp [ σ σw + (σ + σw ) [ exp (y µ ) πσ σ ]. Where, We are interested in, σ σ W σ σ + σw. Ŷ MMSE E[Y X] µ.

To determine µ_1, take y = 0 in the exponent identity above:
$$\frac{\mu_1^2}{2\frac{\sigma^2\sigma_W^2}{\sigma^2+\sigma_W^2}} = \frac{\mu^2}{2\sigma^2} + \frac{x^2}{2\sigma_W^2} - \frac{(x-\mu)^2}{2(\sigma^2+\sigma_W^2)}
\quad\Rightarrow\quad
\mu_1^2 = \frac{\sigma^4 x^2 + 2\mu\sigma^2\sigma_W^2 x + \sigma_W^4 \mu^2}{(\sigma^2+\sigma_W^2)^2} = \left(\frac{\sigma^2 x + \sigma_W^2 \mu}{\sigma^2+\sigma_W^2}\right)^2.$$
$$\hat{Y}_{MMSE} = E[Y \mid X] = \mu_1 = \frac{\sigma^2}{\sigma^2+\sigma_W^2}\, X + \frac{\sigma_W^2\,\mu}{\sigma^2+\sigma_W^2}.$$

2. Find the linear MMSE estimate of Y given X.
$$\operatorname{cov}(X, Y) = E[XY] - E[X]E[Y] = E[(Y+W)Y] - E[Y+W]E[Y] = E[Y^2] + E[YW] - \mu^2 = \sigma^2 + \mu^2 + 0 - \mu^2 = \sigma^2.$$
Hence,
$$\rho_{X,Y} = \frac{\sigma^2}{\sigma\sqrt{\sigma^2+\sigma_W^2}} = \frac{\sigma}{\sqrt{\sigma^2+\sigma_W^2}}.$$
Applying the general formula of the LMMSE (Theorem 6),
$$\hat{Y}_{LMMSE} = \frac{\sigma_Y \rho_{X,Y}}{\sigma_X}(X - \mu_X) + \mu_Y = \frac{\sigma^2}{\sigma^2+\sigma_W^2}(X - \mu) + \mu = \frac{\sigma^2}{\sigma^2+\sigma_W^2}\, X + \frac{\sigma_W^2\,\mu}{\sigma^2+\sigma_W^2}.$$
Notice that Ŷ_LMMSE = Ŷ_MMSE; in fact, this is always the case for jointly Gaussian random variables X and Y.

3. Find the MSE.
$$E[\varepsilon^2] = E[(\hat{Y} - Y)^2] = E[\varepsilon(\hat{Y} - Y)] = \underbrace{E[\varepsilon\hat{Y}]}_{=\,0 \text{ (orthogonality)}} - E[\varepsilon Y] = -E[\varepsilon Y] = E[Y^2] - E[\hat{Y}Y].$$
$$E[\hat{Y}Y] = E\left[\frac{\sigma^2}{\sigma^2+\sigma_W^2}\, XY + \frac{\sigma_W^2\,\mu}{\sigma^2+\sigma_W^2}\, Y\right] = \frac{\sigma^2(\sigma^2 + \mu^2) + \sigma_W^2\,\mu^2}{\sigma^2+\sigma_W^2},$$
using E[XY] = cov(X, Y) + E[X]E[Y] = σ² + µ². Hence,
$$E[\varepsilon^2] = (\sigma^2 + \mu^2) - \frac{\sigma^2(\sigma^2 + \mu^2) + \sigma_W^2\,\mu^2}{\sigma^2+\sigma_W^2} = \frac{\sigma^2\sigma_W^2}{\sigma^2+\sigma_W^2}.$$
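The closed-form MSE of Example 11 can be confirmed by simulation; a sketch with arbitrary values for µ, σ and σ_W (not in the original notes):

% Monte Carlo check of the sensor example (illustrative sketch only).
mu = 20; sig = 2; sigW = 1; M = 1e6;       % arbitrary parameters
Y = mu + sig*randn(M, 1);                  % temperature, Y ~ N(mu, sig^2)
X = Y + sigW*randn(M, 1);                  % noisy sensor observation
Yhat = (sig^2*X + sigW^2*mu) / (sig^2 + sigW^2);   % MMSE (= LMMSE) estimate
fprintf('empirical MSE: %.4f (theory %.4f)\n', ...
        mean((Yhat - Y).^2), sig^2*sigW^2/(sig^2 + sigW^2));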

5 Jointly Gaussian Random Variables

Definition 7. Two random variables X and Y are jointly Gaussian if
$$f_{X,Y}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)}\left(\frac{(x-\mu_X)^2}{\sigma_X^2} + \frac{(y-\mu_Y)^2}{\sigma_Y^2} - \frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y}\right)\right].$$

Properties:

1. If X and Y are jointly Gaussian random variables, the marginals are Gaussian:
$$f_X(x) = \int_{-\infty}^{+\infty} f_{X,Y}(x, y)\,dy = \frac{1}{\sqrt{2\pi\sigma_X^2}}\, e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}}, \qquad
f_Y(y) = \int_{-\infty}^{+\infty} f_{X,Y}(x, y)\,dx = \frac{1}{\sqrt{2\pi\sigma_Y^2}}\, e^{-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}}.$$
2. If X and Y are jointly Gaussian and uncorrelated, then X and Y are independent.
3. If X and Y are jointly Gaussian random variables, then any linear combination Z = aX + bY is a Gaussian random variable.

[Figure 3: Two-variable joint Gaussian distribution (from Wikipedia).]

6 Bounds

Theorem 7 (Chebyshev's Bound). Let X be an arbitrary random variable with mean µ and finite variance σ². Then for any δ > 0,
$$\Pr(|X - \mu| \geq \delta) \leq \frac{\sigma^2}{\delta^2}.$$

Proof.
$$\sigma^2 = \int_{-\infty}^{+\infty} (x-\mu)^2 f_X(x)\,dx \geq \int_{|x-\mu| \geq \delta} (x-\mu)^2 f_X(x)\,dx \geq \delta^2 \int_{|x-\mu| \geq \delta} f_X(x)\,dx = \delta^2 \Pr(|X - \mu| \geq \delta).$$

Corollary 1. Pr(|X − µ| < δ) ≥ 1 − σ²/δ².

Corollary 2. Pr(|X − µ| ≥ kσ) ≤ 1/k².

Example 12. A fair coin is flipped n times; let X be the number of heads observed. Determine a bound for Pr(X ≥ 75% n). By applying Chebyshev's inequality, with E[X] = np = n/2 and Var[X] = np(1−p) = n/4,
$$\Pr\left(X \geq \frac{3}{4}n\right) = \Pr\left(X - \frac{n}{2} \geq \frac{n}{4}\right) \leq \Pr\left(\left|X - \frac{n}{2}\right| \geq \frac{n}{4}\right) \leq \frac{n/4}{n^2/16} = \frac{4}{n}.$$

Theorem 8 (Markov Inequality). Consider a RV X for which f_X(x) = 0 for x < 0. Then X is called a nonnegative RV and the Markov inequality applies:
$$\Pr(X \geq \delta) \leq \frac{E[X]}{\delta}.$$

Proof.
$$E[X] = \int_0^{\infty} x f_X(x)\,dx \geq \int_{\delta}^{\infty} x f_X(x)\,dx \geq \delta \int_{\delta}^{\infty} f_X(x)\,dx = \delta \Pr(X \geq \delta).$$

Example 13. Same setting as Example 12. According to the Markov inequality,
$$\Pr\left(X \geq \frac{3}{4}n\right) \leq \frac{n/2}{3n/4} = \frac{2}{3},$$
which does not depend on n. The bound from Chebyshev's inequality is much tighter.
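The gap between the two bounds in Examples 12 and 13 is easy to see numerically; a sketch for one value of n (not in the original notes; binornd assumes the Statistics Toolbox):

% Empirical Pr(X >= 3n/4) versus the Chebyshev and Markov bounds (sketch).
n = 100; M = 1e6;
X = binornd(n, 0.5, M, 1);                 % number of heads in n fair flips
fprintf('empirical Pr(X >= 3n/4): %.2e\n', mean(X >= 0.75*n));
fprintf('Chebyshev bound 4/n    : %.2e\n', 4/n);
fprintf('Markov bound 2/3       : %.2e\n', 2/3);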

Theorem 9 (Law of Large Numbers). Let X_1, X_2, ..., X_n be n iid RVs with mean µ and variance σ². Consider the sample mean
$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} X_i.$$
Then
$$\lim_{n \to +\infty} \Pr(|\hat{\mu} - \mu| \geq \delta) = 0 \quad \text{for all } \delta > 0,$$
i.e., µ̂ → µ in probability.

Proof. Since the X_i's are iid,
$$E[\hat{\mu}] = E\Big[\frac{1}{n}\sum_{i=1}^{n} X_i\Big] = \frac{1}{n}\sum_{i=1}^{n} E[X_i] = \mu,$$
$$\operatorname{Var}[\hat{\mu}] = \operatorname{Var}\Big[\frac{1}{n}\sum_{i=1}^{n} X_i\Big] = \frac{1}{n^2}\sum_{i=1}^{n} \operatorname{Var}[X_i] = \frac{\sigma^2}{n}.$$
Applying Chebyshev's inequality,
$$\Pr(|\hat{\mu} - \mu| \geq \delta) \leq \frac{\sigma^2}{n\delta^2} \to 0 \quad \text{as } n \to +\infty, \text{ for any fixed } \delta > 0.$$
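A short illustration of Theorem 9 (not in the original notes), using Uniform[0,1] samples as an arbitrary choice, for which µ = 0.5:

% Law of large numbers: the sample mean concentrates around mu = 0.5 (sketch).
for n = [10 100 1000 100000]
    fprintf('n = %6d : sample mean = %.4f\n', n, mean(rand(n, 1)));
end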

7 Moment Generating Functions (MGF)

Definition 8. The moment-generating function (MGF), if it exists, of an RV X is defined by
$$M(t) = E[e^{tX}] = \int_{-\infty}^{+\infty} e^{tx} f_X(x)\,dx,$$
where t is a complex variable. For discrete RVs, we can define M(t) using the PMF as
$$M(t) = E[e^{tX}] = \sum_i e^{t x_i} P_X(x_i).$$

Example: Let X ~ Poisson(λ), with P(X = k) = e^{−λ}λ^k/k!, k = 0, 1, 2, ...

1. Find M_X(t).
$$M_X(t) = E[e^{tX}] = \sum_{k=0}^{\infty} e^{tk} P(X = k) = \sum_{k=0}^{\infty} e^{tk} e^{-\lambda}\frac{\lambda^k}{k!} = e^{-\lambda}\sum_{k=0}^{\infty} \frac{(\lambda e^t)^k}{k!} = e^{-\lambda} e^{\lambda e^t} = e^{\lambda(e^t - 1)}.$$

2. Find E[X] from M_X(t).
$$E[X] = \frac{\partial M_X(t)}{\partial t}\bigg|_{t=0} = \lambda e^t e^{\lambda(e^t - 1)}\Big|_{t=0} = \lambda.$$

Example: Let X ~ N(µ, σ²); find M_X(t).
$$M_X(t) = \int_{-\infty}^{+\infty} e^{xt}\, \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.$$

Lemma 4. If M(t) exists, the moments m_k = E[X^k] can be obtained by
$$m_k = M^{(k)}(0) = \frac{d^k}{dt^k} M(t)\bigg|_{t=0}, \quad k = 0, 1, 2, \ldots$$

Proof.
$$M_X(t) = E[e^{tX}] = E\left[1 + tX + \frac{(tX)^2}{2!} + \cdots + \frac{(tX)^n}{n!} + \cdots\right] = 1 + tE[X] + \frac{t^2}{2!}E[X^2] + \cdots + \frac{t^n}{n!}E[X^n] + \cdots$$
$$\Rightarrow\quad m_k = E[X^k] = \frac{d^k}{dt^k} M(t)\bigg|_{t=0}.$$

8 Chernoff Bound

In this section, we introduce the Chernoff bound. Recall that to use Markov's inequality, X must be nonnegative.

Theorem 10 (Chernoff's bound). For any RV X,
$$P(X \geq a) \leq e^{-at} M_X(t) \quad \text{for all } t > 0.$$
In particular,
$$P(X \geq a) \leq \min_{t > 0}\, e^{-at} M_X(t).$$

Proof. Apply Markov to Y = e^{tX} (t > 0), but first recall that
$$P(X \geq a) = P(e^{tX} \geq e^{ta}) = P(Y \geq e^{ta}).$$
By Markov we get
$$P(Y \geq e^{ta}) \leq \frac{E[Y]}{e^{ta}} = e^{-ta} E[Y] \quad\Rightarrow\quad P(X \geq a) \leq e^{-ta} M_X(t).$$

Example: Consider X ~ N(µ, σ²) and try to bound P(X ≥ a) using the Chernoff bound. This is an artificial example, because we know the distribution of X. From the previous example, M_X(t) = e^{µt + σ²t²/2}, hence
$$P(X \geq a) \leq \min_{t > 0}\, e^{-at}\, e^{\mu t + \frac{1}{2}\sigma^2 t^2} = \min_{t > 0}\, e^{(\mu - a)t + \frac{1}{2}\sigma^2 t^2}.$$
Remark: You can check at home how the parameter t affects the bound. For example, pick µ = 0, σ = 1 and change t: for t → 0 you get the trivial bound P ≤ 1, and for t = 1 you get P ≤ e^{−a + 1/2}. See how it varies.

Minimizing the exponent f(t) = (µ − a)t + σ²t²/2:
$$\frac{\partial f(t)}{\partial t} = \sigma^2 t + \mu - a = 0 \quad\Rightarrow\quad t^* = \frac{a - \mu}{\sigma^2},$$
which gives us the following:
$$P(X \geq a) \leq e^{(\mu - a)t^* + \frac{1}{2}\sigma^2 (t^*)^2} = e^{-\frac{(a-\mu)^2}{\sigma^2} + \frac{(a-\mu)^2}{2\sigma^2}} = e^{-\frac{(a-\mu)^2}{2\sigma^2}}.$$
We can compare this result with the exact value,
$$P(X \geq a) = \int_a^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx.$$
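The comparison suggested above can be done in a few lines of MATLAB (not in the original notes); erfc is used for the exact Gaussian tail, so no toolbox is needed here:

% Chernoff bound versus the exact Gaussian tail probability (sketch).
mu = 0; sig = 1;
a = 1:5;                                    % thresholds
chernoff = exp(-(a - mu).^2 / (2*sig^2));
exact = 0.5*erfc((a - mu) / (sig*sqrt(2))); % Pr(X >= a) for X ~ N(mu, sig^2)
disp('    a       Chernoff    exact');
disp([a' chernoff' exact']);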

9 Characteristic Function

In this section, we define the characteristic function and give some examples. The characteristic function of a RV is similar to the Fourier transform of its density, but without the minus sign in the exponent.

Definition 9. For a RV X,
$$\Phi_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{+\infty} f_X(x)\, e^{j\omega x}\,dx \qquad (1)$$
is called the characteristic function of X, where j is the complex number √(−1).

Example: Find the characteristic function of X ~ exp(λ). Recall that for λ > 0,
$$f_X(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x \geq 0, \\ 0 & \text{otherwise.} \end{cases}$$
Then
$$\Phi_X(\omega) = \int_0^{\infty} \lambda e^{-\lambda x}\, e^{j\omega x}\,dx = \lambda \int_0^{\infty} e^{(j\omega - \lambda)x}\,dx = \lambda \left[\frac{e^{(j\omega - \lambda)x}}{j\omega - \lambda}\right]_0^{\infty}.$$
Since λ > 0 and |e^{jωx}| = 1, the real part of (jω − λ) is negative, therefore lim_{x→∞} e^{(jω−λ)x} = 0, which results in
$$\Phi_X(\omega) = \frac{\lambda}{j\omega - \lambda}(0 - 1) = \frac{\lambda}{\lambda - j\omega}.$$

Lemma 5. If Φ_X(ω) exists, the moments m_n = E[X^n] can be obtained by
$$m_n = j^{-n}\, \Phi_X^{(n)}(0), \quad \text{where } \Phi_X^{(n)}(0) = \frac{d^n}{d\omega^n} \Phi_X(\omega)\bigg|_{\omega=0}.$$

Proof.
$$\Phi_X(\omega) = E[e^{j\omega X}] = \sum_{n=0}^{\infty} \frac{(j\omega)^n}{n!}\, m_n \quad\Rightarrow\quad j^n m_n = \frac{d^n}{d\omega^n} \Phi_X(\omega)\bigg|_{\omega=0}.$$

Lemma 6. If X and Y are two independent RVs and Z = X + Y, then
$$\Phi_Z(\omega) = \Phi_X(\omega)\Phi_Y(\omega) \quad \text{and} \quad M_Z(t) = M_X(t)M_Y(t).$$

Remark: To find the distribution of Z = X + Y, it can be easier to find Φ_X(ω) and Φ_Y(ω), multiply them, and then invert from the Fourier domain by integrating or by using tables of Fourier inverses.

Example: Consider the example of problem 9 of homework 3:

Question: Let X_1 and X_2 be two independent RVs such that X_1 ~ N(µ_1, σ_1²) and X_2 ~ N(µ_2, σ_2²), and let X = aX_1 + bX_2. Find the distribution of X.

Answer: Let X̃_1 = aX_1 and X̃_2 = bX_2. It is clear that X̃_1 ~ N(aµ_1, a²σ_1²) and X̃_2 ~ N(bµ_2, b²σ_2²), and that Φ_X(ω) = Φ_{X̃_1}(ω)Φ_{X̃_2}(ω). A direct computation gives
$$\Phi_{\tilde{X}_1}(\omega) = e^{j a\mu_1 \omega - \frac{1}{2} a^2\sigma_1^2 \omega^2}, \qquad \Phi_{\tilde{X}_2}(\omega) = e^{j b\mu_2 \omega - \frac{1}{2} b^2\sigma_2^2 \omega^2},$$
$$\Phi_X(\omega) = e^{j a\mu_1 \omega - \frac{1}{2} a^2\sigma_1^2 \omega^2} \cdot e^{j b\mu_2 \omega - \frac{1}{2} b^2\sigma_2^2 \omega^2} = e^{j(a\mu_1 + b\mu_2)\omega - \frac{1}{2}(a^2\sigma_1^2 + b^2\sigma_2^2)\omega^2},$$
which implies that X ~ N(aµ_1 + bµ_2, a²σ_1² + b²σ_2²).

Fact 1. A linear combination of two independent Gaussian RVs is a Gaussian RV.
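Fact 1 can be checked empirically; a sketch with arbitrary constants (not in the original notes):

% aX1 + bX2 for independent Gaussians: check the mean and variance (sketch).
mu1 = 1; mu2 = -2; s1 = 2; s2 = 3; a = 0.5; b = 1.5; M = 1e6;
X = a*(mu1 + s1*randn(M, 1)) + b*(mu2 + s2*randn(M, 1));
fprintf('mean: %.4f (theory %.4f)\n', mean(X), a*mu1 + b*mu2);
fprintf('var : %.4f (theory %.4f)\n', var(X), a^2*s1^2 + b^2*s2^2);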

10 Central Limit Theorem

In this section we state the central limit theorem and give a rigorous proof.

Theorem 11 (Central Limit Theorem). Let X_1, X_2, ..., X_n be n independent, identically distributed RVs with µ_{X_i} = 0 and Var(X_i) = 1. Then
$$Z_n = \frac{X_1 + X_2 + \cdots + X_n}{\sqrt{n}} \;\xrightarrow{\;n \to \infty\;}\; N(0, 1).$$
In other words,
$$\lim_{n \to \infty} P(Z_n \leq z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}\,dx.$$

This is, for example, a way to relate flipping a coin n times to a Gaussian RV (Fig. 4, Fig. 5). Let
$$X_i = \begin{cases} 0 & \text{if a tail is observed, with } p = 1/2, \\ 1 & \text{if a head is observed, with } p = 1/2, \end{cases}$$
and set S_n = Σ_{i=1}^n X_i. Notice that S_n ∈ {0, 1, ..., n}, and according to the CLT the standardized sum (S_n − n/2)/(√n/2) converges to N(0, 1).

The CLT says, roughly, that no matter how far you are from the mean, the probability that the sum deviates from its mean by a given number of standard deviations is governed by the Gaussian tail.

Remark: The RVs X_i have to be independent, because if, for example, X_i = X_1 for i ∈ {2, 3, ..., n}, then
$$S_n = nX_1 = \begin{cases} n & \text{if } X_1 = 1, \\ 0 & \text{if } X_1 = 0, \end{cases}$$
which does not converge to a Gaussian distribution when n → ∞.

[Figure 4: S_n/n as a function of n for the coin-flipping example, with n up to 100; as n grows, S_n/n clearly approaches 0.5. Refer to Section 11 for the code.]

Proof (of Theorem 11). We show that
$$\lim_{n \to \infty} \Phi_{Z_n}(\omega) = e^{-\frac{\omega^2}{2}},$$
which is the characteristic function of a N(0, 1) RV, whose density is f_Z(z) = (1/√(2π)) e^{−z²/2}. Write
$$Z_n = \underbrace{\frac{X_1}{\sqrt{n}}}_{W_1} + \underbrace{\frac{X_2}{\sqrt{n}}}_{W_2} + \cdots + \underbrace{\frac{X_n}{\sqrt{n}}}_{W_n}.$$
$$\Phi_{Z_n}(\omega) = \Phi_{W_1}(\omega)\Phi_{W_2}(\omega)\cdots\Phi_{W_n}(\omega) = [\Phi_W(\omega)]^n, \qquad \Phi_W(\omega) = E[e^{j\omega W}] = E\big[e^{j\omega X_1/\sqrt{n}}\big] = \Phi_{X_1}\left(\frac{\omega}{\sqrt{n}}\right).$$

[Figure 5: S_n/n as a function of n for the coin-flipping example, with n up to 300; as n grows, S_n/n clearly approaches 0.5. Refer to Section 11 for the code.]

Taylor expansion: Using the Taylor expansion of Φ_W(ω) around 0, we get
$$\Phi_W(\omega) = \Phi_W(0) + \Phi'_W(0)\,\omega + \frac{\Phi''_W(0)}{2!}\,\omega^2 + \cdots$$

1. Find the value of Φ_W(0).
$$\Phi_W(0) = E\big[e^{j\omega X_1/\sqrt{n}}\big]\Big|_{\omega=0} = \int_{-\infty}^{+\infty} e^{j0x/\sqrt{n}} f_X(x)\,dx = \int_{-\infty}^{+\infty} f_X(x)\,dx = 1.$$

2. Find the value of Φ'_W(0).
$$\Phi'_W(0) = \int_{-\infty}^{+\infty} \frac{jx}{\sqrt{n}}\, e^{j0x/\sqrt{n}} f_X(x)\,dx = \frac{j}{\sqrt{n}} \int_{-\infty}^{+\infty} x f_X(x)\,dx = \frac{j}{\sqrt{n}}\, E[X] = 0.$$

3. Find the value of Φ''_W(0).
$$\Phi''_W(0) = \frac{d^2 \Phi_W(\omega)}{d\omega^2}\bigg|_{\omega=0} = \int_{-\infty}^{+\infty} \left(\frac{jx}{\sqrt{n}}\right)^2 e^{j0x/\sqrt{n}} f_X(x)\,dx = -\frac{1}{n}\int_{-\infty}^{+\infty} x^2 f_X(x)\,dx = -\frac{1}{n}\big(\underbrace{V(X)}_{=1} + \underbrace{E^2(X)}_{=0}\big) = -\frac{1}{n}.$$

Hence, using these results and Taylor's expansion,
$$\Phi_W(\omega) \approx 1 - \frac{\omega^2}{2n}, \qquad \Phi_{Z_n}(\omega) = \left[1 - \frac{\omega^2}{2n}\right]^n.$$
Recall that log(1 − ε) ≈ −ε for small ε; then
$$\log \Phi_{Z_n} = n \log\left(1 - \frac{\omega^2}{2n}\right) \approx n\left(-\frac{\omega^2}{2n}\right) = -\frac{\omega^2}{2} \quad\Rightarrow\quad \Phi_{Z_n} \to e^{-\frac{\omega^2}{2}}.$$

11 MATLAB Code generating the figures

In this section we give the MATLAB code used to generate Fig. 4 and Fig. 5.

A=[]; B=[];                        % generate two empty vectors
for i=1:100                        % i stands for the number of times the coin is flipped
    A=[A, binornd(i,0.5)/i];       % at each iteration generate a binomial random number with
end                                % parameters n=i, p=0.5 and divide it by n to have S_n/n
for n=1:300
    B=[B, binornd(n,0.5)/n];       % same as previous but repeat it 300 times
end
x1=1:100; x2=1:300;                % x1 and x2 are used to represent n in each figure
figure(1)
plot(x1,A); hold on
plot(x1,0.5*ones(1,100),'r','LineWidth',2);   % reference line at 0.5
xlabel('n'); ylabel('S_n/n');
figure(2)
plot(x2,B); hold on
plot(x2,0.5*ones(1,300),'r','LineWidth',2);
xlabel('n'); ylabel('S_n/n');
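In the same spirit as the code above, the following additional snippet (not in the original notes) standardizes S_n and compares its histogram with the N(0,1) density, which is the statement of Theorem 11; histogram with the 'pdf' normalization assumes a recent MATLAB release.

% CLT illustration: standardized number of heads versus the N(0,1) density.
n = 1000; M = 1e5;
S = binornd(n, 0.5, M, 1);                 % S_n = number of heads in n flips
Z = (S - n/2) ./ (sqrt(n)/2);              % standardize: zero mean, unit variance
histogram(Z, 'Normalization', 'pdf'); hold on
z = -4:0.01:4;
plot(z, exp(-z.^2/2)/sqrt(2*pi), 'r', 'LineWidth', 2);
xlabel('z'); ylabel('pdf'); legend('standardized S_n', 'N(0,1)');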

Chapter 2: Random Variables

Chapter 2: Random Variables ECE54: Stochastic Signals and Systems Fall 28 Lecture 2 - September 3, 28 Dr. Salim El Rouayheb Scribe: Peiwen Tian, Lu Liu, Ghadir Ayache Chapter 2: Random Variables Example. Tossing a fair coin twice:

More information

Review of Probability Theory

Review of Probability Theory Review of Probability Theory Arian Maleki and Tom Do Stanford University Probability theory is the study of uncertainty Through this class, we will be relying on concepts from probability theory for deriving

More information

Quick Tour of Basic Probability Theory and Linear Algebra

Quick Tour of Basic Probability Theory and Linear Algebra Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions

More information

5 Operations on Multiple Random Variables

5 Operations on Multiple Random Variables EE360 Random Signal analysis Chapter 5: Operations on Multiple Random Variables 5 Operations on Multiple Random Variables Expected value of a function of r.v. s Two r.v. s: ḡ = E[g(X, Y )] = g(x, y)f X,Y

More information

conditional cdf, conditional pdf, total probability theorem?

conditional cdf, conditional pdf, total probability theorem? 6 Multiple Random Variables 6.0 INTRODUCTION scalar vs. random variable cdf, pdf transformation of a random variable conditional cdf, conditional pdf, total probability theorem expectation of a random

More information

1 Random Variable: Topics

1 Random Variable: Topics Note: Handouts DO NOT replace the book. In most cases, they only provide a guideline on topics and an intuitive feel. 1 Random Variable: Topics Chap 2, 2.1-2.4 and Chap 3, 3.1-3.3 What is a random variable?

More information

Continuous Random Variables

Continuous Random Variables 1 / 24 Continuous Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 27, 2013 2 / 24 Continuous Random Variables

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Chapter 2. Discrete Distributions

Chapter 2. Discrete Distributions Chapter. Discrete Distributions Objectives ˆ Basic Concepts & Epectations ˆ Binomial, Poisson, Geometric, Negative Binomial, and Hypergeometric Distributions ˆ Introduction to the Maimum Likelihood Estimation

More information

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems

More information

Chapter 4. Chapter 4 sections

Chapter 4. Chapter 4 sections Chapter 4 sections 4.1 Expectation 4.2 Properties of Expectations 4.3 Variance 4.4 Moments 4.5 The Mean and the Median 4.6 Covariance and Correlation 4.7 Conditional Expectation SKIP: 4.8 Utility Expectation

More information

Chp 4. Expectation and Variance

Chp 4. Expectation and Variance Chp 4. Expectation and Variance 1 Expectation In this chapter, we will introduce two objectives to directly reflect the properties of a random variable or vector, which are the Expectation and Variance.

More information

UCSD ECE153 Handout #34 Prof. Young-Han Kim Tuesday, May 27, Solutions to Homework Set #6 (Prepared by TA Fatemeh Arbabjolfaei)

UCSD ECE153 Handout #34 Prof. Young-Han Kim Tuesday, May 27, Solutions to Homework Set #6 (Prepared by TA Fatemeh Arbabjolfaei) UCSD ECE53 Handout #34 Prof Young-Han Kim Tuesday, May 7, 04 Solutions to Homework Set #6 (Prepared by TA Fatemeh Arbabjolfaei) Linear estimator Consider a channel with the observation Y XZ, where the

More information

Probability Theory and Statistics. Peter Jochumzen

Probability Theory and Statistics. Peter Jochumzen Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

EEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as

EEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as L30-1 EEL 5544 Noise in Linear Systems Lecture 30 OTHER TRANSFORMS For a continuous, nonnegative RV X, the Laplace transform of X is X (s) = E [ e sx] = 0 f X (x)e sx dx. For a nonnegative RV, the Laplace

More information

1 Review of Probability

1 Review of Probability 1 Review of Probability Random variables are denoted by X, Y, Z, etc. The cumulative distribution function (c.d.f.) of a random variable X is denoted by F (x) = P (X x), < x

More information

P (x). all other X j =x j. If X is a continuous random vector (see p.172), then the marginal distributions of X i are: f(x)dx 1 dx n

P (x). all other X j =x j. If X is a continuous random vector (see p.172), then the marginal distributions of X i are: f(x)dx 1 dx n JOINT DENSITIES - RANDOM VECTORS - REVIEW Joint densities describe probability distributions of a random vector X: an n-dimensional vector of random variables, ie, X = (X 1,, X n ), where all X is are

More information

Probability- the good parts version. I. Random variables and their distributions; continuous random variables.

Probability- the good parts version. I. Random variables and their distributions; continuous random variables. Probability- the good arts version I. Random variables and their distributions; continuous random variables. A random variable (r.v) X is continuous if its distribution is given by a robability density

More information

BASICS OF PROBABILITY

BASICS OF PROBABILITY October 10, 2018 BASICS OF PROBABILITY Randomness, sample space and probability Probability is concerned with random experiments. That is, an experiment, the outcome of which cannot be predicted with certainty,

More information

LIST OF FORMULAS FOR STK1100 AND STK1110

LIST OF FORMULAS FOR STK1100 AND STK1110 LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

Random Variables. P(x) = P[X(e)] = P(e). (1)

Random Variables. P(x) = P[X(e)] = P(e). (1) Random Variables Random variable (discrete or continuous) is used to derive the output statistical properties of a system whose input is a random variable or random in nature. Definition Consider an experiment

More information

Random Variables and Their Distributions

Random Variables and Their Distributions Chapter 3 Random Variables and Their Distributions A random variable (r.v.) is a function that assigns one and only one numerical value to each simple event in an experiment. We will denote r.vs by capital

More information

Statistics STAT:5100 (22S:193), Fall Sample Final Exam B

Statistics STAT:5100 (22S:193), Fall Sample Final Exam B Statistics STAT:5 (22S:93), Fall 25 Sample Final Exam B Please write your answers in the exam books provided.. Let X, Y, and Y 2 be independent random variables with X N(µ X, σ 2 X ) and Y i N(µ Y, σ 2

More information

Things to remember when learning probability distributions:

Things to remember when learning probability distributions: SPECIAL DISTRIBUTIONS Some distributions are special because they are useful They include: Poisson, exponential, Normal (Gaussian), Gamma, geometric, negative binomial, Binomial and hypergeometric distributions

More information

3. Probability and Statistics

3. Probability and Statistics FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important

More information

Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality

Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality Discrete Structures II (Summer 2018) Rutgers University Instructor: Abhishek Bhrushundi

More information

ACM 116: Lectures 3 4

ACM 116: Lectures 3 4 1 ACM 116: Lectures 3 4 Joint distributions The multivariate normal distribution Conditional distributions Independent random variables Conditional distributions and Monte Carlo: Rejection sampling Variance

More information

Probability Background

Probability Background Probability Background Namrata Vaswani, Iowa State University August 24, 2015 Probability recap 1: EE 322 notes Quick test of concepts: Given random variables X 1, X 2,... X n. Compute the PDF of the second

More information

1.1 Review of Probability Theory

1.1 Review of Probability Theory 1.1 Review of Probability Theory Angela Peace Biomathemtics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

18.440: Lecture 28 Lectures Review

18.440: Lecture 28 Lectures Review 18.440: Lecture 28 Lectures 18-27 Review Scott Sheffield MIT Outline Outline It s the coins, stupid Much of what we have done in this course can be motivated by the i.i.d. sequence X i where each X i is

More information

Expectation of Random Variables

Expectation of Random Variables 1 / 19 Expectation of Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 13, 2015 2 / 19 Expectation of Discrete

More information

1 Presessional Probability

1 Presessional Probability 1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional

More information

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets

More information

Solutions to Homework Set #6 (Prepared by Lele Wang)

Solutions to Homework Set #6 (Prepared by Lele Wang) Solutions to Homework Set #6 (Prepared by Lele Wang) Gaussian random vector Given a Gaussian random vector X N (µ, Σ), where µ ( 5 ) T and 0 Σ 4 0 0 0 9 (a) Find the pdfs of i X, ii X + X 3, iii X + X

More information

18.440: Lecture 28 Lectures Review

18.440: Lecture 28 Lectures Review 18.440: Lecture 28 Lectures 17-27 Review Scott Sheffield MIT 1 Outline Continuous random variables Problems motivated by coin tossing Random variable properties 2 Outline Continuous random variables Problems

More information

Lecture 11. Probability Theory: an Overveiw

Lecture 11. Probability Theory: an Overveiw Math 408 - Mathematical Statistics Lecture 11. Probability Theory: an Overveiw February 11, 2013 Konstantin Zuev (USC) Math 408, Lecture 11 February 11, 2013 1 / 24 The starting point in developing the

More information

Expectation. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Expectation. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Expectation DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean, variance,

More information

Lecture 19: Properties of Expectation

Lecture 19: Properties of Expectation Lecture 19: Properties of Expectation Dan Sloughter Furman University Mathematics 37 February 11, 4 19.1 The unconscious statistician, revisited The following is a generalization of the law of the unconscious

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Lecture 11. Multivariate Normal theory

Lecture 11. Multivariate Normal theory 10. Lecture 11. Multivariate Normal theory Lecture 11. Multivariate Normal theory 1 (1 1) 11. Multivariate Normal theory 11.1. Properties of means and covariances of vectors Properties of means and covariances

More information

Random Variables. Cumulative Distribution Function (CDF) Amappingthattransformstheeventstotherealline.

Random Variables. Cumulative Distribution Function (CDF) Amappingthattransformstheeventstotherealline. Random Variables Amappingthattransformstheeventstotherealline. Example 1. Toss a fair coin. Define a random variable X where X is 1 if head appears and X is if tail appears. P (X =)=1/2 P (X =1)=1/2 Example

More information

3 Multiple Discrete Random Variables

3 Multiple Discrete Random Variables 3 Multiple Discrete Random Variables 3.1 Joint densities Suppose we have a probability space (Ω, F,P) and now we have two discrete random variables X and Y on it. They have probability mass functions f

More information

Expectation. DS GA 1002 Probability and Statistics for Data Science. Carlos Fernandez-Granda

Expectation. DS GA 1002 Probability and Statistics for Data Science.   Carlos Fernandez-Granda Expectation DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Aim Describe random variables with a few numbers: mean,

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

Lecture 2: Review of Probability

Lecture 2: Review of Probability Lecture 2: Review of Probability Zheng Tian Contents 1 Random Variables and Probability Distributions 2 1.1 Defining probabilities and random variables..................... 2 1.2 Probability distributions................................

More information

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear

More information

UCSD ECE250 Handout #20 Prof. Young-Han Kim Monday, February 26, Solutions to Exercise Set #7

UCSD ECE250 Handout #20 Prof. Young-Han Kim Monday, February 26, Solutions to Exercise Set #7 UCSD ECE50 Handout #0 Prof. Young-Han Kim Monday, February 6, 07 Solutions to Exercise Set #7. Minimum waiting time. Let X,X,... be i.i.d. exponentially distributed random variables with parameter λ, i.e.,

More information

3. Review of Probability and Statistics

3. Review of Probability and Statistics 3. Review of Probability and Statistics ECE 830, Spring 2014 Probabilistic models will be used throughout the course to represent noise, errors, and uncertainty in signal processing problems. This lecture

More information

Final Examination Solutions (Total: 100 points)

Final Examination Solutions (Total: 100 points) Final Examination Solutions (Total: points) There are 4 problems, each problem with multiple parts, each worth 5 points. Make sure you answer all questions. Your answer should be as clear and readable

More information

Exam P Review Sheet. for a > 0. ln(a) i=0 ari = a. (1 r) 2. (Note that the A i s form a partition)

Exam P Review Sheet. for a > 0. ln(a) i=0 ari = a. (1 r) 2. (Note that the A i s form a partition) Exam P Review Sheet log b (b x ) = x log b (y k ) = k log b (y) log b (y) = ln(y) ln(b) log b (yz) = log b (y) + log b (z) log b (y/z) = log b (y) log b (z) ln(e x ) = x e ln(y) = y for y > 0. d dx ax

More information

ECON 5350 Class Notes Review of Probability and Distribution Theory

ECON 5350 Class Notes Review of Probability and Distribution Theory ECON 535 Class Notes Review of Probability and Distribution Theory 1 Random Variables Definition. Let c represent an element of the sample space C of a random eperiment, c C. A random variable is a one-to-one

More information

Perhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.

Perhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows. Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage

More information

Lecture 2: Review of Basic Probability Theory

Lecture 2: Review of Basic Probability Theory ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent

More information

SDS 321: Introduction to Probability and Statistics

SDS 321: Introduction to Probability and Statistics SDS 321: Introduction to Probability and Statistics Lecture 14: Continuous random variables Purnamrita Sarkar Department of Statistics and Data Science The University of Texas at Austin www.cs.cmu.edu/

More information

01 Probability Theory and Statistics Review

01 Probability Theory and Statistics Review NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement

More information

1 Basic continuous random variable problems

1 Basic continuous random variable problems Name M362K Final Here are problems concerning material from Chapters 5 and 6. To review the other chapters, look over previous practice sheets for the two exams, previous quizzes, previous homeworks and

More information

Chapter 5. Chapter 5 sections

Chapter 5. Chapter 5 sections 1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Exercises with solutions (Set D)

Exercises with solutions (Set D) Exercises with solutions Set D. A fair die is rolled at the same time as a fair coin is tossed. Let A be the number on the upper surface of the die and let B describe the outcome of the coin toss, where

More information

8 Laws of large numbers

8 Laws of large numbers 8 Laws of large numbers 8.1 Introduction We first start with the idea of standardizing a random variable. Let X be a random variable with mean µ and variance σ 2. Then Z = (X µ)/σ will be a random variable

More information

discrete random variable: probability mass function continuous random variable: probability density function

discrete random variable: probability mass function continuous random variable: probability density function CHAPTER 1 DISTRIBUTION THEORY 1 Basic Concepts Random Variables discrete random variable: probability mass function continuous random variable: probability density function CHAPTER 1 DISTRIBUTION THEORY

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

ECE353: Probability and Random Processes. Lecture 7 -Continuous Random Variable

ECE353: Probability and Random Processes. Lecture 7 -Continuous Random Variable ECE353: Probability and Random Processes Lecture 7 -Continuous Random Variable Xiao Fu School of Electrical Engineering and Computer Science Oregon State University E-mail: xiao.fu@oregonstate.edu Continuous

More information

Lecture 2: Repetition of probability theory and statistics

Lecture 2: Repetition of probability theory and statistics Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:

More information

Actuarial Science Exam 1/P

Actuarial Science Exam 1/P Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,

More information

Random Variables. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay

Random Variables. Saravanan Vijayakumaran Department of Electrical Engineering Indian Institute of Technology Bombay 1 / 13 Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay August 8, 2013 2 / 13 Random Variable Definition A real-valued

More information

CHAPTER 1 DISTRIBUTION THEORY 1 CHAPTER 1: DISTRIBUTION THEORY

CHAPTER 1 DISTRIBUTION THEORY 1 CHAPTER 1: DISTRIBUTION THEORY CHAPTER 1 DISTRIBUTION THEORY 1 CHAPTER 1: DISTRIBUTION THEORY CHAPTER 1 DISTRIBUTION THEORY 2 Basic Concepts CHAPTER 1 DISTRIBUTION THEORY 3 Random Variables (R.V.) discrete random variable: probability

More information

Lecture 22: Variance and Covariance

Lecture 22: Variance and Covariance EE5110 : Probability Foundations for Electrical Engineers July-November 2015 Lecture 22: Variance and Covariance Lecturer: Dr. Krishna Jagannathan Scribes: R.Ravi Kiran In this lecture we will introduce

More information

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real

More information

Fundamental Tools - Probability Theory II

Fundamental Tools - Probability Theory II Fundamental Tools - Probability Theory II MSc Financial Mathematics The University of Warwick September 29, 2015 MSc Financial Mathematics Fundamental Tools - Probability Theory II 1 / 22 Measurable random

More information

Algorithms for Uncertainty Quantification

Algorithms for Uncertainty Quantification Algorithms for Uncertainty Quantification Tobias Neckel, Ionuț-Gabriel Farcaș Lehrstuhl Informatik V Summer Semester 2017 Lecture 2: Repetition of probability theory and statistics Example: coin flip Example

More information

UC Berkeley Department of Electrical Engineering and Computer Science. EE 126: Probablity and Random Processes. Problem Set 8 Fall 2007

UC Berkeley Department of Electrical Engineering and Computer Science. EE 126: Probablity and Random Processes. Problem Set 8 Fall 2007 UC Berkeley Department of Electrical Engineering and Computer Science EE 6: Probablity and Random Processes Problem Set 8 Fall 007 Issued: Thursday, October 5, 007 Due: Friday, November, 007 Reading: Bertsekas

More information

6.1 Moment Generating and Characteristic Functions

6.1 Moment Generating and Characteristic Functions Chapter 6 Limit Theorems The power statistics can mostly be seen when there is a large collection of data points and we are interested in understanding the macro state of the system, e.g., the average,

More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

ENGG2430A-Homework 2

ENGG2430A-Homework 2 ENGG3A-Homework Due on Feb 9th,. Independence vs correlation a For each of the following cases, compute the marginal pmfs from the joint pmfs. Explain whether the random variables X and Y are independent,

More information

2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1).

2. Suppose (X, Y ) is a pair of random variables uniformly distributed over the triangle with vertices (0, 0), (2, 0), (2, 1). Name M362K Final Exam Instructions: Show all of your work. You do not have to simplify your answers. No calculators allowed. There is a table of formulae on the last page. 1. Suppose X 1,..., X 1 are independent

More information

STAT2201. Analysis of Engineering & Scientific Data. Unit 3

STAT2201. Analysis of Engineering & Scientific Data. Unit 3 STAT2201 Analysis of Engineering & Scientific Data Unit 3 Slava Vaisman The University of Queensland School of Mathematics and Physics What we learned in Unit 2 (1) We defined a sample space of a random

More information

Joint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix:

Joint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix: Joint Distributions Joint Distributions A bivariate normal distribution generalizes the concept of normal distribution to bivariate random variables It requires a matrix formulation of quadratic forms,

More information

ECE 541 Stochastic Signals and Systems Problem Set 9 Solutions

ECE 541 Stochastic Signals and Systems Problem Set 9 Solutions ECE 541 Stochastic Signals and Systems Problem Set 9 Solutions Problem Solutions : Yates and Goodman, 9.5.3 9.1.4 9.2.2 9.2.6 9.3.2 9.4.2 9.4.6 9.4.7 and Problem 9.1.4 Solution The joint PDF of X and Y

More information

Recall that if X 1,...,X n are random variables with finite expectations, then. The X i can be continuous or discrete or of any other type.

Recall that if X 1,...,X n are random variables with finite expectations, then. The X i can be continuous or discrete or of any other type. Expectations of Sums of Random Variables STAT/MTHE 353: 4 - More on Expectations and Variances T. Linder Queen s University Winter 017 Recall that if X 1,...,X n are random variables with finite expectations,

More information

X = X X n, + X 2

X = X X n, + X 2 CS 70 Discrete Mathematics for CS Fall 2003 Wagner Lecture 22 Variance Question: At each time step, I flip a fair coin. If it comes up Heads, I walk one step to the right; if it comes up Tails, I walk

More information

Solutions to Homework Set #5 (Prepared by Lele Wang) MSE = E [ (sgn(x) g(y)) 2],, where f X (x) = 1 2 2π e. e (x y)2 2 dx 2π

Solutions to Homework Set #5 (Prepared by Lele Wang) MSE = E [ (sgn(x) g(y)) 2],, where f X (x) = 1 2 2π e. e (x y)2 2 dx 2π Solutions to Homework Set #5 (Prepared by Lele Wang). Neural net. Let Y X + Z, where the signal X U[,] and noise Z N(,) are independent. (a) Find the function g(y) that minimizes MSE E [ (sgn(x) g(y))

More information

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN Lecture Notes 5 Convergence and Limit Theorems Motivation Convergence with Probability Convergence in Mean Square Convergence in Probability, WLLN Convergence in Distribution, CLT EE 278: Convergence and

More information

ECE534, Spring 2018: Solutions for Problem Set #3

ECE534, Spring 2018: Solutions for Problem Set #3 ECE534, Spring 08: Solutions for Problem Set #3 Jointly Gaussian Random Variables and MMSE Estimation Suppose that X, Y are jointly Gaussian random variables with µ X = µ Y = 0 and σ X = σ Y = Let their

More information

Chapter 4 continued. Chapter 4 sections

Chapter 4 continued. Chapter 4 sections Chapter 4 sections Chapter 4 continued 4.1 Expectation 4.2 Properties of Expectations 4.3 Variance 4.4 Moments 4.5 The Mean and the Median 4.6 Covariance and Correlation 4.7 Conditional Expectation SKIP:

More information

1 Solution to Problem 2.1

1 Solution to Problem 2.1 Solution to Problem 2. I incorrectly worked this exercise instead of 2.2, so I decided to include the solution anyway. a) We have X Y /3, which is a - function. It maps the interval, ) where X lives) onto

More information

where r n = dn+1 x(t)

where r n = dn+1 x(t) Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution

More information

1 Basic continuous random variable problems

1 Basic continuous random variable problems Name M362K Final Here are problems concerning material from Chapters 5 and 6. To review the other chapters, look over previous practice sheets for the two exams, previous quizzes, previous homeworks and

More information

ORF 245 Fundamentals of Statistics Chapter 4 Great Expectations

ORF 245 Fundamentals of Statistics Chapter 4 Great Expectations ORF 245 Fundamentals of Statistics Chapter 4 Great Expectations Robert Vanderbei Fall 2014 Slides last edited on October 20, 2014 http://www.princeton.edu/ rvdb Definition The expectation of a random variable

More information

ECE302 Exam 2 Version A April 21, You must show ALL of your work for full credit. Please leave fractions as fractions, but simplify them, etc.

ECE302 Exam 2 Version A April 21, You must show ALL of your work for full credit. Please leave fractions as fractions, but simplify them, etc. ECE32 Exam 2 Version A April 21, 214 1 Name: Solution Score: /1 This exam is closed-book. You must show ALL of your work for full credit. Please read the questions carefully. Please check your answers

More information

More than one variable

More than one variable Chapter More than one variable.1 Bivariate discrete distributions Suppose that the r.v. s X and Y are discrete and take on the values x j and y j, j 1, respectively. Then the joint p.d.f. of X and Y, to

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

Appendix A : Introduction to Probability and stochastic processes

Appendix A : Introduction to Probability and stochastic processes A-1 Mathematical methods in communication July 5th, 2009 Appendix A : Introduction to Probability and stochastic processes Lecturer: Haim Permuter Scribe: Shai Shapira and Uri Livnat The probability of

More information

ECE 650 Lecture 4. Intro to Estimation Theory Random Vectors. ECE 650 D. Van Alphen 1

ECE 650 Lecture 4. Intro to Estimation Theory Random Vectors. ECE 650 D. Van Alphen 1 EE 650 Lecture 4 Intro to Estimation Theory Random Vectors EE 650 D. Van Alphen 1 Lecture Overview: Random Variables & Estimation Theory Functions of RV s (5.9) Introduction to Estimation Theory MMSE Estimation

More information

STAT 430/510: Lecture 16

STAT 430/510: Lecture 16 STAT 430/510: Lecture 16 James Piette June 24, 2010 Updates HW4 is up on my website. It is due next Mon. (June 28th). Starting today back at section 6.7 and will begin Ch. 7. Joint Distribution of Functions

More information

Week 12-13: Discrete Probability

Week 12-13: Discrete Probability Week 12-13: Discrete Probability November 21, 2018 1 Probability Space There are many problems about chances or possibilities, called probability in mathematics. When we roll two dice there are possible

More information

Review 1: STAT Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics. August 25, 2015

Review 1: STAT Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics. August 25, 2015 Review : STAT 36 Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics August 25, 25 Support of a Random Variable The support of a random variable, which is usually denoted

More information