Solutions to Homework Set #3 (Prepared by Yu Xiang)

1. Time until the n-th arrival. Let the random variable $N(t)$ be the number of packets arriving during time $(0,t]$. Suppose $N(t)$ is Poisson with pmf
$$p_N(n) = \frac{(\lambda t)^n}{n!} e^{-\lambda t} \quad \text{for } n = 0, 1, 2, \ldots.$$
Let the random variable $Y$ be the time to get the $n$-th packet. Find the pdf of $Y$.

Solution: To find the pdf $f_Y(t)$ of the random variable $Y$, note that the event $\{Y \le t\}$ occurs iff the time of the $n$-th packet is in $[0,t]$, that is, iff the number $N(t)$ of packets arriving in $[0,t]$ is at least $n$. Equivalently, $\{Y > t\}$ occurs iff $\{N(t) < n\}$. Hence, the cdf $F_Y(t)$ of $Y$ is given by
$$F_Y(t) = P\{Y \le t\} = P\{N(t) \ge n\} = \sum_{k=n}^{\infty} e^{-\lambda t} \frac{(\lambda t)^k}{k!}.$$
Differentiating $F_Y(t)$ with respect to $t$, we get the pdf
$$f_Y(t) = \sum_{k=n}^{\infty} \left[ -\lambda e^{-\lambda t} \frac{(\lambda t)^k}{k!} + \lambda e^{-\lambda t} \frac{(\lambda t)^{k-1}}{(k-1)!} \right]
= -\sum_{k=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^k}{k!} + \sum_{k=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^{k-1}}{(k-1)!}
= \lambda e^{-\lambda t} \frac{(\lambda t)^{n-1}}{(n-1)!}$$
for $t > 0$, since the two sums cancel term by term except for the $k = n$ term of the second sum.

Alternatively, recall that the interarrival time $T$ between packets is an exponential random variable with pdf
$$f_T(t) = \lambda e^{-\lambda t}, \quad t \ge 0,$$
and $0$ otherwise. Letting $T_1, T_2, \ldots, T_n$ denote the i.i.d. exponential interarrival times, we have $Y = T_1 + T_2 + \cdots + T_n$. By convolving $f_T(t)$ with itself $n$ times, which can also be computed via its Fourier transform (characteristic function), we can show that the pdf of $Y$ is
$$f_Y(t) = \frac{\lambda e^{-\lambda t} (\lambda t)^{n-1}}{(n-1)!}, \quad t \ge 0,$$
and $0$ otherwise.
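Remark (not part of the original solution): the Erlang answer can be sanity-checked numerically. In the Python sketch below, the values $\lambda = 2$ and $n = 5$ are arbitrary illustration choices; the simulated time of the $n$-th arrival is compared against the mean, variance, and cdf implied by the pdf above.

```python
import numpy as np
from math import exp, factorial

# Illustrative check (not part of the original solution): the time of the
# n-th Poisson(lam) arrival is a sum of n i.i.d. Exp(lam) interarrival times,
# so it should follow the Erlang pdf f_Y(t) = lam e^{-lam t} (lam t)^{n-1}/(n-1)!.
rng = np.random.default_rng(0)
lam, n, trials = 2.0, 5, 200_000

# Y = T_1 + ... + T_n with T_i ~ Exp(lam); numpy's scale parameter is 1/lam
y = rng.exponential(scale=1 / lam, size=(trials, n)).sum(axis=1)

# Erlang(n, lam) has mean n/lam and variance n/lam^2
print("empirical mean:", y.mean(), " theory:", n / lam)
print("empirical var :", y.var(), " theory:", n / lam**2)

# Compare P{Y <= t} with F_Y(t) = 1 - sum_{k < n} e^{-lam t} (lam t)^k / k!
t = 2.0
cdf_theory = 1 - sum(exp(-lam * t) * (lam * t) ** k / factorial(k) for k in range(n))
print("empirical P{Y <= 2}:", (y <= t).mean(), " theory:", cdf_theory)
```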

2. Diamond distribution. Consider the random variables $X$ and $Y$ with the joint pdf
$$f_{X,Y}(x,y) = \begin{cases} c & \text{if } |x| + |y| \le 1, \\ 0 & \text{otherwise,} \end{cases}$$
where $c$ is a constant.

(a) Find $c$.
(b) Find $f_X(x)$ and $f_{X|Y}(x|y)$.
(c) Are $X$ and $Y$ independent random variables? Justify your answer.

Solution:
(a) The integral of the pdf $f_{X,Y}(x,y)$ over $-\infty < x < \infty$, $-\infty < y < \infty$ is $2c$ (the diamond $|x| + |y| \le 1$ has area $2$), and therefore by the definition of a joint density $c = 1/2$.

(b) The marginal pdf is obtained by integrating the joint pdf with respect to $y$. For $0 \le x \le 1$,
$$f_X(x) = \int_{-(1-x)}^{1-x} c \, dy = 1 - x,$$
and for $-1 \le x \le 0$,
$$f_X(x) = \int_{-(1+x)}^{1+x} c \, dy = 1 + x.$$
So the marginal pdf may be written as
$$f_X(x) = \begin{cases} 1 - |x| & -1 \le x \le 1, \\ 0 & \text{otherwise.} \end{cases}$$
Now since $f_{X,Y}(x,y)$ is symmetrical in $x$ and $y$, $f_Y(y) = f_X(y)$. Thus,
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \begin{cases} \dfrac{1}{2(1-|y|)} & |x| + |y| \le 1,\ |y| < 1, \\ 0 & \text{otherwise.} \end{cases}$$

(c) $X$ and $Y$ are not independent since
$$f_{X,Y}(x,y) \ne f_X(x) f_Y(y).$$
Alternatively, $X$ and $Y$ are not independent since $f_{X|Y}(x|y)$ depends on the value of $y$.
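Remark (not part of the original solution): the marginal $1 - |x|$ and the dependence between $X$ and $Y$ can be checked by simulation. The sketch below samples the diamond by rejection from the square $[-1,1]^2$; the test points $0.5$ and $0.6$ are arbitrary choices.

```python
import numpy as np

# Illustrative sketch (not part of the original solution): sample (X, Y)
# uniformly on the diamond {|x| + |y| <= 1} by rejection from [-1, 1]^2,
# then check the marginal f_X(x) = 1 - |x| and the lack of independence.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(500_000, 2))
xy = pts[np.abs(pts).sum(axis=1) <= 1]          # accepted points inside the diamond
x, y = xy[:, 0], xy[:, 1]

# Empirical P{0 <= X <= 0.5} vs the integral of (1 - x) over [0, 0.5] = 0.375
print("P{0 <= X <= 0.5}:", ((x >= 0) & (x <= 0.5)).mean(), " theory: 0.375")

# Dependence: P{X > 0.5} is positive, but P{X > 0.5, |Y| > 0.6} must be 0,
# since |X| <= 1 - |Y| < 0.5 whenever |Y| > 0.6
print("P{X > 0.5}          :", (x > 0.5).mean())
print("P{X > 0.5, |Y| > 0.6}:", ((x > 0.5) & (np.abs(y) > 0.6)).mean(), " (= 0)")
```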

3. First available teller. Consider a bank with two tellers. The service times for the tellers are independent exponentially distributed random variables $X_1 \sim \mathrm{Exp}(\lambda_1)$ and $X_2 \sim \mathrm{Exp}(\lambda_2)$, respectively. You arrive at the bank and find that both tellers are busy but that nobody else is waiting to be served. You are served by the first available teller once he/she is free. What is the probability that you are served by the first teller?

Solution: From the memoryless property of the exponential distribution, the remaining service times for the tellers are also independent exponentially distributed random variables with parameters $\lambda_1$ and $\lambda_2$, respectively. The probability that you will be served by the first teller is the probability that the first teller finishes the service before the second teller does. Thus,
$$P\{X_1 < X_2\} = \iint_{x_2 > x_1} f_{X_1,X_2}(x_1,x_2) \, dx_2 \, dx_1
= \int_{x_1=0}^{\infty} \int_{x_2=x_1}^{\infty} \lambda_1 e^{-\lambda_1 x_1} \lambda_2 e^{-\lambda_2 x_2} \, dx_2 \, dx_1
= \int_{x_1=0}^{\infty} \lambda_1 e^{-(\lambda_1+\lambda_2) x_1} \, dx_1
= \frac{\lambda_1}{\lambda_1+\lambda_2}.$$

4. Coin with random bias. You are given a coin but are not told what its bias (probability of heads) is. You are told instead that the bias is the outcome of a random variable $P \sim \mathrm{Unif}[0,1]$. To get more information about the coin bias, you flip it independently 10 times. Let $X$ be the number of heads you get. Thus $X \sim \mathrm{Binomial}(10, P)$. Assuming that $X = 9$, find and sketch the a posteriori probability density of $P$, i.e., $f_{P|X}(p|9)$.

Solution: In order to find the conditional pdf of $P$, apply Bayes rule for mixed random variables to get
$$f_{P|X}(p|x) = \frac{p_{X|P}(x|p)}{p_X(x)} f_P(p) = \frac{p_{X|P}(x|p)}{\int_0^1 p_{X|P}(x|p') f_P(p') \, dp'} f_P(p).$$
Now it is given that $X = 9$; thus for $0 \le p \le 1$ (the binomial coefficient $\binom{10}{9}$ cancels between numerator and denominator),
$$f_{P|X}(p|9) = \frac{p^9 (1-p)}{\int_0^1 p^9 (1-p) \, dp} = \frac{p^9 (1-p)}{1/110} = 110\, p^9 (1-p).$$
Figure 1 compares the unconditional and the conditional pdfs for $P$. It may be seen that, given the information that 10 independent tosses resulted in 9 heads, the pdf is shifted towards the value $9/10$.

[Figure 1: Comparison of the a priori pdf $f_P(p)$ and the a posteriori pdf $f_{P|X}(p|9)$.]
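Remark (not part of the original solution): a quick numerical check that $110\, p^9(1-p)$ is a valid pdf and concentrates near $0.9$; the grid spacing below is an arbitrary choice.

```python
import numpy as np

# Illustrative sketch (not part of the original solution): verify that
# f_{P|X}(p|9) = 110 p^9 (1 - p) integrates to 1 and concentrates near 0.9.
dp = 1e-5
p = np.arange(0, 1, dp) + dp / 2           # midpoints of a fine grid on [0, 1]
post = 110 * p**9 * (1 - p)

print("integral:", (post * dp).sum())       # ~1, confirming the constant 110
print("mode    :", p[np.argmax(post)])      # analytically (10 - 1)/10 = 0.9
print("mean    :", (p * post * dp).sum())   # Beta(10, 2) mean = 10/12 ~ 0.833
```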

5. Optical communication channel. Let the signal input to an optical channel be
$$X = \begin{cases} \lambda_0 & \text{with probability } 1/2, \\ \lambda_1 & \text{with probability } 1/2. \end{cases}$$
The conditional pmf of the output of the channel is $Y \mid \{X = \lambda_0\} \sim \mathrm{Poisson}(\lambda_0)$, i.e., Poisson with intensity $\lambda_0$, and $Y \mid \{X = \lambda_1\} \sim \mathrm{Poisson}(\lambda_1)$. Show that the MAP rule reduces to
$$D(y) = \begin{cases} \lambda_0 & y < y^*, \\ \lambda_1 & \text{otherwise.} \end{cases}$$
Find $y^*$ and the corresponding probability of error.

Solution: The MAP rule
$$D(y) = \begin{cases} \lambda_0 & p_{X|Y}(\lambda_0|y) > p_{X|Y}(\lambda_1|y), \\ \lambda_1 & \text{otherwise} \end{cases}$$
minimizes the probability of decoding error. Since the a priori probabilities for the two $X$ values are equal, the MAP rule is equivalent to the ML rule
$$D(y) = \begin{cases} \lambda_0 & p_{Y|X}(y|\lambda_0) > p_{Y|X}(y|\lambda_1), \\ \lambda_1 & \text{otherwise.} \end{cases}$$
Now,
$$\frac{p_{Y|X}(y|\lambda_0)}{p_{Y|X}(y|\lambda_1)} = \frac{e^{-\lambda_0} \lambda_0^y / y!}{e^{-\lambda_1} \lambda_1^y / y!} = e^{\lambda_1 - \lambda_0 - y \ln(\lambda_1/\lambda_0)}.$$
This ratio is greater than $1$ if
$$y < \frac{\lambda_1 - \lambda_0}{\ln(\lambda_1) - \ln(\lambda_0)} = y^*.$$
Therefore, when $\lambda_0 = 1$ and $\lambda_1 = 2$, we have
$$D(y) = \begin{cases} 1 & y < 1/\ln 2, \\ 2 & \text{otherwise,} \end{cases}$$
and $y^* = 1/\ln 2 \approx 1.44$. The probability of error is
$$P_e = P\{D(Y) \ne X\} = P\{Y \ge y^* \mid X = 1\} P\{X = 1\} + P\{Y < y^* \mid X = 2\} P\{X = 2\}
= 0.5 \sum_{y=2}^{\infty} \frac{e^{-1}}{y!} + 0.5 \sum_{y=0}^{1} \frac{e^{-2}\, 2^y}{y!} \approx 0.335.$$
Repeating this for $\lambda_0 = 1$ and $\lambda_1 = 100$, we have
$$D(y) = \begin{cases} 1 & y < 99/\ln(100), \\ 100 & \text{otherwise,} \end{cases}$$
and $y^* = 99/\ln(100) \approx 21.497$. The probability of error is
$$P_e = 0.5 \sum_{y=22}^{\infty} \frac{e^{-1}}{y!} + 0.5 \sum_{y=0}^{21} \frac{e^{-100} (100)^y}{y!}
\le 0.5 \left( P\{Y \ge 22 \mid X = 1\} + 22 \cdot \frac{e^{-100} (100)^{21}}{21!} \right)
\le 0.5 \left( \frac{1}{22} + 1.6 \times 10^{-20} \right) \approx 0.023,$$
where the first inequality uses the shape of the Poisson$(100)$ pmf (increasing in $y$ for $y < 100$, so each of the 22 terms in the second sum is at most the $y = 21$ term), and the second uses the Markov inequality $P\{Y \ge 22 \mid X = 1\} \le E[Y \mid X = 1]/22 = 1/22$. This can be further tightened up using Chebyshev's inequality:
$$P_e \le 0.5 \left( P\{|Y - 1| \ge 21 \mid X = 1\} + 22 \cdot \frac{e^{-100} (100)^{21}}{21!} \right)
\le 0.5 \left( \frac{1}{(21)^2} + 1.6 \times 10^{-20} \right) \approx 0.0011.$$
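Remark (not part of the original solution): the threshold and the error probability can also be evaluated directly from the Poisson pmfs. In the sketch below, the truncation point y_max = 2000 is an arbitrary choice far beyond where either pmf has any mass.

```python
from math import exp, lgamma, log

# Illustrative sketch (not part of the original solution): evaluate the
# threshold y* = (lam1 - lam0)/ln(lam1/lam0) and the exact error probability
# of the threshold decoder for equally likely Poisson intensities.
def poisson_pmf(y, lam):
    # computed in log space to avoid overflow for large y
    return exp(-lam + y * log(lam) - lgamma(y + 1))

def map_error(lam0, lam1, y_max=2000):
    y_star = (lam1 - lam0) / log(lam1 / lam0)
    # decide lam0 when y < y*, lam1 otherwise (Y is integer valued)
    p_err_0 = sum(poisson_pmf(y, lam0) for y in range(y_max) if y >= y_star)
    p_err_1 = sum(poisson_pmf(y, lam1) for y in range(y_max) if y < y_star)
    return y_star, 0.5 * p_err_0 + 0.5 * p_err_1

print(map_error(1, 2))    # y* ~ 1.44, P_e ~ 0.335
print(map_error(1, 100))  # y* ~ 21.5, P_e on the order of 1e-21, far below the bounds above
```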

6. Iocane or Sennari. An absent-minded chemistry professor forgets to label two identical-looking bottles. One contains a chemical named Iocane and the other contains a chemical named Sennari. It is well known that the radioactivity level of Iocane has the $\mathrm{Unif}[0,1]$ distribution, while the radioactivity level of Sennari has the $\mathrm{Exp}(1)$ distribution.

(a) Let $X$ be the radioactivity level measured from one of the bottles. What is the optimal decision rule (based on the measurement $X$) that maximizes the chance of correctly identifying the content of the bottle?
(b) What is the associated probability of error?

Solution: Let $\Theta = 0$ denote the case in which the content of the bottle is Iocane and let $\Theta = 1$ denote the case in which the content of the bottle is Sennari. Implicit in the problem statement is that $P(\Theta = 0) = P(\Theta = 1) = 1/2$.

(a) The optimal MAP rule is equivalent to the ML rule
$$D(x) = \begin{cases} 0 & f_{X|\Theta}(x|0) > f_{X|\Theta}(x|1), \\ 1 & \text{otherwise.} \end{cases}$$
Since the $\mathrm{Unif}[0,1]$ pdf $f_{X|\Theta}(x|0) = 1$ is larger than the $\mathrm{Exp}(1)$ pdf $f_{X|\Theta}(x|1) = e^{-x}$ for $0 < x < 1$, we have
$$D(x) = \begin{cases} 0 & 0 < x < 1, \\ 1 & \text{otherwise.} \end{cases}$$

(b) The probability of error is given by
$$P(\Theta \ne D(X)) = \tfrac{1}{2} P(\Theta \ne D(X) \mid \Theta = 0) + \tfrac{1}{2} P(\Theta \ne D(X) \mid \Theta = 1)
= \tfrac{1}{2} P(X > 1 \mid \Theta = 0) + \tfrac{1}{2} P(0 < X < 1 \mid \Theta = 1)
= \tfrac{1}{2} (1 - e^{-1}).$$
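Remark (not part of the original solution): the error probability $\tfrac{1}{2}(1 - e^{-1}) \approx 0.316$ can be confirmed by simulating the decision rule; the sample size below is arbitrary.

```python
import numpy as np

# Illustrative sketch (not part of the original solution): simulate the rule
# "Iocane if 0 < X < 1, Sennari otherwise" and compare the empirical error
# rate with (1 - e^{-1})/2.
rng = np.random.default_rng(2)
n = 500_000
theta = rng.integers(0, 2, size=n)                  # 0 = Iocane, 1 = Sennari
x = np.where(theta == 0, rng.uniform(0, 1, n), rng.exponential(1.0, n))
decision = np.where((x > 0) & (x < 1), 0, 1)
print("empirical error:", (decision != theta).mean())
print("theory         :", (1 - np.exp(-1)) / 2)
```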

7. Two independent uniform random variables. Let $X$ and $Y$ be independently and uniformly drawn from the interval $[0,1]$.

(a) Find the pdf of $U = \max(X,Y)$.
(b) Find the pdf of $V = \min(X,Y)$.
(c) Find the pdf of $W = U - V$.
(d) Find the probability $P\{|X - Y| \le 1/2\}$.

Solution:
(a) We have
$$F_U(u) = P\{U \le u\} = P\{\max(X,Y) \le u\} = P\{X \le u, Y \le u\} = P\{X \le u\} P\{Y \le u\} = u^2$$
for $0 \le u \le 1$. Hence,
$$f_U(u) = \begin{cases} 2u & 0 \le u \le 1, \\ 0 & \text{otherwise.} \end{cases}$$

(b) Similarly,
$$1 - F_V(v) = P\{V > v\} = P\{\min(X,Y) > v\} = P\{X > v, Y > v\} = P\{X > v\} P\{Y > v\} = (1 - v)^2,$$
or equivalently, $F_V(v) = 1 - (1-v)^2$, for $0 \le v \le 1$. Hence,
$$f_V(v) = \begin{cases} 2(1-v) & 0 \le v \le 1, \\ 0 & \text{otherwise.} \end{cases}$$

(c) First note that $W = U - V = |X - Y|$. (Why?) Hence,
$$P\{W \le w\} = P\{|X - Y| \le w\} = P(-w \le X - Y \le w).$$
Since $X$ and $Y$ are uniformly distributed over $[0,1]$, the above probability is equal to the area of the band $\{(x,y) : |x - y| \le w\}$ inside the unit square.

[Figure: the unit square $[0,1]^2$ with the band $\{|x - y| \le w\}$ shaded.]

The area can be easily calculated as $1 - (1-w)^2$ for $0 \le w \le 1$. Hence $F_W(w) = 1 - (1-w)^2$ and
$$f_W(w) = \begin{cases} 2(1-w) & 0 \le w \le 1, \\ 0 & \text{otherwise.} \end{cases}$$

(d) From the figure above,
$$P\{|X - Y| \le 1/2\} = P\{W \le 1/2\} = 1 - (1/2)^2 = 3/4.$$
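Remark (not part of the original solution): the three derived cdfs and the answer to part (d) can be checked by simulation; the test point $0.3$ is arbitrary.

```python
import numpy as np

# Illustrative sketch (not part of the original solution): check the cdfs of
# U = max(X, Y), V = min(X, Y), W = |X - Y| for X, Y ~ Unif[0, 1] i.i.d.
rng = np.random.default_rng(3)
x, y = rng.uniform(0, 1, (2, 1_000_000))
u, v, w = np.maximum(x, y), np.minimum(x, y), np.abs(x - y)

# F_U(t) = t^2, F_V(t) = 1 - (1 - t)^2, F_W(t) = 1 - (1 - t)^2 at a test point
t = 0.3
print("P{U <= 0.3}:", (u <= t).mean(), " theory:", t**2)
print("P{V <= 0.3}:", (v <= t).mean(), " theory:", 1 - (1 - t) ** 2)
print("P{W <= 0.3}:", (w <= t).mean(), " theory:", 1 - (1 - t) ** 2)
print("P{|X-Y| <= 1/2}:", (w <= 0.5).mean(), " theory: 0.75")
```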

8. Waiting time at the bank. Consider a bank with two tellers. The service times for the tellers are independent exponentially distributed random variables $X_1 \sim \mathrm{Exp}(\lambda_1)$ and $X_2 \sim \mathrm{Exp}(\lambda_2)$, respectively. You arrive at the bank and find that both tellers are busy but that nobody else is waiting to be served. You are served by the first available teller once he/she becomes free. Let the random variable $Y$ denote your waiting time. Find the pdf of $Y$.

Solution: First observe that $Y = \min(X_1, X_2)$. Since
$$P\{Y > y\} = P\{X_1 > y, X_2 > y\} = P\{X_1 > y\} P\{X_2 > y\} = e^{-\lambda_1 y} e^{-\lambda_2 y} = e^{-(\lambda_1+\lambda_2) y}$$
for $y \ge 0$, $Y$ is an exponential random variable with pdf
$$f_Y(y) = \begin{cases} (\lambda_1+\lambda_2) e^{-(\lambda_1+\lambda_2) y} & y \ge 0, \\ 0 & \text{otherwise.} \end{cases}$$
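Remark (not part of the original solution): the sketch below checks both this result and the answer to Problem 3, $P\{X_1 < X_2\} = \lambda_1/(\lambda_1 + \lambda_2)$; the rates $\lambda_1 = 2$ and $\lambda_2 = 3$ are arbitrary.

```python
import numpy as np

# Illustrative sketch (not part of the original solution): the waiting time
# Y = min(X1, X2) with X1 ~ Exp(lam1), X2 ~ Exp(lam2) should be Exp(lam1 + lam2),
# and (as in Problem 3) P{X1 < X2} should equal lam1/(lam1 + lam2).
rng = np.random.default_rng(4)
lam1, lam2, n = 2.0, 3.0, 1_000_000
x1 = rng.exponential(1 / lam1, n)
x2 = rng.exponential(1 / lam2, n)
y = np.minimum(x1, x2)

print("E[Y]      :", y.mean(), " theory:", 1 / (lam1 + lam2))
print("P{Y > 0.2}:", (y > 0.2).mean(), " theory:", np.exp(-(lam1 + lam2) * 0.2))
print("P{X1 < X2}:", (x1 < x2).mean(), " theory:", lam1 / (lam1 + lam2))
```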

Additional Exercises

Do not turn in solutions to these problems.

1. Independence. Let $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ be two independent discrete random variables.

(a) Show that any two events $\{X \in A\}$, $A \subseteq \mathcal{X}$, and $\{Y \in B\}$, $B \subseteq \mathcal{Y}$, are independent.
(b) Show that any two functions of $X$ and $Y$ separately are independent; that is, if $U = g(X)$ and $V = h(Y)$, then $U$ and $V$ are independent.

Solution:
(a) Recall that the probability of any event $\{X \in A\}$, $A \subseteq \mathcal{X}$, is given by $P\{X \in A\} = \sum_{x \in A} p_X(x)$. Because of the independence of $X$ and $Y$, we have
$$P\{X \in A, Y \in B\} = \sum_{x \in A} \sum_{y \in B} p_{X,Y}(x,y) = \sum_{x \in A} \sum_{y \in B} p_X(x) p_Y(y) = \sum_{x \in A} p_X(x) \sum_{y \in B} p_Y(y) = P\{X \in A\} P\{Y \in B\}.$$
Therefore, $\{X \in A\}$ and $\{Y \in B\}$ are independent.

(b) Let $A_u = \{x : g(x) \le u\}$ and $B_v = \{y : h(y) \le v\}$. Then the joint cdf of $U$ and $V$ is
$$F_{U,V}(u,v) = P\{U \le u, V \le v\} = P\{g(X) \le u, h(Y) \le v\} = P\{X \in A_u, Y \in B_v\}.$$
However, because of the independence of $X$ and $Y$,
$$F_{U,V}(u,v) = P\{X \in A_u, Y \in B_v\} = P\{X \in A_u\} P\{Y \in B_v\} = P\{g(X) \le u\} P\{h(Y) \le v\} = F_U(u) F_V(v),$$
so that $U$ and $V$ are independent.

2. Family planning. Alice and Bob choose a number $X$ at random from the set $\{2, 3, 4\}$ (so the outcomes are equally probable). If the outcome is $X = x$, they decide to have children until they have a girl or $x$ children, whichever comes first. Assume that each child is a girl with probability $1/2$ (independent of the number of children and the gender of other children). Let $Y$ be the number of children they will have.

(a) Find the conditional pmf $p_{Y|X}(y|x)$ for all possible values of $x$ and $y$.
(b) Find the pmf of $Y$.

Solution:
(a) Note that $Y \in \{1, 2, 3, 4\}$. The conditional pmf is as follows:
$$p_{Y|X}(1|2) = \tfrac{1}{2}, \quad p_{Y|X}(2|2) = \tfrac{1}{2},$$
$$p_{Y|X}(1|3) = \tfrac{1}{2}, \quad p_{Y|X}(2|3) = \tfrac{1}{4}, \quad p_{Y|X}(3|3) = \tfrac{1}{4},$$
$$p_{Y|X}(1|4) = \tfrac{1}{2}, \quad p_{Y|X}(2|4) = \tfrac{1}{4}, \quad p_{Y|X}(3|4) = \tfrac{1}{8}, \quad p_{Y|X}(4|4) = \tfrac{1}{8}.$$

(b) The pmf of $Y$ is
$$p_Y(1) = \sum_{x=2}^{4} p_X(x) p_{Y|X}(1|x) = \tfrac{1}{2}, \qquad p_Y(2) = \sum_{x=2}^{4} p_X(x) p_{Y|X}(2|x) = \tfrac{1}{3},$$
$$p_Y(3) = \sum_{x=3}^{4} p_X(x) p_{Y|X}(3|x) = \tfrac{1}{8}, \qquad p_Y(4) = p_X(4) p_{Y|X}(4|4) = \tfrac{1}{24}.$$

3. Joint cdf or not. Consider the function
$$G(x,y) = \begin{cases} 1 & \text{if } x + y \ge 0, \\ 0 & \text{otherwise.} \end{cases}$$
Can $G$ be a joint cdf for a pair of random variables? Justify your answer.

Solution: No. Note that for every $x$,
$$\lim_{y \to \infty} G(x,y) = 1,$$
so the corresponding marginal cdf would have to satisfy $F_X(x) = 1$ for every $x$. But for any genuine marginal cdf,
$$\lim_{x \to -\infty} F_X(x) = 0.$$
Therefore $G(x,y)$ is not a cdf.

Alternatively, assume that $G(x,y)$ is a joint cdf for $X$ and $Y$. Then
$$P\{-1 < X \le 1, -1 < Y \le 1\} = G(1,1) - G(-1,1) - G(1,-1) + G(-1,-1) = 1 - 1 - 1 + 0 = -1.$$
But this violates the property that the probability of any event must be nonnegative.
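Remark (not part of the original solution): the pmf $(1/2, 1/3, 1/8, 1/24)$ can be confirmed by directly simulating the experiment; the number of trials below is an arbitrary choice.

```python
import numpy as np

# Illustrative sketch (not part of the original solution): simulate the
# family-planning experiment and compare with p_Y = (1/2, 1/3, 1/8, 1/24).
rng = np.random.default_rng(5)
trials = 200_000
counts = np.zeros(5)
for _ in range(trials):
    x = rng.integers(2, 5)            # X uniform on {2, 3, 4}
    y = 0
    for _ in range(x):                # have children until a girl or x children
        y += 1
        if rng.random() < 0.5:        # this child is a girl
            break
    counts[y] += 1

print("empirical:", counts[1:] / trials)
print("theory   :", [1/2, 1/3, 1/8, 1/24])
```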

4. Ternary signaling. Let the signal $S$ be a random variable defined as follows:
$$S = \begin{cases} -1 & \text{with probability } 1/3, \\ 0 & \text{with probability } 1/3, \\ +1 & \text{with probability } 1/3. \end{cases}$$
The signal is sent over a channel with additive Laplacian noise $Z$, i.e., $Z$ is a Laplacian random variable with pdf
$$f_Z(z) = \frac{\lambda}{2} e^{-\lambda |z|}, \quad -\infty < z < \infty.$$
The signal $S$ and the noise $Z$ are assumed to be independent, and the channel output is their sum $Y = S + Z$.

(a) Find $f_{Y|S}(y|s)$ for $s = -1, 0, +1$. Sketch the conditional pdfs on the same graph.
(b) Find the optimal decoding rule $D(y)$ for deciding whether $S$ is $-1$, $0$, or $+1$. Give your answer in terms of ranges of values of $y$.
(c) Find the probability of decoding error for $D(y)$ in terms of $\lambda$.

Solution:
(a) We use a trick here that is used several times in the lecture notes. Since $Y = S + Z$ and $Z$ and $S$ are independent, the conditional pdf is
$$f_{Y|S}(y|s) = f_Z(y - s) = \frac{\lambda}{2} e^{-\lambda |y - s|}.$$
The plots are shown for $\lambda = 1$ in Figure 2.

[Figure 2: $f_{Y|S}(y|s)$ for $s = -1, 0, +1$ and $\lambda = 1$.]

(b) The optimal decoding rule is the MAP rule: $D(y) = \hat{s}$, where $\hat{s}$ maximizes
$$p(s|y) = \frac{f(y|s)\, p(s)}{f(y)}.$$
Since $p_S(s)$ is the same for $s = -1, 0, +1$, the MAP rule becomes the maximum-likelihood decoding rule $D(y) = \arg\max_s f(y|s)$. The conditional pdfs are plotted in Figure 2. By inspection, the ML rule reduces to
$$D(y) = \begin{cases} -1 & y < -\tfrac{1}{2}, \\ 0 & -\tfrac{1}{2} < y < +\tfrac{1}{2}, \\ +1 & y > +\tfrac{1}{2}. \end{cases}$$

(c) Inspection of Figure 2 shows how to calculate the probability of error:
$$P_e = \sum_i P\{\text{error} \mid i \text{ sent}\} P\{i \text{ sent}\} = \frac{1}{3} \sum_i P\{\text{error} \mid i \text{ sent}\}$$
$$= \tfrac{1}{3} \left( 1 - P\{-\tfrac{1}{2} < S + Z < +\tfrac{1}{2} \mid S = 0\} \right) + \tfrac{1}{3} P\{S + Z > -\tfrac{1}{2} \mid S = -1\} + \tfrac{1}{3} P\{S + Z < +\tfrac{1}{2} \mid S = +1\}$$
$$= \tfrac{1}{3} \left( 1 - P\{-\tfrac{1}{2} < Z < \tfrac{1}{2}\} \right) + \tfrac{1}{3} P\{Z > \tfrac{1}{2}\} + \tfrac{1}{3} P\{Z < -\tfrac{1}{2}\}$$
$$= \tfrac{1}{3} \left( P\{Z < -\tfrac{1}{2}\} + P\{Z > \tfrac{1}{2}\} \right) + \tfrac{1}{3} P\{Z > \tfrac{1}{2}\} + \tfrac{1}{3} P\{Z < -\tfrac{1}{2}\}$$
$$= \tfrac{2}{3} \left( P\{Z < -\tfrac{1}{2}\} + P\{Z > \tfrac{1}{2}\} \right) = \tfrac{4}{3} P\{Z > \tfrac{1}{2}\} \quad \text{(by symmetry)}$$
$$= \frac{4}{3} \int_{1/2}^{\infty} \frac{\lambda}{2} e^{-\lambda z} \, dz = \frac{2}{3} e^{-\lambda/2}.$$
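Remark (not part of the original solution): the error probability $\tfrac{2}{3} e^{-\lambda/2}$ can be confirmed by simulation. Note that numpy's Laplace scale parameter is $1/\lambda$ for the pdf $\tfrac{\lambda}{2} e^{-\lambda |z|}$; $\lambda = 1$ below is an arbitrary choice.

```python
import numpy as np

# Illustrative sketch (not part of the original solution): simulate the
# ternary signal through Laplacian noise and the threshold decoder, and
# compare the error rate with (2/3) e^{-lam/2}.
rng = np.random.default_rng(6)
lam, n = 1.0, 1_000_000
s = rng.choice([-1, 0, 1], size=n)
z = rng.laplace(loc=0.0, scale=1 / lam, size=n)        # pdf (lam/2) e^{-lam|z|}
y = s + z
d = np.where(y < -0.5, -1, np.where(y > 0.5, 1, 0))     # ML thresholds at +/- 1/2
print("empirical error:", (d != s).mean())
print("theory         :", (2 / 3) * np.exp(-lam / 2))
```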

5. Signal or no signal. Consider a communication system that is operated only from time to time. When the communication system is in the normal mode (denoted by $M = 1$), it transmits a random signal $S = X$ with
$$X = \begin{cases} +1 & \text{with probability } 1/2, \\ -1 & \text{with probability } 1/2. \end{cases}$$
When the system is in the idle mode (denoted by $M = 0$), it does not transmit any signal ($S = 0$). Both normal and idle modes occur with equal probability. Thus
$$S = \begin{cases} X & \text{with probability } 1/2, \\ 0 & \text{with probability } 1/2. \end{cases}$$
The receiver observes $Y = S + Z$, where the ambient noise $Z \sim \mathrm{Unif}[-1, 1]$ is independent of $S$.

(a) Find and sketch the conditional pdf $f_{Y|M}(y|1)$ of the receiver observation $Y$ given that the system is in the normal mode.
(b) Find and sketch the conditional pdf $f_{Y|M}(y|0)$ of the receiver observation $Y$ given that the system is in the idle mode.
(c) Find the optimal decoder $D(y)$ for deciding whether the system is normal or idle. Provide the answer in terms of intervals of $y$.
(d) Find the associated probability of error.

Solution:
(a) If $M = 1$,
$$Y = \begin{cases} +1 + Z & \text{with probability } 1/2, \\ -1 + Z & \text{with probability } 1/2. \end{cases}$$
Hence, we have
$$f_{Y|M}(y|1) = \tfrac{1}{2} f_Z(y-1) + \tfrac{1}{2} f_Z(y+1) = \begin{cases} \tfrac{1}{4} & -2 \le y \le 2, \\ 0 & \text{otherwise.} \end{cases}$$

(b) If $M = 0$, then $Y = Z$, so
$$f_{Y|M}(y|0) = f_Z(y) = \begin{cases} \tfrac{1}{2} & -1 \le y \le 1, \\ 0 & \text{otherwise.} \end{cases}$$

(c) Since both modes are equally likely, the optimal MAP decoding rule reduces to the ML rule
$$D(y) = \begin{cases} 0 & \text{if } f_{Y|M}(y|0) > f_{Y|M}(y|1), \\ 1 & \text{otherwise} \end{cases}
= \begin{cases} 0 & \text{if } -1 < y < 1, \\ 1 & \text{otherwise.} \end{cases}$$

(d) The probability of error is given by
$$P\{M \ne D(Y)\} = P\{M = 1, -1 < Y < 1\} = P\{M = 1\} P\{-1 < Y < 1 \mid M = 1\} = \tfrac{1}{2} \cdot \tfrac{1}{2} = \tfrac{1}{4}.$$
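Remark (not part of the original solution): a simulation of the two modes with $\mathrm{Unif}[-1,1]$ noise and the decoder above reproduces the error probability $1/4$; the sample size below is arbitrary.

```python
import numpy as np

# Illustrative sketch (not part of the original solution): simulate the
# normal/idle modes and the decoder D(y) = 0 if |y| < 1, 1 otherwise;
# the error probability should be 1/4.
rng = np.random.default_rng(7)
n = 1_000_000
m = rng.integers(0, 2, size=n)                  # mode: 1 = normal, 0 = idle
x = rng.choice([-1, 1], size=n)                 # signal used when in normal mode
s = np.where(m == 1, x, 0)
y = s + rng.uniform(-1, 1, size=n)
d = np.where(np.abs(y) < 1, 0, 1)
print("empirical error:", (d != m).mean(), " theory: 0.25")
```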