Basics of Stochastic Modeling, Part II: Continuous Random Variables

Sandip Chakraborty
Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur
August 10, 2016

Reference Book: Probability & Statistics with Reliability, Queuing and Computer Science Applications, by Kishor S. Trivedi

Sandip Chakraborty (IIT Kharagpur), CS 60019, August 10, 2016
Random Variables in Continuous Space

When the sample space S is non-denumerable, not every subset of the sample space is an event that can be assigned a probability. If X is to be a random variable, we require that P(X ≤ x) be well defined for every real number x. Let F denote the class of measurable subsets of S.

A (continuous) random variable X on a probability space (S, F, P) is a function X : S → R that assigns a real number X(s) to each sample point s ∈ S, such that for every real number x, the set {s | X(s) ≤ x} is a member of F.

The cumulative distribution function F_X of a random variable X is defined to be the function F_X(x) = P(X ≤ x) for -∞ < x < ∞.
Cumulative Distribution Function

A continuous random variable X is characterized by a distribution function F_X(x) that is a continuous function of x for all -∞ < x < ∞.

The continuous uniform distribution is given by

F(x) = 0,                 x < a
F(x) = (x - a)/(b - a),   a ≤ x < b
F(x) = 1,                 x ≥ b
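As a quick sketch, the piecewise uniform CDF above can be coded directly; the endpoints a = 2, b = 5 below are arbitrary illustration values, not from the slides:

```python
def uniform_cdf(x, a, b):
    """CDF of the continuous uniform distribution on [a, b]."""
    if x < a:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return 1.0

# Illustrative endpoints a = 2, b = 5
print(uniform_cdf(1.0, 2.0, 5.0))   # below a: 0.0
print(uniform_cdf(3.5, 2.0, 5.0))   # midpoint: 0.5
print(uniform_cdf(6.0, 2.0, 5.0))   # above b: 1.0
```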
Probability Density Function

For a continuous random variable X, f(x) = dF(x)/dx is called the probability density function (pdf) of X.

The pdf enables us to obtain the CDF by integrating under the pdf:

F_X(x) = P(X ≤ x) = ∫_{-∞}^{x} f_X(t) dt,  -∞ < x < ∞
Probability Density Function

We can also obtain other probabilities of interest, such as

P(X ∈ (a, b]) = P(a < X ≤ b) = P(X ≤ b) - P(X ≤ a)
              = ∫_{-∞}^{b} f_X(t) dt - ∫_{-∞}^{a} f_X(t) dt
              = ∫_{a}^{b} f_X(t) dt

f(x) has the following properties:
1) f(x) ≥ 0 for all x
2) ∫_{-∞}^{∞} f(x) dx = 1
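The identity P(a < X ≤ b) = ∫_a^b f(t) dt can be checked numerically. A minimal sketch, assuming the hypothetical density f(x) = 2x on [0, 1], whose CDF is F(x) = x²:

```python
def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Hypothetical density f(x) = 2x on [0, 1], with CDF F(x) = x^2
f = lambda x: 2.0 * x
F = lambda x: x * x

a, b = 0.2, 0.7
p_from_cdf = F(b) - F(a)          # P(X <= b) - P(X <= a)
p_from_pdf = integrate(f, a, b)   # integral of f over (a, b]
print(abs(p_from_cdf - p_from_pdf) < 1e-9)  # the two agree
```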
Practice Problem

Show that the CDF of a continuous random variable does not have any jumps; i.e., P(X = c) is zero.

Proof:
P(X = c) = P(c ≤ X ≤ c) = ∫_{c}^{c} f_X(y) dy = 0

Show that for a continuous random variable X,

P(a ≤ X ≤ b) = P(a < X ≤ b) = P(a ≤ X < b) = P(a < X < b) = ∫_{a}^{b} f_X(x) dx
The Exponential Distribution

The exponential distribution function is given by

F(x) = 1 - e^{-λx},  if 0 ≤ x < ∞
F(x) = 0,            otherwise

The pdf of an exponential random variable X is given as

f(x) = λe^{-λx},  if 0 ≤ x < ∞
f(x) = 0,         otherwise
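A small sketch of the exponential CDF and pdf, with a numerical check that f is the derivative of F; the rate λ = 1.5 and point x = 0.8 are arbitrary choices:

```python
import math

def exp_cdf(x, lam):
    """F(x) = 1 - e^(-lam*x) for x >= 0, else 0."""
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def exp_pdf(x, lam):
    """f(x) = lam * e^(-lam*x) for x >= 0, else 0."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

# f should match the central-difference derivative of F
lam, x, h = 1.5, 0.8, 1e-6
numeric_deriv = (exp_cdf(x + h, lam) - exp_cdf(x - h, lam)) / (2 * h)
print(abs(numeric_deriv - exp_pdf(x, lam)) < 1e-6)  # True
```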
Markov Property of Exponential Distribution

Suppose we know that X, an exponential random variable, exceeds some given value t; i.e., X > t. (Example: X is the lifetime of a component, and we have observed that the component has already been operating for t hours.)

We are interested in the distribution of Y = X - t: the remaining or residual lifetime.

Let G_t(y) denote the conditional probability of Y ≤ y given that X > t. Therefore,

G_t(y) = P(Y ≤ y | X > t)

Show that G_t(y) is independent of t.
Markov Property of Exponential Distribution

G_t(y) = P(Y ≤ y | X > t)
       = P(X - t ≤ y | X > t)
       = P(X ≤ y + t | X > t)
       = P(X ≤ y + t and X > t) / P(X > t)
       = P(t < X ≤ y + t) / P(X > t)
       = ∫_{t}^{y+t} f(x) dx / ∫_{t}^{∞} f(x) dx
Markov Property of Exponential Distribution

G_t(y) = ∫_{t}^{y+t} λe^{-λx} dx / ∫_{t}^{∞} λe^{-λx} dx
       = e^{-λt}(1 - e^{-λy}) / e^{-λt}
       = 1 - e^{-λy}

Thus G_t(y) is independent of t and is identical to the original exponential distribution of X. The distribution of the remaining life does not depend on how long the component has been operating.
Markov Property of Exponential Distribution

If the inter-arrival times for network traffic are exponentially distributed, then the Markov (memoryless) property implies that the time we must wait for a new packet arrival is statistically independent of how long we have already spent waiting for it!
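The memoryless property is easy to check by simulation: the empirical distribution of the residual life X - t given X > t should match the unconditional distribution of X. A sketch with λ = 1 and arbitrary window y = 0.5 and elapsed time t = 1.0:

```python
import math
import random

random.seed(42)
lam, n = 1.0, 200_000
samples = [random.expovariate(lam) for _ in range(n)]

y, t = 0.5, 1.0
# Unconditional probability P(X <= y)
p_uncond = sum(x <= y for x in samples) / n
# Residual-life probability P(X - t <= y | X > t)
residuals = [x - t for x in samples if x > t]
p_resid = sum(r <= y for r in residuals) / len(residuals)

# Both estimates should be close to 1 - e^(-lam*y)
print(p_uncond, p_resid, 1.0 - math.exp(-lam * y))
```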
An Interesting Statistical Observation

If X is a non-negative continuous random variable with the Markov property, then show that the distribution of X must be exponential.

Proof:
For a random variable with the Markov property,

P(t < X ≤ y + t) / P(X > t) = P(X ≤ y) = P(0 < X ≤ y)

Therefore,

[F_X(y + t) - F_X(t)] / [1 - F_X(t)] = F_X(y) - F_X(0)
F_X(y + t) - F_X(t) = [1 - F_X(t)][F_X(y) - F_X(0)]
Since F_X(0) = 0, rearranging the equation (and using the symmetry of the identity in y and t) we can write

[F_X(y + t) - F_X(y)] / t = [F_X(t) / t] [1 - F_X(y)]

Taking the limit on both sides,

lim_{t→0} [F_X(y + t) - F_X(y)] / t = lim_{t→0} [F_X(0 + t) - F_X(0)] / t · [1 - F_X(y)]

Let F'_X(y) = dF_X(y)/dy. Then,

F'_X(y) = F'_X(0)[1 - F_X(y)]

Let R_X(y) = 1 - F_X(y); then R'_X(y) = -F'_X(y). Replacing the values,

R'_X(y) = R'_X(0) R_X(y)
The solution to this differential equation is given by

R_X(y) = K e^{R'_X(0) y}

Now R'_X(0) = -F'_X(0) = -f_X(0), which is a constant, and R_X(0) = 1 gives K = 1. Denoting f_X(0) by the constant λ, we get

R_X(y) = e^{-λy}

Therefore,

F_X(y) = 1 - e^{-λy}
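As a sanity check on the derivation, one can verify numerically that the exponential survival function R(y) = e^{-λy} satisfies the functional equation R'(y) = R'(0)·R(y) from the proof above; λ = 1.3 and y = 0.8 are arbitrary values:

```python
import math

lam = 1.3  # arbitrary rate for illustration
R = lambda y: math.exp(-lam * y)  # survival function of the exponential

def deriv(g, y, h=1e-6):
    """Central-difference approximation of g'(y)."""
    return (g(y + h) - g(y - h)) / (2.0 * h)

# The differential equation R'(y) = R'(0) * R(y) holds
y = 0.8
lhs = deriv(R, y)
rhs = deriv(R, 0.0) * R(y)
print(abs(lhs - rhs) < 1e-6)  # True
```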
Order Statistics

Let X_1, X_2, ..., X_n be mutually independent, identically distributed continuous random variables, each having the distribution function F and density f.

Let Y_1, Y_2, ..., Y_n be the random variables obtained by permuting X_1, X_2, ..., X_n so as to be in increasing order:

Y_1 = min{X_1, X_2, ..., X_n}
Y_n = max{X_1, X_2, ..., X_n}

The random variable Y_k is called the kth-order statistic.

Example: Let X_i be the lifetime of the ith component in a system of n independent components.
If the system is a series system, then Y_1 is the overall system lifetime.
If the system is a parallel system, then Y_n is the overall system lifetime.
Y_{n-m+1} is the lifetime of an m-out-of-n system (one that can sustain n - m failures).
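The series/parallel lifetimes can be illustrated by simulation. A sketch with n = 5 hypothetical Exp(1) component lifetimes; for exponentials, Y_1 is itself Exp(nλ) with mean 1/(nλ) = 0.2:

```python
import random

random.seed(7)
lam, n, trials = 1.0, 5, 100_000

min_sum, max_sum = 0.0, 0.0
for _ in range(trials):
    lifetimes = sorted(random.expovariate(lam) for _ in range(n))
    min_sum += lifetimes[0]    # Y_1: series-system lifetime
    max_sum += lifetimes[-1]   # Y_n: parallel-system lifetime
    # lifetimes[n - m] would be Y_{n-m+1}, the m-out-of-n system lifetime

avg_series = min_sum / trials    # expected ~ 1/(n*lam) = 0.2
avg_parallel = max_sum / trials  # expected ~ sum_{k=1..n} 1/(k*lam) ~ 2.28
print(avg_series, avg_parallel)
```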
Joint Distribution Function

The joint (or compound) distribution function of random variables X and Y is defined by

F_{X,Y}(x, y) = P(X ≤ x, Y ≤ y),  -∞ < x < ∞; -∞ < y < ∞

The joint (compound) probability density function of X and Y is given by f(x, y); we can often find a function f(x, y) such that

F_{X,Y}(x, y) = ∫_{-∞}^{y} ∫_{-∞}^{x} f(u, v) du dv

Therefore,

f(x, y) = ∂²F_{X,Y}(x, y) / ∂x∂y
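The relation f(x, y) = ∂²F/∂x∂y can be checked numerically. A sketch assuming (purely for illustration) that X and Y are independent Exp(1), so F(x, y) = (1 - e^{-x})(1 - e^{-y}) and f(x, y) = e^{-x}e^{-y}:

```python
import math

# Joint CDF of two independent Exp(1) variables (illustrative assumption)
F = lambda x, y: (1.0 - math.exp(-x)) * (1.0 - math.exp(-y))
f = lambda x, y: math.exp(-x) * math.exp(-y)  # the corresponding joint pdf

def mixed_partial(F, x, y, h=1e-4):
    """Central-difference approximation of d^2 F / dx dy."""
    return (F(x + h, y + h) - F(x + h, y - h)
            - F(x - h, y + h) + F(x - h, y - h)) / (4.0 * h * h)

x, y = 0.7, 1.2
print(abs(mixed_partial(F, x, y) - f(x, y)) < 1e-6)  # True
```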
Expectation

The expectation, E[X], of a random variable X is defined by

E[X] = Σ_i x_i P(x_i),        if X is discrete
E[X] = ∫_{-∞}^{∞} x f(x) dx,  if X is continuous

The expectation for a random variable X with exponential density:

E[X] = ∫_{-∞}^{∞} x f(x) dx = ∫_{0}^{∞} λx e^{-λx} dx

Let u = λx; then du = λ dx. Therefore,

E[X] = (1/λ) ∫_{0}^{∞} u e^{-u} du = 1/λ
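The substitution above can be cross-checked numerically; λ = 2.0 is an arbitrary rate, and the improper integral is truncated at x = 40, where the exponential tail is negligible:

```python
import math

def integrate(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lam = 2.0  # arbitrary rate for illustration
mean = integrate(lambda x: x * lam * math.exp(-lam * x), 0.0, 40.0)
print(abs(mean - 1.0 / lam) < 1e-4)  # E[X] = 1/lambda
```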
Central Moments

We define the kth central moment, μ_k, of the random variable X by

μ_k = E[(X - E[X])^k]

The zeroth central moment is the total probability (i.e., one), the first central moment is zero, the second central moment is the variance, and the third central moment (normalized by the cube of the standard deviation) gives the skewness.
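A sketch estimating the first two sample central moments; the Exp(1) samples and fixed seed are purely illustrative, and for Exp(1) the variance is 1/λ² = 1:

```python
import random

random.seed(1)
xs = [random.expovariate(1.0) for _ in range(200_000)]
mean = sum(xs) / len(xs)

def central_moment(data, k, mu):
    """k-th sample central moment about mu."""
    return sum((x - mu) ** k for x in data) / len(data)

m1 = central_moment(xs, 1, mean)  # ~0 by construction
m2 = central_moment(xs, 2, mean)  # sample variance; ~1 for Exp(1)
print(m1, m2)
```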
Mean and Median

The mean is the statistical average of all data points: the expected value E[X].

The median is the point that separates the lower half of the data points from the upper half. Statistically, the median is a point m such that

P(X ≤ m) ≥ 1/2  and  P(X ≥ m) ≥ 1/2

For an absolutely continuous probability density function f(x),

P(X ≤ m) = P(X ≥ m) = ∫_{-∞}^{m} f(x) dx = 1/2
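For the exponential distribution the median has the closed form m = ln(2)/λ, which is smaller than the mean 1/λ. A quick check, with λ = 2.0 an arbitrary choice:

```python
import math

lam = 2.0  # arbitrary rate for illustration
F = lambda x: 1.0 - math.exp(-lam * x)  # exponential CDF

median = math.log(2.0) / lam  # solves F(m) = 1/2
mean = 1.0 / lam

print(abs(F(median) - 0.5) < 1e-12)  # the median splits the mass in half
print(median < mean)                 # the exponential is right-skewed
```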
Mode and Skewness

The mode is the value that appears most often in a data set.
For a discrete distribution: the value x at which the pmf takes its maximum value.
For a continuous distribution: the value x at which the pdf takes its maximum value.
Thank You