Basic concepts of probability theory

Queuing Theory and Stochastic Service Systems. Li Xia

Basic concepts of probability theory
Random variable: discrete / continuous random variables
Transforms: Z-transform, Laplace transform
Distributions: Geometric, mixed-geometric, Binomial, Poisson, Exponential, Erlang, Hyper-exponential, Phase-type, Markovian arrival process

Random variable X
Discrete random variable, if X takes discrete values
Continuous random variable, if X takes continuous values
Distribution function (cdf, cumulative distribution function): F(x) = Pr(X ≤ x)
Probability mass function (pmf): f(x) = Pr(X = x), if X is discrete
Probability density function (pdf): f(x) = dF(x)/dx, if X is continuous

Random variable
Mean: E(X)
Variance: Var(X), or σ²(X)
  σ²(X) = E{(X − E(X))²} = E(X²) − E²(X)
Standard deviation: σ(X)
Covariance of two random variables X, Y: Cov(X, Y) = E{(X − E(X))(Y − E(Y))}
Correlation coefficient of X, Y: r(X, Y) = Cov(X, Y)/(σ(X)σ(Y)), with −1 ≤ r(X, Y) ≤ 1

Coefficient of variation: c_X
  c_X = σ(X)/E(X)
  c_X = 0: deterministic
  c_X < 1: smooth
  c_X = 1: pure random
  c_X > 1: bursty
The coefficient of variation says more about traffic shape than the variance alone.
(Figure: sample traces over time t illustrating smooth, random, and bursty arrivals.)
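The four regimes above can be checked numerically. A minimal sketch in Python (sample sizes and parameters are illustrative): a constant, an Erlang-4 sum (smooth), an exponential (pure random), and a two-rate hyper-exponential mixture (bursty).

```python
import random
import statistics

def coeff_of_variation(samples):
    """c_X = standard deviation divided by mean."""
    return statistics.pstdev(samples) / statistics.fmean(samples)

random.seed(1)
n = 100_000
deterministic = [2.0] * n                                     # c_X = 0
smooth = [sum(random.expovariate(1.0) for _ in range(4))
          for _ in range(n)]                                  # Erlang-4: c_X = 0.5
pure_random = [random.expovariate(1.0) for _ in range(n)]     # Exponential: c_X = 1
bursty = [random.expovariate(5.0) if random.random() < 0.5
          else random.expovariate(0.2) for _ in range(n)]     # Hyper-exp: c_X > 1

for name, xs in [("deterministic", deterministic), ("smooth", smooth),
                 ("pure random", pure_random), ("bursty", bursty)]:
    print(f"{name:14s} c_X = {coeff_of_variation(xs):.3f}")
```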

Discrete Random Variables
Probability mass function (pmf), P(n):
  P(n) = Pr[X = n], where 0 ≤ P(n) ≤ 1 and Σ_n P(n) = 1
Cumulative distribution function (cdf), F(x):
  F(x) = Pr[X ≤ x] = Σ_{n ≤ x} P(n)
Suppose the random variable X is positive; then
  F(0) = 0, F(∞) = 1
  F(b) ≥ F(a), if b ≥ a
  F(b) − F(a) = Pr[a < X ≤ b]

Discrete Random Variables: Mean and Variance
Mean: x̄ = Σ_n n P(n)
Variance: σ² = Σ_n (n − x̄)² P(n)

Continuous Random Variables
Probability density function (pdf), f(x):
  f(x) ≥ 0, ∫₀^∞ f(x) dx = 1
  Pr[a ≤ X ≤ b] = ∫_a^b f(x) dx
Cumulative distribution function (cdf), F(x):
  F(x) = Pr[X ≤ x] = ∫₀^x f(y) dy
  f(x) = dF(x)/dx

Continuous Random Variables: Mean and Variance
Mean: x̄ = ∫₀^∞ x f(x) dx = ∫₀^∞ x dF(x) = ∫₀^∞ (1 − F(x)) dx
Variance: σ² = ∫₀^∞ (x − x̄)² f(x) dx = ∫₀^∞ x² f(x) dx − x̄²

Z-transform for discrete distributions
The Z-transform is also called the generating function.
P(z): Z-transform of a discrete r.v. X with p(n) = Pr(X = n), n = 0, 1, 2, …
  P(z) = E(z^X) = Σ_{n=0}^∞ p(n) zⁿ
Properties:
  P(0) = p(0), P(1) = 1, P′(1) = E(X)
  P″(1) = ?  (hint: P″(1) = E[X(X − 1)])
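As a sanity check on these properties, here is a small sketch for the geometric distribution (an illustrative p = 0.3), whose transform is P(z) = pz/(1 − (1 − p)z); the derivative at z = 1 is taken numerically.

```python
def geom_ztransform(z, p=0.3):
    # P(z) = E(z^X) for the geometric pmf p(1-p)^{k-1}, k = 1, 2, ...
    return p * z / (1.0 - (1.0 - p) * z)

h = 1e-6
P_at_1 = geom_ztransform(1.0)                                       # P(1) = 1
dP_at_1 = (geom_ztransform(1 + h) - geom_ztransform(1 - h)) / (2 * h)
print(P_at_1, dP_at_1)   # P'(1) = E(X) = 1/p
```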

Laplace transform for continuous distributions
F*(s): Laplace transform of a continuous r.v. X with pdf f(x) and cdf F(x), x ≥ 0
  F*(s) = E(e^{−sX}) = ∫₀^∞ e^{−sx} f(x) dx = ∫₀^∞ e^{−sx} dF(x)
  F*(s) = s ∫₀^∞ e^{−sx} F(x) dx
Properties (a shortcut to calculate the k-th moment of X):
  F*(0) = 1, F*′(0) = −E(X), F*⁽ᵏ⁾(0) = (−1)ᵏ E(Xᵏ)
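The moment shortcut can likewise be verified numerically; a sketch for X ~ Exp(μ), whose transform is F*(s) = μ/(s + μ) (μ = 2 is illustrative), with derivatives at s = 0 taken by finite differences.

```python
mu = 2.0

def lap(s):
    # F*(s) = E(e^{-sX}) for X ~ Exp(mu)
    return mu / (s + mu)

h = 1e-4
m1 = -(lap(h) - lap(-h)) / (2 * h)             # -F*'(0)  = E(X)   = 1/mu
m2 = (lap(h) - 2 * lap(0.0) + lap(-h)) / h**2  # F*''(0)  = E(X^2) = 2/mu^2
print(m1, m2)
```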

Geometric distribution
p: success probability in a Bernoulli trial
X: the number of Bernoulli trials needed to get the first success
  g(k; p) = Pr{X = k} = p(1 − p)^{k−1}, k = 1, 2, …
Properties:
  E(X) = 1/p
  Var(X) = (1 − p)/p²
  c_X = (1 − p)^{1/2}
(Figure: curve of the probability mass function.)
The geometric distribution is the discrete version of the exponential distribution.
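A quick simulation of the defining experiment (counting Bernoulli trials until the first success; p = 0.25 and the sample size are illustrative) reproduces these moments.

```python
import random
import statistics

random.seed(0)
p = 0.25

def trials_to_first_success():
    k = 1
    while random.random() >= p:   # failure with probability 1 - p
        k += 1
    return k

xs = [trials_to_first_success() for _ in range(100_000)]
print(statistics.fmean(xs), statistics.pvariance(xs))  # expect 1/p = 4, (1-p)/p^2 = 12
```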

Mixed geometric distribution
n sets of independent Bernoulli trials, with success probability p_i respectively, i = 1, 2, …, n
Mixing probability θ_i, i = 1, 2, …, n
X: the number of Bernoulli trials to the first success within one set, chosen with probability θ_i
  pmf: g(k; θ, p) = Σ_{i=1}^n θ_i p_i (1 − p_i)^{k−1}
  E(X) = Σ_{i=1}^n θ_i / p_i
The mixed geometric distribution is the discrete version of the hyper-exponential distribution.

Binomial distribution
p: success probability in a Bernoulli trial
X: the number of successes during n Bernoulli trials
  Pr(r; n, p) = Pr[X = r] = C(n, r) p^r (1 − p)^{n−r}
Property: E(X) = np

Negative binomial distribution (also called Pascal distribution)
Run Bernoulli trials with success probability p and stop once a predefined number k of failures has occurred. The random number of successes X obeys the negative binomial distribution:
  nb(r; k, p) = Pr[X = r] = C(k + r − 1, r) p^r (1 − p)^k
Not required; for your reference.

Poisson distribution
A Poisson random variable X with parameter λ has probability distribution
  P(X = n) = (λⁿ / n!) e^{−λ}, n = 0, 1, 2, …
For the Poisson distribution, it holds that
  E(X) = σ²(X) = λ, c_X² = 1/λ

Exponential distribution (with parameter μ)
pdf and cdf:
  f(x) = μ e^{−μx}, F(x) = 1 − e^{−μx}, x ≥ 0
Moments:
  x̄ = 1/μ, σ_x² = 1/μ², c_X = 1 (pure random)
(Figure: pdf f(x) decaying from μ; cdf F(x) rising from 0 to 1.)

Exponential distribution
If X₁, X₂, …, X_n are independent exponential random variables with parameters μ₁, μ₂, …, μ_n, then Y = min(X₁, X₂, …, X_n) is an exponential random variable with parameter μ = μ₁ + μ₂ + … + μ_n.
How about Z = max(X₁, X₂, …, X_n)? Prove it as homework.
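A simulation sketch of the minimum property (rates 1, 2, 3 are illustrative): the minimum should behave like an exponential with rate 1 + 2 + 3 = 6, both in its mean and in its tail.

```python
import math
import random
import statistics

random.seed(42)
rates = [1.0, 2.0, 3.0]
ys = [min(random.expovariate(r) for r in rates) for _ in range(200_000)]

print(statistics.fmean(ys))               # expect 1/6
tail = sum(y > 0.2 for y in ys) / len(ys)
print(tail, math.exp(-6 * 0.2))           # empirical tail vs e^{-6 * 0.2}
```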

The exponential distribution has the memoryless property:
  P(X > x + t | X > t) = P(X > x + t)/P(X > t) = e^{−μ(x+t)}/e^{−μt} = e^{−μx} = P(X > x)
  P(t < X ≤ x + t | X > t) = P(X ≤ x) = F(x) = 1 − e^{−μx}
Memoryless: the future state only depends on the current state, independent of the history.

Erlang distribution
X: Erlang-k random variable, X = X₁ + X₂ + … + X_k, where the X_i are independent random variables obeying the exponential distribution with parameter μ.
Denote this distribution as E_k(μ). Its pdf is
  f(x) = μ (μx)^{k−1} e^{−μx} / (k − 1)!, x ≥ 0
Its cdf is
  F(x) = 1 − Σ_{j=0}^{k−1} (μx)^j e^{−μx} / j!

Erlang distribution
Mean, variance, and squared coefficient of variation:
  E(X) = k/μ, Var(X) = k/μ², c_X² = 1/k ≤ 1
Models smooth data traffic (e.g., service times in a barber shop).
(Figure: pdf curves of E_k(μ) for several k.)
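The Erlang-k moments (mean k/μ, squared coefficient of variation 1/k) can be checked by simulating the sum of k independent exponentials; a sketch with illustrative k = 4, μ = 2.

```python
import random
import statistics

random.seed(7)
k, mu = 4, 2.0
xs = [sum(random.expovariate(mu) for _ in range(k)) for _ in range(100_000)]

mean = statistics.fmean(xs)
scv = statistics.pvariance(xs) / mean**2
print(mean, scv)   # expect k/mu = 2 and 1/k = 0.25
```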

Hyper-exponential distribution
X selects, with probability p_i, the exponential r.v. X_i with parameter μ_i, i = 1, …, k. Denote this distribution as H_k.
Its pdf is
  f(x) = Σ_{i=1}^k p_i μ_i e^{−μ_i x}, x ≥ 0
Its mean is
  E(X) = Σ_{i=1}^k p_i / μ_i
Its coefficient of variation satisfies c_X ≥ 1.
Models bursty data traffic.

Phase-type distribution (discrete-time case)
A discrete PH-type distribution is the distribution of the first passage time to the absorbing state of a discrete-time Markov chain.
Two parameters: the transition probability matrix T among the n transient phases, and the initial state probability vector α.
  cdf: F(k) = 1 − α Tᵏ e
  pmf: f(k) = α T^{k−1} t, where t is the exit vector, with Te + t = e
Example (n = 2 phases):
  P = [ T t ]  =  [ 0.3 0.4 0.3 ]
      [ 0 1 ]     [ 0.2 0.6 0.2 ]
                  [ 0   0   1   ]
  α = [0.3, 0.7]

Phase-type distribution (discrete-time case)
Example: Geometric distribution, 1 phase
  P = [ T t ]  =  [ 1−p p ]      α = [1]
      [ 0 1 ]     [ 0   1 ]
  pmf: f(k) = α T^{k−1} t = (1 − p)^{k−1} p
Example: Mixed-geometric distribution, n phases (here n = 2)
  P = [ 0.3 0   0.7 ]
      [ 0   0.8 0.2 ]
      [ 0   0   1   ]
  α = [0.5, 0.5]
  pmf: f(k) = α T^{k−1} t = 0.5 · 0.3^{k−1} · 0.7 + 0.5 · 0.8^{k−1} · 0.2
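The pmf f(k) = α T^{k−1} t can be evaluated directly by matrix powers; a sketch (using numpy) for the two-phase mixed-geometric example, checked against its closed form.

```python
import numpy as np

alpha = np.array([0.5, 0.5])
T = np.array([[0.3, 0.0],
              [0.0, 0.8]])
t = np.array([0.7, 0.2])   # exit vector: t = e - T e

def ph_pmf(k):
    # f(k) = alpha T^{k-1} t
    return float(alpha @ np.linalg.matrix_power(T, k - 1) @ t)

def closed_form(k):
    return 0.5 * 0.3**(k - 1) * 0.7 + 0.5 * 0.8**(k - 1) * 0.2

for k in range(1, 6):
    print(k, ph_pmf(k), closed_form(k))
total = sum(ph_pmf(k) for k in range(1, 200))
print(total)   # the pmf sums (essentially) to 1
```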

Phase-type distribution (continuous-time case)
A continuous PH-type distribution is the distribution of the first passage time to the absorbing state of a continuous-time Markov process.
Two parameters: the transition rate matrix T among the transient phases, and the initial state probability vector α.
  cdf: F(x) = 1 − α e^{Tx} e
Example:
  B = [ T t ]  =  [ −5  4  1 ]
      [ 0 0 ]     [  2 −6  4 ]
                  [  0  0  0 ]
  α = [0.8, 0.2], with Te + t = 0
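The "first passage time" interpretation can be simulated directly; a sketch for the two-phase example with T = [[−5, 4], [2, −6]], t = [1, 4], α = [0.8, 0.2], whose analytic mean is α(−T)⁻¹e = 9.4/22 ≈ 0.427.

```python
import random

random.seed(3)
T = [[-5.0, 4.0],
     [ 2.0, -6.0]]
t = [1.0, 4.0]       # exit rates; each row of [T t] sums to 0
alpha = [0.8, 0.2]

def sample_first_passage():
    x = 0.0
    phase = 0 if random.random() < alpha[0] else 1
    while True:
        rate = -T[phase][phase]           # total rate out of this phase
        x += random.expovariate(rate)     # exponential holding time
        if random.random() < t[phase] / rate:
            return x                      # jumped to the absorbing state
        phase = 1 - phase                 # with 2 transient phases, the only other jump

n = 200_000
mean = sum(sample_first_passage() for _ in range(n)) / n
print(mean)   # analytic mean: alpha (-T)^{-1} e = 9.4/22
```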

Phase-type distribution (continuous-time case): examples
Exponential distribution with rate λ:
  α = [1], T = [−λ], so F(x) = 1 − α e^{Tx} e = 1 − e^{−λx}
Erlang distribution with (m, λ):
  α = [1, 0, 0, …, 0]
  T = [ −λ  λ           ]
      [    −λ  λ        ]
      [        …  …     ]
      [           −λ  λ ]
      [              −λ ]
Hyper-exponential distribution:
  α = [θ₁, θ₂, …, θ_n]
  T = diag(−λ₁, −λ₂, …, −λ_n)

Coxian distribution
A special case of the PH-type distribution with k phases: phase i has rate μ_i; on leaving phase i, the process moves to phase i + 1 with probability p_i or absorbs with probability 1 − p_i (from the last phase k it always absorbs).
  α = [1, 0, 0, …, 0]
  B = [ T t ], where
      [ 0 0 ]
  T = [ −μ₁  p₁μ₁                        ]
      [     −μ₂  p₂μ₂                    ]
      [          …    …                  ]
      [         −μ_{k−1}  p_{k−1}μ_{k−1} ]
      [                   −μ_k           ]
  t = [ (1−p₁)μ₁, (1−p₂)μ₂, …, (1−p_{k−1})μ_{k−1}, μ_k ]ᵀ

Pareto distribution
Pareto(α): a distribution with a power-law tail.
  pdf: f(x) = α x^{−α−1}, x ≥ 1; otherwise f(x) = 0
  cdf: F(x) = 1 − x^{−α}, x ≥ 1; otherwise F(x) = 0
Usually 0 < α < 2.
The Pareto distribution decays much more slowly than the exponential distribution; we say it has a heavy tail (or fat tail).
Heavy tails are very important for practical data traffic.
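The slow decay becomes concrete when tail probabilities are compared against an exponential with the same mean; a small sketch with the illustrative index α = 1.5 (mean α/(α − 1) = 3). For large x the Pareto tail dominates by many orders of magnitude.

```python
import math

a = 1.5                 # Pareto index, 0 < a < 2
mean = a / (a - 1)      # mean of Pareto(a) on x >= 1

for x in [10.0, 100.0, 1000.0]:
    pareto_tail = x ** (-a)          # P(X > x) = x^{-a}
    exp_tail = math.exp(-x / mean)   # exponential with the same mean
    print(x, pareto_tail, exp_tail, pareto_tail / exp_tail)
```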

Relation to the Central Limit Theorem
CLT: if the X_i are i.i.d. r.v.'s with mean μ and variance σ², define Z_n = (X₁ + … + X_n − nμ)/(σ√n); then Z_n converges to the standard normal distribution.
The CLT explains why many natural phenomena obey the normal distribution.
Binomial(n, p) is a sum of i.i.d. Bernoulli(p) r.v.'s, so it converges to a normal distribution when n is large.
Poisson(λ) can be approximated by a normal distribution with mean λ and variance λ.
Binomial and Poisson can thus be viewed as discrete approximations of the normal distribution (plot their pmfs to see this).
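A sketch comparing the Binomial(100, 0.5) pmf with the matching normal density (mean np, variance np(1 − p)); the parameters are illustrative, and the two curves agree closely around the center.

```python
import math

n, p = 100, 0.5
mu, var = n * p, n * p * (1 - p)

def binom_pmf(k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x):
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

worst = max(abs(binom_pmf(k) - normal_pdf(k)) for k in range(30, 71))
print(binom_pmf(50), normal_pdf(50), worst)
```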

An explanation of Additive White Gaussian Noise (AWGN)
Setting: transmit a signal over a channel with 100 i.i.d. low-noise sources, each uniformly distributed on [−1, 1]. Noise is additive: if the total noise level exceeds 10 or falls below −10, it corrupts the signal; otherwise there is no problem. Calculate the probability that the signal is not corrupted.
Calculation:
  X ~ Unif(−1, 1), E(X) = 0, Var(X) = 1/3
  S₁₀₀ ≈ Norm(0, 100/3) by the CLT
  P(−10 < S₁₀₀ < 10) = Φ(√3) − Φ(−√3) = 2Φ(√3) − 1 ≈ 2(0.9584) − 1 ≈ 0.9167
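The CLT answer can be checked by brute force; a simulation sketch (the trial count is illustrative). Note that 2Φ(√3) − 1 = erf(√3/√2).

```python
import math
import random

random.seed(5)
trials = 20_000
ok = sum(abs(sum(random.uniform(-1.0, 1.0) for _ in range(100))) < 10.0
         for _ in range(trials))

estimate = ok / trials
clt_answer = math.erf(math.sqrt(3.0) / math.sqrt(2.0))   # = 2*Phi(sqrt(3)) - 1
print(estimate, clt_answer)
```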

Sum of a random number of r.v.'s
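For a sum S = X₁ + … + X_N where the number of terms N is itself random and independent of the i.i.d. X_i, Wald's identity gives E(S) = E(N)·E(X). A simulation sketch with N ~ Geometric(p) and X_i ~ Exp(μ) (p = 0.5, μ = 1 are illustrative):

```python
import random
import statistics

random.seed(11)
p, mu = 0.5, 1.0

def random_sum():
    # N ~ Geometric(p): keep adding terms until a "success" stops us,
    # so at least one X is always included.
    s = random.expovariate(mu)
    while random.random() >= p:
        s += random.expovariate(mu)
    return s

xs = [random_sum() for _ in range(100_000)]
print(statistics.fmean(xs))   # Wald: E(N) * E(X) = (1/p) * (1/mu) = 2
```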

A map of the relations among all these distributions
(Map drawn on the board.)
The PH-type distribution can model almost any distribution; it converts a stochastic process into a Markovian model and is widely used in queueing systems, reliability systems, regenerative systems, etc.
Discrete version vs. continuous version:
  Geometric vs. Exponential
  Mixed geometric vs. Hyper-exponential
  Binomial, Poisson vs. Normal
  Erlang vs. Normal (both continuous r.v.'s; Erlang-k approaches Normal for large k by the CLT)