Basic concepts of probability theory


Basic concepts of probability theory
- Random variable: discrete / continuous random variable
- Transform: Z-transform, Laplace transform
- Distribution: geometric, mixed-geometric, binomial, Poisson, exponential, Erlang, hyper-exponential, phase-type, Markovian arrival process

Random variable
- X denotes a random variable
  - Discrete random variable, if X takes discrete values
  - Continuous random variable, if X takes continuous values
- Distribution function (cdf, cumulative distribution function): F(x) = Pr(X ≤ x)
- Probability mass function (pmf), for discrete X: p(x) = Pr(X = x)
- Probability density function (pdf), for continuous X: f(x) = dF(x)/dx

Random variable
- Mean: E(X)
- Variance: Var(X), or σ²(X); σ²(X) = E{(X − E(X))²} = E(X²) − E²(X)
- Standard deviation: σ(X)
- Covariance of two random variables X, Y: Cov(X,Y) = E{(X − E(X))(Y − E(Y))}
- Correlation coefficient of X, Y: r(X,Y) = Cov(X,Y) / (σ(X)σ(Y)), with −1 ≤ r(X,Y) ≤ 1

Coefficient of variation
- c_X = σ(X) / E(X)
- c_X = 0: deterministic
- c_X < 1: smooth
- c_X = 1: pure random
- c_X > 1: bursty
- [Figure: example arrival patterns over time t for smooth, pure random, and bursty traffic]; c_X conveys more about variability than the variance alone (see the sketch below)
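
A minimal Python sketch (not part of the original slides) of how these regimes show up in samples: it estimates c_X for simulated data, with all distribution parameters chosen arbitrarily for illustration.

```python
# Estimate c_X = sigma(X)/E(X) for deterministic, "smooth" (Erlang),
# purely random (exponential), and bursty (hyperexponential) samples.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def cv(x):
    """Sample coefficient of variation: standard deviation divided by mean."""
    return x.std() / x.mean()

deterministic = np.full(n, 1.0)                          # c_X = 0
erlang4       = rng.gamma(shape=4, scale=0.25, size=n)   # Erlang-4, mean 1, c_X = 0.5
exponential   = rng.exponential(scale=1.0, size=n)       # c_X = 1
# Hyperexponential: pick rate 0.5 or 5 with equal probability -> c_X > 1
rates = rng.choice([0.5, 5.0], size=n)
hyperexp = rng.exponential(scale=1.0 / rates)

for name, x in [("deterministic", deterministic), ("Erlang-4", erlang4),
                ("exponential", exponential), ("hyperexponential", hyperexp)]:
    print(f"{name:17s} c_X ~= {cv(x):.2f}")
```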

Discrete Random Variables
- Probability mass function (pmf), P(n): P(n) = Pr[X = n], where 0 ≤ P(n) ≤ 1 and Σ_n P(n) = 1
- Cumulative distribution function (cdf), F(x): F(x) = Pr[X ≤ x] = Σ_{n ≤ x} P(n), where F(0) = 0, F(∞) = 1, F(b) ≥ F(a) if b ≥ a, and F(b) − F(a) = Pr[a < X ≤ b]
- Suppose the random variable X is positive

Discrete Random Variables: Mean and Variance
- Mean: x̄ = Σ_n n P(n)
- Variance: σ² = Σ_n (n − x̄)² P(n)
(checked numerically in the sketch below)
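
A small sketch (not from the slides) applying these two formulas to an arbitrary example pmf.

```python
# Mean and variance of a discrete random variable computed from its pmf.
import numpy as np

n_values = np.array([0, 1, 2, 3])
P        = np.array([0.1, 0.4, 0.3, 0.2])   # must sum to 1

mean = np.sum(n_values * P)                  # x_bar  = sum_n n P(n)
var  = np.sum((n_values - mean) ** 2 * P)    # sigma^2 = sum_n (n - x_bar)^2 P(n)
print(mean, var)                             # 1.6  0.84
```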

Continuous Random Variables
- Probability density function (pdf), f(x): f(x) ≥ 0, Pr[a ≤ X ≤ b] = ∫_a^b f(x) dx, ∫_0^∞ f(x) dx = 1
- Cumulative distribution function (cdf), F(x): F(x) = Pr[X ≤ x] = ∫_0^x f(y) dy, and f(x) = dF(x)/dx

Continuous Random Variables: Mean and Variance
- Mean: x̄ = ∫_0^∞ x f(x) dx = ∫_0^∞ x dF(x) = ∫_0^∞ (1 − F(x)) dx
- Variance: σ² = ∫_0^∞ (x − x̄)² f(x) dx = ∫_0^∞ x² f(x) dx − x̄²
(the tail-integral form of the mean is verified in the sketch below)
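
A quick numeric check (not from the slides) of the identity E(X) = ∫_0^∞ (1 − F(x)) dx, using an exponential distribution with an arbitrarily chosen rate.

```python
# Verify E(X) = integral of the survival function for Exp(mu), mu = 2.
import numpy as np
from scipy.integrate import quad

mu = 2.0
survival = lambda x: np.exp(-mu * x)          # 1 - F(x) for Exp(mu)
tail_integral, _ = quad(survival, 0, np.inf)
print(tail_integral, 1 / mu)                  # both ~0.5
```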

Z-transform for a discrete distribution
- The Z-transform is also called the generating function
- P(z): Z-transform of a discrete r.v. X with p(n) = Pr(X = n), n = 0, 1, 2, …
  P(z) = E(z^X) = Σ_{n≥0} p(n) z^n
- Properties: P(0) = p(0), P(1) = 1, P'(1) = E(X), P''(1) = ? (see the sketch below)
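
A symbolic sketch (not from the slides) verifying P(1) = 1 and E(X) = P'(1) for the geometric distribution, whose generating function P(z) = pz / (1 − (1 − p)z) is a standard result.

```python
# Check generating-function properties for Pr(X = n) = p(1-p)^(n-1), n >= 1.
import sympy as sp

p, z = sp.symbols('p z', positive=True)
P = p * z / (1 - (1 - p) * z)                  # Z-transform of the geometric pmf

print(sp.simplify(P.subs(z, 1)))               # 1   -> probabilities sum to 1
print(sp.simplify(sp.diff(P, z).subs(z, 1)))   # 1/p -> E(X) = P'(1)
```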

Laplace transform for a continuous distribution
- F*(s): Laplace transform of a continuous r.v. X with pdf f(x) and cdf F(x), assuming x ≥ 0
  F*(s) = ∫_0^∞ e^{−sx} f(x) dx = ∫_0^∞ e^{−sx} dF(x)
- Properties (a shortcut to the k-th moment of X):
  F*(0) = 1, F*'(0) = −E(X), F*^{(k)}(0) = (−1)^k E(X^k)
- Also F*(s) = s ∫_0^∞ e^{−sx} F(x) dx
(the moment formulas are checked in the sketch below)
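
A symbolic sketch (not from the slides): for an exponential r.v. with rate μ, F*(s) = μ/(s + μ), so the moment formulas can be checked directly.

```python
# Check F*(0) = 1, -F*'(0) = E(X), F*''(0) = E(X^2) for Exp(mu).
import sympy as sp

s, mu = sp.symbols('s mu', positive=True)
Fstar = mu / (s + mu)                        # Laplace transform of f(x) = mu*exp(-mu*x)

print(Fstar.subs(s, 0))                                  # 1
print(sp.simplify(-sp.diff(Fstar, s).subs(s, 0)))        # 1/mu    = E(X)
print(sp.simplify(sp.diff(Fstar, s, 2).subs(s, 0)))      # 2/mu**2 = E(X^2)
```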

Geometric distribution
- p: success probability in a Bernoulli trial
- X: the number of Bernoulli trials needed to get the first success
- g(k; p) = Pr{X = k} = p(1 − p)^{k−1}, k = 1, 2, …
- Properties: E(X) = 1/p, Var(X) = (1 − p)/p², c_X = (1 − p)^{1/2}
- [Figure: curve of the probability mass function]
- The discrete counterpart of the exponential distribution

Mixed geometric distribution
- n sets of independent Bernoulli trials, with success probabilities p_i, i = 1, 2, …, n
- Mixing probabilities θ_i, i = 1, 2, …, n
- X: the number of Bernoulli trials needed to get the first success, where set i is selected with probability θ_i
- pmf: g(k; θ, p) = Σ_{i=1}^n θ_i p_i (1 − p_i)^{k−1}, and E(X) = Σ_{i=1}^n θ_i / p_i
- The discrete counterpart of the hyper-exponential distribution

Binomial distribution
- p: success probability in a Bernoulli trial
- X: the number of successes in n Bernoulli trials
- Probability function: P(r; n, p) = Pr[X = r] = C(n, r) p^r (1 − p)^{n−r}
- Property: E(X) = np

Negative binomial distribution (also called Pascal distribution)
- Bernoulli trials with success probability p are repeated until a predefined number k of failures has occurred; the random number of successes X then obeys the NB distribution
- nb(r; k, p) = Pr[X = r] = C(k + r − 1, r) p^r (1 − p)^k
- Not required, for your reference

Poisson distribution
- A Poisson random variable X with parameter λ has probability distribution
  P(X = n) = (λ^n / n!) e^{−λ}, n = 0, 1, 2, …
- For the Poisson distribution, E(X) = σ²(X) = λ and c_X² = 1/λ

Exponential distribution (with parameter μ)
- pdf and cdf: f(x) = μ e^{−μx}, F(x) = 1 − e^{−μx}, x ≥ 0
- Mean x̄ = 1/μ, variance σ² = 1/μ², coefficient of variation c_X = 1 ("pure random")
- [Figure: plots of f(x) and F(x) versus x]

Exponential distribution
- If X_1, X_2, …, X_n are independent exponential random variables with parameters μ_1, μ_2, …, μ_n, then Y = min(X_1, X_2, …, X_n) is an exponential random variable with parameter μ = μ_1 + μ_2 + … + μ_n (see the simulation sketch below)
- How about Z = max(X_1, X_2, …, X_n)? Prove it as homework
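
A simulation sketch (not from the slides, rates chosen arbitrarily) that supports the minimum property and hints that the maximum is not exponential.

```python
# Y = min of independent exponentials is exponential with the summed rate;
# Z = max is not (its sample c_X is below 1 for these rates).
import numpy as np

rng = np.random.default_rng(1)
rates = np.array([1.0, 2.0, 3.0])
samples = rng.exponential(scale=1.0 / rates, size=(200_000, 3))

Y = samples.min(axis=1)
Z = samples.max(axis=1)
print(Y.mean(), 1 / rates.sum())        # both ~1/6
print(Y.std() / Y.mean())               # ~1.0 (consistent with exponential)
print(Z.std() / Z.mean())               # < 1  (so Z is not exponential)
```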

The exponential distribution has the memoryless property
- P(X > x + t | X > t) = P(X > x + t) / P(X > t) = e^{−μ(x+t)} / e^{−μt} = e^{−μx} = P(X > x)
- Equivalently, P(t < X ≤ x + t | X > t) = P(X ≤ x) = F(x) = 1 − e^{−μx}
- Memoryless: the future depends only on the current state, independent of the history
(checked by simulation in the sketch below)
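
A simulation sketch (not from the slides; μ, x, and t are arbitrary choices) of the memoryless property.

```python
# Estimate P(X > x + t | X > t) and compare with P(X > x) for Exp(mu).
import numpy as np

rng = np.random.default_rng(2)
mu, x, t = 1.5, 0.7, 2.0
X = rng.exponential(scale=1 / mu, size=1_000_000)

cond = X[X > t]                        # condition on survival past t
print((cond > t + x).mean())           # P(X > x + t | X > t)
print(np.exp(-mu * x))                 # P(X > x) = e^(-mu*x) ~ 0.35
```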

Erlang distribution
- X: Erlang-k random variable, X = X_1 + X_2 + … + X_k, where the X_i are independent exponential random variables with parameter μ
- Denote this distribution as E_k(μ); its pdf is f(x) = μ (μx)^{k−1} e^{−μx} / (k−1)!, x ≥ 0
- Its cdf is F(x) = 1 − Σ_{j=0}^{k−1} (μx)^j e^{−μx} / j!, x ≥ 0

Erlang distribution
- Mean E(X) = k/μ, variance σ²(X) = k/μ², squared coefficient of variation c_X² = 1/k ≤ 1
- Models smooth data traffic (e.g., service times in a barber shop)
- [Figure: pdf curves of E_k(μ) for several values of k]
(see the simulation sketch below)
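
A sketch (not from the slides; k and μ chosen arbitrarily) building an Erlang-k sample as a sum of exponentials and comparing it with scipy's gamma distribution of integer shape k.

```python
# Erlang-k(mu) as a sum of k i.i.d. Exp(mu) r.v.s; check mean, variance, c_X^2,
# and the cdf at one point against the gamma distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
k, mu = 4, 2.0
X = rng.exponential(scale=1 / mu, size=(100_000, k)).sum(axis=1)

print(X.mean(), k / mu)                      # mean  = k/mu
print(X.var(),  k / mu**2)                   # var   = k/mu^2
print((X.std() / X.mean())**2, 1 / k)        # c_X^2 = 1/k (smooth: < 1)

xq = 2.0
print((X <= xq).mean(), stats.gamma(a=k, scale=1 / mu).cdf(xq))  # empirical vs exact cdf
```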

Hyperexponential distribution
- X selects, with probability α_i, the exponential r.v. X_i with parameter μ_i, i = 1, …, k; denote this distribution as H_k
- Its pdf is f(t) = Σ_{i=1}^k α_i μ_i e^{−μ_i t}, t ≥ 0
- Its mean is E(X) = Σ_{i=1}^k α_i / μ_i
- Its variance is σ²(X) = 2 Σ_{i=1}^k α_i / μ_i² − (Σ_{i=1}^k α_i / μ_i)²
- Its coefficient of variation satisfies c_X ≥ 1
- Models bursty data traffic

Phase-type distribution (discrete time case)
- The first passage time to the absorbing state of a discrete-time Markov chain
- Two parameters: T, the transient part of the transition probability matrix, and α, the initial state distribution over the n phases
- cdf: F(k) = 1 − α T^k e; pmf: f(k) = α T^{k−1} t, where t = e − T e and e is a column vector of ones
- Example (2 phases): P = [ T t ; 0 1 ] with T = [0.3 0.4; 0.2 0.6], t = [0.3; 0.2], α = [0.3, 0.7]

Phase-type distribution (discrete time case): examples
- Geometric distribution, 1 phase: P = [ 1−p p ; 0 1 ], so T = [1−p], t = [p], α = [1]
  pmf: f(k) = α T^{k−1} t = p(1 − p)^{k−1}
- Mixed geometric distribution, n phases (example with n = 2): T = [0.3 0; 0 0.8], t = [0.7; 0.2], α = [0.5, 0.5]
  pmf: f(k) = α T^{k−1} t = 0.5·0.7·0.3^{k−1} + 0.5·0.2·0.8^{k−1}
(evaluated numerically in the sketch below)
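
A sketch (not from the slides) evaluating the discrete phase-type pmf f(k) = α T^{k−1} t for the 2-phase example above, and checking that a 1-phase PH with T = [1−p] reduces to the geometric pmf.

```python
# Discrete phase-type pmf and its geometric special case.
import numpy as np

alpha = np.array([0.3, 0.7])
T = np.array([[0.3, 0.4],
              [0.2, 0.6]])
t = 1.0 - T.sum(axis=1)                  # exit probabilities, t = e - T e -> [0.3, 0.2]

def ph_pmf(k, alpha, T, t):
    return alpha @ np.linalg.matrix_power(T, k - 1) @ t

ks = np.arange(1, 200)
pmf = np.array([ph_pmf(k, alpha, T, t) for k in ks])
print(pmf.sum())                         # ~1 (the tail beyond k=200 is negligible)

p = 0.4                                  # geometric special case: 1 phase, T = [[1-p]]
print(ph_pmf(3, np.array([1.0]), np.array([[1 - p]]), np.array([p])),
      p * (1 - p) ** 2)                  # both equal p(1-p)^2
```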

PH type distribution (continuous time case)
- The first passage time to the absorbing state of a continuous-time Markov process
- Two parameters: T, the transient part of the transition rate matrix, and α, the initial state distribution over the n phases
- cdf: F(x) = 1 − α e^{Tx} e; pdf: f(x) = α e^{Tx} t, where t = −T e
- Example (2 phases): B = [ T t ; 0 0 ] with T = [−5 4; 2 −6], t = [1; 4], α = [0.8, 0.2]

PH type distribution (continuous time case): examples
- Exponential distribution: α = [1], T = [−λ], so F(x) = 1 − α e^{Tx} e = 1 − e^{−λx}
- Erlang distribution with (m, λ): α = [1, 0, 0, …, 0], T is the m×m matrix with −λ on the diagonal and λ on the superdiagonal
- Hyper-exponential distribution: α = [α_1, α_2, …, α_n], T = diag(−μ_1, −μ_2, …, −μ_n)
(the Erlang case is checked numerically below)
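
A sketch (not from the slides; λ chosen arbitrarily) evaluating the continuous phase-type density f(x) = α e^{Tx} t with a matrix exponential, and checking it against the Erlang-2 pdf λ²x e^{−λx}.

```python
# Continuous PH density via scipy's matrix exponential, Erlang-2 special case.
import numpy as np
from scipy.linalg import expm

lam = 3.0
alpha = np.array([1.0, 0.0])
T = np.array([[-lam,  lam],
              [ 0.0, -lam]])             # Erlang-2 as a 2-phase chain
t = -T @ np.ones(2)                      # exit rates into the absorbing state

def ph_pdf(x):
    return alpha @ expm(T * x) @ t

x = 0.8
print(ph_pdf(x))                         # PH density at x
print(lam**2 * x * np.exp(-lam * x))     # Erlang-2 pdf: lam^2 x e^(-lam x)
```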

Coxian distribution
- A special case of the PH type distribution with k phases visited in sequence: after phase i the process moves to phase i+1 with probability p_i or exits with probability 1 − p_i
- α = [1, 0, 0, …, 0], and B = [ T t ; 0 0 ], where T has −μ_i on the diagonal and p_i μ_i on the superdiagonal (i = 1, …, k−1), and the exit-rate vector is t = ((1 − p_1) μ_1, …, (1 − p_{k−1}) μ_{k−1}, μ_k)^T

Pareto distribution
- Pareto(α): a distribution with a power-law tail
- pdf: f(x) = α x^{−α−1} for x ≥ 1; otherwise f(x) = 0
- cdf: F(x) = 1 − x^{−α} for x ≥ 1; otherwise F(x) = 0
- Usually 0 < α < 2
- The Pareto distribution decays much more slowly than the exponential distribution; we say it has a heavy tail (fat tail), as the sketch below illustrates
- Heavy tails are very important for practical data traffic
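
A sketch (not from the slides; α = 1.5 is an arbitrary choice) comparing the Pareto tail with the tail of an exponential distribution having the same mean.

```python
# Compare P(X > x) for Pareto(alpha) on [1, inf) and an exponential with equal mean.
import numpy as np

alpha = 1.5
pareto_mean = alpha / (alpha - 1)          # alpha/(alpha-1) = 3 for alpha = 1.5
mu = 1 / pareto_mean                       # exponential with the same mean

for x in [5, 10, 50, 100]:
    pareto_tail = x ** (-alpha)            # P(X > x) = x^(-alpha)
    exp_tail = np.exp(-mu * x)             # P(X > x) = e^(-mu*x)
    print(f"x={x:4d}  Pareto tail {pareto_tail:.2e}   exponential tail {exp_tail:.2e}")
```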

Relation to the Central Limit Theorem
- CLT: if the X_i are i.i.d. r.v.s with mean μ and variance σ², define Z_n = (X_1 + … + X_n − nμ)/(σ√n); then Z_n converges to the standard normal distribution
- The CLT explains why many natural phenomena obey the normal distribution
- Binomial(n, p) is a sum of n i.i.d. Bernoulli(p) r.v.s, so it converges to a normal distribution when n is large
- Poisson(λ) can be approximated by a normal distribution with mean λ and variance λ: for integer λ, it is a sum of λ i.i.d. Poisson(1) r.v.s
- Binomial and Poisson can thus be viewed as discrete approximations of the normal distribution (plot their pmfs to see this)

Relation to the Central Limit Theorem
- Binomial(n, p): sum of n i.i.d. Bernoulli(p) r.v.s
- Poisson(λ), integer λ: sum of λ i.i.d. Poisson(1) r.v.s
(the normal approximation is illustrated in the sketch below)
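
A sketch (not from the slides; n, p, λ and the evaluation points are arbitrary, and no continuity correction is applied) comparing exact binomial/Poisson probabilities with their normal approximations.

```python
# Normal approximation suggested by the CLT for Binomial(n, p) and Poisson(lam).
from scipy import stats

n, p = 100, 0.3
lam = 50
x_binom, x_pois = 35, 55

print(stats.binom.cdf(x_binom, n, p),
      stats.norm.cdf(x_binom, loc=n * p, scale=(n * p * (1 - p)) ** 0.5))
print(stats.poisson.cdf(x_pois, lam),
      stats.norm.cdf(x_pois, loc=lam, scale=lam ** 0.5))
```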

An explanation of Additive White Gaussian Noise (AWGN)
- Transmit a signal over a channel; 100 i.i.d. sources each add low-level noise, uniformly distributed on [−1, 1]
- The noise is additive: if the total noise is greater than 10 or less than −10 it corrupts the signal; otherwise there is no problem
- Calculate the probability that the signal is not corrupted
- Calculation: X ~ Unif(−1, 1), E(X) = 0, Var(X) = 1/3; S_100 ≈ Norm(0, 100/3)
  P(−10 < S_100 < 10) = Φ(√3) − Φ(−√3) = 2Φ(√3) − 1 ≈ 2(0.9584) − 1 ≈ 0.9167
(checked by simulation in the sketch below)
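
A simulation sketch (not from the slides) of the AWGN example: sum 100 i.i.d. Uniform(−1, 1) noise terms and estimate the probability that the total stays inside (−10, 10).

```python
# Compare the simulated no-corruption probability with the CLT value 2*Phi(sqrt(3)) - 1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
S = rng.uniform(-1, 1, size=(100_000, 100)).sum(axis=1)   # Var(S) = 100/3

print(((-10 < S) & (S < 10)).mean())      # simulated probability
print(2 * norm.cdf(np.sqrt(3)) - 1)       # CLT approximation ~ 0.9167
```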

A map of the relations among all these distributions (chalk drawing)
- The PH type distribution can model almost any distribution, converting a stochastic process into a Markovian model; it is widely used in queueing systems, reliability systems, regenerative systems, etc.
- Discrete version vs. continuous version:
  Geometric vs. Exponential
  Mixed geometric vs. Hyperexponential
  Binomial, Poisson vs. Normal
  Erlang (continuous r.v.), Normal (continuous r.v.)