Probability Distributions
Lecture 1: Background in Probability Theory

Probability Distributions

The probability mass function (pmf) or probability density function (pdf), mean µ, variance σ², and moment generating function (mgf) M(t) for some well-known discrete and continuous probability distributions.

Discrete Distributions

Discrete Uniform: f(x) = 1/n for x = 1, 2, …, n, and 0 otherwise.
µ = (n+1)/2, σ² = (n² − 1)/12, M(t) = (e^{(n+1)t} − e^t)/(n(e^t − 1)), t ≠ 0.

Geometric: f(x) = p(1−p)^x for x = 0, 1, 2, …, and 0 otherwise, where 0 < p < 1.
µ = (1−p)/p, σ² = (1−p)/p², M(t) = p/(1 − (1−p)e^t).

Binomial b(n, p): f(x) = C(n, x) p^x (1−p)^{n−x} for x = 0, 1, 2, …, n, and 0 otherwise, where n is a positive integer and 0 < p < 1. The notation C(n, x) = n!/(x!(n−x)!).
µ = np, σ² = np(1−p), M(t) = (1 − p + pe^t)^n.

Negative Binomial: f(x) = C(x+n−1, n−1) p^n (1−p)^x for x = 0, 1, 2, …, and 0 otherwise, where n is a positive integer and 0 < p < 1.
µ = n(1−p)/p, σ² = n(1−p)/p², M(t) = p^n [1 − (1−p)e^t]^{−n}.

Poisson, Po(λ): f(x) = λ^x e^{−λ}/x! for x = 0, 1, 2, …, and 0 otherwise, where λ is a positive constant.
µ = λ, σ² = λ, M(t) = e^{λ(e^t − 1)}.

Continuous Distributions

Uniform U(a, b): f(x) = 1/(b−a) for a ≤ x ≤ b, and 0 otherwise, where a < b are constants.
µ = (a+b)/2, σ² = (b−a)²/12, M(t) = (e^{bt} − e^{at})/(t(b−a)), t ≠ 0.
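The tabulated moments can be spot-checked numerically. A quick Python sketch (the course code is MATLAB; the parameters n = 10, p = 0.3 are illustrative) verifies the binomial pmf, mean, variance, and mgf by direct summation:

```python
from math import comb, exp

# Binomial b(n, p): check that the pmf sums to 1 and that the
# tabulated moments mu = np and sigma^2 = np(1-p) hold.
n, p = 10, 0.3
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

total = sum(pmf)
mean = sum(x * f for x, f in enumerate(pmf))
var = sum(x**2 * f for x, f in enumerate(pmf)) - mean**2

assert abs(total - 1) < 1e-12                # pmf sums to one
assert abs(mean - n * p) < 1e-12             # mu = np = 3.0
assert abs(var - n * p * (1 - p)) < 1e-12    # sigma^2 = np(1-p) = 2.1

# mgf check at t = 0.1: E(e^{tX}) should equal (1 - p + p e^t)^n
t = 0.1
M_direct = sum(exp(t * x) * f for x, f in enumerate(pmf))
assert abs(M_direct - (1 - p + p * exp(t))**n) < 1e-10
```

The same pattern (sum the pmf against x, x², and e^{tx}) checks any of the discrete distributions above.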
Gamma: f(x) = x^{α−1} e^{−x/β} / (Γ(α)β^α) for x ≥ 0, and 0 for x < 0, where α and β are positive constants and Γ(α) = ∫₀^∞ x^{α−1} e^{−x} dx. For a positive integer n, Γ(n) = (n−1)!.
µ = αβ, σ² = αβ², M(t) = (1 − βt)^{−α}, t < 1/β.

Exponential: f(x) = λe^{−λx} for x ≥ 0, and 0 for x < 0, where λ is a positive constant.
µ = 1/λ, σ² = 1/λ², M(t) = λ/(λ − t), t < λ.

Normal, N(µ, σ²): f(x) = (1/(σ√(2π))) exp(−(x−µ)²/(2σ²)), −∞ < x < ∞, where µ and σ are constants.
E(X) = µ, Var(X) = σ², M(t) = e^{µt + σ²t²/2}.

The probability generating function (pgf) is P_X(t) = E(t^X), the moment generating function (mgf) is M_X(t) = E(e^{tX}), and the cumulant generating function (cgf) is K_X(t) = ln(M_X(t)). The mean µ_X = E(X) and variance σ²_X = E(X²) − E²(X) can be computed from the pgf P_X(t), mgf M_X(t), and cgf K_X(t):

µ_X = P′_X(1) = M′_X(0) = K′_X(0)

and

σ²_X = P″_X(1) + P′_X(1) − [P′_X(1)]² = M″_X(0) − [M′_X(0)]² = K″_X(0).

Continuous-Time Markov Chain Example, Simple Birth Process: The per capita rate of birth is λ = 1.

Figure 1: Three sample paths (stochastic realizations) of a simple birth process, X(0) = 1; n(t) = e^t is the dashed curve. See Table 2.
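The moment relations above can be checked numerically. A Python sketch (illustrative rate λ = 2.5) approximates M′(0), K′(0), and K″(0) for the Poisson mgf by central finite differences and compares them with the known mean λ and variance λ:

```python
from math import exp, log

# Poisson(lam): verify mu = M'(0) = K'(0) and sigma^2 = K''(0)
# using central finite differences on the mgf and cgf.
lam, h = 2.5, 1e-5
M = lambda t: exp(lam * (exp(t) - 1))   # mgf of Poisson
K = lambda t: log(M(t))                 # cgf = ln(mgf)

M1 = (M(h) - M(-h)) / (2 * h)           # ~ M'(0) = mean
K1 = (K(h) - K(-h)) / (2 * h)           # ~ K'(0) = mean
K2 = (K(h) - 2 * K(0) + K(-h)) / h**2   # ~ K''(0) = variance

assert abs(M1 - lam) < 1e-6
assert abs(K1 - lam) < 1e-6
assert abs(K2 - lam) < 1e-4
```

The cgf is convenient here because its second derivative gives the variance directly, without the [M′(0)]² correction term.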
Table 1: For two stochastic realizations, the times at which a birth occurs for a simple birth process. Each realization lists the population size X(t) and the corresponding event time t.

Lecture 2: Discrete-Time Markov Chains (DTMCs)

Transition probabilities: p_ji(n) = Prob{X_{n+1} = j | X_n = i}. If p_ji(n) does not depend on n, then the process is said to be time homogeneous. The transition matrix of a DTMC {X_n}_{n=0}^∞ with state space {1, 2, …} and one-step transition probabilities {p_ij} is denoted P = (p_ij), where

P = [ p_11  p_12  p_13  ⋯ ]
    [ p_21  p_22  p_23  ⋯ ]
    [ p_31  p_32  p_33  ⋯ ]
    [   ⋮     ⋮     ⋮      ]

The column sums equal one (a stochastic matrix): Σ_j p_ji = 1. The n-step transition matrix is P^n = (p_ij^(n)). The probabilities p_i(n) = Prob{X_n = i} satisfy p_i(n+1) = Σ_j p_ij p_j(n), i = 1, 2, …, that is, p(n+1) = P p(n).

Random walk model with absorbing barriers: see the directed graph in Figure 2 and the corresponding (N+1) × (N+1) transition matrix

P = [ 1  q  0  ⋯  0  0 ]
    [ 0  0  q  ⋯  0  0 ]
    [ 0  p  0  ⋯  0  0 ]
    [ ⋮  ⋮  ⋮  ⋱  q  ⋮ ]
    [ 0  0  0  ⋯  0  0 ]
    [ 0  0  0  ⋯  p  1 ]

The Markov chain, graphed in Figure 2, has three communication classes: {0}, {1, 2, …, N−1}, and {N}. The Markov chain is reducible. States 0 and N are absorbing; the remaining states are transient.
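The iteration p(n+1) = P p(n) can be demonstrated directly. A Python sketch (illustrative values N = 5, p = 0.4; the notes' column-stochastic convention) builds the absorbing random walk matrix and iterates until the probability mass settles on the absorbing states:

```python
# Gambler's ruin (random walk with absorbing barriers), column-stochastic
# convention: entry P[j][i] = Prob{X_{n+1} = j | X_n = i}.
N, p = 5, 0.4
q = 1 - p
P = [[0.0] * (N + 1) for _ in range(N + 1)]
P[0][0] = 1.0                 # boundary 0 is absorbing
P[N][N] = 1.0                 # boundary N is absorbing
for i in range(1, N):         # interior: right w.p. p, left w.p. q
    P[i + 1][i] = p
    P[i - 1][i] = q

# column sums equal one: stochastic matrix
for i in range(N + 1):
    assert abs(sum(P[j][i] for j in range(N + 1)) - 1.0) < 1e-12

# iterate p(n+1) = P p(n) starting from X_0 = 2
prob = [0.0] * (N + 1)
prob[2] = 1.0
for _ in range(2000):
    prob = [sum(P[j][i] * prob[i] for i in range(N + 1))
            for j in range(N + 1)]

# all mass is eventually absorbed at 0 or N, and absorption at 0
# matches the classical gambler's ruin formula for p != q
r = q / p
ruin = (r**2 - r**N) / (1 - r**N)
assert abs(prob[0] + prob[N] - 1.0) < 1e-9
assert abs(prob[0] - ruin) < 1e-9
```

The transient states {1, …, N−1} lose all probability mass geometrically fast, which is why 2000 iterations suffice here.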
Figure 2: The probability of moving to the right is p and to the left is q, p + q = 1. The boundaries 0 and N are absorbing, p_00 = 1 = p_NN (random walk with absorbing barriers, or the gambler's ruin problem).

A DTMC is irreducible if its digraph is strongly connected; otherwise it is called reducible. An irreducible DTMC can be positive recurrent, null recurrent, or transient. It may also be classified as periodic or aperiodic. Recurrence is defined for each state i in the chain: state i is recurrent if, whenever the process leaves state i, it returns to state i at some future time with probability one. If not, the state is transient. A state i is positive recurrent if the mean recurrence time µ_ii (mean return time) is finite. A state i with an infinite mean recurrence time is called null recurrent.

Theorem 1 (Basic Limit Theorem for aperiodic Markov chains): Let {X_n}_{n=0}^∞ be a recurrent, irreducible, and aperiodic DTMC with transition matrix P = (p_ij). Then

lim_{n→∞} p_ij^(n) = 1/µ_ii,

where µ_ii is the mean recurrence time for state i, and i and j are any states of the chain. [If µ_ii = ∞, then lim_{n→∞} p_ij^(n) = 0.]

Theorem 2 (Basic Limit Theorem for periodic Markov chains): Let {X_n}_{n=0}^∞ be a recurrent, irreducible, and d-periodic DTMC, d > 1, with transition matrix P = (p_ij). Then

lim_{n→∞} p_ii^(nd) = d/µ_ii

and p_ii^(m) = 0 if m is not a multiple of d, where µ_ii is the mean recurrence time for state i. [If µ_ii = ∞, then lim_{n→∞} p_ii^(nd) = 0.]

Example of Genetics of Inbreeding: There are two alleles, A and a, and six possible breeding pairs, which denote the six states of the DTMC: 1 = AA × AA, 2 = aa × aa, 3 = Aa × Aa, 4 = Aa × aa, 5 = AA × aa, 6 = AA × Aa. Inbreeding of the first two types results in offspring of the same genotypes, and inbreeding in the next generation will be of the same type; they are absorbing states. The remaining states 3, 4, 5, 6 are transient. The transition matrix has the following form:

P = [ 1  0  1/16  0    0  1/4 ]
    [ 0  1  1/16  1/4  0  0   ]
    [ 0  0  1/4   1/4  1  1/4 ]   =  [ I  A ]
    [ 0  0  1/4   1/2  0  0   ]      [ O  T ]
    [ 0  0  1/8   0    0  0   ]
    [ 0  0  1/4   0    0  1/2 ]

where I is the 2 × 2 identity, A is 2 × 4, T is 4 × 4, and O is the 4 × 2 zero matrix. The probability of absorption into states 1 or 2 from states 3, 4, 5, 6 is computed
from the fundamental matrix (I − T)^{−1}:

A + AT + AT² + ⋯ = A(I − T)^{−1},   lim_{n→∞} P^n = [ I  A(I − T)^{−1} ]
                                                    [ O       O        ]

Thus A(I − T)^{−1} is the probability of absorption into states 1 or 2 (fixation) from states 3, 4, 5, or 6. Let E be the 1 × 4 row vector of ones; then E(I − T)^{−1} is the mean time until absorption from states 3, 4, 5, or 6.
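The series A + AT + AT² + ⋯ can be summed numerically. A Python sketch (the entries of A and T are the inbreeding transition probabilities as reconstructed above; treat them as an assumption of this example) truncates the series and checks the fixation probabilities against the values obtained by first-step analysis:

```python
# Absorption probabilities A(I - T)^{-1} = A + AT + AT^2 + ...
# for the inbreeding chain, by truncating the series.
A = [[1/16, 0.0, 0.0, 1/4],
     [1/16, 1/4, 0.0, 0.0]]
T = [[1/4, 1/4, 1.0, 1/4],
     [1/4, 1/2, 0.0, 0.0],
     [1/8, 0.0, 0.0, 0.0],
     [1/4, 0.0, 0.0, 1/2]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

S = [row[:] for row in A]        # running sum A + AT + ... + AT^k
term = [row[:] for row in A]
for _ in range(500):             # T is substochastic, so the series converges
    term = matmul(term, T)
    S = [[S[i][j] + term[i][j] for j in range(4)] for i in range(2)]

# absorption into state 1 or 2 is certain from every transient state
for j in range(4):
    assert abs(S[0][j] + S[1][j] - 1.0) < 1e-9

# fixation of allele A from states 3, 4, 5, 6 (first-step analysis)
expected = [1/2, 1/4, 1/2, 3/4]
for j in range(4):
    assert abs(S[0][j] - expected[j]) < 1e-9
```

Note the symmetry: from states 3 (Aa × Aa) and 5 (AA × aa), the two alleles are interchangeable, so fixation of A or a is equally likely.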
Lecture 3: Discrete-Time Branching Processes

Figure 3: Sample path of a branching process {X_n}_{n=0}^∞. In the first generation, four individuals are born, X_1 = 4. The four individuals in generation one give birth to three, zero, four, and one individuals, respectively, making a total of eight individuals in generation two, X_2 = 8.

Generating functions, rather than transition matrices, are useful in the analysis of branching processes. The offspring pgf is

f(t) = Σ_{k=0}^∞ p_k t^k.

Recall f(1) = 1, and m = f′(1) is the mean number of offspring. The branching process is called subcritical if m < 1, critical if m = 1, and supercritical if m > 1.

Theorem 3 (Branching Process Theorem): Let X_0 = 1. Assume f(0) = p_0 > 0 and p_0 + p_1 < 1.
(i) If m ≤ 1, then lim_{n→∞} Prob{X_n = 0} = 1.
(ii) If m > 1, then lim_{n→∞} Prob{X_n = 0} = q < 1, where q = f(q) is the unique fixed point in the interval (0, 1).
If X_0 = N and m > 1, then lim_{n→∞} p_0(n) = lim_{n→∞} Prob{X_n = 0} = q^N.

The conditional expectation satisfies E(X_{n+1} | X_n) = mX_n, so E(X_{n+1}) = mE(X_n).

Multitype Branching Process, n different types, (X_1, …, X_n): The offspring random variable of type i is Y_i. The offspring pgfs are

f_i(t_1, …, t_n) = Σ_{s_1,…,s_n} P_i(s_1, …, s_n) t_1^{s_1} ⋯ t_n^{s_n},

where P_i(s_1, …, s_n) = Prob{Y_1 = s_1, …, Y_n = s_n}. The expectation matrix is M = (m_ij), where

m_ji = ∂f_i/∂t_j, evaluated at t_1 = ⋯ = t_n = 1.

Assume M is irreducible. Denote the spectral radius of M by ρ(M), the maximum modulus of the eigenvalues of M. The multitype branching process is called subcritical if ρ(M) < 1, critical if ρ(M) = 1, and supercritical if ρ(M) > 1.
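The fixed point q = f(q) is easily found by iterating the pgf, since p_0(n) = f^n(0) increases to q. A Python sketch with an illustrative supercritical offspring pmf (p_0 = 0.2, p_1 = 0.3, p_2 = 0.5, so m = 1.3):

```python
# Extinction probability of a supercritical branching process:
# f(t) = 0.2 + 0.3 t + 0.5 t^2, m = f'(1) = 0.3 + 2(0.5) = 1.3 > 1.
f = lambda t: 0.2 + 0.3 * t + 0.5 * t**2
m = 0.3 + 2 * 0.5
assert m > 1

# iterate q_{k+1} = f(q_k) from q_0 = 0; the iterates p_0(n) = f^n(0)
# increase to the unique fixed point q of f in (0, 1)
q = 0.0
for _ in range(200):
    q = f(q)

assert abs(q - f(q)) < 1e-12
# here q solves 0.5 q^2 - 0.7 q + 0.2 = 0, whose root in (0,1) is 0.4
assert abs(q - 0.4) < 1e-9
# starting from X_0 = N individuals, extinction probability is q^N
```

The iteration converges geometrically at rate f′(q) < 1, which is why a few hundred iterations are more than enough.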
Lecture 4: Continuous-Time Markov Chains (CTMCs), Introduction

X(t) is a discrete random variable for t ∈ [0, ∞), with probabilities p_i(t) = Prob{X(t) = i}. Transition probability: p_ji(t, s) = Prob{X(t) = j | X(s) = i}, s < t. We will assume time-homogeneous transition probabilities, p_ji(t, s) = p_ji(t − s). The transition matrix is a stochastic matrix

P(t) = (p_ij(t)) = [ p_00(t)  p_01(t)  ⋯ ]
                   [ p_10(t)  p_11(t)  ⋯ ]
                   [    ⋮        ⋮        ]

where p_ji(Δt) = δ_ji + q_ji Δt + o(Δt) is an infinitesimal transition probability. The infinitesimal generator matrix is

Q = (q_ij) = [ q_00  q_01  ⋯ ]   =  lim_{Δt→0} (P(Δt) − I)/Δt.
             [ q_10  q_11  ⋯ ]
             [  ⋮     ⋮       ]

The column sums of Q equal zero.

Forward Kolmogorov differential equations: dP(t)/dt = QP(t).
Backward Kolmogorov differential equations: dP(t)/dt = P(t)Q.

The embedded DTMC is used to define irreducible, recurrent, and transient states or chains for the associated CTMC. Let Y_n denote the random variable for the state of a CTMC {X(t) : t ∈ [0, ∞)} at the time of the nth jump, Y_n = X(W_n) (see Figure 4). The set of discrete random variables {Y_n}_{n=0}^∞ is the embedded Markov chain.

Figure 4: Sample path of a CTMC, illustrating waiting times {W_i} and interevent times {T_i}.

A CTMC is irreducible, recurrent, or transient if the corresponding embedded Markov chain has these properties. Some differences in the dynamics of a CTMC as opposed to a DTMC are the possibility of a finite-time blow-up in a CTMC (an explosive process) and the fact that CTMCs are not periodic. See Figure 5. The embedded MC cannot be used to classify chains as positive recurrent or null recurrent; this latter classification depends on the mean recurrence time µ_ii.
Figure 5: One sample path of a continuous-time Markov chain that is explosive.

Theorem 4 (Basic Limit Theorem for CTMCs): If the CTMC {X(t) : t ∈ [0, ∞)} is nonexplosive and irreducible, then for all i and j,

lim_{t→∞} p_ij(t) = 1/(|q_ii| µ_ii),    (1)

where µ_ii is the mean recurrence time, 0 < µ_ii ≤ ∞. In particular, a finite, irreducible CTMC is nonexplosive, and the limit (1) exists and is positive. If the CTMC is nonexplosive and positive recurrent, it has a limiting positive stationary distribution π satisfying Qπ = 0.

The Poisson process with X(0) = 0, p_{i+1,i}(Δt) = λΔt + o(Δt), and p_i(t) = e^{−λt}(λt)^i/i! has generator matrix

Q = [ −λ   0   0  ⋯ ]
    [  λ  −λ   0  ⋯ ]
    [  0   λ  −λ  ⋯ ]
    [  ⋮   ⋮   ⋮    ]

The associated embedded Markov chain has transition matrix

T = [ 0  0  0  ⋯ ]
    [ 1  0  0  ⋯ ]
    [ 0  1  0  ⋯ ]
    [ ⋮  ⋮  ⋮    ]

The Poisson process is transient.

A finite CTMC with two states {1, 2}: the generator matrix is

Q = [ −a   b ]   a, b > 0.
    [  a  −b ],

The CTMC is irreducible and positive recurrent. The limiting stationary distribution can be found from the forward Kolmogorov differential equations by solving for the stationary distribution, Qπ = 0. In this case π = (b/(a+b), a/(a+b))^T. The mean recurrence times are µ_ii = (a+b)/(ab), i = 1, 2.
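For the two-state chain, the stationary distribution can be confirmed by integrating the forward Kolmogorov equations dp/dt = Qp directly. A Python sketch (illustrative rates a = 2, b = 1; simple Euler steps):

```python
# Two-state CTMC with generator Q = [[-a, b], [a, -b]] (column sums zero).
# Integrate dp/dt = Q p by Euler steps and compare with the stationary
# distribution pi = (b/(a+b), a/(a+b)).
a, b = 2.0, 1.0
p = [1.0, 0.0]             # start in state 1
dt, steps = 1e-3, 20000    # integrate out to t = 20
for _ in range(steps):
    dp1 = -a * p[0] + b * p[1]
    dp2 = a * p[0] - b * p[1]
    p = [p[0] + dt * dp1, p[1] + dt * dp2]

pi = [b / (a + b), a / (a + b)]
assert abs(p[0] + p[1] - 1.0) < 1e-9   # probability is conserved
assert abs(p[0] - pi[0]) < 1e-4        # pi_1 = b/(a+b) = 1/3
assert abs(p[1] - pi[1]) < 1e-4        # pi_2 = a/(a+b) = 2/3
```

The solution relaxes to π at the exponential rate a + b, so by t = 20 the transient is negligible; note also that π is exactly the null vector of Q, π = (b, a)/(a + b).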
Lecture 5: Continuous-Time Markov Chains (CTMCs), Interevent Time

To generate sample paths, we must know the time between jumps and the state to which the process jumps. The Markov assumption implies the interevent time is exponentially distributed, because the exponential distribution has the memoryless property. Let T_i be the continuous random variable for the time until the (i+1)st event. See Figure 4.

Theorem 5 (Interevent Time): Assume Σ_{j≠n} p_jn(Δt) = α(n)Δt + o(Δt). Then the cumulative distribution function for the interevent time T_i is

F_i(t) = 1 − exp(−α(n)t),

with mean and variance

µ_{T_i} = 1/α(n) and σ²_{T_i} = 1/[α(n)]².

Theorem 6 (Interevent Time Simulation): Let U ∼ U[0, 1], the uniform distribution on [0, 1], and let T_i be the continuous random variable for the interevent time with state space [0, ∞). Then

T_i = F_i^{−1}(U) = −ln(U)/α(n).

Simple Birth and Death Markov Chain: In the simple birth and death process, an event can be a birth or a death. Let X(0) = N. The infinitesimal transition probabilities are

p_{i+j,i}(Δt) = Prob{ΔX(t) = j | X(t) = i} =
    µi Δt + o(Δt),             j = −1,
    λi Δt + o(Δt),             j = 1,
    1 − (λ + µ)i Δt + o(Δt),   j = 0,
    o(Δt),                     j ≠ −1, 0, 1.

Use two random numbers, u_1 and u_2, from the uniform distribution U(0, 1) to determine the interevent time and the state to which the process jumps. In MATLAB, indices begin from 1, so instead of writing t(0), we use t(1). Consider the simple birth and death chain: in a MATLAB program, t(1) = 0, and the time to the next event is t(2) = t(1) − ln(u_1)/α(n), where α(n) = λn + µn, given the process is in state n. Since there are two events, to determine whether there is a birth or a death, the unit interval is divided into two subintervals, one with probability λ/(λ + µ) and the other with probability µ/(λ + µ). Generate a uniform random number u_2. If u_2 < λ/(λ + µ), this random number lies in the first subinterval and there is a birth; otherwise, if u_2 > λ/(λ + µ), the random number lies in the second subinterval and there is a death. This concept can be easily extended to k > 2 events. In a MATLAB program
with k events, the unit interval must be divided into k subintervals, each with a predetermined probability that depends on the current state and the transition probabilities. For example, suppose there are four events with rates a_i(n), i = 1, 2, 3, 4, which depend on the current state n. The probabilities of these four events are a_i(n)/a(n), where a(n) = Σ_i a_i(n). The unit interval [0, 1] is subdivided into four subintervals with the following endpoints:

0, a_1/a, (a_1 + a_2)/a, (a_1 + a_2 + a_3)/a, 1.

Therefore, in a MATLAB program:

if u2 < a1/a, then event 1 occurs
elseif u2 > a1/a & u2 < (a1+a2)/a, then event 2 occurs
elseif u2 > (a1+a2)/a & u2 < (a1+a2+a3)/a, then event 3 occurs
else event 4 occurs

The subintervals change each time the process changes state n. If the number of events is large, deciding which event occurs can become quite lengthy, and there are ways to speed up the process of selecting a particular event.
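Both ingredients, the inverse-transform interevent time of Theorem 6 and the k-subinterval event selection, can be sketched in Python (the notes use MATLAB; rates, seed, and sample size here are illustrative):

```python
import random
from math import log

# Monte Carlo check of Theorem 6, plus the k-event selection scheme.
random.seed(42)
lam, mu, n = 1.0, 0.5, 10
alpha = lam * n + mu * n                 # alpha(n) = (lam + mu) n = 15

# interevent times T = -ln(u1)/alpha(n) have mean 1/alpha(n)
times = [-log(random.random()) / alpha for _ in range(200000)]
mean_T = sum(times) / len(times)
assert abs(mean_T - 1 / alpha) < 1e-3    # E[T] = 1/15

# select one of k events: the first i with u2 < (a_1 + ... + a_i)/a(n)
def choose_event(u2, rates):
    total, cum = sum(rates), 0.0
    for i, r in enumerate(rates, start=1):
        cum += r
        if u2 < cum / total:
            return i
    return len(rates)

rates = [1.0, 2.0, 3.0, 4.0]             # a_1..a_4, so a(n) = 10
assert choose_event(0.05, rates) == 1    # 0.05 < 0.1
assert choose_event(0.25, rates) == 2    # 0.1 <= 0.25 < 0.3
assert choose_event(0.55, rates) == 3    # 0.3 <= 0.55 < 0.6
assert choose_event(0.95, rates) == 4
```

Writing the selection as "first cumulative sum exceeding u2" generalizes the two-subinterval birth/death rule verbatim to any number of events.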
%MATLAB program: simple birth and death process
clear all
x0=5; b=1; d=0.5;               % initial value and parameter values
for j=1:3                       % three sample paths
  clear x t n
  n=1; t(n,j)=0; x(n)=x0;       % starting values
  while x(n)>0 & x(n)<50        % continue until the process hits zero or reaches 50
    u1=rand; u2=rand;           % two uniform random numbers
    t(n+1,j)=-log(u1)/(b*x(n)+d*x(n))+t(n,j);
    if u2<b/(b+d)
      x(n+1)=x(n)+1;            % birth
    else
      x(n+1)=x(n)-1;            % death
    end
    n=n+1;
  end
  s=stairs(t(:,j),x,'r-','LineWidth',2); hold on
end
xlabel('Time'); ylabel('Population size'); hold off

Figure 6: Three sample paths of the simple birth and death process, X(0) = 5, λ = b = 1, µ = d = 0.5.

Lecture 6: Continuous-Time Birth and Death Processes

The simple birth, simple death, simple birth and death, and simple birth, death, and immigration processes are linear in the rates: λi + ν and µi. From the forward Kolmogorov differential equations, first-order partial differential equations for the pgf and mgf can be derived. Applying the method of characteristics to these first-order partial differential equations, explicit expressions can be found for the pgf and mgf of these processes. Denote the pgf and mgf for these simple processes as

P(z, t) = Σ_{i=0}^∞ p_i(t) z^i,   M(θ, t) = Σ_{i=0}^∞ p_i(t) e^{iθ} = P(e^θ, t).
Simple Birth, Death, and Immigration Process: Let X(0) = N. The infinitesimal transition probabilities are

p_{i+j,i}(Δt) = Prob{ΔX(t) = j | X(t) = i} =
    µi Δt + o(Δt),                  j = −1,
    (ν + λi) Δt + o(Δt),            j = 1,
    1 − [ν + (λ + µ)i] Δt + o(Δt),  j = 0,
    o(Δt),                          j ≠ −1, 0, 1.

The forward Kolmogorov differential equations with X(0) = N are

dp_i/dt = [λ(i−1) + ν]p_{i−1} + µ(i+1)p_{i+1} − (λi + µi + ν)p_i,   i = 1, 2, …,
dp_0/dt = −νp_0 + µp_1,

with initial conditions p_i(0) = δ_{iN}. Applying the generating function technique, it follows that the mgf M(θ, t) is a solution of the following first-order partial differential equation:

∂M/∂t = [λ(e^θ − 1) + µ(e^{−θ} − 1)] ∂M/∂θ + ν(e^θ − 1)M,

with initial condition M(θ, 0) = e^{Nθ}. The preceding differential equation is first order because the rates are linear. The mgf is the solution of this first-order partial differential equation:

M(θ, t) = (λ − µ)^{ν/λ} [µ(e^{(λ−µ)t} − 1) − e^θ(µe^{(λ−µ)t} − λ)]^N / [(λe^{(λ−µ)t} − µ) − λ(e^{(λ−µ)t} − 1)e^θ]^{N+ν/λ}.

The moments E(X^n(t)) of the probability distribution of X(t) can be found by differentiating the mgf with respect to θ and evaluating at θ = 0:

E(X^n(t)) = ∂^n M(θ, t)/∂θ^n |_{θ=0}.

Table 2: Mean, variance, and pgf for the simple birth, simple death, and simple birth and death processes, where X(0) = N and ρ = e^{(λ−µ)t}, λ ≠ µ.

          Simple Birth                     Simple Death                    Simple Birth and Death
m(t)      Ne^{λt}                          Ne^{−µt}                        Ne^{(λ−µ)t}
σ²(t)     Ne^{2λt}(1 − e^{−λt})            Ne^{−µt}(1 − e^{−µt})           N[(λ+µ)/(λ−µ)]ρ(ρ−1)
P(z, t)   (pz)^N / (1 − z(1−p))^N          (1 − p + pz)^N                  [(µ(ρ−1) − (µρ−λ)z) / (λρ−µ − λ(ρ−1)z)]^N
          negative binomial, p = e^{−λt}   binomial b(N, p), p = e^{−µt}
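The Table 2 entries can be cross-checked against each other: the birth-and-death pgf (written here with ρ = e^{(λ−µ)t}, as reconstructed above) must satisfy P(1, t) = 1, and its derivative at z = 1 must reproduce the mean m(t) = Ne^{(λ−µ)t}. A Python sketch with illustrative parameters:

```python
from math import exp

# Simple birth and death pgf:
# P(z,t) = [ (mu(rho-1) - (mu*rho - lam) z) / (lam*rho - mu - lam(rho-1) z) ]^N
lam, mu, N, t = 2.0, 1.0, 5, 0.7   # illustrative, lam != mu
rho = exp((lam - mu) * t)

def P(z):
    num = mu * (rho - 1) - (mu * rho - lam) * z
    den = lam * rho - mu - lam * (rho - 1) * z
    return (num / den) ** N

assert abs(P(1.0) - 1.0) < 1e-12           # pgf normalization P(1,t) = 1

h = 1e-6
dP = (P(1 + h) - P(1 - h)) / (2 * h)       # central difference ~ dP/dz at z=1
assert abs(dP - N * exp((lam - mu) * t)) < 1e-4   # equals m(t) = N e^{(lam-mu)t}
```

Evaluating P(0) gives the extinction probability p_0(t) = [µ(ρ−1)/(λρ−µ)]^N as a further consistency check.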
Table 3: Mean, variance, and pgf for the simple birth and death with immigration process, where X(0) = N and ρ = e^{(λ−µ)t}, λ ≠ µ.

m(t) = (ρ[N(λ − µ) + ν] − ν)/(λ − µ)

σ²(t) = N(λ² − µ²)ρ(ρ − 1)/(λ − µ)² + ν[µ + ρ(λρ − µ − λ)]/(λ − µ)²

P(z, t) = (λ − µ)^{ν/λ} [µ(ρ − 1) − z(µρ − λ)]^N / [λρ − µ − λ(ρ − 1)z]^{N+ν/λ}
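The mean in Table 3 solves the moment equation dm/dt = (λ − µ)m + ν with m(0) = N, which follows from the forward Kolmogorov equations. A Python sketch (illustrative parameters) checks the closed form against a direct Euler integration of that ODE:

```python
from math import exp

# Birth-death-immigration mean:
# m(t) = (rho [N(lam - mu) + nu] - nu) / (lam - mu),  rho = e^{(lam-mu)t},
# which solves dm/dt = (lam - mu) m + nu, m(0) = N.
lam, mu, nu, N = 1.5, 1.0, 0.8, 4
T, steps = 2.0, 200000
dt = T / steps

m = float(N)
for _ in range(steps):            # forward Euler on dm/dt = (lam-mu) m + nu
    m += dt * ((lam - mu) * m + nu)

rho = exp((lam - mu) * T)
m_formula = (rho * (N * (lam - mu) + nu) - nu) / (lam - mu)
assert abs(m - m_formula) < 1e-3
```

Setting ν = 0 recovers m(t) = Ne^{(λ−µ)t} from Table 2, and letting t → ∞ with λ < µ gives the stationary mean ν/(µ − λ).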
More informationProbability Models. 4. What is the definition of the expectation of a discrete random variable?
1 Probability Models The list of questions below is provided in order to help you to prepare for the test and exam. It reflects only the theoretical part of the course. You should expect the questions
More informationSTAT/MATH 395 PROBABILITY II
STAT/MATH 395 PROBABILITY II Chapter 6 : Moment Functions Néhémy Lim 1 1 Department of Statistics, University of Washington, USA Winter Quarter 2016 of Common Distributions Outline 1 2 3 of Common Distributions
More informationChapter 6: Random Processes 1
Chapter 6: Random Processes 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.
More information15 Discrete Distributions
Lecture Note 6 Special Distributions (Discrete and Continuous) MIT 4.30 Spring 006 Herman Bennett 5 Discrete Distributions We have already seen the binomial distribution and the uniform distribution. 5.
More informationP 2 (t) = o(t). e λt. 1. keep on adding independent, exponentially distributed (with mean 1 λ ) inter-arrival times, or
POISSON PROCESS PP): State space consists of all non-negative integers, time is continuous. The process is time homogeneous, with independent increments and P 0 t) = λ t+ot) P t) = λ t+ot) P 2 t) = ot).
More informationHANDBOOK OF APPLICABLE MATHEMATICS
HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume II: Probability Emlyn Lloyd University oflancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester - New York - Brisbane
More informationChapter 2. Discrete Distributions
Chapter. Discrete Distributions Objectives ˆ Basic Concepts & Epectations ˆ Binomial, Poisson, Geometric, Negative Binomial, and Hypergeometric Distributions ˆ Introduction to the Maimum Likelihood Estimation
More informationApplied Stochastic Processes
STAT455/855 Fall 26 Applied Stochastic Processes Final Exam, Brief Solutions 1 (15 marks (a (7 marks For 3 j n, starting at the jth best point we condition on the rank R of the point we jump to next By
More information3 Continuous Random Variables
Jinguo Lian Math437 Notes January 15, 016 3 Continuous Random Variables Remember that discrete random variables can take only a countable number of possible values. On the other hand, a continuous random
More information(b) What is the variance of the time until the second customer arrives, starting empty, assuming that we measure time in minutes?
IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2006, Professor Whitt SOLUTIONS to Final Exam Chapters 4-7 and 10 in Ross, Tuesday, December 19, 4:10pm-7:00pm Open Book: but only
More informationContinuous Random Variables
Continuous Random Variables Recall: For discrete random variables, only a finite or countably infinite number of possible values with positive probability. Often, there is interest in random variables
More information88 CONTINUOUS MARKOV CHAINS
88 CONTINUOUS MARKOV CHAINS 3.4. birth-death. Continuous birth-death Markov chains are very similar to countable Markov chains. One new concept is explosion which means that an infinite number of state
More informationIntroduction to Queuing Networks Solutions to Problem Sheet 3
Introduction to Queuing Networks Solutions to Problem Sheet 3 1. (a) The state space is the whole numbers {, 1, 2,...}. The transition rates are q i,i+1 λ for all i and q i, for all i 1 since, when a bus
More information{σ x >t}p x. (σ x >t)=e at.
3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ
More informationECON 5350 Class Notes Review of Probability and Distribution Theory
ECON 535 Class Notes Review of Probability and Distribution Theory 1 Random Variables Definition. Let c represent an element of the sample space C of a random eperiment, c C. A random variable is a one-to-one
More information1 Types of stochastic models
1 Types of stochastic models Models so far discussed are all deterministic, meaning that, if the present state were perfectly known, it would be possible to predict exactly all future states. We have seen
More informationLecture 17: The Exponential and Some Related Distributions
Lecture 7: The Exponential and Some Related Distributions. Definition Definition: A continuous random variable X is said to have the exponential distribution with parameter if the density of X is e x if
More informationSTA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008
Name STA 624 Practice Exam 2 Applied Stochastic Processes Spring, 2008 There are five questions on this test. DO use calculators if you need them. And then a miracle occurs is not a valid answer. There
More informationSampling Distributions
Sampling Distributions In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of
More informationContinuous Random Variables and Continuous Distributions
Continuous Random Variables and Continuous Distributions Continuous Random Variables and Continuous Distributions Expectation & Variance of Continuous Random Variables ( 5.2) The Uniform Random Variable
More informationECE-517: Reinforcement Learning in Artificial Intelligence. Lecture 4: Discrete-Time Markov Chains
ECE-517: Reinforcement Learning in Artificial Intelligence Lecture 4: Discrete-Time Markov Chains September 1, 215 Dr. Itamar Arel College of Engineering Department of Electrical Engineering & Computer
More informationFigure 10.1: Recording when the event E occurs
10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable
More informationContinuous time Markov chains
Chapter 2 Continuous time Markov chains As before we assume that we have a finite or countable statespace I, but now the Markov chains X {X(t) : t } have a continuous time parameter t [, ). In some cases,
More informationIntroduction to Probability Theory for Graduate Economics Fall 2008
Introduction to Probability Theory for Graduate Economics Fall 008 Yiğit Sağlam October 10, 008 CHAPTER - RANDOM VARIABLES AND EXPECTATION 1 1 Random Variables A random variable (RV) is a real-valued function
More informationRandom variables and transform methods
Chapter Random variables and transform methods. Discrete random variables Suppose X is a random variable whose range is {,,..., } and set p k = P (X = k) for k =,,..., so that its mean and variance are
More informationLTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather
1. Markov chain LTCC. Exercises Let X 0, X 1, X 2,... be a Markov chain with state space {1, 2, 3, 4} and transition matrix 1/2 1/2 0 0 P = 0 1/2 1/3 1/6. 0 0 0 1 (a) What happens if the chain starts in
More informationSTAT 380 Continuous Time Markov Chains
STAT 380 Continuous Time Markov Chains Richard Lockhart Simon Fraser University Spring 2018 Richard Lockhart (Simon Fraser University)STAT 380 Continuous Time Markov Chains Spring 2018 1 / 35 Continuous
More informationLecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321
Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process
More informationEcon 508B: Lecture 5
Econ 508B: Lecture 5 Expectation, MGF and CGF Hongyi Liu Washington University in St. Louis July 31, 2017 Hongyi Liu (Washington University in St. Louis) Math Camp 2017 Stats July 31, 2017 1 / 23 Outline
More information1 Continuous-time chains, finite state space
Université Paris Diderot 208 Markov chains Exercises 3 Continuous-time chains, finite state space Exercise Consider a continuous-time taking values in {, 2, 3}, with generator 2 2. 2 2 0. Draw the diagramm
More informationThe story of the film so far... Mathematics for Informatics 4a. Continuous-time Markov processes. Counting processes
The story of the film so far... Mathematics for Informatics 4a José Figueroa-O Farrill Lecture 19 28 March 2012 We have been studying stochastic processes; i.e., systems whose time evolution has an element
More informationSTOCHASTIC PROCESSES Basic notions
J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving
More information1 IEOR 4701: Continuous-Time Markov Chains
Copyright c 2006 by Karl Sigman 1 IEOR 4701: Continuous-Time Markov Chains A Markov chain in discrete time, {X n : n 0}, remains in any state for exactly one unit of time before making a transition (change
More information