ECE 6960: Adv. Random Processes & Applications Lecture Notes, Fall 2010

Lecture 16

Today: (1) Markov Processes, (2) Markov Chains, (3) State Classification

Intro

Please turn in HW 6 today. Read Chapter 11, sections 1-3, for Thursday. Read the Bianchi paper for next Tuesday.

1 Markov Processes

We're going to talk about random processes which have limited memory.

Def'n: Markov Process
A discrete-time random process $X_n$ is Markov if it has the property that
$$P[X_{n+1} \mid X_n, X_{n-1}, X_{n-2}, \ldots] = P[X_{n+1} \mid X_n]$$
A continuous-time random process $X(t)$ is Markov if it has the property that
$$P[X(t_{n+1}) \mid X(t_n), X(t_{n-1}), X(t_{n-2}), \ldots] = P[X(t_{n+1}) \mid X(t_n)]$$

If at time $n$ you write a distribution for $X_{n+1}$ given all past values of $X$, the distribution is no different from the one using only the present value $X_n$. Given the present, the past does not matter. Note that how you define $X_n$ is up to you.

Examples
For each one, write $P[X(t_{n+1}) \mid X(t_n), X(t_{n-1}), \ldots]$ and $P[X(t_{n+1}) \mid X(t_n)]$:

- Brownian motion: The value of $X_{n+1}$ is equal to $X_n$ plus the random motion that occurs between time $n$ and $n+1$. This motion is i.i.d. in a Brownian motion process.
- Any independent increments process (e.g., the Poisson process); see the sketch below.
- Gambling or investments.
- Digital systems. The state is described by what is in the computer's memory; the transitions may be non-random (described by a deterministic algorithm) or random. Randomness may arrive from input signals.
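As a quick illustration (a minimal sketch of my own, not from the original notes): an independent-increments process is Markov because $X_{n+1}$ is $X_n$ plus a fresh increment that is independent of the past.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk(n_steps):
    """Independent-increments process: X[n+1] = X[n] + (i.i.d. step).
    Given X[n], the older values X[n-1], X[n-2], ... add no information,
    so the process is Markov."""
    steps = rng.choice([-1, +1], size=n_steps)  # i.i.d. +/-1 increments
    return np.concatenate(([0], np.cumsum(steps)))

print(random_walk(10))
```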

Notes:
- The value $X_n$ is also called the state. The change from $X_n$ to $X_{n+1}$ is called the state transition.
- i.i.d. random processes are also Markov.
- The r.v. $X_n$ can be either discrete-valued or continuous-valued and still have the Markov property. However, it must be discrete-valued in order to be represented by a Markov chain, which we will talk about next.

2 Markov Chains

When $X_n$ is a Markov process and:
1. the r.v.s $X_n$ are discrete-valued, and
2. the transition probabilities $P[X_{n+1} \mid X_n]$ are not a function of $n$,

we can represent it as a Markov chain. Because the event space $\Omega$ is countable, we typically represent our range $S_X$ as a set of integers. (If it isn't, we could consider $Y_i = g(X_i)$ for a function $g$ which assigns a unique integer to each element of $S_X$.)

Def'n: Transition Probability
The probability of transition from state $i$ to state $j$ is denoted $p_{i,j}$,
$$p_{i,j} \triangleq P[X_{n+1} = j \mid X_n = i]$$

2.1 Visualization

We make diagrams to show the possible progression of a Markov process. Each state is a circle, and each transition is an arrow labeled with the probability of that transition.

Example: Discrete Telegraph Wave r.p.
Let $X_n$ be a Binomial r.p. with parameter $p$, and let
$$Y_n = \prod_{i=1}^{n} (-1)^{X_i} = (-1)^{\sum_{i=1}^{n} X_i} = Y_{n-1}\,(-1)^{X_n}$$
Each time a trial is a success, the r.p. $Y_n$ switches from $+1$ to $-1$ or vice versa. See the state transition diagram drawn in Fig. 1.

Figure 1: A state transition diagram for the Discrete Telegraph Wave (states $-1$ and $+1$; each switches to the other with probability $p$ and stays put with probability $1-p$).
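A minimal simulation sketch of the telegraph wave (my addition; the value $p = 0.3$ is assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.3, 20                       # assumed success probability and length

X = (rng.random(n) < p).astype(int)  # Bernoulli(p) trials X_1, ..., X_n
Y = np.empty(n, dtype=int)
y = 1                                # start at Y_0 = +1
for i in range(n):
    y *= (-1) ** X[i]                # Y_n = Y_{n-1} (-1)^{X_n}: flip on success
    Y[i] = y
print(Y)                             # runs of +1/-1, switching on each success
```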

Example: (Miller & Childers) Collect Them All
This is the fast food chain promotion with a series of toys for kids, who are told to "Collect them all!" Let there be four toys, and let $X_n$ be the number out of four that you've collected after your $n$th visit to the chain. How many states are there? What are the transition probabilities?

Figure 2: A state transition diagram for the Collect Them All! random process (states 0 through 4; from state $k$ you stay with probability $k/4$ and advance with probability $1 - k/4$).

2.2 Single Step Transition Probability Matrices

This is Ross Section 4.1. The transition probabilities satisfy:
1. $p_{i,j} \geq 0$
2. $\sum_{j} p_{i,j} = 1$

Note: $\sum_{i} p_{i,j} \neq 1$ in general! Don't make this mistake. It is the probability of leaving state $i$ for some state $j$ that is equal to 1.

Def'n: State Transition Probability Matrix
The state transition probability matrix $\mathbf{P}$ of an $N$-state Markov chain is given by:
$$\mathbf{P} = \begin{bmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,N} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ p_{N,1} & p_{N,2} & \cdots & p_{N,N} \end{bmatrix}$$

Note: The rows sum to one; the columns may not.
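A sanity-check sketch (my addition) for the two conditions above; the 2x2 matrix is an arbitrary assumed example:

```python
import numpy as np

def is_valid_tpm(P, tol=1e-9):
    """A TPM must have p_ij >= 0 and every row summing to 1."""
    P = np.asarray(P, dtype=float)
    return bool((P >= 0).all()) and np.allclose(P.sum(axis=1), 1.0, atol=tol)

P = [[0.9, 0.1],
     [0.4, 0.6]]                     # assumed example chain
print(is_valid_tpm(P))               # True: each row sums to 1
print(np.sum(P, axis=0))             # [1.3, 0.7]: columns need not sum to 1
```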

There may be $N$ states, but they may not have values $1, 2, 3, \ldots, N$. Thus if we don't have such values, we may create an intermediate r.v. $W_n$ which is equal to the rank of the value of $X_n$, or $W_n = \mathrm{rank}\,X_n$, for some arbitrary ranking system.

Example: Discrete Telegraph Wave
What is the TPM of the Discrete Telegraph Wave r.p.? Use $W_n = 1$ for $Y_n = -1$, and $W_n = 2$ for $Y_n = +1$:
$$\mathbf{P} = \begin{bmatrix} p_{-1,-1} & p_{-1,+1} \\ p_{+1,-1} & p_{+1,+1} \end{bmatrix} = \begin{bmatrix} 1-p & p \\ p & 1-p \end{bmatrix}$$

Example: Collect Them All
What is the TPM of the Collect Them All example? Use $W_n = X_n + 1$:
$$\mathbf{P} = \begin{bmatrix} p_{1,1} & p_{1,2} & p_{1,3} & p_{1,4} & p_{1,5} \\ p_{2,1} & p_{2,2} & p_{2,3} & p_{2,4} & p_{2,5} \\ p_{3,1} & p_{3,2} & p_{3,3} & p_{3,4} & p_{3,5} \\ p_{4,1} & p_{4,2} & p_{4,3} & p_{4,4} & p_{4,5} \\ p_{5,1} & p_{5,2} & p_{5,3} & p_{5,4} & p_{5,5} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0.25 & 0.75 & 0 & 0 \\ 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0 & 0.75 & 0.25 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

Example: Gambling $50
You start at a casino with 5 $10 chips. At each time $n$ you bet one chip. You win with probability 0.45, and lose with probability 0.55. If you run out, you will stop betting. Also, you decide beforehand to stop if you double your money. What is the TPM for this random process?
$$\mathbf{P} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}$$
(The 11 states are the number of chips held, 0 through 10; states 0 and 10 are absorbing.)
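Rather than typing the 11x11 matrix by hand, one can build it from the parameters; a minimal sketch (my addition):

```python
import numpy as np

def gamblers_ruin_tpm(n_chips=10, p_win=0.45):
    """TPM over states 0..n_chips = number of chips held.
    States 0 (broke) and n_chips (money doubled) are absorbing."""
    P = np.zeros((n_chips + 1, n_chips + 1))
    P[0, 0] = 1.0
    P[n_chips, n_chips] = 1.0
    for i in range(1, n_chips):
        P[i, i - 1] = 1 - p_win      # lose a chip w.p. 0.55
        P[i, i + 1] = p_win          # win a chip w.p. 0.45
    return P

P = gamblers_ruin_tpm()
assert np.allclose(P.sum(axis=1), 1.0)   # every row is a valid distribution
```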

Example: Waiting in a finite queue
A mail server (bank) can deliver one email (customer) each minute. But $X_n$ more emails (customers) arrive in minute $n$, where $X_n$ is (i.i.d.) Poisson with parameter $\lambda = 1$ per minute. Emails (people) which can't be handled immediately are queued. But if the number in the queue, $Y_n$, is equal to 2, the queue is full, and emails will be dropped (customers won't stay and wait). Thus the number of emails in the queue (people in line) is given by
$$Y_{n+1} = \min(2, \max(0, Y_n - 1) + X_n)$$
What is $P[X_n = k]$? With $\lambda t = 1$,
$$P[X_n = k] = \frac{(\lambda t)^k e^{-\lambda t}}{k!} = \frac{1}{e\,k!}$$
So $P[X_n = 0] = 1/e \approx 0.37$, $P[X_n = 1] = 1/e \approx 0.37$, and $P[X_n = 2] = 1/(2e) \approx 0.18$. Note that because the queue saturates at 2, the last column of the TPM aggregates all arrival counts that would overfill the queue; for example, $p_{0,2} = P[X_n \geq 2] = 1 - 2/e \approx 0.26$:
$$\mathbf{P} = \begin{bmatrix} p_{0,0} & p_{0,1} & p_{0,2} \\ p_{1,0} & p_{1,1} & p_{1,2} \\ p_{2,0} & p_{2,1} & p_{2,2} \end{bmatrix} = \begin{bmatrix} 0.37 & 0.37 & 0.26 \\ 0.37 & 0.37 & 0.26 \\ 0 & 0.37 & 0.63 \end{bmatrix}$$
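A sketch (my addition) that derives this TPM numerically from the queue recursion and the Poisson pmf, instead of filling the entries in by hand:

```python
import numpy as np
from math import exp, factorial

lam, K = 1.0, 2                    # arrival rate per minute; queue capacity

def poisson_pmf(k):
    return lam**k * exp(-lam) / factorial(k)

P = np.zeros((K + 1, K + 1))
for y in range(K + 1):             # current queue length Y_n
    for x in range(20):            # arrivals X_n (tail truncated, negligible)
        y_next = min(K, max(0, y - 1) + x)   # Y_{n+1} from the recursion
        P[y, y_next] += poisson_pmf(x)

print(np.round(P, 2))
# [[0.37 0.37 0.26]
#  [0.37 0.37 0.26]
#  [0.   0.37 0.63]]
```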

Example: Chute and Ladder
See Figure 3. You roll a die (a fair die) and move forward that number of squares. Then, if you land on the top of a chute, you have to fall down to a lower square; if you land on the bottom of a ladder, you climb up to the higher square. The object is to land on Winner. You don't need to get there with an exact roll. This is a Markov chain: your future square depends only on your present square and your roll. What are the states? They are
$$S_X = \{1, 2, 4, 5, 7\}$$
Since you'll never stay on 3 or 6, we don't need to include them as states. (We could, but there would just be 0 probability of landing on them, so why bother.)

Figure 3: Playing board for the game Chute and Ladder (squares 1 = Start through 7 = Winner).

This is the transition probability matrix:
$$\mathbf{P} = \begin{bmatrix} 0 & 1/6 & 2/6 & 2/6 & 1/6 \\ 0 & 0 & 2/6 & 2/6 & 2/6 \\ 0 & 0 & 1/6 & 1/6 & 4/6 \\ 0 & 0 & 1/6 & 0 & 5/6 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

Example: Countably Infinite Markov Chain
We can also have a countably infinite number of states. It is a discrete-valued r.p., after all; we might still have an infinite number of states. For example, if we didn't ever stop gambling at a fixed upper number, or if we allowed ourselves to get into arbitrary debt. Such an example, where we gamble $1 at each time, is shown in Figure 4.

Figure 4: Example of a Markov chain with a countably infinite state space (states $\ldots, -1, 0, 1, 2, \ldots$).

Example: Random Backoff
In medium access control (MAC) protocols for packet radio channels, a sender may transmit but have its packet collide with a packet from another sender who sent at the same time. If a collision occurs (which happens with probability $p$), each will wait a random back-off time prior to retransmitting. This random back-off time is chosen to be uniform in $\{1, \ldots, W\}$ for some maximum wait time $W$ (ignoring the possible increase in $W$ after multiple collisions). Figure 5 shows a transition diagram.

Figure 5: Markov chain of the waiting time in a random back-off MAC protocol (states $0, 1, 2, \ldots, W-2, W-1$).

A TPM for this random process is
$$\mathbf{P} = \begin{bmatrix} 1-p+\frac{p}{W} & \frac{p}{W} & \frac{p}{W} & \cdots & \frac{p}{W} \\ 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}$$
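A parametric builder for this TPM (my sketch; the reading that a drawn wait of $w$ slots lands in state $w-1$, and the values $p = 0.2$, $W = 5$, are assumptions for illustration):

```python
import numpy as np

def backoff_tpm(p=0.2, W=5):
    """States 0..W-1, where state k means 'k slots left before transmitting'.
    From state 0: transmit; success w.p. 1-p returns to state 0, and a
    collision w.p. p draws a wait uniformly from {1,...,W}.
    Waiting states count down deterministically."""
    P = np.zeros((W, W))
    P[0, 0] = 1 - p + p / W          # success, or collision with a wait of 1
    P[0, 1:] = p / W                 # collision with a wait of 2..W
    for k in range(1, W):
        P[k, k - 1] = 1.0            # countdown: k -> k-1
    return P

print(np.round(backoff_tpm(), 2))
```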

2.3 Multi-step Markov Chain Dynamics

2.3.1 Initialization

We might not know exactly which state the Markov chain will start in. For example, for the bank queue example, we might have people lined up when the bank opens. Let's say we've measured over many days and found that at time zero, the number of people is uniformly distributed, i.e.,
$$P[X_0 = k] = \begin{cases} 1/3, & k = 0, 1, 2 \\ 0, & \text{o.w.} \end{cases}$$
We represent this kind of information in a vector:
$$\mathbf{p}(0) = [P[X_0 = 0], P[X_0 = 1], P[X_0 = 2]]$$
In general,
$$\mathbf{p}(n) = [P[X_n = 0], P[X_n = 1], P[X_n = 2]]$$
The only requirement is that the elements of $\mathbf{p}(n)$ sum to 1 for any $n$.

2.3.2 Multiple-Step Transition Matrix

This is in Ross Section 4.2.

Def'n: n-step Transition Matrix
The $n$-step transition probability matrix $\mathbf{P}(n)$ of Markov chain $X_n$ has $(i,j)$th element
$$p_{i,j}(n) = P[X_{n+m} = j \mid X_m = i]$$

Theorem: Chapman-Kolmogorov equations
For a Markov chain, the $n$-step transition matrix satisfies
$$\mathbf{P}(n+m) = \mathbf{P}(n)\mathbf{P}(m)$$
Proof: Consider the $(i,j)$ element of $\mathbf{P}(n+m)$, $p_{i,j}(n+m)$:
$$p_{i,j}(n+m) = P[X_{n+m} = j \mid X_0 = i] = \sum_k P[X_{n+m} = j, X_n = k \mid X_0 = i]$$
Why is this step true? Continuing,
$$\begin{aligned}
p_{i,j}(n+m) &= \sum_k P[X_{n+m} = j \mid X_n = k, X_0 = i]\, P[X_n = k \mid X_0 = i] \\
&= \sum_k P[X_{n+m} = j \mid X_n = k]\, P[X_n = k \mid X_0 = i] \\
&= \sum_k p_{k,j}(m)\, p_{i,k}(n) \\
&= \sum_k p_{i,k}(n)\, p_{k,j}(m)
\end{aligned}$$
This latter form shows the matrix multiplication. When you have a sum of matrix elements, you should be able to recognize when that expression can be written as a matrix multiplication. Here, the dummy index is on the inside of the subscripts. This is how we can see that $p_{i,j}(n+m)$ is equal to the sum of the products of row $i$ of $\mathbf{P}(n)$ and column $j$ of $\mathbf{P}(m)$. Thus
$$\mathbf{P}(n+m) = \mathbf{P}(n)\mathbf{P}(m)$$

This means that to find the two-step transition matrix, you multiply (matrix multiply) $\mathbf{P}$ and $\mathbf{P}$ together. In general, the $n$-step transition matrix is
$$\mathbf{P}(n) = [\mathbf{P}(1)]^n$$
The state probabilities at time $n$ can be found as
$$\mathbf{p}(n)^T = \mathbf{p}(0)^T [\mathbf{P}(1)]^n$$
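A numerical check of these two formulas (my sketch), using the queue TPM and the uniform initial distribution from Section 2.3.1:

```python
import numpy as np

P = np.array([[0.37, 0.37, 0.26],
              [0.37, 0.37, 0.26],
              [0.00, 0.37, 0.63]])
p0 = np.array([1/3, 1/3, 1/3])           # uniform p(0) from Sec. 2.3.1

# Chapman-Kolmogorov: P(1+1) = P(1) P(1), i.e., the two-step TPM is P squared
assert np.allclose(np.linalg.matrix_power(P, 2), P @ P)

n = 10
pn = p0 @ np.linalg.matrix_power(P, n)   # p(n)^T = p(0)^T [P(1)]^n
print(np.round(pn, 3))                   # state probabilities at time n
```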

3 Markov Chain State Classification

This is Leon-Garcia 11.3. There are quite a few definitions and terms which accompany Markov chains.

Def'n: Accessible
A state $j$ is accessible from state $i$ if $p_{i,j}(n) > 0$ for some $n \geq 0$.

Notes:
- A state always communicates with itself, since $p_{i,i}(0) = 1 > 0$.
- "Accessible" also means that there is a positive probability that, starting at state $i$, state $j$ will ever be entered.

Def'n: Communicate
States $i$ and $j$ communicate if:
- state $j$ is accessible from state $i$, and
- state $i$ is accessible from state $j$.

Notes:
- We write $i \leftrightarrow j$ if states $i$ and $j$ communicate. Of course it is a symmetric relation.
- "Communicates with" is also transitive, i.e., if $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$.

Def'n: Class
States which communicate with each other are in the same class.

Def'n: Irreducible
If all states in a Markov chain are in one class, then the chain is irreducible.

Example: Ross, 4.12
Consider the 4-state Markov chain with states $\{0, 1, 2, 3\}$ and TPM
$$\mathbf{P} = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0.5 & 0.5 & 0 & 0 \\ 0.25 & 0.25 & 0.25 & 0.25 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Which states communicate? What class(es) exist? Is this MC irreducible?

Def'n: Absorbing
A state is absorbing if no other state is accessible from it.
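A sketch (my addition) that answers the Ross 4.12 questions mechanically: compute boolean reachability, intersect it with its transpose to get the communication relation, and group mutually communicating states into classes.

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.50, 0.50, 0.00, 0.00],
              [0.25, 0.25, 0.25, 0.25],
              [0.00, 0.00, 0.00, 1.00]])

N = len(P)
reach = (P > 0) | np.eye(N, dtype=bool)   # accessible in 0 or 1 steps
for _ in range(N):                        # boolean transitive closure
    reach = reach | (reach @ reach)

comm = reach & reach.T                    # i <-> j: accessible both ways
classes = {tuple(int(s) for s in np.flatnonzero(row)) for row in comm}
print(sorted(classes))                    # [(0, 1), (2,), (3,)]: three classes,
                                          # so this chain is not irreducible
```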