Quantitative Evaluation of Embedded Systems
Solution 2: Discrete Time Markov Chains (DTMC)

Prof. Dr. Anne Remke
Design and Analysis of Communication Systems
University of Twente
2016/2017


2.1 Classification of DTMC states

Let the transition probability matrices of the following DTMCs be given:

(a) $P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0 & 0.5 & 0.5 \\ 0 & 0 & 1 \end{pmatrix}$

(b) $Q = \begin{pmatrix} 0.2 & 0.8 & 0 & 0 \\ 0.6 & 0.4 & 0 & 0 \\ 0.2 & 0.3 & 0.5 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

a) Draw the corresponding state-transition graphs.

b) Classify the states of these DTMCs, i.e., identify the communicating classes and indicate for each class whether it is (a)periodic, positive/null-recurrent, transient, absorbing, etc.

Solutions:

a) The state-transition graphs for P and Q can be found in fig. 1 and fig. 2, respectively.

Figure 1: State transition graph for P

b) Classification of states:

(a) For P: all states are aperiodic; states 1 and 2 are transient, whereas state 3 is recurrent and absorbing.

(b) For Q: all states are aperiodic; states 1, 2 and 4 are recurrent (positive recurrent, since the chain is finite), state 3 is transient, and state 4 is absorbing.
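These classifications can be cross-checked mechanically: a communicating class of a finite DTMC is recurrent exactly when it is closed, and transient otherwise. A minimal Python sketch of this check (assuming numpy is available; the function names are illustrative, not part of the exercise):

```python
import numpy as np

def communicating_classes(P):
    """Group states into communicating classes via mutual reachability."""
    n = len(P)
    reach = (np.eye(n) + P) > 0              # reachability in 0 or 1 steps
    for _ in range(n):                       # transitive closure by repeated squaring
        reach = (reach.astype(int) @ reach.astype(int)) > 0
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(j for j in range(n) if reach[i, j] and reach[j, i])
            seen.update(cls)
            classes.append(cls)
    return classes

def is_recurrent(P, cls):
    """A finite communicating class is recurrent iff it is closed."""
    return all(P[i, j] == 0 for i in cls for j in range(len(P)) if j not in cls)

P = np.array([[0.5, 0.4, 0.1],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])

for cls in communicating_classes(P):
    label = "recurrent" if is_recurrent(P, cls) else "transient"
    print([s + 1 for s in cls], label)       # 1-based state labels as in the exercise
```

For P this prints two singleton transient classes {1} and {2} (state 2 cannot reach state 1, so the two states do not communicate) and the recurrent absorbing class {3}, matching the classification above.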

Figure 2: State transition graph for Q

2.2 Craps

The game Craps is based on betting on the outcome of the roll of two dice. The outcome of the first roll, the "come-out" roll, determines whether any further rolls are needed. On an outcome of 7 or 11 the game is over and the player wins. The outcomes 2, 3 and 12, however, are "craps": the player loses. On any other outcome the dice are rolled again, but the outcome of the come-out roll is remembered (the so-called "point"). If the next roll yields 7 or the point, the game is over: on 7 the player loses, on the point the player wins. In any other case the dice are rolled until eventually either 7 or the point is obtained.

a) Draw the state-transition graph for Craps.

b) What is the probability to win?

Solutions:

a) State transition graph (cf. fig. 3):

Figure 3: State transition graph for craps game

b) The player wins on the come-out roll with probability $P(7) + P(11) = \frac{6}{36} + \frac{2}{36} = \frac{2}{9}$. Otherwise a point $p$ is established with probability $P(p)$, and it is won if, after $n$ further rolls showing neither $p$ nor 7, the point is rolled again. Summing over $n$ and over the points 4, 5 and 6 (the points 10, 9 and 8 contribute the same by symmetry, hence the factor 2):

$$P\{\text{win}\} = \frac{2}{9} + 2 \sum_{n=0}^{\infty} \left( \left(\tfrac{1}{12}\right)^2 \left(\tfrac{3}{4}\right)^n + \left(\tfrac{1}{9}\right)^2 \left(\tfrac{13}{18}\right)^n + \left(\tfrac{5}{36}\right)^2 \left(\tfrac{25}{36}\right)^n \right) = \frac{2}{9} + 0.27 \approx 0.4929$$
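Each inner sum is a geometric series, $\sum_{n} r^n = \frac{1}{1-r}$, so the win probability can also be computed exactly. A minimal Python sketch using rational arithmetic (the helper name p_sum is illustrative, not from the exercise):

```python
from fractions import Fraction

def p_sum(s):
    """Probability that two fair dice sum to s (2 <= s <= 12)."""
    return Fraction(6 - abs(s - 7), 36)

# immediate win on the come-out roll: 7 or 11
win = p_sum(7) + p_sum(11)

# a point p is won if p appears again before a 7; summing the geometric
# series gives p_sum(p)^2 / (p_sum(p) + p_sum(7)) per point
for point in (4, 5, 6, 8, 9, 10):
    win += p_sum(point) ** 2 / (p_sum(point) + p_sum(7))

print(win, float(win))  # 244/495 0.4929...
```

The exact value 244/495 agrees with the 0.4929 obtained above.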

2.3 Absent-minded post-doc

An absent-minded post-doc has 2 umbrellas that he uses when commuting from home to office and back. If it rains and an umbrella is available at his location, he takes it. If it is not raining, he always forgets to take an umbrella. Suppose that it rains with probability 0.6 each time he commutes, independently of all other commutes. Our goal is to find the fraction of days he gets wet during a commute.

a) Draw the state-transition diagram of the DTMC modelling the umbrella problem.

b) Derive the corresponding transition probability matrix.

c) Compute the steady-state distribution.

d) Using the previous results, compute the fraction of days he gets wet during a commute.

Solutions:

a) State $i$ represents the number of umbrellas at the current location, regardless of what the current location actually is (office or home).

Figure 4: State transition graph for absent-minded post-doc

b) $P = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0.4 & 0.6 \\ 0.4 & 0.6 & 0 \end{pmatrix}$

c) The system is irreducible, aperiodic and positive recurrent, so the steady-state distribution solves $\pi = \pi P$:

$\pi_0 = 0.4\,\pi_2 \implies \pi_2 = 2.5\,\pi_0$

$\pi_1 = 0.4\,\pi_1 + 0.6\,\pi_2 \implies \pi_1 = \pi_2 = 2.5\,\pi_0$

$\pi_2 = \pi_0 + 0.6\,\pi_1$

With the normalization condition $\pi_0 + \pi_1 + \pi_2 = 1$:

$\pi_0 + 2.5\,\pi_0 + 2.5\,\pi_0 = 1 \implies 6\,\pi_0 = 1 \implies \pi_0 = \tfrac{1}{6}, \quad \pi_1 = \pi_2 = \tfrac{5}{12}$

d) He gets wet if he starts from a location without umbrellas and it is raining:

$\Pr\{\text{get wet}\} = 0.6\,\pi_0 = 0.6 \cdot \tfrac{1}{6} = \tfrac{1}{10}$
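The steady-state vector from c) can be cross-checked numerically by replacing one (redundant) balance equation with the normalization condition; a minimal sketch, assuming numpy is available:

```python
import numpy as np

# transition matrix of the umbrella DTMC (state = umbrellas at current location)
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.4, 0.6],
              [0.4, 0.6, 0.0]])

# solve pi = pi P together with sum(pi) = 1:
# (P^T - I) pi = 0, with the last row replaced by the normalization condition
A = P.T - np.eye(3)
A[-1, :] = 1.0
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

print(pi)           # [1/6, 5/12, 5/12]
print(0.6 * pi[0])  # fraction of wet commutes: 0.1
```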

2.4 Multiprocessor interference in shared memory systems

Consider the following multiprocessor system: the memory modules can be accessed independently and in parallel. Each memory access has the same fixed length. Every module can handle at most one request per time unit. Each processor has at most one pending memory request. After a memory request has been served, the processor generates a new request to memory module $i$ with probability $q_i$ ($q_i$ is independent of the number of the processor; this is the so-called uniform memory access assumption). Set $m = n = 2$, i.e., two processors and two memory modules. Compute with DTMCs

a) the stationary probability that there is no memory access conflict,

b) the stationary probability that there is a conflict in memory module 1,

c) the mean number of served memory requests per time unit.

Solutions:

The state-transition graph for the shared memory system can be found in fig. 5. State 2 is the conflict-free state (one pending request per module); in state 1 both requests address module 2 (a conflict in module 2), and in state 3 both requests address module 1 (a conflict in module 1).

Figure 5: State transition graph for shared memory system

The global balance equations, together with the normalization condition, read:

$\pi_1 q_2 - \pi_1 + \pi_2 q_2^2 = 0$

$\pi_1 q_1 + 2\pi_2 q_1 q_2 - \pi_2 + \pi_3 q_2 = 0$

$\pi_2 q_1^2 + \pi_3 q_1 - \pi_3 = 0$

$\pi_1 + \pi_2 + \pi_3 = 1$

Solving the first and third balance equations for $\pi_1$ and $\pi_3$ gives

$\pi_1 = \frac{q_2^2}{1 - q_2}\,\pi_2 \qquad \text{and} \qquad \pi_3 = \frac{q_1^2}{1 - q_1}\,\pi_2,$

and substituting both into the normalization condition yields

$\pi_2 \left( \frac{q_2^2}{1 - q_2} + 1 + \frac{q_1^2}{1 - q_1} \right) = 1.$

a) The stationary probability that there is no memory access conflict is $\pi_2$.

b) The stationary probability that there is a conflict in memory module 1 is $\pi_3 = \frac{q_1^2}{1 - q_1}\,\pi_2$, since in state 3 both pending requests address module 1.

c) In a conflict state only one of the two requests is served per time unit, while in the conflict-free state both are, so the mean number of served memory requests per time unit is

$\pi_1 \cdot 1 + \pi_2 \cdot 2 + \pi_3 \cdot 1.$
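For concrete numbers the chain can be built and solved for a given access probability; the sketch below assumes the symmetric case $q_1 = q_2 = 0.5$ purely as an example value (the exercise leaves $q_1$ and $q_2$ symbolic), and numpy for the linear solve:

```python
import numpy as np

q1 = 0.5        # probability a new request targets module 1 (example value)
q2 = 1.0 - q1

# states: 0 = conflict in module 2, 1 = no conflict, 2 = conflict in module 1
P = np.array([
    [q2,     q1,            0.0  ],  # one request served; it re-chooses a module
    [q2**2,  2.0 * q1 * q2, q1**2],  # both served; both re-choose independently
    [0.0,    q2,            q1   ],  # one request served; it re-chooses a module
])

A = P.T - np.eye(3)
A[-1, :] = 1.0                   # replace one equation by the normalization
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

print("no conflict:", pi[1])                      # a)
print("conflict in module 1:", pi[2])             # b)
print("throughput:", pi[0] + 2 * pi[1] + pi[2])   # c) served requests per time unit
```

For $q_1 = q_2 = 0.5$ this yields $\pi = (0.25, 0.5, 0.25)$, i.e., a conflict-free probability of 0.5 and a mean of 1.5 served requests per time unit.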