
Markov Model

A model representing the different resident states of a system and the transitions between those states (applicable to repairable as well as non-repairable systems).

System behavior that varies randomly with time and space is known as a stochastic process. A stochastic process that meets the following requirements is a Markov process; otherwise it is non-Markovian.

Requirements:
1. system states must be identifiable
2. lack of memory: future states are independent of all past states, except the present state
3. stationary: the probability of transition between states is the same at all times

Requirements 2 & 3 are met by systems whose probability distributions are characterized by a constant hazard rate.

Markov approach:
- discrete (time or space): Discrete Markov Chain
- continuous (time): Continuous Markov Process
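Why requirements 2 & 3 follow from a constant hazard rate (a standard one-line check, not spelled out in the notes): if the time T to leave a state has constant hazard rate λ, then P[T > t] = e^(-λt), so

  P[T ≤ t + Δt | T > t] = 1 - P[T > t + Δt] / P[T > t] = 1 - e^(-λΔt)

The probability of leaving during any interval of length Δt is the same no matter how long the system has already resided in the state: memoryless (requirement 2) and stationary (requirement 3).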

Discrete Markov Chain - 2-State System

Consider a two-state system in which, during each time interval, the system remains in State 2 with probability 3/4 and leaves it with probability 1/4 (from State 1 the corresponding probabilities are 1/2 and 1/2, per the transition matrix given below).

P[remaining in State 2] + P[leaving State 2] = 3/4 + 1/4 = 1

The behavior of the system (the probability of residing in a state after a number of time intervals) can be illustrated by a tree diagram:
- probability of any branch: multiply the probability of each step in the branch
- probability of residing in a state: sum of the branch probabilities that lead to that state
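A minimal sketch of the tree-diagram calculation in Python (the function name and structure are illustrative, not from the notes): it enumerates every branch of the tree, multiplies the step probabilities along each branch, and sums the branches that end in each state.

  from itertools import product

  # Transition probabilities of the 2-state system:
  # P[i][j] = probability of moving from state i to state j in one interval
  P = {1: {1: 1/2, 2: 1/2},
       2: {1: 1/4, 2: 3/4}}

  def state_probs(start, n):
      """State probabilities after n intervals by explicit tree
      enumeration (impractical for large systems or large n)."""
      probs = {1: 0.0, 2: 0.0}
      for path in product([1, 2], repeat=n):  # one branch per path
          p, state = 1.0, start
          for nxt in path:                    # multiply step probabilities
              p *= P[state][nxt]
              state = nxt
          probs[state] += p                   # sum branches ending here
      return probs

  print(state_probs(start=1, n=2))  # {1: 0.375, 2: 0.625}, matching the table below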

State probabilities (time dependent) of the 2-state system:

  Time interval   State 1 probability                  State 2 probability
  0               1.0                                  0.0
  1               1/2 = 0.500                          1/2 = 0.500
  2               (1/2)(1/2) + (1/2)(1/4) = 0.375      (1/2)(1/2) + (1/2)(3/4) = 0.625
  3               0.344                                0.656
  4               0.336                                0.664
  5               0.334                                0.666

[Figure: state probabilities plotted against the number of time intervals (0 to 6), for the system starting in State 1 and starting in State 2; all curves settle at the same limiting values.]

As the number of time intervals increases, each state probability tends to a constant (limiting) value, the limiting state probability. The transient behavior (time-dependent state probabilities) depends on the initial condition; the limiting state probabilities of an ergodic system (or process) are independent of the initial condition.
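The table can be reproduced by propagating the probability vector one interval at a time, which scales far better than the tree; a short sketch with NumPy:

  import numpy as np

  P = np.array([[1/2, 1/2],
                [1/4, 3/4]])     # stochastic transitional probability matrix

  p = np.array([1.0, 0.0])       # start in State 1
  for n in range(1, 6):
      p = p @ P                  # advance one time interval
      print(n, p.round(3))       # 1 [0.5 0.5], 2 [0.375 0.625], ..., 5 [0.334 0.666]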

Ergodic System: every state of the system can be reached from every other state, directly or through intermediate states. Systems with absorbing states are not ergodic.

Absorbing State: a state that, once entered, cannot be left, e.g. a system failure state in a mission-oriented system.

Evaluation procedure using a Markov model:
- develop the Markov model for the component (or system)
- evaluate the state probabilities (time-dependent or limiting) using:
  o tree diagram: impractical for large systems or a large number of time intervals
  o stochastic transitional probability matrix
  o other techniques for the continuous Markov process, discussed later

Stochastic Transitional Probability Matrix

Square matrix (order = number of states)
Rows: from states
Columns: to states
Element Pij = probability of transition from state i to state j

                to nodes
              1     2    ..    n

        1  [ P11   P12  ..   P1n ]
  from  2  [ P21   P22  ..   P2n ]
  nodes :  [  :     :          : ]
        n  [ Pn1   Pn2  ..   Pnn ]

The sum of the probabilities in each row must be unity.

Transient behavior: the state probabilities after n intervals are given by
  P(n) = P(0) . P^n
where P(0) is the initial probability vector (the state probabilities at the initial condition).

Limiting state probabilities: multiply P by itself repeatedly until the result does not change with further multiplication, or solve
  α P = α
where α is the limiting probability vector.
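Both calculations in a few lines of NumPy (a sketch; the tolerance and loop are my choices, not from the notes):

  import numpy as np

  P = np.array([[1/2, 1/2],
                [1/4, 3/4]])

  # Transient behavior: P(n) = P(0) . P^n
  P0 = np.array([1.0, 0.0])                    # start in State 1
  print(P0 @ np.linalg.matrix_power(P, 5))     # [0.334 0.666] (rounded)

  # Limiting state probabilities by repeated multiplication
  Pk = P.copy()
  while not np.allclose(Pk @ P, Pk, atol=1e-12):
      Pk = Pk @ P
  print(Pk[0])                                 # every row tends to [1/3 2/3]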

Example: the 2-state system above.

Stochastic transitional probability matrix,

  P = [ 1/2  1/2 ]
      [ 1/4  3/4 ]

If the system starts in State 1, the initial probability vector is P(0) = [1  0].

State probabilities after 2 intervals,

  P(2) = P(0) . P^2 = [1  0] . [ 1/2  1/2 ]^2 = [ 0.375  0.625 ]
                               [ 1/4  3/4 ]

Limiting state probabilities: α = [P1  P2]
  P1 = limiting probability of being in State 1
  P2 = limiting probability of being in State 2

Using the equation α P = α,

  [P1  P2] . [ 1/2  1/2 ] = [P1  P2]    (1)
             [ 1/4  3/4 ]

  P1 + P2 = 1                           (2)

Solving (1) & (2): P1 = 0.333 and P2 = 0.667.
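Numerically, α P = α together with P1 + P2 = 1 is a small linear system; a sketch of the standard way to solve it (the formulation is mine, not from the notes):

  import numpy as np

  P = np.array([[1/2, 1/2],
                [1/4, 3/4]])
  n = P.shape[0]

  # alpha P = alpha  <=>  (P^T - I) alpha^T = 0; replace one of these
  # equations with the normalization sum(alpha) = 1 for a unique solution.
  A = P.T - np.eye(n)
  A[-1, :] = 1.0                 # normalization row
  b = np.zeros(n)
  b[-1] = 1.0
  print(np.linalg.solve(A, b))   # [0.33333333 0.66666667]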

Absorbing States

System states that, once entered, cannot be left until the system starts a new mission, e.g. failure states in mission-oriented systems.

Need to evaluate: how many time intervals does the system operate, on average, before entering the absorbing state?

Expected number of time intervals,
  N = [ I - Q ]^-1
where
  I = identity matrix
  Q = truncated matrix created by deleting the row(s) and column(s) associated with the absorbing state(s)

[State-transition diagram: a 3-state system in which State 3 is an absorbing state]

Example: the 3-state system above, with State 3 absorbing.

Stochastic transitional probability matrix,

           1    2    3
      1 [ 3/4  1/4   0 ]
  P = 2 [  0    0    1 ]
      3 [  0    0    1 ]

Truncated matrix (deleting Row 3 & Column 3 from P),

  Q = [ 3/4  1/4 ]
      [  0    0  ]

Average number of time intervals spent in each state before entering the absorbing state,

  N = [ I - Q ]^-1 = { [ 1  0 ] - [ 3/4  1/4 ] }^-1 = [ 1/4  -1/4 ]^-1 = [ 4  1 ]
                      [ 0  1 ]   [  0    0  ]         [  0     1  ]      [ 0  1 ]

i.e. the average number of time intervals spent in State 1, given that the system starts in State 1, is N11 = 4.
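A quick NumPy check of the matrix algebra (a sketch; the slicing assumes the absorbing state is listed last, as in the matrix above):

  import numpy as np

  P = np.array([[3/4, 1/4, 0],
                [0,   0,   1],
                [0,   0,   1]])     # State 3 is absorbing (P33 = 1)

  Q = P[:2, :2]                     # drop the absorbing row and column
  N = np.linalg.inv(np.eye(2) - Q)  # N = [I - Q]^-1
  print(N)                          # [[4. 1.]
                                    #  [0. 1.]]
  # N[0, 0] = 4 intervals in State 1 starting from State 1; the row sum
  # N[0, :].sum() = 5 is the total expected time to absorption from State 1.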