Matrix analytic methods. Lecture 1: Structured Markov chains and their stationary distribution

1/29 Matrix analytic methods. Lecture 1: Structured Markov chains and their stationary distribution. Sophie Hautphenne and David Stanford (with thanks to Guy Latouche, U. Brussels, and Peter Taylor, U. Melbourne). Université Libre de Bruxelles / University of Western Ontario. CIRM, Luminy, April 28, 2010.

2/29 Matrix-Analytic Methods. In the 1970s and 1980s, Marcel Neuts proposed a class of techniques for analysing Markov chains with block-structured transition matrices that have become known as matrix-analytic methods. More recently, other classes of models have been analysed using similar techniques, such as fluid queues (see Lecture 2) and branching processes (see Lecture 3). The interaction of mathematical analysis and physical insight has played an important role in the development of results in this area.

3/29 Quasi-Birth-and-Death processes. A QBD process is a two-dimensional Markov chain {(X_k, ϕ_k), k ∈ N} where X_k ∈ N is called the level and ϕ_k ∈ {1, 2, ..., m} is called the phase. The only possible transitions from state (n, i) are to states (n+1, j) (one level up), (n, j) (the same level), and (n−1, j) (one level down).

4/29 Quasi-Birth-and-Death processes (figure).

5/29 Example: tandem queues. (Figure: arrivals at rate λ, first server with rate µ_1, second server with rate µ_2, second buffer of capacity C.) Here C < ∞ is the maximum capacity of the second buffer; n, the number ahead of the first server, is the level of the QBD; j, the number beyond the first server, is the phase of the QBD.

6/29 Quasi-Birth-and-Death processes. (Figure: levels 0, 1, ..., n, n+1 on the horizontal axis, phases 1, 2, ..., m on the vertical axis.) Transition probabilities: (A_1)_{ij} to go up: (n, i) to (n+1, j); (A_0)_{ij} to remain in the same level: (n, i) to (n, j); (A_{-1})_{ij} to go down: (n, i) to (n−1, j).

7/29 Stationary distribution. Block-structured transition matrix:

P = [ B       A_1     0       0    ...
      A_{-1}  A_0     A_1     0    ...
      0       A_{-1}  A_0     A_1  ...
      ...                              ]

Transitions are homogeneous with respect to the levels. π = [π_0, π_1, π_2, ...] is the stationary probability vector of the QBD, where (π_n)_i = lim_{k→∞} P[(X_k, ϕ_k) = (n, i)].
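
To make the block structure concrete, here is a small numerical sketch. The 2-phase blocks below are hypothetical (not from the lecture), with A_{-1} + A_0 + A_1 stochastic and the common boundary choice B = A_0 + A_{-1}; the code assembles a finite truncation of P and checks that all rows except the last (truncated) level are stochastic.

```python
import numpy as np

# Hypothetical QBD blocks (m = 2 phases); A_{-1} + A_0 + A_1 is stochastic.
A1  = np.array([[0.1, 0.1], [0.1, 0.1]])   # one level up
A0  = np.array([[0.2, 0.2], [0.2, 0.2]])   # same level
Am1 = np.array([[0.3, 0.1], [0.2, 0.2]])   # one level down
B   = A0 + Am1                             # boundary block at level 0 (assumed)

def qbd_matrix(n_levels, B, Am1, A0, A1):
    """Assemble the block-tridiagonal transition matrix truncated at n_levels."""
    m = A0.shape[0]
    P = np.zeros((n_levels * m, n_levels * m))
    for n in range(n_levels):
        P[n*m:(n+1)*m, n*m:(n+1)*m] = B if n == 0 else A0
        if n + 1 < n_levels:
            P[n*m:(n+1)*m, (n+1)*m:(n+2)*m] = A1
        if n > 0:
            P[n*m:(n+1)*m, (n-1)*m:n*m] = Am1
    return P

P = qbd_matrix(5, B, Am1, A0, A1)
# Every row sums to 1 except those of the last level, which lose the A_1 mass
# to the truncation.
print(P.sum(axis=1)[:-2])   # all 1.0
```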

8/29 Existence of the stationary distribution. Let x be the solution to x(A_{-1} + A_0 + A_1) = x, normalized so that x1 = 1. Then the QBD is positive recurrent, null recurrent or transient according as x A_1 1 − x A_{-1} 1 is < 0, = 0 or > 0. The stationary probability vector π exists if and only if the QBD is positive recurrent.
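
The drift condition above is easy to check numerically. A minimal sketch, reusing the hypothetical 2-phase blocks from before: compute the stationary vector x of A = A_{-1} + A_0 + A_1 and evaluate the mean drift x A_1 1 − x A_{-1} 1.

```python
import numpy as np

# Hypothetical QBD blocks; A = A_{-1} + A_0 + A_1 is an irreducible stochastic matrix.
A1  = np.array([[0.1, 0.1], [0.1, 0.1]])
A0  = np.array([[0.2, 0.2], [0.2, 0.2]])
Am1 = np.array([[0.3, 0.1], [0.2, 0.2]])

A = Am1 + A0 + A1
m = A.shape[0]

# Stationary vector x of A: solve x(A - I) = 0 together with x 1 = 1.
M = np.vstack([(A.T - np.eye(m))[:-1], np.ones(m)])
x = np.linalg.solve(M, np.concatenate([np.zeros(m - 1), [1.0]]))

ones = np.ones(m)
drift = x @ A1 @ ones - x @ Am1 @ ones
print(drift)   # negative here, so this QBD is positive recurrent
```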

9/29 Computation of the stationary distribution. The stationary probability vector π satisfies πP = π, π1 = 1. Partition the state space into E, E^c:

[π_E  π_{E^c}] [ P_E        P_{EE^c}
                 P_{E^c E}  P_{E^c}  ] = [π_E  π_{E^c}]

One has that

π_E (P_E + P_{EE^c} (I − P_{E^c})^{-1} P_{E^c E}) = π_E
π_{E^c} = π_E P_{EE^c} (I − P_{E^c})^{-1}
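
These two censoring identities hold for any partition of any finite chain as well, which makes them easy to sanity-check. A small sketch on a hypothetical 4-state chain (numbers chosen arbitrarily), partitioned into E = {0, 1} and E^c = {2, 3}:

```python
import numpy as np

# A small 4-state chain (hypothetical numbers) to check the censoring identities.
P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.2, 0.2, 0.3, 0.3],
              [0.3, 0.3, 0.2, 0.2],
              [0.1, 0.2, 0.3, 0.4]])

# Full stationary vector: pi P = pi, pi 1 = 1.
n = P.shape[0]
M = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
pi = np.linalg.solve(M, np.concatenate([np.zeros(n - 1), [1.0]]))

E, Ec = [0, 1], [2, 3]                      # partition of the state space
PE   = P[np.ix_(E, E)];  PEEc = P[np.ix_(E, Ec)]
PEcE = P[np.ix_(Ec, E)]; PEc  = P[np.ix_(Ec, Ec)]

# Censored chain on E: pi_E is stationary for P_E + P_EEc (I - P_Ec)^{-1} P_EcE.
S = PE + PEEc @ np.linalg.inv(np.eye(2) - PEc) @ PEcE
print(np.allclose(pi[E] @ S, pi[E]))                                       # True
print(np.allclose(pi[Ec], pi[E] @ PEEc @ np.linalg.inv(np.eye(2) - PEc)))  # True
```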

10/29 Computation of the stationary distribution. Let l(n) denote level n of the QBD. Choose E = l(0), E^c = {l(1), l(2), l(3), ...}:

P = [ P_E        P_{EE^c}
      P_{E^c E}  P_{E^c}  ]
  = [ B       A_1     0       0    ...
      A_{-1}  A_0     A_1     0    ...
      0       A_{-1}  A_0     A_1  ...
      ...                              ]

The equation π_{E^c} = π_E P_{EE^c} (I − P_{E^c})^{-1} becomes

[π_1  π_2  ...] = π_0 [A_1  0  0  ...] N,

where N = (I − P_{E^c})^{-1} with

I − P_{E^c} = [ I − A_0   −A_1      0        ...
                −A_{-1}   I − A_0   −A_1     ...
                0         −A_{-1}   I − A_0  ...
                ...                              ]

We obtain for π_1: π_1 = π_0 A_1 N_{11}.
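
The block N_{11} can be approximated directly by truncating (I − P_{E^c}) at a finite number of levels and inverting; the top-left block stabilizes quickly. A sketch with the hypothetical 2-phase blocks used earlier:

```python
import numpy as np

# Hypothetical QBD blocks (A_{-1} + A_0 + A_1 stochastic, negative drift).
A1  = np.array([[0.1, 0.1], [0.1, 0.1]])
A0  = np.array([[0.2, 0.2], [0.2, 0.2]])
Am1 = np.array([[0.3, 0.1], [0.2, 0.2]])
m = 2

def N11(levels):
    """Top-left m x m block of (I - P_{E^c})^{-1}, E^c = {l(1), l(2), ...} truncated."""
    T = np.zeros((levels * m, levels * m))
    for n in range(levels):
        T[n*m:(n+1)*m, n*m:(n+1)*m] = A0
        if n + 1 < levels:
            T[n*m:(n+1)*m, (n+1)*m:(n+2)*m] = A1
        if n > 0:
            T[n*m:(n+1)*m, (n-1)*m:n*m] = Am1
    N = np.linalg.inv(np.eye(levels * m) - T)
    return N[:m, :m]

# The truncated N_{11} stabilizes as levels grow, giving R = A_1 N_{11}.
R = A1 @ N11(80)
print(np.round(R, 6))
```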

11/29 Computation of the stationary distribution. Then, for any n ≥ 1, choose E = {l(0), l(1), ..., l(n)}, E^c = {l(n+1), l(n+2), ...}. By the same argument, we get, for all n ≥ 0, π_{n+1} = π_n A_1 N_{11} = π_0 (A_1 N_{11})^{n+1}. Here (N_{11})_{ij} is the expected number of visits to state (n+1, j), starting from (n+1, i), before returning to {l(0), l(1), ..., l(n)}, for any n ≥ 0.

12/29 Stationary distribution. In summary, for all n ≥ 0, π_n = π_0 R^n, where R = A_1 N_{11} and R_{ij} is the expected number of visits to (1, j), starting from (0, i), before the first return to level 0. This is a matrix-geometric stationary distribution. To complete its characterization, it remains to characterize π_0 and R.

13/29 Characterization of π_0. Take again E = l(0). From π_E (P_E + P_{EE^c} (I − P_{E^c})^{-1} P_{E^c E}) = π_E, we get π_0 (B + A_1 N_{11} A_{-1}) = π_0. From the normalization constraint π1 = Σ_{i=0}^∞ π_i 1 = 1 we have

Σ_{i=0}^∞ π_i 1 = π_0 Σ_{i=0}^∞ R^i 1 = π_0 (I − R)^{-1} 1 = 1.

Thus π_0 satisfies

π_0 (B + R A_{-1}) = π_0
π_0 (I − R)^{-1} 1 = 1
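
Putting the pieces together, here is a sketch that computes G by functional iteration, forms R, solves the two conditions above for π_0, and then recovers the whole matrix-geometric distribution. The 2-phase blocks and the boundary choice B = A_0 + A_{-1} are hypothetical.

```python
import numpy as np

# Hypothetical QBD blocks; B = A_0 + A_{-1} at the boundary (assumed).
A1  = np.array([[0.1, 0.1], [0.1, 0.1]])
A0  = np.array([[0.2, 0.2], [0.2, 0.2]])
Am1 = np.array([[0.3, 0.1], [0.2, 0.2]])
B   = A0 + Am1
m = 2

# G from the functional iteration, then R = A_1 [I - (A_0 + A_1 G)]^{-1}.
G = np.zeros((m, m))
for _ in range(200):
    G = np.linalg.solve(np.eye(m) - A0 - A1 @ G, Am1)
R = A1 @ np.linalg.inv(np.eye(m) - A0 - A1 @ G)

# pi_0 solves pi_0 (B + R A_{-1}) = pi_0 with pi_0 (I - R)^{-1} 1 = 1.
K = B + R @ Am1
M = np.vstack([(K.T - np.eye(m))[:-1],
               np.linalg.inv(np.eye(m) - R) @ np.ones(m)])
pi0 = np.linalg.solve(M, np.concatenate([np.zeros(m - 1), [1.0]]))

# Matrix-geometric levels: pi_n = pi_0 R^n.
pis = [pi0 @ np.linalg.matrix_power(R, n) for n in range(4)]
print(np.round(np.concatenate(pis), 6))   # pi_0, pi_1, pi_2, pi_3
```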

14/29 Matrix R and first passage probability matrix G. There are many expressions for R; for instance

R = A_1 [I − (A_0 + A_1 G)]^{-1},

where G_{ij} is the probability of eventually moving down to (0, j), starting from (1, i), and (A_0 + A_1 G)_{ij} is the probability of visiting (1, j) from (1, i) while avoiding level 0. G is stochastic (in the recurrent case), and it is the minimal nonnegative solution of X = A_{-1} + A_0 X + A_1 X^2.

15/29 Formal manipulation. X = A_{-1} + A_0 X + A_1 X^2 leads to X − A_0 X − A_1 X^2 = A_{-1}, that is,

X = [I − (A_0 + A_1 X)]^{-1} A_{-1}.   (1)

Other equations have been proposed, from decompositions A_{-1} + A_0 X + A_1 X^2 = F(X) + G(X)X. All may be solved by functional iteration; (1) is the most interesting for its efficiency and its ease of interpretation.

16/29 Interpretation of functional iterations. Start with X_0 = 0 and iterate X_n = [I − (A_0 + A_1 X_{n−1})]^{-1} A_{-1}, or equivalently X_n = A_{-1} + A_0 X_n + A_1 X_{n−1} X_n. (Figure: levels 0, 1, 2, ..., n, n+1.) Under X_n, the process is allowed to move freely among n levels; as n → ∞, the restrictions disappear and X_n → G.
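
The functional iteration (1) is a few lines of code. A sketch with the hypothetical 2-phase blocks used earlier (negative drift, so G should come out stochastic):

```python
import numpy as np

# Hypothetical QBD blocks with negative drift.
A1  = np.array([[0.1, 0.1], [0.1, 0.1]])
A0  = np.array([[0.2, 0.2], [0.2, 0.2]])
Am1 = np.array([[0.3, 0.1], [0.2, 0.2]])
m = 2

X = np.zeros((m, m))                        # X_0 = 0
for _ in range(1000):
    # X_n = [I - (A_0 + A_1 X_{n-1})]^{-1} A_{-1}
    X_new = np.linalg.solve(np.eye(m) - A0 - A1 @ X, Am1)
    if np.max(np.abs(X_new - X)) < 1e-13:
        X = X_new
        break
    X = X_new

G = X
print(G.sum(axis=1))   # rows sum to 1: G is stochastic in the recurrent case
```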

17/29 Convergence rates. The iteration X_n = [I − (A_0 + A_1 X_{n−1})]^{-1} A_{-1} converges linearly, with rate

η = lim_{n→∞} ||G − X_n||^{1/n}.

Actually, η is the Perron-Frobenius eigenvalue of R. When the QBD is positive recurrent, η < 1.

18/29 Acceleration: keep it stochastic. Instead of X_0 = 0, take X'_0 = I with the same functional iteration X'_n = [I − (A_0 + A_1 X'_{n−1})]^{-1} A_{-1}. The new sequence is well defined, and X'_n corresponds to a new irreducible process which is forced to remain between levels 1 and n. One can prove by induction that X'_n ≥ X_n, and the convergence rate is strictly less than η, the Perron-Frobenius eigenvalue of R. Each iteration costs O(m^3).

19/29 Newton's scheme for X = A_{-1} + A_0 X + A_1 X^2. Newton's scheme converges to G, monotonically and quadratically, for any initial nonnegative matrix between 0 and G. Each step requires the solution of the Sylvester-type equation

T_n − H(T_{n−1}) A_1 T_n T_{n−1} = H(T_{n−1}) (A_{−1} − A_1 T_{n−1}^2),   where H(X) = [I − (A_0 + A_1 X)]^{-1}.

Solved naively as a dense linear system in the m^2 entries of T_n, each step costs O(m^6); exploited as a Sylvester equation, it can be solved by an O(m^3) algorithm known since 1992 (Gardiner, Laub, Amato and Moler).
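
As a sketch of Newton's scheme, the code below solves each linearized step as a dense m²×m² system via Kronecker products (the naive O(m⁶) route, not the O(m³) Sylvester solver); the blocks are the hypothetical 2-phase example used earlier. The residual collapses to machine precision in a handful of steps, illustrating the quadratic convergence.

```python
import numpy as np

# Hypothetical QBD blocks.
A1  = np.array([[0.1, 0.1], [0.1, 0.1]])
A0  = np.array([[0.2, 0.2], [0.2, 0.2]])
Am1 = np.array([[0.3, 0.1], [0.2, 0.2]])
m = 2
I = np.eye(m)

def F(X):
    """Residual of the quadratic equation X = A_{-1} + A_0 X + A_1 X^2."""
    return Am1 + A0 @ X + A1 @ X @ X - X

X = np.zeros((m, m))                    # any start between 0 and G works
for _ in range(20):
    # Newton step: solve F'(X)[D] = -F(X), where
    # F'(X)[D] = A_0 D + A1 D X + A1 X D - D, vectorized column-major.
    J = (np.kron(I, A0) + np.kron(X.T, A1) + np.kron(I, A1 @ X)
         - np.eye(m * m))
    D = np.linalg.solve(J, -F(X).flatten(order="F")).reshape((m, m), order="F")
    X = X + D
    if np.max(np.abs(F(X))) < 1e-13:
        break

print(np.max(np.abs(F(X))))   # residual near machine precision
```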

20/29 Logarithmic & cyclic reduction algorithms. Two nearly identical, quadratically convergent algorithms: logarithmic reduction (Latouche and Ramaswami, 1993), and cyclic reduction (Bini and Meini, 1995), which is slightly faster at each step and has been generalized to more complex Markov chains.
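
The logarithmic-reduction iteration can be sketched in a few lines, following the published algorithm of Latouche and Ramaswami (1993), again on the hypothetical 2-phase blocks; quadratic convergence means only a few sweeps are needed.

```python
import numpy as np

# Hypothetical QBD blocks.
A1  = np.array([[0.1, 0.1], [0.1, 0.1]])
A0  = np.array([[0.2, 0.2], [0.2, 0.2]])
Am1 = np.array([[0.3, 0.1], [0.2, 0.2]])
m = 2
I = np.eye(m)

H = np.linalg.solve(I - A0, A1)    # "up" factor
L = np.linalg.solve(I - A0, Am1)   # "down" factor
G, T = L.copy(), H.copy()
for _ in range(30):                # quadratic convergence: few steps suffice
    U = H @ L + L @ H
    H = np.linalg.solve(I - U, H @ H)
    L = np.linalg.solve(I - U, L @ L)
    G = G + T @ L
    T = T @ H
    if np.max(T) < 1e-15:
        break

# G should solve G = A_{-1} + A_0 G + A_1 G^2.
print(np.max(np.abs(G - (Am1 + A0 @ G + A1 @ G @ G))))   # residual near 0
```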

21/29 Cyclic reduction. The equation G = A_{-1} + A_0 G + A_1 G^2 is transformed into the linear system

[ I − A_0   −A_1      0        ... ] [ G   ]   [ A_{-1} ]
[ −A_{-1}   I − A_0   −A_1     ... ] [ G^2 ] = [ 0      ]
[ 0         −A_{-1}   I − A_0  ... ] [ G^3 ]   [ 0      ]
[ ...                              ] [ ... ]   [ ...    ]

Apply an even-odd permutation to rows and columns, then perform one step of block Gaussian elimination to remove the even-numbered blocks.

22/29 Cyclic reduction. This yields a new system with the same structure:

[ I − Â_0^(1)   −A_1^(1)      0          ... ] [ G   ]   [ A_{-1} ]
[ −A_{-1}^(1)   I − A_0^(1)   −A_1^(1)   ... ] [ G^3 ] = [ 0      ]
[ 0             −A_{-1}^(1)   I − A_0^(1) ...] [ G^5 ]   [ 0      ]
[ ...                                        ] [ ... ]   [ ...    ]

Repeat as needed. At the nth step the unknowns are [G, G^{2^n+1}, G^{2·2^n+1}, G^{3·2^n+1}, ...]^T, and the upper-diagonal block A_1^(n) tends to 0 as n tends to ∞. When A_1^(n) ≈ 0, G ≈ (I − Â_0^(n))^{-1} A_{-1}.

23/29 Other structured Markov chains: GI/M/1-type Markov chains. A GI/M/1-type Markov chain is a two-dimensional Markov chain {(X_k, ϕ_k), k ∈ N} where the only possible transitions from state (n, i) are to states (n+1, j) (one level up), (n, j) (the same level), and (n−l, j) (l levels down) for all 1 ≤ l ≤ n.

P = [ Ã_0   A_1     0       0    ...
      Ã_1   A_0     A_1     0    ...
      Ã_2   A_{-1}  A_0     A_1  ...
      Ã_3   A_{-2}  A_{-1}  A_0  ...
      ...                            ]

24/29 Solution for GI/M/1-type Markov chains. For a GI/M/1-type Markov chain, let x be the solution to

x [ Σ_{k=0}^∞ A_{1−k} ] = x.

Then the chain is positive recurrent, null recurrent or transient according as

x A_1 1 − x [ Σ_{k=2}^∞ (k−1) A_{1−k} ] 1

is < 0, = 0 or > 0. The stationary distribution π exists if and only if the chain is positive recurrent.

25/29 Stationary distribution of GI/M/1-type Markov chains. The stationary distribution π of a positive recurrent GI/M/1-type Markov chain is such that π_n = π_0 R^n for n ≥ 0, where the matrix R has exactly the same probabilistic interpretation as for a QBD, and the vector π_0 satisfies

π_0 [ Σ_{k=0}^∞ R^k Ã_k ] = π_0

and a normalization constraint. This is the well-known matrix-geometric form of the stationary distribution.

26/29 Matrix R for GI/M/1-type Markov chains. The matrix R is the minimal nonnegative solution to the matrix equation

X = Σ_{k=−1}^∞ X^{k+1} A_{−k}.

It can be obtained using a duality property with M/G/1-type Markov chains: determining R for a positive recurrent GI/M/1-type MC is equivalent to determining G for a transient M/G/1-type MC.
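
For a chain whose downward jumps are bounded, the sum in the R equation truncates and the minimal solution can be approached by functional iteration from X = 0. A sketch with hypothetical 2-phase blocks allowing down-jumps of size 1 and 2 only (so X = A_1 + X A_0 + X^2 A_{-1} + X^3 A_{-2}):

```python
import numpy as np

# Hypothetical GI/M/1-type blocks; A_1 + A_0 + A_{-1} + A_{-2} is stochastic
# and the downward drift dominates, so the chain is positive recurrent.
A1  = np.array([[0.1, 0.1], [0.1, 0.1]])
A0  = np.array([[0.1, 0.1], [0.1, 0.2]])
Am1 = np.array([[0.2, 0.1], [0.2, 0.1]])
Am2 = np.array([[0.2, 0.1], [0.1, 0.1]])
m = 2

# Functional iteration for the minimal nonnegative solution of
#   X = A_1 + X A_0 + X^2 A_{-1} + X^3 A_{-2}
R = np.zeros((m, m))
for _ in range(2000):
    R_new = A1 + R @ A0 + R @ R @ Am1 + R @ R @ R @ Am2
    if np.max(np.abs(R_new - R)) < 1e-13:
        R = R_new
        break
    R = R_new

print(np.round(R, 6))   # spectral radius below 1 in the positive recurrent case
```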

27/29 Other structured Markov chains: M/G/1-type Markov chains. An M/G/1-type Markov chain is a two-dimensional Markov chain {(X_k, ϕ_k), k ∈ N} where the only possible transitions from state (n, i) are to states (n+l, j) (l levels up) for all l ≥ 1, (n, j) (the same level), and (n−1, j) (one level down).

P = [ Ã_0     Ã_1     Ã_2     Ã_3  ...
      A_{-1}  A_0     A_1     A_2  ...
      0       A_{-1}  A_0     A_1  ...
      0       0       A_{-1}  A_0  ...
      ...                              ]

M/G/1-type Markov chains do not have a matrix-geometric stationary distribution.

28/29 Matrix G for M/G/1-type Markov chains. The matrix G is the minimal nonnegative solution to the matrix equation

X = Σ_{k=−1}^∞ A_k X^{k+1}.

It may be obtained using, for instance, the generalization of the cyclic-reduction algorithm to M/G/1-type Markov chains.

29/29 References.
Latouche and Ramaswami, Introduction to Matrix Analytic Methods in Stochastic Modeling, SIAM, 1999.
Bini, Latouche and Meini, Numerical Methods for Structured Markov Chains, Oxford University Press, 2005.
Latouche, Newton's iteration for non-linear equations in Markov chains, IMA Journal of Numerical Analysis, 1994, 14(4):583-598.