Analysis of Markov Reward Models with Partial Reward Loss Based on a Time Reverse Approach


1 Analysis of Markov Reward Models with Partial Reward Loss Based on a Time Reverse Approach
Gábor Horváth, Miklós Telek
Technical University of Budapest, 1521 Budapest, Hungary
{hgabor,telek}@webspn.hit.bme.hu
M Telek, Markov Anniversary Meeting, June, Analysis of Markov Reward Models with Partial Reward Loss - p. 1/15

2
- Markov Reward models with reward loss
- The difficulty of the time forward approach
- The time reverse analysis approach
- Properties of the obtained solutions

3 Markov Reward models without reward loss
Markov reward models (MRM):
- a finite state CTMC,
- non-negative reward rates (r_i),
- performance measures: reward accumulated up to time t, time to accumulate reward w.
[Figure: sample paths of Z(t) and of the accumulated reward B(t), which grows at rate r_{Z(t)}.]
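
A loss-free MRM like this can be simulated directly to estimate B(t). The sketch below is purely illustrative: the generator Q, the reward rates r and the seed are made-up values, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3-state CTMC generator (rows sum to zero) and non-negative
# reward rates r_i; none of these numbers come from the talk.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])
r = np.array([3.0, 1.0, 0.0])

def accumulated_reward(Q, r, T, state=0):
    """Simulate Z(t) on [0, T] and return B(T) = integral of r_{Z(u)} du."""
    t, B = 0.0, 0.0
    while True:
        rate = -Q[state, state]                 # exit rate of the current state
        sojourn = rng.exponential(1.0 / rate)
        if t + sojourn >= T:
            return B + r[state] * (T - t)       # horizon reached in this state
        B += r[state] * sojourn
        t += sojourn
        p = Q[state].clip(min=0.0) / rate       # jump distribution
        state = rng.choice(len(r), p=p)

print(accumulated_reward(Q, r, T=10.0))
```

Averaging many such samples estimates the mean of B(t); the analytical machinery of the talk computes such measures without simulation.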

4 Markov Reward models with total reward loss
We consider first order MRMs (deterministic dependence on Z(t)), without impulse reward, but with potential reward loss at state transitions.
[Figure: sample path of B(t) that drops to zero at each state transition.]

5 Markov Reward models with partial reward loss
In case of partial reward loss:
- α_i is the remaining portion of the reward when leaving state i,
- the lost reward is proportional to:
  - the total accumulated reward (partial total loss), or
  - the reward accumulated in the last state (partial incremental loss).
[Figure: sample paths of B(t) under the two policies over the transition epochs T_1, T_2, T_3: with partial total loss the reward level is scaled to B(T_k) α_i at each transition; with partial incremental loss only the last increment is scaled, e.g. to α_j [B(T_3) - B(T_2)].]
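
The two loss policies can be illustrated by extending such a simulation; the scaling applied at each transition is the only difference between them. All numbers below (generator, rates, α values) are again made up for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 2-state model; alpha[i] is the remaining portion of the
# reward when leaving state i (made-up values).
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
r = np.array([2.0, 1.0])
alpha = np.array([0.5, 0.8])

def reward_with_loss(T, mode, state=0):
    """B(T) under 'total' (whole reward scaled) or 'incremental'
    (only the last state's increment scaled) partial loss."""
    t, B = 0.0, 0.0
    while True:
        rate = -Q[state, state]
        sojourn = rng.exponential(1.0 / rate)
        if t + sojourn >= T:
            return B + r[state] * (T - t)
        inc = r[state] * sojourn                # reward earned in this state
        if mode == "total":
            B = alpha[state] * (B + inc)        # partial total loss
        else:
            B = B + alpha[state] * inc          # partial incremental loss
        t += sojourn
        p = Q[state].clip(min=0.0) / rate
        state = rng.choice(len(r), p=p)

print(reward_with_loss(5.0, "total"), reward_with_loss(5.0, "incremental"))
```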

7 Time forward approach
Possible interpretation: reduced (r_i α_i) reward accumulation up to the last state transition, and total (r_i) reward accumulation in the last state, without reward loss.
[Figure: sample path of B(t) accumulating at the reduced rates r_i α_i before the last state transition and at the full rate afterwards.]
Unfortunately, the last state transition before time T is not a stopping time.

9 Time reverse approach
Behaviour of the time reverse process: an inhomogeneous CTMC with initial probability γ̂(0) = γ(T) and generator Q̂(τ) = {q̂_ij(τ)}, where

$$\hat{q}_{ij}(\tau) = \begin{cases} \dfrac{\gamma_j(T-\tau)}{\gamma_i(T-\tau)}\, q_{ji} & \text{if } i \neq j, \\[2mm] -\displaystyle\sum_{k \in S,\, k \neq i} \dfrac{\gamma_k(T-\tau)}{\gamma_i(T-\tau)}\, q_{ki} & \text{if } i = j. \end{cases}$$

Total (r_i) reward accumulation in the first state, and reduced (r_i α_i) reward accumulation in all consecutive states, without reward loss.
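
The reverse generator formula can be evaluated numerically from the transient probabilities of the forward chain. The sketch below uses an illustrative generator and horizon (not from the talk) and checks that each row of the result sums to zero, as a generator's must.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative forward chain: generator Q, initial distribution gamma0.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
gamma0 = np.array([1.0, 0.0])
T = 3.0

def gamma(t):
    """Transient state probabilities gamma(t) = gamma(0) e^{Qt}."""
    return gamma0 @ expm(Q * t)

def reverse_generator(tau):
    """hat q_ij(tau) = gamma_j(T-tau)/gamma_i(T-tau) * q_ji for i != j,
    with the diagonal chosen so that each row sums to zero."""
    g = gamma(T - tau)
    n = len(g)
    Qh = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                Qh[i, j] = g[j] / g[i] * Q[j, i]
        Qh[i, i] = -Qh[i].sum()
    return Qh

Qh = reverse_generator(1.0)
print(Qh)
print(Qh.sum(axis=1))   # rows of a generator sum to (numerically) zero
```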

11 Time reverse approach
Potential model description: duplicate the state space to describe the total reward accumulation in the first state (r_i) and the reduced reward accumulation in all further states (r_i α_i):

$$\hat{\pi}(0) = [\gamma(T),\, 0], \qquad \hat{Q}'(\tau) = \begin{bmatrix} \hat{Q}_D(\tau) & \hat{Q}(\tau) - \hat{Q}_D(\tau) \\ 0 & \hat{Q}(\tau) \end{bmatrix}, \qquad R' = \begin{bmatrix} R & 0 \\ 0 & R_\alpha \end{bmatrix}.$$
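
The doubled model can be assembled with numpy block matrices. A minimal sketch for #S = 2, with the reverse generator frozen at some fixed τ (all numbers illustrative):

```python
import numpy as np

# Reverse generator evaluated at some fixed tau, and its diagonal part
# (illustrative numbers standing in for hat Q(tau)).
Qh = np.array([[-1.0,  1.0],
               [ 2.0, -2.0]])
Qh_D = np.diag(np.diag(Qh))

R = np.diag([3.0, 1.0])                  # reward rates r_i
R_alpha = R @ np.diag([0.5, 0.8])        # reduced rates r_i * alpha_i

zero = np.zeros_like(Qh)
Q_exp = np.block([[Qh_D, Qh - Qh_D],     # first block: still in the first state
                  [zero, Qh]])           # second block: after >= 1 transition
R_exp = np.block([[R,    zero],
                  [zero, R_alpha]])

print(Q_exp.sum(axis=1))                 # the expanded matrix is still a generator
```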

13 Inhomogeneous differential equation
Introducing Ŷ_i(τ, w) = Pr(B̂(τ) ≤ w, Ẑ(τ) = i), we can apply the analysis approach available for inhomogeneous MRMs. It is based on the solution of the inhomogeneous partial differential equation

$$\frac{\partial}{\partial \tau} \hat{Y}(\tau, w) + \frac{\partial}{\partial w} \hat{Y}(\tau, w)\, R = \hat{Y}(\tau, w)\, \hat{Q}(\tau),$$

where Ŷ(τ, w) = {Ŷ_i(τ, w)}.
A drawback of this approach is that it requires the computation of Q̂(τ).

15 Homogeneous differential equation
To overcome this drawback we introduce the conditional distribution of the reward accumulated by the reverse process,

$$\hat{V}_i(\tau, w) = \Pr(\hat{B}(\tau) \le w \mid \hat{Z}(\tau) = i),$$

and the row vector V̂(τ, w) = {V̂_i(τ, w)}. Using this performance measure we have to solve

$$\frac{\partial}{\partial \tau} \hat{V}(\tau, w) + \frac{\partial}{\partial w} \hat{V}(\tau, w)\, R = \hat{V}(\tau, w)\, Q^T,$$

where Q^T is the transpose of Q.

17 Block structure of the differential equation
Utilizing the special block structure of the Q̂'(τ) and R' matrices (of size 2#S), we can obtain two homogeneous partial differential equations of size #S:

$$\frac{\partial}{\partial \tau} X_1(\tau, w) + \frac{\partial}{\partial w} X_1(\tau, w)\, R = X_1(\tau, w)\, Q_D$$

and

$$\frac{\partial}{\partial \tau} X_2(\tau, w) + \frac{\partial}{\partial w} X_2(\tau, w)\, R_\alpha = X_1(\tau, w)\, (Q - Q_D)^T + X_2(\tau, w)\, Q^T.$$

18 Moments of accumulated reward
The analysis approach available for inhomogeneous MRMs allows us to describe the moments of IMRMs with an inhomogeneous ordinary differential equation. As in the reward distribution case, this approach is applicable to our model as well, but it requires the computation of Q̂(τ).
Using similar state dependent moment measures we obtain homogeneous ordinary differential equations:

$$\frac{d}{d\tau} M_1^{(n)}(\tau) = n\, M_1^{(n-1)}(\tau)\, R + M_1^{(n)}(\tau)\, Q_D$$

and

$$\frac{d}{d\tau} M_2^{(n)}(\tau) = n\, M_2^{(n-1)}(\tau)\, R_\alpha + M_1^{(n)}(\tau)\, (Q - Q_D)^T + M_2^{(n)}(\tau)\, Q^T.$$
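
These ODEs can be integrated with any standard scheme; below is a sketch using classic RK4 for n ≤ 1. The initial conditions (identity for the zeroth M1 block, zero elsewhere) are an assumption of this sketch, not stated here, and the model data is made up. Since Q_D is diagonal, M1 at order one has the closed form τ R e^{Q_D τ}, which the integration should reproduce.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative model data (not from the talk).
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
Q_D = np.diag(np.diag(Q))
R = np.diag([3.0, 1.0])
R_alpha = R @ np.diag([0.5, 0.8])

def derivs(M):
    """Right-hand sides of the moment ODEs for n = 0 and n = 1."""
    M1_0, M1_1, M2_0, M2_1 = M
    return (M1_0 @ Q_D,
            M1_0 @ R + M1_1 @ Q_D,
            M1_0 @ (Q - Q_D).T + M2_0 @ Q.T,
            M2_0 @ R_alpha + M1_1 @ (Q - Q_D).T + M2_1 @ Q.T)

I2, Z = np.eye(2), np.zeros((2, 2))
M = (I2, Z, Z, Z)        # assumed initial conditions: M1^(0)(0) = I, rest 0
tau, h = 1.0, 1e-3
for _ in range(int(tau / h)):           # classic RK4 step
    k1 = derivs(M)
    k2 = derivs(tuple(m + h / 2 * k for m, k in zip(M, k1)))
    k3 = derivs(tuple(m + h / 2 * k for m, k in zip(M, k2)))
    k4 = derivs(tuple(m + h * k for m, k in zip(M, k3)))
    M = tuple(m + h / 6 * (a + 2 * b + 2 * c + d)
              for m, a, b, c, d in zip(M, k1, k2, k3, k4))

# For diagonal Q_D, M1^(1)(tau) = tau * R * e^{Q_D tau}; the ODE solution
# should match this closed form to integrator accuracy.
print(np.max(np.abs(M[1] - tau * R @ expm(Q_D * tau))))
```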

20 Randomization based numerical method
The ordinary differential equations with constant coefficients allow us to compose a randomization based numerical method:

$$M_1^{(n)}(\tau) = \tau^n R^n E_D(\tau)$$

and

$$M_2^{(n)}(\tau) = n! \sum_{k=0}^{\infty} e^{-\lambda \tau} \frac{(\lambda \tau)^k}{k!}\, D^{(n)}(k),$$

where E_D(τ) = e^{Q_D τ}, A = I + Q^T/λ, A_D = I + Q_D/λ, S = R/λ, S_α = R_α/λ, and

$$D^{(n)}(k) = \begin{cases} A^k - A_D^k & n = 0, \\ 0 & 0 \le k \le n,\ n \ge 1, \\ D^{(n-1)}(k-1)\, S_\alpha + D^{(n)}(k-1)\, A + \dbinom{k-1}{n} S^n A_D^{k-1-n} (A - A_D) & k > n,\ n \ge 1. \end{cases}$$
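
The randomization idea can be sanity-checked on the zeroth-order term. Assuming the usual uniformization matrices A = I + Q^T/λ and A_D = I + Q_D/λ with λ no smaller than the largest exit rate (these definitions are an assumption of this sketch, as is the illustrative generator), the Poisson mixture of A^k equals e^{Q^T τ}, so summing A^k − A_D^k with Poisson weights yields e^{Q^T τ} − e^{Q_D τ}:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative generator and its diagonal part.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
Q_D = np.diag(np.diag(Q))
lam = 2.0                            # uniformization rate >= max_i |q_ii|
A = np.eye(2) + Q.T / lam
A_D = np.eye(2) + Q_D / lam

tau, K = 1.0, 60                     # truncate after K terms; the Poisson
M2_0 = np.zeros((2, 2))              # tail beyond K bounds the error
Ak, ADk = np.eye(2), np.eye(2)
w = np.exp(-lam * tau)               # Poisson weight for k = 0
for k in range(K):
    M2_0 += w * (Ak - ADk)
    Ak, ADk = Ak @ A, ADk @ A_D      # A^{k+1}, A_D^{k+1}
    w *= lam * tau / (k + 1)         # next Poisson weight

print(np.max(np.abs(M2_0 - (expm(Q.T * tau) - expm(Q_D * tau)))))
```

Because the Poisson weights sum to one, the truncation point K can be chosen in advance for any error tolerance, which is the error-control property that makes randomization attractive numerically.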

21 Numerical Example
[Figure: structure of the Markov chain, with states N, N-1, N-2, ..., 0 and M, transition rates Nλ, (N-1)λ, ..., σ and ρ, reward rates r_N = Nr, r_{N-1} = (N-1)r, r_{N-2} = (N-2)r, r_0 = 0, r_M = 0, and loss ratios α_N = α_{N-1} = α_{N-2} = 0.5, α_0 = 1, α_M = 1.]
[Figure: the 1st to 5th moments of the accumulated reward.]
With parameters N = , λ = , σ = 1.5, ρ = 0.1, r = , α = 0.5.

22 The analysis of partial loss MRMs is usually rather complex. We propose an analysis method with the following features:
- non stopping time → time reverse approach,
- inhomogeneous differential equation → proper performance measure,
- partial differential equation → ordinary differential equations,
- numerical stability, error control → randomization based analysis.
Thanks for your attention.


More information

University of Oxford. Statistical Methods Autocorrelation. Identification and Estimation

University of Oxford. Statistical Methods Autocorrelation. Identification and Estimation University of Oxford Statistical Methods Autocorrelation Identification and Estimation Dr. Órlaith Burke Michaelmas Term, 2011 Department of Statistics, 1 South Parks Road, Oxford OX1 3TG Contents 1 Model

More information

Moment-based Availability Prediction for Bike-Sharing Systems

Moment-based Availability Prediction for Bike-Sharing Systems Moment-based Availability Prediction for Bike-Sharing Systems Jane Hillston Joint work with Cheng Feng and Daniël Reijsbergen LFCS, School of Informatics, University of Edinburgh http://www.quanticol.eu

More information

Statistics 992 Continuous-time Markov Chains Spring 2004

Statistics 992 Continuous-time Markov Chains Spring 2004 Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

Data analysis and stochastic modeling

Data analysis and stochastic modeling Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model

More information

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property

Stochastic processes. MAS275 Probability Modelling. Introduction and Markov chains. Continuous time. Markov property Chapter 1: and Markov chains Stochastic processes We study stochastic processes, which are families of random variables describing the evolution of a quantity with time. In some situations, we can treat

More information

Markov Processes. Stochastic process. Markov process

Markov Processes. Stochastic process. Markov process Markov Processes Stochastic process movement through a series of well-defined states in a way that involves some element of randomness for our purposes, states are microstates in the governing ensemble

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1

= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1 Properties of Markov Chains and Evaluation of Steady State Transition Matrix P ss V. Krishnan - 3/9/2 Property 1 Let X be a Markov Chain (MC) where X {X n : n, 1, }. The state space is E {i, j, k, }. The

More information

Simulation of Contrast Agent Enhanced Ultrasound Imaging based on Field II

Simulation of Contrast Agent Enhanced Ultrasound Imaging based on Field II Simulation of Contrast Agent Enhanced Ultrasound Imaging based on Field II Tobias Gehrke, Heinrich M. Overhoff Medical Engineering Laboratory, University of Applied Sciences Gelsenkirchen tobias.gehrke@fh-gelsenkirchen.de

More information

6 Linear Equation. 6.1 Equation with constant coefficients

6 Linear Equation. 6.1 Equation with constant coefficients 6 Linear Equation 6.1 Equation with constant coefficients Consider the equation ẋ = Ax, x R n. This equating has n independent solutions. If the eigenvalues are distinct then the solutions are c k e λ

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

T. Liggett Mathematics 171 Final Exam June 8, 2011

T. Liggett Mathematics 171 Final Exam June 8, 2011 T. Liggett Mathematics 171 Final Exam June 8, 2011 1. The continuous time renewal chain X t has state space S = {0, 1, 2,...} and transition rates (i.e., Q matrix) given by q(n, n 1) = δ n and q(0, n)

More information

Stochastic Petri Net

Stochastic Petri Net Stochastic Petri Net Serge Haddad LSV ENS Cachan & CNRS & INRIA haddad@lsv.ens-cachan.fr Petri Nets 2013, June 24th 2013 1 Stochastic Petri Net 2 Markov Chain 3 Markovian Stochastic Petri Net 4 Generalized

More information

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis

TCOM 501: Networking Theory & Fundamentals. Lecture 6 February 19, 2003 Prof. Yannis A. Korilis TCOM 50: Networking Theory & Fundamentals Lecture 6 February 9, 003 Prof. Yannis A. Korilis 6- Topics Time-Reversal of Markov Chains Reversibility Truncating a Reversible Markov Chain Burke s Theorem Queues

More information

Upon successful completion of MATH 220, the student will be able to:

Upon successful completion of MATH 220, the student will be able to: MATH 220 Matrices Upon successful completion of MATH 220, the student will be able to: 1. Identify a system of linear equations (or linear system) and describe its solution set 2. Write down the coefficient

More information

Round-off error propagation and non-determinism in parallel applications

Round-off error propagation and non-determinism in parallel applications Round-off error propagation and non-determinism in parallel applications Vincent Baudoui (Argonne/Total SA) vincent.baudoui@gmail.com Franck Cappello (Argonne/INRIA/UIUC-NCSA) Georges Oppenheim (Paris-Sud

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1

Irreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1 Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate

More information

Discrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results

Discrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results Discrete time Markov chains Discrete Time Markov Chains, Definition and classification 1 1 Applied Mathematics and Computer Science 02407 Stochastic Processes 1, September 5 2017 Today: Short recap of

More information

Computing Battery Lifetime Distributions

Computing Battery Lifetime Distributions Computing Battery Lifetime Distributions Lucia Cloth, Marijn R. Jongerden, Boudewijn R. Haverkort University of Twente Design and Analysis of Communication Systems [lucia,brh,jongerdenmr]@ewi.utwente.nl

More information

Non-homogeneous random walks on a semi-infinite strip

Non-homogeneous random walks on a semi-infinite strip Non-homogeneous random walks on a semi-infinite strip Chak Hei Lo Joint work with Andrew R. Wade World Congress in Probability and Statistics 11th July, 2016 Outline Motivation: Lamperti s problem Our

More information

Math 1553, Introduction to Linear Algebra

Math 1553, Introduction to Linear Algebra Learning goals articulate what students are expected to be able to do in a course that can be measured. This course has course-level learning goals that pertain to the entire course, and section-level

More information

Lecture 5: Random Walks and Markov Chain

Lecture 5: Random Walks and Markov Chain Spectral Graph Theory and Applications WS 20/202 Lecture 5: Random Walks and Markov Chain Lecturer: Thomas Sauerwald & He Sun Introduction to Markov Chains Definition 5.. A sequence of random variables

More information

Solving Dynamic Equations: The State Transition Matrix

Solving Dynamic Equations: The State Transition Matrix Overview Solving Dynamic Equations: The State Transition Matrix EGR 326 February 24, 2017 Solutions to coupled dynamic equations Solutions to dynamic circuits from EGR 220 The state transition matrix Discrete

More information

The story of the film so far... Mathematics for Informatics 4a. Continuous-time Markov processes. Counting processes

The story of the film so far... Mathematics for Informatics 4a. Continuous-time Markov processes. Counting processes The story of the film so far... Mathematics for Informatics 4a José Figueroa-O Farrill Lecture 19 28 March 2012 We have been studying stochastic processes; i.e., systems whose time evolution has an element

More information

M. Matrices and Linear Algebra

M. Matrices and Linear Algebra M. Matrices and Linear Algebra. Matrix algebra. In section D we calculated the determinants of square arrays of numbers. Such arrays are important in mathematics and its applications; they are called matrices.

More information

On asymptotic behavior of a finite Markov chain

On asymptotic behavior of a finite Markov chain 1 On asymptotic behavior of a finite Markov chain Alina Nicolae Department of Mathematical Analysis Probability. University Transilvania of Braşov. Romania. Keywords: convergence, weak ergodicity, strong

More information

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected

Example: physical systems. If the state space. Example: speech recognition. Context can be. Example: epidemics. Suppose each infected 4. Markov Chains A discrete time process {X n,n = 0,1,2,...} with discrete state space X n {0,1,2,...} is a Markov chain if it has the Markov property: P[X n+1 =j X n =i,x n 1 =i n 1,...,X 0 =i 0 ] = P[X

More information

MATH 2030: MATRICES ,, a m1 a m2 a mn If the columns of A are the vectors a 1, a 2,...,a n ; A is represented as A 1. .

MATH 2030: MATRICES ,, a m1 a m2 a mn If the columns of A are the vectors a 1, a 2,...,a n ; A is represented as A 1. . MATH 030: MATRICES Matrix Operations We have seen how matrices and the operations on them originated from our study of linear equations In this chapter we study matrices explicitely Definition 01 A matrix

More information

General Lorentz Boost Transformations, Acting on Some Important Physical Quantities

General Lorentz Boost Transformations, Acting on Some Important Physical Quantities General Lorentz Boost Transformations, Acting on Some Important Physical Quantities We are interested in transforming measurements made in a reference frame O into measurements of the same quantities as

More information

On prediction and density estimation Peter McCullagh University of Chicago December 2004

On prediction and density estimation Peter McCullagh University of Chicago December 2004 On prediction and density estimation Peter McCullagh University of Chicago December 2004 Summary Having observed the initial segment of a random sequence, subsequent values may be predicted by calculating

More information

The Boundary Problem: Markov Chain Solution

The Boundary Problem: Markov Chain Solution MATH 529 The Boundary Problem: Markov Chain Solution Consider a random walk X that starts at positive height j, and on each independent step, moves upward a units with probability p, moves downward b units

More information

Kevin James. MTHSC 3110 Section 2.1 Matrix Operations

Kevin James. MTHSC 3110 Section 2.1 Matrix Operations MTHSC 3110 Section 2.1 Matrix Operations Notation Let A be an m n matrix, that is, m rows and n columns. We ll refer to the entries of A by their row and column indices. The entry in the i th row and j

More information

Markov Chains Absorption Hamid R. Rabiee

Markov Chains Absorption Hamid R. Rabiee Markov Chains Absorption Hamid R. Rabiee Absorbing Markov Chain An absorbing state is one in which the probability that the process remains in that state once it enters the state is (i.e., p ii = ). A

More information

Outline. Linear maps. 1 Vector addition is commutative: summands can be swapped: 2 addition is associative: grouping of summands is irrelevant:

Outline. Linear maps. 1 Vector addition is commutative: summands can be swapped: 2 addition is associative: grouping of summands is irrelevant: Outline Wiskunde : Vector Spaces and Linear Maps B Jacobs Institute for Computing and Information Sciences Digital Security Version: spring 0 B Jacobs Version: spring 0 Wiskunde / 55 Points in plane The

More information