Calculating Bounds on Expected Return and First Passage Times in Finite-State Imprecise Birth-Death Chains


Birth-death chains

A birth-death chain is a special type of Markov chain with finite state space $\mathcal{X} := \{0,\dots,L\}$, where $L \in \mathbb{N}$.
$X_n$ denotes a random variable and $X_{k:n}$ a sequence of such variables, with $k, n \in \mathbb{N}$ and $k \leq n$; a sequence can also be infinite: $X_{k:\infty}$.
A sequence of state values is written $x_{1:n} := x_1,\dots,x_n$ in $\mathcal{X}^n$.
Markov condition: $E_{n+1}(\,\cdot \mid x_{1:n}) = E_{n+1}(\,\cdot \mid x_n)$ for all $x_{1:n} \in \mathcal{X}^n$, where $E_{n+1}(\,\cdot \mid x_n)$ is the expectation operator associated with the time-homogeneous p.m.f. $p(X_{n+1} \mid x_n)$.
The transition matrix is
$$P = \begin{pmatrix} r_0 & p_0 & 0 & \cdots & 0 \\ q_1 & r_1 & p_1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & q_{L-1} & r_{L-1} & p_{L-1} \\ 0 & \cdots & 0 & q_L & r_L \end{pmatrix}.$$
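To make the matrix structure concrete, here is a minimal sketch (assuming NumPy; the function name birth_death_matrix and the argument layout are mine, not from the slides) that assembles the tridiagonal transition matrix from the staying, birth and death probabilities; the example values are those of the linear-vacuous example further below.

```python
import numpy as np

def birth_death_matrix(r, p, q):
    """Tridiagonal transition matrix of a birth-death chain on {0, ..., L}.

    r: staying probabilities r_0, ..., r_L       (length L+1)
    p: upward probabilities  p_0, ..., p_{L-1}   (length L)
    q: downward probabilities q_1, ..., q_L      (length L)
    Each row of the resulting matrix must sum to one.
    """
    L = len(r) - 1
    P = np.zeros((L + 1, L + 1))
    P[np.arange(L + 1), np.arange(L + 1)] = r      # main diagonal: r_i
    P[np.arange(L), np.arange(1, L + 1)] = p       # upper diagonal: p_i
    P[np.arange(1, L + 1), np.arange(L)] = q       # lower diagonal: q_i
    return P

# Example: the 5-state matrix used in the linear-vacuous example below.
P = birth_death_matrix(r=[0.55, 0.5, 0.5, 0.5, 0.4],
                       p=[0.45, 0.2, 0.2, 0.2],
                       q=[0.3, 0.3, 0.3, 0.6])
assert np.allclose(P.sum(axis=1), 1.0)
```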

Imprecise birth-death chains

Consider a matrix $P$ whose p.m.f.s are not precisely known.
For every $i \in \mathcal{X}$, the p.m.f. of the $i$-th row belongs to a credal set $\mathcal{M}_i$ consisting of elements $f_i$ of the form
$$f_i(j) = \begin{cases} q_i & \text{if } j = i-1 \\ r_i & \text{if } j = i \\ p_i & \text{if } j = i+1 \\ 0 & \text{otherwise} \end{cases} \quad \text{for } i \in \mathcal{X}\setminus\{0,L\},$$
$$f_0(j) = \begin{cases} r_0 & \text{if } j = 0 \\ p_0 & \text{if } j = 1 \\ 0 & \text{otherwise,} \end{cases} \qquad f_L(j) = \begin{cases} q_L & \text{if } j = L-1 \\ r_L & \text{if } j = L \\ 0 & \text{otherwise.} \end{cases}$$
Positivity assumption: $r_0, p_0, r_L, q_L$ and $q_i, r_i, p_i$ for all $i \in \mathcal{X}\setminus\{0,L\}$ are strictly positive.

Imprecise Markov condition

Lower and upper expectations of a real-valued function $f$ on $\mathcal{X}$:
$$\underline{E}(f \mid i) := \min_{f_i \in \mathcal{M}_i} E_{f_i}(f) = \min_{f_i \in \mathcal{M}_i} \sum_{j \in \mathcal{X}} f_i(j)\, f(j), \qquad \overline{E}(f \mid i) := \max_{f_i \in \mathcal{M}_i} E_{f_i}(f) = \max_{f_i \in \mathcal{M}_i} \sum_{j \in \mathcal{X}} f_i(j)\, f(j),$$
and for all $x_{1:n} \in \mathcal{X}^n$, the imprecise Markov condition is
$$\underline{E}_{n+1}(\,\cdot \mid x_{1:n}) = \underline{E}_{n+1}(\,\cdot \mid x_n) := \underline{E}(\,\cdot \mid x_n).$$
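When $\mathcal{M}_i$ is represented by a finite collection of candidate p.m.f.s (for instance its extreme points, or the triplets of the general example at the end), the two optimisations above reduce to a minimum and a maximum over that collection. A small sketch under that finite-representation assumption (which is mine, not part of the slides):

```python
def lower_expectation(credal_set, f):
    """Lower expectation: min over f_i in M_i of sum_j f_i(j) * f(j).

    credal_set: finite list of p.m.f.s (each a list of length |X|) representing M_i.
    f:          real-valued function on X, given as a list of the same length.
    """
    return min(sum(fij * fj for fij, fj in zip(fi, f)) for fi in credal_set)

def upper_expectation(credal_set, f):
    """Upper expectation: the conjugate maximum instead of the minimum."""
    return max(sum(fij * fj for fij, fj in zip(fi, f)) for fi in credal_set)
```

If $\mathcal{M}_i$ is a polytope, the linear objective $\sum_j f_i(j) f(j)$ attains its minimum and maximum at the finitely many extreme points, so enumerating them is enough.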

Global uncertainty models

Based on the notion of submartingales, we derive global uncertainty models.
These models satisfy a version of the law of iterated expectation.
For every $n \in \mathbb{N}$ and every real-valued function $g$ on $\mathcal{X}^{\mathbb{N}}$,
$$\underline{E}_{n+1:\infty}(g(X_{n+1:\infty}) \mid i) = \underline{E}_{n+2:\infty}(g(X_{n+2:\infty}) \mid i) \quad \text{(time-homogeneity).}$$
By defining $f'$ on $\mathcal{X}$ by $f'(i') := \underline{E}_{n+2:\infty}(g(i', X_{n+2:\infty}) \mid i')$ for all $i' \in \mathcal{X}$, we then have
$$\underline{E}_{n+1:\infty}(g(X_{n+1:\infty}) \mid i) = \underline{E}_{n+1}(f' \mid i) = \underline{E}(f' \mid i).$$

First passage time

The first passage time from $i$ to $j$, with $i, j \in \mathcal{X}$, is
$$\tau_{i\to j}(i, X_{n+1:\infty}) := \begin{cases} 1 & \text{if } X_{n+1} = j \\ 1 + \tau_{X_{n+1}\to j}(X_{n+1}, X_{n+2:\infty}) & \text{if } X_{n+1} \neq j \end{cases} \;=\; 1 + \mathbb{I}_{j^c}(X_{n+1})\, \tau_{X_{n+1}\to j}(X_{n+1}, X_{n+2:\infty}),$$
where $\mathbb{I}_{j^c}$ is the indicator function of $j^c := \mathcal{X}\setminus\{j\}$. For $i = j$, we obtain the return time.
Due to time-homogeneity, $\underline{\tau}_{i\to j,n} := \underline{E}_{n+1:\infty}(\tau_{i\to j}(i, X_{n+1:\infty}) \mid i)$ and $\overline{\tau}_{i\to j,n} := \overline{E}_{n+1:\infty}(\tau_{i\to j}(i, X_{n+1:\infty}) \mid i)$ do not depend on $n$ and will be denoted by $\underline{\tau}_{i\to j}$ and $\overline{\tau}_{i\to j}$.
Due to the positivity assumption, $\underline{\tau}_{i\to j}$ and $\overline{\tau}_{i\to j}$ are real-valued and strictly positive and have the form
$$\underline{\tau}_{i\to j} = 1 + \underline{E}(\mathbb{I}_{j^c}\, \underline{\tau}_{\cdot\to j} \mid i) \qquad \text{and} \qquad \overline{\tau}_{i\to j} = 1 + \overline{E}(\mathbb{I}_{j^c}\, \overline{\tau}_{\cdot\to j} \mid i).$$
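To make the recursive definition concrete, here is a tiny hypothetical helper (not from the slides) that evaluates the first passage time on a single realised trajectory; for $i = j$ it returns the corresponding return time.

```python
def first_passage_time(path, j):
    """Steps until the realised trajectory X_{n+1}, X_{n+2}, ... first hits state j."""
    for steps, state in enumerate(path, start=1):
        if state == j:
            return steps
    raise ValueError("state j is not reached on this finite path")

# Starting from i = 0, the trajectory 1, 0, 1, 2 first hits j = 2 after 4 steps.
assert first_passage_time([1, 0, 1, 2], 2) == 4
```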

Lower expected upward first passage time

Consider the first passage time from $i$ to $j$ with $i, j \in \mathcal{X}$ and $i < j$.
For the boundary state, $\underline{\tau}_{0\to 1} = 1/\overline{p}_0$, where $\overline{p}_0$ denotes the largest value of $p_0$ over $\mathcal{M}_0$.
For all $i \in \mathcal{X}\setminus\{0,L\}$, we have that
$$\min_{f_i \in \mathcal{M}_i}\bigl\{q_i\, \underline{\tau}_{i-1\to i} - p_i\, \underline{\tau}_{i\to i+1}\bigr\} = -1.$$
For all $\mathcal{M}_i$ satisfying the positivity assumption, with $i \in \mathcal{X}\setminus\{0,L\}$ and $c$ a real constant, $\min_{f_i \in \mathcal{M}_i}\{q_i c - p_i \mu\}$ is strictly decreasing in $\mu$.

Lower expected upward first passage time (continued)

From $\min_{f_i \in \mathcal{M}_i}\{q_i\, \underline{\tau}_{i-1\to i} - p_i\, \underline{\tau}_{i\to i+1}\} = -1$, we can calculate $\underline{\tau}_{i\to i+1}$ recursively using a bisection method, as long as we have already calculated $\underline{\tau}_{i-1\to i}$.
Moreover, for all $i \in \mathcal{X}\setminus\{0,L\}$ such that $i + 1 < j$, we have that $\underline{\tau}_{i\to j} = \underline{\tau}_{i\to i+1} + \underline{\tau}_{i+1\to j}$.
For all $i \in \mathcal{X}$ such that $i < j$, we have that
$$\underline{\tau}_{i\to j} = \sum_{k=i}^{j-1} \underline{\tau}_{k\to k+1}.$$
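The sketch below puts the boundary value, the recursion and the bisection step together, under the assumption (mine, not the slides') that $\mathcal{M}_0$ is described by a finite set of possible $p_0$ values and every inner $\mathcal{M}_i$ by a finite list of $(q_i, r_i, p_i)$ triplets, so that each minimum can be evaluated by enumeration.

```python
def lower_upward_passage_times(p0_values, inner_credal, tol=1e-10):
    """Lower expected upward first passage times tau_{0->1}, ..., tau_{L-1->L}.

    p0_values:    finite list of possible p_0 values describing M_0.
    inner_credal: list of the credal sets M_1, ..., M_{L-1}, each a finite
                  list of (q_i, r_i, p_i) triplets with strictly positive p_i.
    Returns [tau_{0->1}, tau_{1->2}, ..., tau_{L-1->L}].
    """
    taus = [1.0 / max(p0_values)]         # boundary: tau_{0->1} = 1 / upper p_0
    for M_i in inner_credal:
        c = taus[-1]                      # tau_{i-1 -> i}, already computed
        # g(mu) := min_{(q,r,p) in M_i} (q*c - p*mu) is strictly decreasing in mu,
        # so the unique root of g(mu) = -1 can be found by bisection.
        g = lambda mu: min(q * c - p * mu for q, r, p in M_i)
        lo, hi = 0.0, 1.0
        while g(hi) > -1.0:               # grow the bracket until it contains the root
            hi *= 2.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) > -1.0 else (lo, mid)
        taus.append(0.5 * (lo + hi))
    return taus
```

By the additivity stated above, $\underline{\tau}_{i\to j}$ for $i < j$ is then simply the sum of the corresponding consecutive entries of the returned list.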

Lower expected downward first passage time

Consider the first passage time from $i$ to $j$ with $i, j \in \mathcal{X}$ and $i > j$.
Similarly to the upward case, $\underline{\tau}_{L\to L-1} = 1/\overline{q}_L$, and for all $i \in \mathcal{X}\setminus\{0,L\}$ we have that
$$\min_{f_i \in \mathcal{M}_i}\bigl\{p_i\, \underline{\tau}_{i+1\to i} - q_i\, \underline{\tau}_{i\to i-1}\bigr\} = -1.$$
For all $i \in \mathcal{X}$ such that $i > j$, we have that
$$\underline{\tau}_{i\to j} = \sum_{k=j}^{i-1} \underline{\tau}_{k+1\to k}.$$
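The downward increments $\underline{\tau}_{k+1\to k}$ can be obtained with the same bisection recursion as in the upward sketch, started from $\underline{\tau}_{L\to L-1} = 1/\overline{q}_L$ and mirrored towards state 0. Once they are available, the summation formula above is a one-liner; the helper below is hypothetical, with down_increments[k] assumed to hold $\underline{\tau}_{k+1\to k}$.

```python
def lower_downward_passage_time(down_increments, i, j):
    """tau_{i->j} for i > j as the sum of the consecutive downward increments.

    down_increments[k] is assumed to hold tau_{k+1 -> k}, for k = 0, ..., L-1.
    """
    return sum(down_increments[k] for k in range(j, i))
```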

Lower expected return time

Consider the first passage time from $i$ to $j$ with $i, j \in \mathcal{X}$ and $i = j$, i.e. the return time.
Combining the results for expected upward and downward first passage times,
$$\underline{\tau}_{0\to 0} = 1 + \min_{f_0 \in \mathcal{M}_0}\{p_0\, \underline{\tau}_{1\to 0}\} = 1 + \underline{p}_0\, \underline{\tau}_{1\to 0}, \qquad \underline{\tau}_{L\to L} = 1 + \min_{f_L \in \mathcal{M}_L}\{q_L\, \underline{\tau}_{L-1\to L}\} = 1 + \underline{q}_L\, \underline{\tau}_{L-1\to L},$$
and for all $i \in \mathcal{X}\setminus\{0,L\}$,
$$\underline{\tau}_{i\to i} = 1 + \min_{f_i \in \mathcal{M}_i}\bigl\{q_i\, \underline{\tau}_{i-1\to i} + p_i\, \underline{\tau}_{i+1\to i}\bigr\}.$$
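Given the upward and downward increments, the return times require no further bisection: each minimum is evaluated directly. A minimal sketch under the same finite-representation assumption as before, with up[i] holding $\underline{\tau}_{i\to i+1}$ and down[k] holding $\underline{\tau}_{k+1\to k}$:

```python
def lower_return_times(p0_values, qL_values, inner_credal, up, down):
    """Lower expected return times tau_{i->i} for all states i in {0, ..., L}.

    up[i]   holds tau_{i->i+1}   (upward increments,   i = 0, ..., L-1)
    down[k] holds tau_{k+1->k}   (downward increments, k = 0, ..., L-1)
    inner_credal[i-1] is the finite credal set M_i, for i = 1, ..., L-1.
    """
    L = len(up)
    ret = [1.0 + min(p0_values) * down[0]]        # tau_{0->0} = 1 + lower p_0 * tau_{1->0}
    for i in range(1, L):
        M_i = inner_credal[i - 1]
        ret.append(1.0 + min(q * up[i - 1] + p * down[i] for q, r, p in M_i))
    ret.append(1.0 + min(qL_values) * up[L - 1])  # tau_{L->L} = 1 + lower q_L * tau_{L-1->L}
    return ret
```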

Linear vacuous mixtures

The set $\mathcal{M}_i$ is a subset of the simplex $\Sigma_\mathcal{X}$.
For any $i \in \mathcal{X}$, $\Sigma_{\mathcal{X}_i}$ is the subset of $\Sigma_\mathcal{X}$ containing the p.m.f.s of the form $f_i$ above.
Given precise mass functions $f_0, f_L, f_i$ and $\epsilon_i \in [0,1)$ for any $i \in \mathcal{X}$,
$$\mathcal{M}_0 = \bigl\{(1-\epsilon_0) f_0 + \epsilon_0 f_0' : f_0' \in \Sigma_{\mathcal{X}_0}\bigr\}, \qquad \mathcal{M}_L = \bigl\{(1-\epsilon_L) f_L + \epsilon_L f_L' : f_L' \in \Sigma_{\mathcal{X}_L}\bigr\},$$
and for all $i \in \mathcal{X}\setminus\{0,L\}$,
$$\mathcal{M}_i = \bigl\{(1-\epsilon_i) f_i + \epsilon_i f_i' : f_i' \in \Sigma_{\mathcal{X}_i}\bigr\}.$$

Linear vacuous mixtures (continued)

We can also define
$$\underline{q}_i := (1-\epsilon_i)\, q_i \quad \text{and} \quad \overline{q}_i := (1-\epsilon_i)\, q_i + \epsilon_i \quad \text{for all } i \in \mathcal{X}\setminus\{0\},$$
$$\underline{p}_i := (1-\epsilon_i)\, p_i \quad \text{and} \quad \overline{p}_i := (1-\epsilon_i)\, p_i + \epsilon_i \quad \text{for all } i \in \mathcal{X}\setminus\{L\}.$$
The lower expected upward and downward first passage times and return times then take the closed forms
$$\underline{\tau}_{i\to i+1} = \sum_{k=0}^{i} \frac{\prod_{\ell=k+1}^{i} \underline{q}_\ell}{\prod_{m=k}^{i} \overline{p}_m}, \qquad \underline{\tau}_{i\to i-1} = \sum_{k=i}^{L} \frac{\prod_{\ell=i}^{k-1} \underline{p}_\ell}{\prod_{m=i}^{k} \overline{q}_m}, \qquad \underline{\tau}_{i\to i} = 1 + \underline{q}_i\, \underline{\tau}_{i-1\to i} + \underline{p}_i\, \underline{\tau}_{i+1\to i}.$$
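A short sketch of these closed forms follows; the helper names are mine, the lists are indexed by state, and boundary entries that never occur in a formula are simply not accessed.

```python
from math import prod

def lv_bounds(v, eps):
    """Linear-vacuous bounds: lower = (1-eps)*v and upper = (1-eps)*v + eps, elementwise."""
    return [(1 - e) * x for x, e in zip(v, eps)], [(1 - e) * x + e for x, e in zip(v, eps)]

def lv_lower_up(q_lo, p_up, i):
    """tau_{i->i+1} = sum_{k=0}^{i} prod_{l=k+1}^{i} q_lo[l] / prod_{m=k}^{i} p_up[m]."""
    return sum(prod(q_lo[k + 1:i + 1]) / prod(p_up[k:i + 1]) for k in range(i + 1))

def lv_lower_down(p_lo, q_up, i, L):
    """tau_{i->i-1} = sum_{k=i}^{L} prod_{l=i}^{k-1} p_lo[l] / prod_{m=i}^{k} q_up[m]."""
    return sum(prod(p_lo[i:k]) / prod(q_up[i:k + 1]) for k in range(i, L + 1))
```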

Linear vacuous mixtures Consider state space X := {0,...,4}, e i = e = 0.4 and Q q P = 0 1 0.55 0.45 0 0 0 0.3 0.5 0.2 0 0 B 0 0.3 0.5 0.2 0 C @ 0 0 0.3 0.5 0.2A 0 0 0 0.6 0.4 then, for all i 2 X \{0,L} p r Q we calculate lower and upper expected return times i i!i i!i 0 1.584 91.41 1 1.526 24.956 2 1.678 17.845 3 1.656 79.71 4 2.037 503.724

General example

Consider the state space $\mathcal{X} := \{0,\dots,4\}$.
$\mathcal{M}_0$ is determined by $p_0 \in [0.15, 0.4]$ and $\mathcal{M}_L$ by $q_L \in [0.2, 0.6]$.
For all $i \in \mathcal{X}\setminus\{0,L\}$, $\mathcal{M}_i$ is characterised by triplets $(q_i, r_i, p_i)$ of the form
(0.65, 0.15, 0.2), (0.6, 0.25, 0.15), (0.5, 0.4, 0.1), (0.43, 0.45, 0.12), (0.33, 0.5, 0.17), (0.27, 0.43, 0.3), (0.25, 0.35, 0.4), (0.3, 0.25, 0.45), (0.4, 0.17, 0.43), (0.55, 0.1, 0.35).
Lower and upper expected upward and downward first passage times:

upward $i\to j$ | $\underline{\tau}_{i\to j}$ | $\overline{\tau}_{i\to j}$ | downward $i\to j$ | $\underline{\tau}_{i\to j}$ | $\overline{\tau}_{i\to j}$
0→1 | 2.5 | 6.666 | 4→3 | 1.666 | 5
1→2 | 3.889 | 43.333 | 3→2 | 2.051 | 12
2→3 | 4.814 | 226.666 | 2→1 | 2.169 | 23.2
3→4 | 5.432 | 1143.333 | 1→0 | 2.206 | 41.12
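For instance, feeding $\mathcal{M}_0$ (through the endpoints of the $p_0$ interval, since only its largest value is needed) and the shared triplets into the earlier lower_upward_passage_times sketch should reproduce, approximately, the lower upward column of this table.

```python
triplets = [(0.65, 0.15, 0.2), (0.6, 0.25, 0.15), (0.5, 0.4, 0.1), (0.43, 0.45, 0.12),
            (0.33, 0.5, 0.17), (0.27, 0.43, 0.3), (0.25, 0.35, 0.4), (0.3, 0.25, 0.45),
            (0.4, 0.17, 0.43), (0.55, 0.1, 0.35)]
p0_values = [0.15, 0.4]                 # endpoints of the interval describing M_0
inner = [triplets, triplets, triplets]  # M_1, M_2, M_3 share the same triplets
taus = lower_upward_passage_times(p0_values, inner)
print([round(t, 2) for t in taus])      # roughly [2.5, 3.89, 4.81, 5.43]
```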

Conclusions and future work

We presented simple methods for computing lower and upper expected first passage and return times.
Future work: applying similar methods to other types of chains, e.g. Bonus-Malus systems, and to continuous-time systems.