Asymptotic properties of imprecise Markov chains
FACULTY OF ENGINEERING

Asymptotic properties of imprecise Markov chains

Filip Hermans, Gert De Cooman
SYSTeMS, Ghent University
10 September 2009, München
Imprecise Markov chain

$X_0 \to X_1 \to X_2 \to X_3$

Every node is a state variable with finite state space $\mathcal{X} = \{x_1, x_2, \ldots, x_n\}$.

The upper transition operator $\overline{T} \colon \mathbb{R}^n \to \mathbb{R}^n$ is given by
$$\overline{T}f := \begin{pmatrix} \overline{P}(f \mid x_1) \\ \overline{P}(f \mid x_2) \\ \vdots \\ \overline{P}(f \mid x_n) \end{pmatrix}, \qquad \text{so } \overline{T}f(x_i) = \overline{P}(f \mid x_i).$$

Doing so, $\overline{P}(f) = \overline{P}_0(\overline{T}^3 f)$, where $\overline{T}^3 := \overline{T} \circ \overline{T} \circ \overline{T}$.

F. Hermans (Ghent University) Asymptotic properties of imprecise Markov chains WPMSIIP2 3 / 15
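As a concrete numerical illustration (not part of the slides), the upper transition operator can be sketched by representing each row's credal set through finitely many extreme-point mass functions; the name `upper_T`, the two-state model, and all numbers are illustrative assumptions.

```python
# A minimal sketch of an upper transition operator: each row's credal set M_i
# is given by a finite list of extreme-point pmfs, and
#   (Tbar f)(x_i) = max_{p in M_i} sum_j p(x_j) f(x_j).

def upper_T(rows):
    """Build the upper transition operator from per-state credal sets."""
    def Tbar(f):
        return [max(sum(p[j] * f[j] for j in range(len(f))) for p in creds)
                for creds in rows]
    return Tbar

# Illustrative 2-state model; credal sets listed by their extreme points.
rows = [
    [[0.3, 0.7], [0.7, 0.3]],  # credal set for P(. | x_1)
    [[0.1, 0.9], [0.5, 0.5]],  # credal set for P(. | x_2)
]
Tbar = upper_T(rows)

f = [1.0, 0.0]                 # the gamble f = I_{x_1}
Tf = Tbar(f)                   # the vector of one-step upper expectations
T3f = Tbar(Tbar(Tf))           # Tbar^3 f = (Tbar o Tbar o Tbar) f

# Upper prevision at time 3: Pbar(f) = Pbar_0(Tbar^3 f); the initial model
# is taken precise and uniform here, purely for illustration.
P0 = [0.5, 0.5]
Pbar_f = sum(P0[i] * T3f[i] for i in range(len(P0)))
```

In this toy model `Tf` comes out as `[0.7, 0.5]`: each component is the largest expectation of `f` over the corresponding row credal set.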
Outline

1. Properties of the transition operator
2. Convergence
3. Structuring the state space
4. Convergence in terms of state properties
$\overline{T}$ has all the usual properties

The properties of upper previsions are transferred to the transition operator:

1. If $f \le g$ then $\overline{T}f \le \overline{T}g$ [monotonicity],
2. If $c \in \mathbb{R}$ then $\overline{T}(f + c) = \overline{T}f + c$ [constant additivity],
3. If $\alpha > 0$ then $\overline{T}(\alpha f) = \alpha\,\overline{T}f$ [positive homogeneity].

From these properties it automatically follows that
- the map $\overline{T}$ is bounded,
- the map $\overline{T}$ is non-expansive: $\|\overline{T}f - \overline{T}g\|_\infty \le \|f - g\|_\infty$.

A map with the first two properties is also called topical.
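The three properties, and the non-expansiveness they imply, can be spot-checked numerically; this is a sketch on an illustrative credal model, not the authors' code.

```python
# Numerical spot-check of monotonicity, constant additivity, positive
# homogeneity, and sup-norm non-expansiveness of an upper transition operator.

def upper_T(rows):
    def Tbar(f):
        return [max(sum(p[j] * f[j] for j in range(len(f))) for p in creds)
                for creds in rows]
    return Tbar

Tbar = upper_T([[[0.3, 0.7], [0.7, 0.3]],
                [[0.1, 0.9], [0.5, 0.5]]])

f, g = [1.0, -2.0], [1.5, 0.0]          # f <= g pointwise
c, a = 3.0, 2.0

# 1. monotonicity: f <= g implies Tbar f <= Tbar g
mono = all(u <= v for u, v in zip(Tbar(f), Tbar(g)))
# 2. constant additivity: Tbar(f + c) = Tbar f + c
const_add = all(abs(u - (v + c)) < 1e-12
                for u, v in zip(Tbar([x + c for x in f]), Tbar(f)))
# 3. positive homogeneity: Tbar(a f) = a Tbar f
pos_hom = all(abs(u - a * v) < 1e-12
              for u, v in zip(Tbar([a * x for x in f]), Tbar(f)))
# consequence: non-expansiveness in the sup-norm
sup_dist = lambda u, v: max(abs(x - y) for x, y in zip(u, v))
non_expansive = sup_dist(Tbar(f), Tbar(g)) <= sup_dist(f, g) + 1e-12
```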
A lot is known for bounded and non-expansive maps

1. Edelstein [1963] showed that all elements of the $\omega$-limit set of $f$, $\omega_{\overline{T}}(f)$, are recurrent and, moreover, that $\overline{T}$ acts isometrically on every $\omega$-limit set.
2. By a result of Sine [1990] we know that $\overline{T}^k f$ converges to a limit cycle as $k \to \infty$: $\lim_{k \to \infty} \overline{T}^k f = \xi_f$, where $\overline{T}^p \xi_f = \xi_f$. Moreover, the maximal period $p$ is bounded by a function of the dimension of the underlying space only.
3. Lemmens and Scheutzow [2005] showed that for topical functions the maximal period of a periodic point is $\binom{n}{\lfloor n/2 \rfloor}$.
How we interpret convergence

[Plot: $\max \overline{T}^k f$ and $\min \overline{T}^k f$ against $k$, for $f, \overline{T}f, \overline{T}^2 f, \ldots, \overline{T}^5 f$]

1. An imprecise Markov chain PF-converges if $\max \overline{T}^k f - \min \overline{T}^k f \to 0$ as $k \to \infty$.
2. An imprecise Markov chain converges if $\overline{T}^k f \to \xi_f$ with $\overline{T}\xi_f = \xi_f$ as $k \to \infty$.

Clearly, PF-convergence implies convergence.
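PF-convergence can be watched directly by iterating the operator and tracking the oscillation $\max \overline{T}^k f - \min \overline{T}^k f$; the model below is an illustrative assumption (the same toy credal sets as before), not taken from the slides.

```python
# Track the oscillation osc(Tbar^k f) = max Tbar^k f - min Tbar^k f.
# By monotonicity and constant additivity the oscillation never increases;
# for a PF-convergent chain it shrinks towards 0.

def upper_T(rows):
    def Tbar(f):
        return [max(sum(p[j] * f[j] for j in range(len(f))) for p in creds)
                for creds in rows]
    return Tbar

Tbar = upper_T([[[0.3, 0.7], [0.7, 0.3]],
                [[0.1, 0.9], [0.5, 0.5]]])

f = [1.0, 0.0]
oscs = []
for _ in range(40):
    oscs.append(max(f) - min(f))
    f = Tbar(f)
# after the loop, f is (numerically) a constant gamble: PF-convergence
```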
Convergence can be restated in terms of fixed points

It can easily be seen that:

Proposition
1. An imprecise Markov chain converges if and only if every periodic point is also a fixed point.
2. An imprecise Markov chain PF-converges if and only if every periodic point of $\overline{T}$ is constant.

Remember that $f$ is a fixed point if $\overline{T}f = f$.

To conclude upon (PF-)convergence, all periodic points of $\overline{T}$ must be investigated. This is easy in the linear case, however...
An accessibility relation can be defined

Definition
We say that state $y$ is accessible from state $x$, written $x \to y$, if there exists $n \in \mathbb{N}$ such that $\overline{T}^n I_{\{y\}}(x) > 0$.

As $\to$ is reflexive and transitive, it determines a preorder on $\mathcal{X}$. The binary relation $\leftrightarrow$ on $\mathcal{X}$ is the associated equivalence relation. This communication relation partitions the state set $\mathcal{X}$ into equivalence classes, called communication classes. The preorder $\to$ induces a partial order on this partition, also denoted by $\to$. Because of this partial order, maximal communication classes will exist.
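The accessibility relation and the resulting communication classes can be computed directly from the operator; this sketch checks $\overline{T}^n I_{\{y\}}(x) > 0$ up to a finite horizon (enough for small chains), on an illustrative 3-state model of my own choosing.

```python
# Accessibility x -> y holds when Tbar^k I_{y}(x) > 0 for some k >= 0;
# communication classes are the equivalence classes of mutual accessibility.

def upper_T(rows):
    def Tbar(f):
        return [max(sum(p[j] * f[j] for j in range(len(f))) for p in creds)
                for creds in rows]
    return Tbar

rows = [
    [[0.5, 0.5, 0.0]],                    # from x: precise, stays in {x, y}
    [[0.6, 0.4, 0.0]],                    # from y: precise, stays in {x, y}
    [[0.2, 0.2, 0.6], [0.0, 0.0, 1.0]],   # from z: may reach {x, y} or stay put
]
Tbar = upper_T(rows)
n = len(rows)

def accessible(x, y, horizon=10):
    """Is Tbar^k I_{y}(x) > 0 for some 0 <= k <= horizon?"""
    f = [1.0 if j == y else 0.0 for j in range(n)]
    for _ in range(horizon + 1):
        if f[x] > 0:
            return True
        f = Tbar(f)
    return False

# communication classes: equivalence classes of mutual accessibility
classes = {frozenset(y for y in range(n) if accessible(x, y) and accessible(y, x))
           for x in range(n)}
```

Here `classes` comes out as the partition `{{x, y}, {z}}`: states x and y communicate, while z can (possibly) reach them but not conversely.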
The relation $\to$ structures the state space

[Diagram: the state set $\mathcal{X}$ partitioned into communication classes $D_1, \ldots, D_9$, with maximal classes $M_1$, $M_2$, $M_3$]
Communication classes split up the imprecise Markov chain

Proposition
Consider a stationary imprecise Markov chain with upper transition operator $\overline{T}$. Let $C$ be a closed set of states, and let $\mathcal{C}$ be a partition of the state set $\mathcal{X}$ into closed sets. Then
1. $\overline{T}(h I_B)(x) = 0$ for all $h \in \mathcal{L}(\mathcal{X})$, all $x \in C$ and all $B \subseteq C^c$;
2. $\overline{T}h(x) = \overline{T}(h I_C)(x)$ for all $h \in \mathcal{L}(\mathcal{X})$ and all $x \in C$;
3. $\overline{T}h = \sum_{C \in \mathcal{C}} \overline{T}(I_C h) = \sum_{C \in \mathcal{C}} I_C\,\overline{T}(I_C h)$ for all $h \in \mathcal{L}(\mathcal{X})$.

Consequently, if more than one communication class exists, then PF-convergence is not possible.

Example
Assume there are $n$ communication classes and take the gamble $f = \sum_{k=1}^n k\, I_{C_k}$; then
$$\overline{T}f = \sum_{k=1}^n I_{C_k}\,\overline{T}(k\, I_{C_k}) = \sum_{k=1}^n I_{C_k}\, k = f.$$
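The slide's blocking gamble is easy to verify concretely. The sketch below uses an illustrative chain with two closed classes and precise rows (my own choice of numbers, picked dyadic so the fixed-point check is exact in floating point).

```python
# With two closed communication classes C_1 = {0, 1} and C_2 = {2, 3}, the
# gamble f = 1*I_{C_1} + 2*I_{C_2} satisfies Tbar f = f. Since f is not
# constant, Tbar^k f never flattens out and PF-convergence is impossible.

def upper_T(rows):
    def Tbar(f):
        return [max(sum(p[j] * f[j] for j in range(len(f))) for p in creds)
                for creds in rows]
    return Tbar

rows = [
    [[0.5, 0.5, 0.0, 0.0]],   # C_1 is closed: no mass leaves {0, 1}
    [[0.25, 0.75, 0.0, 0.0]],
    [[0.0, 0.0, 0.25, 0.75]], # C_2 is closed: no mass leaves {2, 3}
    [[0.0, 0.0, 0.75, 0.25]],
]
Tbar = upper_T(rows)

f = [1.0, 1.0, 2.0, 2.0]      # f = sum_k k * I_{C_k}
is_fixed_point = (Tbar(f) == f)
```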
Regularity implies PF-convergence

Definition
A maximal aperiodic communication class is called regular. If there is only one communication class, then $\mathcal{X}$ is irreducible. If $\mathcal{X}$ is irreducible and aperiodic, $\mathcal{X}$ itself is also called regular:
$$(\exists n \in \mathbb{N})(\forall k \ge n)(\forall x, y \in \mathcal{X})\ (x \xrightarrow{k} y).$$

It can be shown that regularity of $\mathcal{X}$ is a sufficient condition for PF-convergence. However, regularity is too strong.

Example
Take the precise model with transition matrix ( ).
Top class regularity

Definition
An imprecise Markov chain is said to be top class regular if
$$R := \{ x \in \mathcal{X} : (\exists n \in \mathbb{N})(\forall k \ge n)(\forall y \in \mathcal{X})\ \overline{T}^k I_{\{x\}}(y) > 0 \} \neq \emptyset.$$
$R$ is the set of maximal regular states. This means that $R$ is a regular top class whenever the Markov chain is regular.

Top class regularity is a necessary condition for PF-convergence, but not a sufficient one.

Example
Assume $\mathcal{X} = \{x, y\}$ and
$$\overline{T}f = \begin{pmatrix} f(x) \\ \max\{f(x), f(y)\} \end{pmatrix};$$
then the identity matrix belongs to the credal set of $\overline{T}$. Therefore there is no PF-convergence.
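The 2-state counterexample can be made concrete with explicit row credal sets whose upper envelope is exactly the operator on the slide; the encoding via extreme points is my own illustrative construction.

```python
# Row x: the degenerate pmf at x (the chain surely stays at x).
# Row y: the vacuous credal set on {x, y}, with extreme points delta_x, delta_y.
# The induced upper operator is Tbar f = (f(x), max{f(x), f(y)}), and the
# identity matrix (delta_x, delta_y) lies in the credal set, so the
# non-constant gamble f = (0, 1) is a fixed point: no PF-convergence.

def upper_T(rows):
    def Tbar(f):
        return [max(sum(p[j] * f[j] for j in range(len(f))) for p in creds)
                for creds in rows]
    return Tbar

rows = [
    [[1.0, 0.0]],               # from x: surely stay at x
    [[1.0, 0.0], [0.0, 1.0]],   # from y: completely vacuous
]
Tbar = upper_T(rows)

f = [0.0, 1.0]
non_constant_fixed_point = (Tbar(f) == f and max(f) != min(f))
```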
Necessary and sufficient conditions for PF-convergence

Definition
A stationary imprecise Markov chain is called regularly absorbing if it is top class regular, meaning that
$$R := \{ x \in \mathcal{X} : (\exists n \in \mathbb{N})(\forall k \ge n)(\forall y \in \mathcal{X})\ \overline{T}^k I_{\{x\}}(y) > 0 \} \neq \emptyset,$$
and if moreover it is leaky, i.e. for all $y$ in $\mathcal{X} \setminus R$ there is some $n \in \mathbb{N}$ such that $\underline{T}^n I_R(y) > 0$.

Remark that accessibility to the complete communication class $R$ is required, and not to a single state of $R$.

Example
Assume $\mathcal{X} = \{x, y, z\}$ and
$$\underline{T}f = \begin{pmatrix} \min\{f(x), f(y)\} \\ \min\{f(x), f(y)\} \\ \min\{f(x), f(y)\} \end{pmatrix};$$
then $\underline{T}^n I_{\{x\}} = \underline{T}^n I_{\{y\}} = 0$ while $\underline{T}^n I_{\{x,y\}} = 1$.

Theorem
An imprecise Markov chain PF-converges if and only if it is regularly absorbing.
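A hedged sketch of the regularly absorbing check: the lower transition operator is obtained from the upper one by conjugacy, $\underline{T}f = -\overline{T}(-f)$, and leakiness asks for positive lower probability of reaching $R$. The 3-state model, the name `lower_T`, and all numbers are illustrative assumptions.

```python
# Illustrative chain: {x, y} forms a regular top class R, and z is guaranteed
# to leak into R (its single pmf puts mass 0.5 on x). Leakiness is checked
# with the lower operator, and PF-convergence is observed by iteration.

def upper_T(rows):
    def Tbar(f):
        return [max(sum(p[j] * f[j] for j in range(len(f))) for p in creds)
                for creds in rows]
    return Tbar

rows = [
    [[0.3, 0.7, 0.0], [0.7, 0.3, 0.0]],  # x: imprecise, stays in {x, y}
    [[0.1, 0.9, 0.0], [0.5, 0.5, 0.0]],  # y: imprecise, stays in {x, y}
    [[0.5, 0.0, 0.5]],                   # z: precise, moves to x half the time
]
Tbar = upper_T(rows)
lower_T = lambda f: [-v for v in Tbar([-u for u in f])]   # conjugacy

# leakiness of z towards R = {x, y}: lower probability of reaching R is > 0
I_R = [1.0, 1.0, 0.0]
leaky = lower_T(I_R)[2] > 0

# and indeed the oscillation of Tbar^k f dies out: PF-convergence
f = [1.0, 0.0, 0.0]
for _ in range(200):
    f = Tbar(f)
osc = max(f) - min(f)
```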
What about convergence in general?

Conjecture
An imprecise Markov chain converges if and only if
1. the $\to$-maximal communication classes are regular, and
2. every periodic communication class leaks to the union of its $\to$-dominating classes.
More informationLimit theorems for dependent regularly varying functions of Markov chains
Limit theorems for functions of with extremal linear behavior Limit theorems for dependent regularly varying functions of In collaboration with T. Mikosch Olivier Wintenberger wintenberger@ceremade.dauphine.fr
More informationMarkov chain Monte Carlo
1 / 26 Markov chain Monte Carlo Timothy Hanson 1 and Alejandro Jara 2 1 Division of Biostatistics, University of Minnesota, USA 2 Department of Statistics, Universidad de Concepción, Chile IAP-Workshop
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More information1 Ways to Describe a Stochastic Process
purdue university cs 59000-nmc networks & matrix computations LECTURE NOTES David F. Gleich September 22, 2011 Scribe Notes: Debbie Perouli 1 Ways to Describe a Stochastic Process We will use the biased
More informationJOSEPH B. KADANE AND JIASHUN JIN, IN HONOR OF ARNOLD ZELLNER
UNIFORM DISTRIBUTIONS ON THE INTEGERS: A CONNECTION TO THE BERNOUILLI RANDOM WALK JOSEPH B. KADANE AND JIASHUN JIN, IN HONOR OF ARNOLD ZELLNER Abstract. Associate to each subset of the integers its almost
More informationPositive and null recurrent-branching Process
December 15, 2011 In last discussion we studied the transience and recurrence of Markov chains There are 2 other closely related issues about Markov chains that we address Is there an invariant distribution?
More informationEquality of P-partition Generating Functions
Bucknell University Bucknell Digital Commons Honors Theses Student Theses 2011 Equality of P-partition Generating Functions Ryan Ward Bucknell University Follow this and additional works at: https://digitalcommons.bucknell.edu/honors_theses
More informationConfidence intervals for the mixing time of a reversible Markov chain from a single sample path
Confidence intervals for the mixing time of a reversible Markov chain from a single sample path Daniel Hsu Aryeh Kontorovich Csaba Szepesvári Columbia University, Ben-Gurion University, University of Alberta
More informationProbabilistic Juggling
Probabilistic Juggling Arvind Ayyer (joint work with Jeremie Bouttier, Sylvie Corteel and Francois Nunzi) arxiv:1402:3752 Conference on STOCHASTIC SYSTEMS AND APPLICATIONS (aka Borkarfest) Indian Institute
More informationMathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )
Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical
More informationIrreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1
Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate
More informationPermutation groups/1. 1 Automorphism groups, permutation groups, abstract
Permutation groups Whatever you have to do with a structure-endowed entity Σ try to determine its group of automorphisms... You can expect to gain a deep insight into the constitution of Σ in this way.
More informationRegular finite Markov chains with interval probabilities
5th International Symposium on Imprecise Probability: Theories and Applications, Prague, Czech Republic, 2007 Regular finite Markov chains with interval probabilities Damjan Škulj Faculty of Social Sciences
More informationLecture 5: Random Walks and Markov Chain
Spectral Graph Theory and Applications WS 20/202 Lecture 5: Random Walks and Markov Chain Lecturer: Thomas Sauerwald & He Sun Introduction to Markov Chains Definition 5.. A sequence of random variables
More informationMarkov Chains for Everybody
Markov Chains for Everybody An Introduction to the theory of discrete time Markov chains on countable state spaces. Wilhelm Huisinga, & Eike Meerbach Fachbereich Mathematik und Informatik Freien Universität
More informationChapter 2: Markov Chains and Queues in Discrete Time
Chapter 2: Markov Chains and Queues in Discrete Time L. Breuer University of Kent 1 Definition Let X n with n N 0 denote random variables on a discrete space E. The sequence X = (X n : n N 0 ) is called
More informationQUALIFYING EXAMINATION Harvard University Department of Mathematics Tuesday January 20, 2015 (Day 1)
Tuesday January 20, 2015 (Day 1) 1. (AG) Let C P 2 be a smooth plane curve of degree d. (a) Let K C be the canonical bundle of C. For what integer n is it the case that K C = OC (n)? (b) Prove that if
More information4.7.1 Computing a stationary distribution
At a high-level our interest in the rest of this section will be to understand the limiting distribution, when it exists and how to compute it To compute it, we will try to reason about when the limiting
More informationSome exercises on coherent lower previsions
Some exercises on coherent lower previsions SIPTA school, September 2010 1. Consider the lower prevision given by: f(1) f(2) f(3) P (f) f 1 2 1 0 0.5 f 2 0 1 2 1 f 3 0 1 0 1 (a) Does it avoid sure loss?
More information