A generalization of Dobrushin coefficient
1. Introduction and main results

We generalize the well-known Dobrushin coefficient $\delta$ in total variation to a weighted total variation coefficient $\delta_V$, which gives a criterion for the geometric ergodicity of discrete-time Markov chains. Let $X=(X_n)_{n\ge 0}$ be a discrete-time Markov chain taking values in a measurable space $(E,\mathcal{E})$. Denote by $P^n(x,A):=\mathbb{P}[X_n\in A\mid X_0=x]$, $x\in E$, $A\in\mathcal{E}$, the $n$-step transition kernel, and set $P=P^1$, the one-step kernel.
Geometric ergodicity means: there are an invariant probability measure $\pi$ on $(E,\mathcal{E})$, a constant $\rho\in[0,1)$ and a function $C:E\to(0,\infty)$ such that
$$\|P^n(x,\cdot)-\pi\|_{\mathrm{Var}} \le C(x)\,\rho^n \quad \text{for all } n>0,\ \pi\text{-a.s. } x\in E, \tag{1}$$
where $\|\mu\|_{\mathrm{Var}} := 2\sup_{A\in\mathcal{E}}|\mu(A)| = \sup\{\int_E f\,d\mu : |f|\le 1\}$ is the total variation of a signed measure $\mu$.
Then $X$ is said to be strongly ergodic if there exist constants $\rho\in[0,1)$ and $C<\infty$ such that
$$\sup_{x\in E}\|P^n(x,\cdot)-\pi\|_{\mathrm{Var}} \le C\rho^n \quad \text{for all } n>0. \tag{2}$$
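On a finite state space the supremum in (2) is a maximum and $\|\mu\|_{\mathrm{Var}}=\sum_x|\mu(x)|$, so strong ergodicity can be checked directly. A minimal Python sketch; the two-state kernel, the rate $\rho=0.7$ and the constant $C=2$ are illustrative assumptions, not from the talk:

```python
# Checking strong ergodicity (2) for a small chain: on a finite state space
# ||mu||_Var = sup{ int f dmu : |f| <= 1 } = sum_x |mu(x)|.
# The two-state kernel, rho = 0.7 and C = 2 below are illustrative assumptions.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = mat_mul(R, P)
    return R

def tv_norm(mu):
    return sum(abs(m) for m in mu)

P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [2 / 3, 1 / 3]   # stationary distribution: pi P = pi
rho = 0.7             # second eigenvalue of P

for n in range(1, 20):
    Pn = mat_pow(P, n)
    worst = max(tv_norm([Pn[x][y] - pi[y] for y in range(2)]) for x in range(2))
    assert worst <= 2.0 * rho ** n + 1e-12   # (2) holds with C = 2
```

For this chain the worst-case distance is exactly $(4/3)\,0.7^n$, so the bound with $C=2$ is not tight but holds for every $n$.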
From [6, 7], we know that (1) is equivalent to the following: there exist constants $\rho\in[0,1)$, $C<\infty$ and a $\pi$-a.s. finite function $V:E\to[1,\infty]$ such that
$$\|P^n(x,\cdot)-\pi\|_V \le C\,V(x)\,\rho^n, \tag{3}$$
where $\|\mu\|_V := \sup\{\int_E f\,d\mu : |f|\le V\}$ denotes the $V$-norm on signed measures.
Recall that for a kernel $K(x,A)$ ($x\in E$, $A\in\mathcal{E}$) and a function $f$,
$$\|K\|_V := \sup_{x\in E}\frac{\|K(x,\cdot)\|_V}{V(x)}, \qquad \|f\|_V := \sup_{x\in E}\frac{|f(x)|}{V(x)}.$$
Dobrushin gave a criterion for strong ergodicity. Let
$$\delta(P) = \frac{1}{2}\sup_{x,y\in E}\|P(x,\cdot)-P(y,\cdot)\|_{\mathrm{Var}}.$$
Then $X$ is strongly ergodic if and only if there exists $n$ such that $\delta(P^n)<1$. The quantity $\delta(P)$ is now known as the Dobrushin coefficient.
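For a finite chain, $\delta(P)$ is a maximum over pairs of rows, and the submultiplicativity $\delta(P^{m+n})\le\delta(P^m)\,\delta(P^n)$ underlying Dobrushin's criterion can be verified numerically. A sketch (the two-state kernel is an illustrative assumption):

```python
# delta(P) = (1/2) max_{x,y} ||P(x,.) - P(y,.)||_Var on a finite state space.
# The two-state kernel below is an illustrative assumption.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dobrushin(P):
    n = len(P)
    return 0.5 * max(sum(abs(P[x][z] - P[y][z]) for z in range(n))
                     for x in range(n) for y in range(n))

P = [[0.9, 0.1],
     [0.2, 0.8]]
print(dobrushin(P))   # ~0.7 for this kernel

# Submultiplicativity: delta(P^{m+n}) <= delta(P^m) delta(P^n)
P2 = mat_mul(P, P)
assert dobrushin(P2) <= dobrushin(P) ** 2 + 1e-12
```

Here $\delta(P^2)=0.49=\delta(P)^2$, so $\delta(P^n)=0.7^n\to 0$ and the chain is strongly ergodic.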
Now we generalize $\delta(P)$ to $\delta_V(P)$, to obtain a criterion for geometric ergodicity in the sense of (1) or (3).

Definition 1. For a transition kernel $P$ and a $\pi$-a.s. finite function $V:E\to[1,\infty]$, the generalized Dobrushin coefficient $\delta_V(P)$ is defined by
$$\delta_V(P) := \sup_{x,y\in E}\frac{\|P(x,\cdot)-P(y,\cdot)\|_V}{V(x)+V(y)}. \tag{4}$$
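On a finite state space the $V$-norm of a signed measure is $\|\mu\|_V=\sum_z V(z)|\mu(z)|$ (the supremum over $|f|\le V$ is attained at $f=V\,\mathrm{sign}(\mu)$), so (4) becomes a finite maximum. A sketch, with an illustrative $3\times 3$ kernel and weights; with $V\equiv 1$ it recovers the classical coefficient:

```python
# The generalized coefficient (4) on a finite state space, where
# ||mu||_V = sum_z V(z) |mu(z)|  (the sup over |f| <= V is attained at
# f = V * sign(mu)).  The kernel and weights below are illustrative.

def delta_V(P, V):
    n = len(P)
    return max(
        sum(V[z] * abs(P[x][z] - P[y][z]) for z in range(n)) / (V[x] + V[y])
        for x in range(n) for y in range(n) if x != y)

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4]]

# With V == 1, delta_V(P) is the classical Dobrushin coefficient delta(P):
ones = [1.0, 1.0, 1.0]
delta_classical = 0.5 * max(
    sum(abs(P[x][z] - P[y][z]) for z in range(3))
    for x in range(3) for y in range(3))
assert abs(delta_V(P, ones) - delta_classical) < 1e-12

# A nonconstant weight changes the coefficient:
print(delta_V(P, [1.0, 2.0, 4.0]))
```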
Here is the main result.

Theorem 1. A $\psi$-irreducible and aperiodic Markov chain $X=(X_n)_{n\ge 0}$ is geometrically ergodic if and only if $X$ is ergodic with stationary distribution $\pi$ and there is a $\pi$-a.s. finite function $V:E\to[1,\infty]$ such that $\pi(V)<\infty$ and $\delta_V(P^n)<1$ for $n$ large enough.

Note that when $V(x)\equiv 1$, $\delta_V(P)=\delta(P)$, so Theorem 1 reduces to the classical Dobrushin criterion.
The key ingredient in the proof of Theorem 1 comes from an elegant observation: by defining a metric on $E$ as in [4], we can identify $\delta_V(P)$ with a contraction coefficient for a Wasserstein metric. Following [4], we introduce a metric on $E$. Let $V:E\to[1,\infty)$. For $x,y\in E$,
$$d_V(x,y) = \begin{cases} 0, & x=y;\\ V(x)+V(y), & x\ne y.\end{cases}$$
For a function $\varphi$ on $(E,d_V)$, the Lipschitz norm of $\varphi$ is
$$\|\varphi\|_{\mathrm{Lip}(d_V)} := \sup_{x\ne y}\frac{|\varphi(x)-\varphi(y)|}{d_V(x,y)} = \sup_{x\ne y}\frac{|\varphi(x)-\varphi(y)|}{V(x)+V(y)}.$$
For two (probability) measures $\mu_1,\mu_2$ on $(E,\mathcal{E})$, the Wasserstein metric $W_{d_V}$ is defined by
$$W_{d_V}(\mu_1,\mu_2) = \sup_{\varphi:\,\|\varphi\|_{\mathrm{Lip}(d_V)}\le 1}\int \varphi\, d(\mu_1-\mu_2).$$
Lemma 2 ([4]). For two (probability) measures $\mu_1,\mu_2$ on $(E,\mathcal{E})$ with $\mu_i(V)<\infty$, $i=1,2$,
$$\|\mu_1-\mu_2\|_V = W_{d_V}(\mu_1,\mu_2). \tag{5}$$
For $V(x)\equiv 1$, both sides of (5) reduce to the total variation of $\mu_1-\mu_2$. The following properties are essential to the proof of Theorem 1.
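Identity (5) can be probed numerically on a finite space: $\varphi^{*}=V\,\mathrm{sign}(\mu_1-\mu_2)$ is $1$-Lipschitz for $d_V$ and attains $\|\mu_1-\mu_2\|_V$, while every other $1$-Lipschitz $\varphi$ gives a smaller value. A sketch (the four-point space, weights and measures are illustrative assumptions):

```python
# Lemma 2 on a four-point space: phi* = V * sign(mu1 - mu2) is 1-Lipschitz
# for d_V and attains ||mu1 - mu2||_V; no other 1-Lipschitz phi exceeds it.
# The space, V, mu1 and mu2 below are illustrative assumptions.
import random

V = [1.0, 2.0, 5.0, 1.5]
mu1 = [0.1, 0.4, 0.2, 0.3]
mu2 = [0.3, 0.1, 0.4, 0.2]

v_norm = sum(V[x] * abs(mu1[x] - mu2[x]) for x in range(4))

phi_star = [V[x] * (1 if mu1[x] >= mu2[x] else -1) for x in range(4)]
attained = sum(phi_star[x] * (mu1[x] - mu2[x]) for x in range(4))
assert abs(attained - v_norm) < 1e-12      # the sup in W_{d_V} is attained

random.seed(0)
for _ in range(1000):
    phi = [random.uniform(-10, 10) for _ in range(4)]
    # rescale phi to Lipschitz norm 1 with respect to d_V:
    lip = max(abs(phi[x] - phi[y]) / (V[x] + V[y])
              for x in range(4) for y in range(4) if x != y)
    value = sum((phi[x] / lip) * (mu1[x] - mu2[x]) for x in range(4))
    assert value <= v_norm + 1e-9          # no 1-Lipschitz phi does better
```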
Proposition 3. Let $P$ and $\tilde P$ be probability transition kernels on $(E,\mathcal{E})$. Then
(i) $\delta_V(\tilde P P) \le \delta_V(P)\,\delta_V(\tilde P)$;
(ii) $|\delta_V(P)-\delta_V(\tilde P)| \le \|P-\tilde P\|_V$;
(iii) if $R$ is a signed transition kernel on $(E,\mathcal{E})$ such that $R(x,E)=0$ for all $x\in E$ and $\|R\|_V<\infty$, then $\|RP\|_V \le \|R\|_V\,\delta_V(P)$.
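Parts (i) and (ii) can be sanity-checked on random finite kernels. A sketch; the dimension, the random kernels and the random weight $V\ge 1$ are all illustrative assumptions:

```python
# Numerical sanity check of Proposition 3 (i) and (ii) on random finite
# kernels; the dimension, kernels and weight V >= 1 are illustrative.
import random
random.seed(1)

def v_norm_measure(mu, V):
    return sum(V[z] * abs(m) for z, m in enumerate(mu))

def delta_V(P, V):
    n = len(P)
    return max(
        v_norm_measure([P[x][z] - P[y][z] for z in range(n)], V) / (V[x] + V[y])
        for x in range(n) for y in range(n) if x != y)

def kernel_v_norm(K, V):   # ||K||_V = sup_x ||K(x,.)||_V / V(x)
    return max(v_norm_measure(row, V) / V[x] for x, row in enumerate(K))

def random_kernel(n):
    rows = []
    for _ in range(n):
        w = [random.random() for _ in range(n)]
        s = sum(w)
        rows.append([wi / s for wi in w])
    return rows

n = 4
V = [1 + 4 * random.random() for _ in range(n)]
P, Pt = random_kernel(n), random_kernel(n)
PtP = [[sum(Pt[i][k] * P[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]

# (i): delta_V(Pt P) <= delta_V(P) delta_V(Pt)
assert delta_V(PtP, V) <= delta_V(P, V) * delta_V(Pt, V) + 1e-12
# (ii): |delta_V(P) - delta_V(Pt)| <= ||P - Pt||_V
diff = [[P[i][j] - Pt[i][j] for j in range(n)] for i in range(n)]
assert abs(delta_V(P, V) - delta_V(Pt, V)) <= kernel_v_norm(diff, V) + 1e-12
```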
Proof. (i) By the definition of $\delta_V(P)$, we have
$$\|P(x,\cdot)-P(y,\cdot)\|_V \le \delta_V(P)\,\big(V(x)+V(y)\big), \quad \pi\text{-a.s. } x,y\in E.$$
It follows from Lemma 2 that $W_{d_V}\big(P(x,\cdot),P(y,\cdot)\big) \le \delta_V(P)\,\big(V(x)+V(y)\big)$. Hence, by the definition of $W_{d_V}$,
$$|P\varphi(x)-P\varphi(y)| \le \delta_V(P)\,\|\varphi\|_{\mathrm{Lip}(d_V)}\,\big(V(x)+V(y)\big),$$
and therefore
$$\|P\varphi\|_{\mathrm{Lip}(d_V)} \le \delta_V(P)\,\|\varphi\|_{\mathrm{Lip}(d_V)}. \tag{6}$$
Since $\int (\tilde P P)(x,dz)\,\varphi(z) = \int (P\varphi)(z)\,\tilde P(x,dz)$, we get from (6) that
$$W_{d_V}\big((\tilde P P)(x,\cdot),(\tilde P P)(y,\cdot)\big) = \sup_{\|\varphi\|_{\mathrm{Lip}(d_V)}\le 1}\int (P\varphi)(z)\,\big(\tilde P(x,dz)-\tilde P(y,dz)\big) \le \delta_V(P)\sup_{\|\psi\|_{\mathrm{Lip}(d_V)}\le 1}\int \psi(z)\,\big(\tilde P(x,dz)-\tilde P(y,dz)\big) = \delta_V(P)\,W_{d_V}\big(\tilde P(x,\cdot),\tilde P(y,\cdot)\big).$$
Therefore, by Lemma 2 again,
$$\delta_V(\tilde P P) = \sup_{x,y\in E}\frac{\|(\tilde P P)(x,\cdot)-(\tilde P P)(y,\cdot)\|_V}{V(x)+V(y)} = \sup_{x,y\in E}\frac{W_{d_V}\big((\tilde P P)(x,\cdot),(\tilde P P)(y,\cdot)\big)}{V(x)+V(y)} \le \delta_V(P)\sup_{x,y\in E}\frac{W_{d_V}\big(\tilde P(x,\cdot),\tilde P(y,\cdot)\big)}{V(x)+V(y)} = \delta_V(P)\sup_{x,y\in E}\frac{\|\tilde P(x,\cdot)-\tilde P(y,\cdot)\|_V}{V(x)+V(y)} = \delta_V(P)\,\delta_V(\tilde P).$$
(ii) Using the fact that $\big|\sup_x u(x) - \sup_x v(x)\big| \le \sup_x |u(x)-v(x)|$, we get
$$\big|\delta_V(P)-\delta_V(\tilde P)\big| \le \sup_{x,y\in E}\frac{1}{V(x)+V(y)}\sup_{|f|\le V}\Big|\big(Pf(x)-Pf(y)\big)-\big(\tilde Pf(x)-\tilde Pf(y)\big)\Big| \le \sup_{x,y\in E}\frac{1}{V(x)+V(y)}\sup_{|f|\le V}\Big[\big|Pf(x)-\tilde Pf(x)\big| + \big|Pf(y)-\tilde Pf(y)\big|\Big]$$
$$\le \sup_{x,y\in E}\frac{\|P(x,\cdot)-\tilde P(x,\cdot)\|_V + \|P(y,\cdot)-\tilde P(y,\cdot)\|_V}{V(x)+V(y)} \le \sup_{x\in E}\frac{\|P(x,\cdot)-\tilde P(x,\cdot)\|_V}{V(x)} = \|P-\tilde P\|_V,$$
where in the last inequality we use the elementary fact that
$$\frac{u(x)+u(y)}{v(x)+v(y)} \le \max\Big\{\frac{u(x)}{v(x)},\frac{u(y)}{v(y)}\Big\}.$$
(iii) Since $R(x,E)=0$ for all $x\in E$, the Jordan–Hahn decomposition $R = R^+ - R^-$ satisfies $R^+(x,E)=R^-(x,E)$, so Lemma 2 applies; hence
$$\|RP\|_V = \sup_{x\in E}\frac{1}{V(x)}\sup_{|f|\le V}\Big|\int (RP)(x,dy)\,f(y)\Big| = \sup_{x\in E}\frac{1}{V(x)}\sup_{|f|\le V}\Big|\int \big[(R^+P)(x,dy) - (R^-P)(x,dy)\big]f(y)\Big| = \sup_{x\in E}\frac{\|(R^+P)(x,\cdot)-(R^-P)(x,\cdot)\|_V}{V(x)} = \sup_{x\in E}\frac{W_{d_V}\big((R^+P)(x,\cdot),(R^-P)(x,\cdot)\big)}{V(x)}.$$
It follows from (6) and Lemma 2 that
$$W_{d_V}\big((R^+P)(x,\cdot),(R^-P)(x,\cdot)\big) = \sup_{\|\varphi\|_{\mathrm{Lip}(d_V)}\le 1}\int \varphi(y)\,\big[(R^+P)(x,dy)-(R^-P)(x,dy)\big] = \sup_{\|\varphi\|_{\mathrm{Lip}(d_V)}\le 1}\int (P\varphi)(y)\,\big[R^+(x,dy)-R^-(x,dy)\big] \le \delta_V(P)\sup_{\|\psi\|_{\mathrm{Lip}(d_V)}\le 1}\int \psi(y)\,\big[R^+(x,dy)-R^-(x,dy)\big] = \delta_V(P)\,W_{d_V}\big(R^+(x,\cdot),R^-(x,\cdot)\big) = \delta_V(P)\,\|R^+(x,\cdot)-R^-(x,\cdot)\|_V.$$
Finally, we obtain
$$\|RP\|_V \le \delta_V(P)\sup_{x\in E}\frac{\|R^+(x,\cdot)-R^-(x,\cdot)\|_V}{V(x)} = \delta_V(P)\,\|R\|_V.$$
Proof of Theorem 1. Set $\Pi(x,\cdot)=\pi$ for all $x\in E$. As noted in [6], the function $V$ in (3) can be chosen so that $\pi(V)<\infty$. Since $\delta_V(\Pi)=0$, we obtain from Proposition 3(ii)
$$\delta_V(P^n) = \big|\delta_V(P^n)-\delta_V(\Pi)\big| \le \|P^n-\Pi\|_V = \sup_{x\in E}\frac{\|P^n(x,\cdot)-\pi\|_V}{V(x)} \to 0.$$
Conversely, we have
$$\|I-\Pi\|_V \le \sup_{x\in E}\frac{V(x)+\pi(V)}{V(x)} \le 1+\pi(V) < \infty$$
and $(I-\Pi)(x,E)=0$ for all $x\in E$. Then, by Proposition 3(iii),
$$\|P^n-\Pi\|_V = \|P^n-\Pi P^n\|_V = \|(I-\Pi)P^n\|_V \le \|I-\Pi\|_V\,\delta_V(P^n) \to 0.$$
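The two displays of the proof can be traced on a finite example: with $\Pi(x,\cdot)=\pi$, one checks $\|P^n-\Pi\|_V \le \|I-\Pi\|_V\,\delta_V(P^n)$ and that $\delta_V(P^n)$ decreases to $0$. A sketch; the two-state chain and the weight $V$ are illustrative assumptions:

```python
# Tracing the proof on a finite chain: with Pi(x,.) = pi, Proposition 3(iii)
# gives ||P^n - Pi||_V <= ||I - Pi||_V * delta_V(P^n), and delta_V(P^n) -> 0.
# The two-state kernel and the weight V are illustrative assumptions.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def v_norm(mu, V):
    return sum(V[z] * abs(m) for z, m in enumerate(mu))

def delta_V(P, V):
    n = len(P)
    return max(v_norm([P[x][z] - P[y][z] for z in range(n)], V) / (V[x] + V[y])
               for x in range(n) for y in range(n) if x != y)

P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [2 / 3, 1 / 3]   # stationary distribution of P
V = [1.0, 3.0]

# The kernel I - Pi, where Pi(x,.) = pi for every x:
I_minus_Pi = [[(1.0 if x == y else 0.0) - pi[y] for y in range(2)]
              for x in range(2)]
c = max(v_norm(row, V) / V[x] for x, row in enumerate(I_minus_Pi))  # ||I-Pi||_V

Pn = [[1.0, 0.0], [0.0, 1.0]]
prev = float("inf")
for n in range(1, 15):
    Pn = mat_mul(Pn, P)
    lhs = max(v_norm([Pn[x][y] - pi[y] for y in range(2)], V) / V[x]
              for x in range(2))
    assert lhs <= c * delta_V(Pn, V) + 1e-12   # ||P^n - Pi||_V bound
    assert delta_V(Pn, V) <= prev + 1e-12      # decreases (here it is 0.7^n)
    prev = delta_V(Pn, V)
```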
3. References

[1] W.J. Anderson, Continuous-Time Markov Chains, Springer-Verlag (1991).
[2] M.F. Chen, From Markov Chains to Non-equilibrium Particle Systems, 2nd ed., World Scientific (2004).
[3] R.L. Dobrushin, Central limit theorem for non-stationary Markov chains I, II, Theory Probab. Appl. (1956), 63-80.
[4] M. Hairer and J.C. Mattingly, Yet another look at Harris' ergodic theorem for Markov chains, Seminar on Stochastic Analysis, Random Fields and Applications VI, Progress in Probability, Vol. 63 (2011).
[5] I. Kontoyiannis and S.P. Meyn, Geometric ergodicity and the spectral gap of non-reversible Markov chains, Probab. Theory Relat. Fields (2011).
[6] S.P. Meyn and R.L. Tweedie, Markov Chains and Stochastic Stability, Springer-Verlag (1996).
[7] G.O. Roberts and J.S. Rosenthal, Geometric ergodicity and hybrid Markov chains, Electron. Comm. Probab. 2 (1997), 13-25.
24 Thanks!
More information