Probabilistic Juggling
1 Probabilistic Juggling. Arvind Ayyer (joint work with Jérémie Bouttier, Sylvie Corteel and François Nunzi), arXiv:1402.3752. Conference on Stochastic Systems and Applications (aka Borkarfest), Indian Institute of Science, Bangalore, September 10, 2014.
2 Outline 1 Solitary infallible juggler 2 Markov chain on set partitions 3 Superhuman juggler 4 Errant juggler with a partner
3 Multivariate Juggling Markov Chain (MJMC). Data (siteswap notation): h: the maximum height the juggler can throw; l: the number of balls he juggles; k = h − l: the number of empty spaces he has to throw the balls in; x: a probability distribution on {0, ..., k}. State space: St_{h,k} ⊆ {•, ∘}^h, the configurations B = (b_1, b_2, ..., b_h) with exactly l occurrences of •.
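One step of the chain can be sketched in a few lines. A minimal simulation, assuming states are encoded as 0/1 tuples (1 = ball at that height) and x = (x_0, ..., x_k); the encoding is a choice made here, not part of the slides:

```python
import random

def mjmc_step(state, x, rng=random):
    """One MJMC step. `state` is a 0/1 tuple of length h (1 = ball at that
    height); `x` = (x_0, ..., x_k) is the throw distribution, summing to 1."""
    base = state[1:] + (0,)                 # everything falls by one unit
    if state[0] == 0:                       # no ball lands, nothing to throw
        return base
    # a ball lands; rethrow it into the (i+1)-th empty slot with prob. x_i
    empties = [j for j, b in enumerate(base) if b == 0]
    i = rng.choices(range(len(empties)), weights=x)[0]
    new = list(base)
    new[empties[i]] = 1
    return tuple(new)
```

Note that when a ball lands, the shifted configuration has exactly k + 1 empty slots, matching the support of x.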
4–8 Example: h = 8, k = l = 4 (figures: animation of one step; the landing ball is rethrown into one of the k + 1 = 5 available empty slots with probabilities x_0, x_1, x_2, x_3, x_4).
9 History. 1994: Juggling sequences, Buhler, Eisenbud, Graham and Wright (AMM); connections to other combinatorial structures. 2005: Juggling probabilities, Warrington (AMM); x_i uniform. 2012: Juggler's exclusion process on Z, Leskelä and Varpanen.
10 MJMC on St_{4,2} (figure: transition graph on the six states, with edges labelled by the throw probabilities x_0, x_1, x_2, and by 1 for the forced moves).
11 MJMC on St_{4,2}. Basis (ordered by gravity): (••∘∘, •∘•∘, •∘∘•, ∘••∘, ∘•∘•, ∘∘••). Transition matrix:
P =
( x_0 x_1 x_2 0   0   0 )
( x_0 0   0   x_1 x_2 0 )
( 0   x_0 0   x_1 0   x_2 )
( 1   0   0   0   0   0 )
( 0   1   0   0   0   0 )
( 0   0   0   1   0   0 )
Unnormalised stationary distribution: (1, x_1 + x_2, x_2, (x_1 + x_2)^2, x_2(x_1 + x_2), x_2^2).
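The stationary weights on this slide can be checked against the dynamics directly. A sketch in exact arithmetic, with the six states of St_{4,2} listed in the order matching the displayed weights (this ordering is inferred from the weights) and a sample choice of x:

```python
from fractions import Fraction as F

x = (F(1, 2), F(1, 3), F(1, 6))   # sample throw probabilities, x0 + x1 + x2 = 1

def step_distribution(state, x):
    """Transition row of the MJMC out of `state` (0/1 tuple, 1 = ball)."""
    base = state[1:] + (0,)
    if state[0] == 0:
        return {base: F(1)}
    empties = [j for j, b in enumerate(base) if b == 0]
    row = {}
    for i, pos in enumerate(empties):
        t = list(base)
        t[pos] = 1
        row[tuple(t)] = x[i]
    return row

states = [(1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1)]
y1 = x[1] + x[2]
weights = {states[0]: F(1),   states[1]: y1,      states[2]: x[2],
           states[3]: y1**2,  states[4]: x[2]*y1, states[5]: x[2]**2}

# Stationarity: sum_s w(s) P(s, t) == w(t) for every state t
incoming = {s: F(0) for s in states}
for s in states:
    for t, p in step_distribution(s, x).items():
        incoming[t] += weights[s] * p
assert incoming == weights
```

The check passes for any probability vector x, since the weights satisfy the balance equations identically.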
12 Results about the MJMC. Proposition (A., Bouttier, Corteel, Nunzi '14): If x_i > 0 for all i ∈ {0, ..., k}, then the MJMC is irreducible and aperiodic. Warrington's proof of this was incomplete!
13 Results about the MJMC. For B ∈ St_{h,k}, define e_i(B) = #{j > i : b_j = ∘} and ∆(B) = ∏_{i ∈ {1,...,h} : b_i = •} (1 + e_i(B)). Theorem (Warrington '05): If x is uniform, the stationary probability distribution of B ∈ St_{h,k} is π(B) = ∆(B) / S(h+1, k+1), where S(·,·) denotes the Stirling number of the second kind.
14 Stirling Numbers of the Second Kind. S(n, j) is the number of ways to partition an n-set into j parts. Recurrence: S(n+1, j) = j·S(n, j) + S(n, j−1), with S(n, 0) = δ_{n,0}. For example, S(4, 2) = 7, since {1, 2, 3, 4} can be partitioned as 123|4, 124|3, 134|2, 234|1, 12|34, 13|24, 14|23. The stationary probability distribution on St_{4,2} becomes (9/25, 6/25, 3/25, 4/25, 2/25, 1/25).
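The recurrence translates directly into code; a short sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, j):
    """Stirling number of the second kind S(n, j), via the recurrence
    S(n+1, j) = j*S(n, j) + S(n, j-1), with S(n, 0) = [n == 0]."""
    if j == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    return j * stirling2(n - 1, j) + stirling2(n - 1, j - 1)

assert stirling2(4, 2) == 7     # the seven partitions listed above
assert stirling2(5, 3) == 25    # normalises the uniform-x distribution on St_{4,2}
```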
15 Stationary distribution of the MJMC. Theorem (A., Bouttier, Corteel, Nunzi '14): The stationary distribution π of the MJMC is given by π(B) = (1/Z_{h,k}) ∏_{i ∈ {1,...,h} : b_i = •} (x_{E_i(B)} + ⋯ + x_k), where E_i(B) = #{j < i : b_j = ∘}, and Z_{h,k} = Z_{h,k}(x_0, ..., x_k) is the normalisation factor. Back to the example, the unnormalised weights are: (x_0+x_1+x_2)^2, (x_0+x_1+x_2)(x_1+x_2), (x_0+x_1+x_2)x_2, (x_1+x_2)^2, x_2(x_1+x_2), x_2^2.
16 Normalisation factor Z_{h,k}. Let y_m = ∑_{j=m}^{k} x_j for m = 0, ..., k. Let h_n(z_0, ..., z_k) be the complete homogeneous symmetric polynomial of degree n, e.g., h_2(z_0, z_1, z_2) = z_0^2 + z_0 z_1 + z_0 z_2 + z_1^2 + z_1 z_2 + z_2^2. Lemma (A., Bouttier, Corteel, Nunzi '14): The normalisation factor can be written as Z_{h,k} = h_l(y_0, y_1, ..., y_k).
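The lemma is easy to test numerically. A sketch, assuming x is given as a tuple of length k + 1 (the helper names are illustrative):

```python
from itertools import combinations_with_replacement

def h_poly(n, zs):
    """Complete homogeneous symmetric polynomial h_n(z_0, ..., z_k):
    the sum of all monomials of degree n in the given values."""
    total = 0
    for combo in combinations_with_replacement(zs, n):
        term = 1
        for z in combo:
            term *= z
        total += term
    return total

def Z(h, k, x):
    """Normalisation factor Z_{h,k} = h_l(y_0, ..., y_k), with
    y_m = x_m + ... + x_k and l = h - k."""
    ys = [sum(x[m:]) for m in range(k + 1)]
    return h_poly(h - k, ys)

# h_2 at (1, 1, 1) counts its six monomials
assert h_poly(2, (1, 1, 1)) == 6
# Z_{4,2}(1, 1, 1) = S(5, 3) = 25, matching the uniform special case
assert Z(4, 2, (1, 1, 1)) == 25
```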
17 Special cases. Corollary (A., Bouttier, Corteel, Nunzi '14):
Z_{h,k}(1, 1, ..., 1, 1) = S(h+1, k+1);
Z_{h,k}(q^k, q^{k−1}, ..., q, 1) = S_q(h+1, k+1), the q-Stirling number;
Z_{h,k}(1, q, ..., q^{k−1}, q^k) = q^{k(h−k)} S_{1/q}(h+1, k+1);
Z_{h,k}((1−q), (1−q)q, ..., (1−q)q^{k−1}, q^k) = [h choose k]_q, the Gaussian binomial coefficient.
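The last identity can be spot-checked at a numeric value of q. A sketch in exact rational arithmetic; `gauss_binomial` uses the standard product formula for the Gaussian binomial, which is an outside assumption, not from the slides:

```python
from fractions import Fraction as F
from itertools import combinations_with_replacement

def h_poly(n, zs):
    """Complete homogeneous symmetric polynomial h_n evaluated at zs."""
    total = F(0)
    for combo in combinations_with_replacement(zs, n):
        term = F(1)
        for z in combo:
            term *= z
        total += term
    return total

def gauss_binomial(n, k, q):
    """Gaussian binomial [n choose k]_q = prod_{i=1}^{k} (1-q^{n-k+i})/(1-q^i)."""
    num, den = F(1), F(1)
    for i in range(1, k + 1):
        num *= 1 - q ** (n - k + i)
        den *= 1 - q ** i
    return num / den

# Check Z_{h,k}((1-q), (1-q)q, ..., (1-q)q^{k-1}, q^k) = [h choose k]_q
h, k, q = 5, 2, F(1, 3)
xs = [(1 - q) * q ** i for i in range(k)] + [q ** k]
ys = [sum(xs[m:], F(0)) for m in range(k + 1)]   # y_m = x_m + ... + x_k
assert h_poly(h - k, ys) == gauss_binomial(h, k, q)
```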
18 Enriched Multivariate Juggling Markov Chain (EMJMC). Data: H = h + 1: cardinality of the ground set {1, ..., H}; K = k + 1: number of parts of the H-set; x: a probability distribution on {0, ..., K−1}. State space: S(H, K), the set partitions of {1, ..., H} into K parts. A partition σ ∈ S(H, K) is written in increasing order of its block maxima (e.g., for partitions in S(7, 3)).
19–26 Example (figures: animation of one EMJMC step on a set partition; the final slide shows the possible outcomes with probabilities x_0, x_1, x_2, x_3, x_4).
27 Results about the EMJMC. Proposition (A., Bouttier, Corteel, Nunzi '14): If x_i > 0 for all i ∈ {0, ..., k}, then the EMJMC is irreducible and aperiodic.
28 Arches. (s, t) is an arch of σ ∈ S(H, K) if s and t are nearest neighbours in a block. Let C_σ(s, t) be the number of blocks containing at least one element in {s, s+1, ..., t−1, t}. (Figure: example with σ ∈ S(7, 3).)
29 Results about the EMJMC. Theorem (A., Bouttier, Corteel, Nunzi '14): The stationary distribution π̂ of the EMJMC is given by π̂(σ) = (1/Ẑ_{H,K}) ∏_{(s,t) arch of σ} x_{K − C_σ(s,t)}, where Ẑ_{H,K} = Z_{h,k}(x_0, ..., x_k) is the normalisation factor. Corollary (Warrington '05): When x is uniform, the stationary distribution on S(H, K) is uniform.
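The arch weight of a partition can be computed directly from this description. A sketch, assuming a partition is given as a list of blocks; `arch_weight` is an illustrative helper name, and the two checks below exercise the uniform corollary and the normalisation Ẑ_{4,2} = Z_{3,1} = h_2(y_0, y_1):

```python
from fractions import Fraction as F

def arch_weight(blocks, K, x):
    """Unnormalised EMJMC stationary weight of a set partition, given as a
    list of blocks: the product over arches (s, t) of x[K - C(s, t)],
    where C(s, t) counts the blocks meeting {s, s+1, ..., t}."""
    w = F(1)
    for block in blocks:
        b = sorted(block)
        for s, t in zip(b, b[1:]):    # arches = nearest neighbours in a block
            C = sum(1 for blk in blocks if any(s <= e <= t for e in blk))
            w *= x[K - C]
    return w

# The seven partitions of {1, 2, 3, 4} into two blocks, i.e. S(4, 2)
S42 = [[[1, 2], [3, 4]], [[1, 3], [2, 4]], [[1, 4], [2, 3]],
       [[1], [2, 3, 4]], [[2], [1, 3, 4]], [[3], [1, 2, 4]], [[4], [1, 2, 3]]]

# Uniform x: all partitions get the same weight (Warrington's corollary)
u = [F(1, 2), F(1, 2)]
assert len({arch_weight(p, 2, u) for p in S42}) == 1

# Non-uniform x: total weight is Z_{3,1} = h_2(1, x_1) = 1 + x_1 + x_1^2
x = [F(1, 3), F(2, 3)]
assert sum(arch_weight(p, 2, x) for p in S42) == 1 + x[1] + x[1] ** 2
```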
30–31 Lumping. Proof for the EMJMC: by verifying the master equation. Proof for the MJMC: by lumping/projection from the EMJMC, as follows. Define ψ : S(H, K) → St_{h,k} so that b_i = ∘ iff i is a block maximum of σ.
32 Full chain on S(4, 2). (Figure: the full chain, projected by ψ onto the MJMC on St_{3,1}.) Red arrows and arches have probability x_0; green arrows and arches have probability x_1; gray arrows have probability x_0 + x_1 = 1.
33–35 Unbounded Multivariate Juggling Markov Chain (UMJMC). l balls, which can be thrown to arbitrary heights. State space: St^{(l)} ⊆ {•, ∘}^N; x: a probability distribution on N. Formally, states are infinite words in • and ∘ with exactly l occurrences of •. T_i(A) ∈ St^{(l)} is the word obtained by replacing the (i+1)-th occurrence of ∘ in A by •. The transition probability from A = a_1 a_2 a_3 ⋯ to B is: P_{A,B} = 1 if a_1 = ∘ and B = a_2 a_3 ⋯; P_{A,B} = x_i if a_1 = • and B = T_i(a_2 a_3 ⋯); P_{A,B} = 0 otherwise.
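The unbounded dynamics can be simulated by tracking only the l occupied heights. A minimal sketch; for sampling, x is truncated to a finite support, which is an approximation made here:

```python
import random

def umjmc_step(balls, x, rng=random):
    """One UMJMC step. `balls` is a sorted tuple of the l occupied heights
    (1-indexed); `x` is a finite list (x_0, ..., x_m) summing to 1, standing
    in for the throw distribution on N."""
    if balls[0] > 1:                      # no ball lands: everything falls
        return tuple(b - 1 for b in balls)
    rest = [b - 1 for b in balls[1:]]     # the ball at height 1 lands
    i = rng.choices(range(len(x)), weights=x)[0]
    occupied = set(rest)                  # find the (i+1)-th empty height
    height, seen = 0, -1
    while seen < i:
        height += 1
        if height not in occupied:
            seen += 1
    return tuple(sorted(rest + [height]))
```

This is the map A ↦ T_i(a_2 a_3 ⋯) expressed on ball positions instead of infinite words.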
36 Results about the UMJMC. Proposition (A., Bouttier, Corteel, Nunzi '14): If x_0 > 0 and infinitely many of the x_i are nonzero, then the UMJMC is irreducible and aperiodic. Theorem (A., Bouttier, Corteel, Nunzi '14): The unique invariant measure (up to constant of proportionality) of the UMJMC is given by w(B) = ∏_{i ∈ N : b_i = •} y_{E_i(B)}, where B = b_1 b_2 b_3 ⋯ ∈ St^{(l)}, E_i(B) = #{j < i : b_j = ∘} and y_m = ∑_{j=m}^{∞} x_j.
37 More about the invariant measure. Theorem: The invariant measure of the UMJMC is finite if and only if ∑_{i=0}^{∞} i·x_i < ∞, in which case its total mass reads Z^{(l)} = h_l(y_0, y_1, y_2, ...). Further, the UMJMC is positive recurrent if and only if the above holds. In that case, there is a unique stationary probability distribution, and the chain started from any initial state converges to it in total variation as time tends to infinity.
38 Remarks Both the solitary and superhuman juggling chains can be interpreted in terms of natural Markov chains on integer partitions. The superhuman juggling chain can also be naturally generalised to have an infinite number of balls (IMJMC). The conditions for the existence of a probability measure and positive recurrence in the IMJMC are identical to the UMJMC.
39 Multivariate Annihilation Juggling Markov Chain (MAJMC). Data: h: the maximum height the juggler can throw; z: a probability distribution on {0, ..., h}, with a := z_0. State space: St_h = {•, ∘}^h.
40–42 Example: h = 8 (figures: animation of one step; throw probabilities z_1, z_2, z_3, z_4, z_5 shown, with a + z_6 + z_7 + z_8 lumped into a single outcome).
43 MAJMC for h = 2. Transition matrix in the basis (••, ∘•, •∘, ∘∘):
P =
( z_1 0   z_2+a 0 )
( z_1 0   z_2+a 0 )
( 0   z_1 z_2   a )
( 0   z_1 z_2   a )
Stationary probability distribution: (z_1^2, z_1(z_2+a), (z_1+z_2)(z_2+a), a(z_2+a)).
44 Results for the MAJMC. Theorem (A., Bouttier, Corteel, Nunzi '14): The stationary probability distribution of the annihilation model is Π(B) = ∏_{i=1, b_i=•}^{h} (z_1 + ⋯ + z_{e_i(B)+1}) · ∏_{j=1}^{k} (z_{j+1} + ⋯ + z_h + a), where e_i(B) = #{j : i < j ≤ h, b_j = ∘} and k is the number of empty sites of B. Moreover, ∑_B Π(B) = (z_1 + ⋯ + z_h + a)^h = 1.
45 Convergence to stationarity. Theorem: For any initial probability distribution η over St_h, the distribution at time h equals the stationary distribution, namely η P^h = Π. In particular, the only eigenvalues of the transition matrix P are 1 (with multiplicity 1) and 0.
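For h = 2 the claim says every row of P² equals Π, which can be checked directly. A sketch with sample values of z_1, z_2, a; the state ordering (••, ∘•, •∘, ∘∘) used here is an assumption reconstructed from the stationary vector on the h = 2 slide:

```python
from fractions import Fraction as F

# Sample parameters with z1 + z2 + a = 1 (h = 2, a = z_0)
z1, z2, a = F(1, 2), F(1, 3), F(1, 6)

# Transition matrix, basis (••, ∘•, •∘, ∘∘)
P = [[z1, 0, z2 + a, 0],
     [z1, 0, z2 + a, 0],
     [0, z1, z2, a],
     [0, z1, z2, a]]

# Stationary distribution from the h = 2 slide
Pi = [z1 * z1, z1 * (z2 + a), (z1 + z2) * (z2 + a), a * (z2 + a)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

P2 = matmul(P, P)
assert sum(Pi) == 1
# Every row of P^h = P^2 equals the stationary distribution
assert all(row == Pi for row in P2)
```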
46 Thank you for your attention!