Topics in Probability Theory and Stochastic Processes Steven R. Dunbar. Waiting Time to Absorption
Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in Probability Theory and Stochastic Processes
Steven R. Dunbar

Waiting Time to Absorption

Rating: Mathematically Mature: may contain mathematics beyond calculus with proofs.
Section Starter Question

For a Markov chain with an absorbing state, describe the random variable for the time until the chain gets absorbed.

Key Concepts

1. For a Markov chain with r transient states and N + 1 − r absorbing states, reorder the states so the absorbing states come last and the non-absorbing, i.e. transient, states come first. Then the transition matrix has the canonical form

   P = ( Q  R )
       ( 0  I ).

   Here Q is an r × r submatrix of transition probabilities among the transient states, R is the r × (N + 1 − r) matrix of transition probabilities from the transient states to the absorbing states, 0 is an (N + 1 − r) × r matrix of 0s representing the transition probabilities from absorbing states to transient states, and I is an (N + 1 − r) × (N + 1 − r) identity matrix.

2. The matrix N = (I − Q)⁻¹ is the fundamental matrix for the absorbing Markov chain. The entries N_ij of this matrix have a probabilistic interpretation: N_ij is the expected number of times that the chain started from state i will be in state j before ultimate absorption.

3. A compact expression in vector-matrix form for the waiting times w to absorption is (I − Q)w = 1, so w = (I − Q)⁻¹ 1, where 1 is the column vector of all 1s.
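The vector-matrix recipe in Key Concept 3 is short enough to check directly. Below is a small sketch in Python (illustrative helper code, not part of the original notes, whose own scripts use Octave): it inverts I − Q with exact fractions for a made-up chain with two transient states and one absorbing state, and recovers the waiting times w = (I − Q)⁻¹ 1.

```python
from fractions import Fraction as F

def mat_inv(A):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(A)
    # Augment A with the identity matrix.
    M = [row[:] + [F(int(i == j)) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # Find a row with a nonzero pivot, swap it up, and normalize it.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

# Made-up chain: transient states 0 and 1 hop to each other with
# probability 1/2 and are absorbed (into state 2) with probability 1/2.
Q = [[F(0), F(1, 2)],
     [F(1, 2), F(0)]]
I2 = [[F(1), F(0)], [F(0), F(1)]]
IminusQ = [[I2[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

N = mat_inv(IminusQ)          # fundamental matrix (I - Q)^(-1): [[4/3, 2/3], [2/3, 4/3]]
w = [sum(row) for row in N]   # waiting times w = (I - Q)^(-1) * 1
print(w)  # [Fraction(2, 1), Fraction(2, 1)]
```

From either transient state the expected time to absorption is exactly 2 steps, matching N having row sums 4/3 + 2/3 = 2.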
Vocabulary

1. Let {X_n} be a finite-state Markov chain with states labeled 0, 1, 2, 3, ..., N. Suppose that states 0, 1, 2, ..., r − 1 are transient in that P^(n)_ij → 0 as n → ∞ for 0 ≤ i, j < r, while states r, ..., N are absorbing, with P_ii = 1 for r ≤ i ≤ N. Define the random absorption time

   T = min{ n ≥ 0 : X_n ≥ r }.

2. The absorption probability matrix B gives the probability of starting at transient state i and ending at a given absorbing state j.

3. For a Markov chain with r transient states and N + 1 − r absorbing states, reorder the states if necessary so the absorbing states come last and the non-absorbing, i.e. transient, states come first. Then the transition matrix has the canonical form

   P = ( Q  R )
       ( 0  I ).

   The matrix N = (I − Q)⁻¹ is the fundamental matrix for the absorbing Markov chain.

4. The Nth harmonic number H_N is

   H_N = 1 + 1/2 + 1/3 + ⋯ + 1/N.
Mathematical Ideas

Theory

Let {X_n} be a finite-state Markov chain with states labeled 0, 1, 2, 3, ..., N. Let the transition probability matrix be P. Reorder the states so the absorbing states come last and the non-absorbing, i.e. transient, states come first. The states 0, 1, 2, ..., r − 1 are transient in that P^(n)_ij → 0 as n → ∞ for 0 ≤ i, j < r, while states r, ..., N are absorbing, with P_ii = 1 for r ≤ i ≤ N. Then the transition probability matrix has the canonical form

P = ( Q  R )        (1)
    ( 0  I ).

Here Q is an r × r submatrix of transition probabilities among the transient states, R is the r × (N + 1 − r) matrix of transition probabilities from the transient states to the absorbing states, 0 is an (N + 1 − r) × r matrix of 0s representing the transition probabilities from absorbing states to transient states, and I is an (N + 1 − r) × (N + 1 − r) identity matrix.

Starting at one of the transient states X_0 = i, where 0 ≤ i < r, such a process will remain among the transient states for some random duration. Ultimately the process gets trapped in one of the absorbing states i = r, ..., N. Of special interest is the mean time until absorption. Also of interest is the probability distribution over the states into which absorption takes place.

Define the random absorption time T = min{ n ≥ 0 : X_n ≥ r }. The expected value of this random time,

w_i = E[T | X_0 = i],   i = 0, ..., r − 1,

is an important first measure of the random variable T. First-step analysis says that the absorption time from state i is the one first step to another state, plus a weighted average, according to the transition probabilities over the transient states, of the absorption times from the other transient states. In symbols, first-step analysis says

w_i = 1 + Σ_{j=0}^{r−1} P_ij w_j.
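First-step analysis can be sanity-checked by simulation. The sketch below (Python, illustrative only; the chain is a made-up example with transient states 0, 1 and absorbing state 2) estimates E[T | X_0 = 0] by Monte Carlo. The first-step equations for this chain are w_0 = 1 + (1/2)w_1 and w_1 = 1 + (1/2)w_0, so w_0 = 2.

```python
import random

random.seed(7)

# Made-up chain: from each transient state, hop to the other transient
# state with probability 1/2, otherwise fall into absorbing state 2.
P = {0: [(0.5, 1), (0.5, 2)],
     1: [(0.5, 0), (0.5, 2)]}

def absorption_time(start):
    """Simulate one run and count steps until the chain hits state 2."""
    state, steps = start, 0
    while state != 2:
        u = random.random()
        acc = 0.0
        for prob, nxt in P[state]:
            acc += prob
            if u < acc:
                state = nxt
                break
        steps += 1
    return steps

runs = 100_000
est = sum(absorption_time(0) for _ in range(runs)) / runs
print(est)  # close to the first-step-analysis answer w_0 = 2
```

The Monte Carlo mean agrees with the linear-equation answer to two decimal places at this sample size.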
Let 1_j(X_n) be the indicator function for state j of the Markov chain X_n. Recall that this means 1_j(X_n) = 1 if X_n = j and 1_j(X_n) = 0 otherwise.

For a Markov chain with r transient states and N + 1 − r absorbing states, reorder the states so the absorbing states come last and the non-absorbing, i.e. transient, states come first. Then the transition matrix has the canonical form

P = ( Q  R )
    ( 0  I ).

Here Q is an r × r submatrix of transition probabilities among the transient states, R is the r × (N + 1 − r) matrix of transition probabilities from the transient states to the absorbing states, 0 is an (N + 1 − r) × r matrix of 0s representing the transition probabilities from absorbing states to transient states, and I is an (N + 1 − r) × (N + 1 − r) identity matrix.

Expressing the first-step analysis compactly in vector-matrix form as (I − Q)w = 1, then w = (I − Q)⁻¹ 1. The matrix N = (I − Q)⁻¹ is the fundamental matrix for the absorbing Markov chain. The entries N_ij of this matrix have a probabilistic interpretation: N_ij is the expected number of times that the chain started from state i will be in state j before ultimate absorption. To see why this is true, think of how (I − Q)⁻¹ would look when written as a geometric series:

(I − Q)⁻¹ = I + Q + Q² + Q³ + ⋯

where the (i, j) entry of Qⁿ is the probability that the chain started in transient state i is in transient state j after n steps.

The absorption probabilities B give the probability of starting at transient state i and ending up at a given absorbing state j. The absorption probabilities come from N by the matrix product B = NR = (I − Q)⁻¹ R. That is, add up all the probabilities of going to state j, weighted by the number of times we expect to be in the transient states.

For an absorbing Markov chain, the nth power Pⁿ of the transition probability matrix will approach a limit matrix of the form

Pⁿ → ( 0  B )
     ( 0  I ).

Examples

Example. It's your 30th birthday, and your friends bought you a cake with 30 candles on it. You make a wish and try to blow them out. Every time you blow, you blow out a random number of candles between one and the
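The geometric-series view of the fundamental matrix, and the formula B = NR, can be checked numerically. The Python sketch below (illustrative; the two-transient-state Q and one-column R are a made-up example) accumulates the partial sums I + Q + Q² + ⋯ and confirms that the resulting B = NR has rows summing to 1, i.e. absorption is certain.

```python
# Made-up example: transient states 0 and 1 hop to each other with
# probability 1/2 and are absorbed with probability 1/2, so
# Q = [[0, 1/2], [1/2, 0]] and R is the single absorbing column.

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Q = [[0.0, 0.5], [0.5, 0.0]]
R = [[0.5], [0.5]]

# Partial sums of the geometric series I + Q + Q^2 + ...
S = [[1.0, 0.0], [0.0, 1.0]]      # running sum, starts at I
term = [[1.0, 0.0], [0.0, 1.0]]   # current power Q^k
for _ in range(200):
    term = mat_mul(term, Q)
    S = [[S[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# S now approximates N = (I - Q)^(-1) = [[4/3, 2/3], [2/3, 4/3]].
B = mat_mul(S, R)
print(B)  # each entry is (numerically) 1: absorption is certain
```

Because the entries of Qⁿ shrink like (1/2)ⁿ here, 200 terms are far more than enough for the series to converge to machine precision.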
number that remain, including one and that other number. How many times on average do you blow before you extinguish all the candles?

Let the states be the number of candles blown out, so the 31 states are 0, 1, 2, 3, ..., 30. Instead of solving this at once, try a smaller problem first, with the number of candles N = 4. Then the transition probability matrix is

P = ( 0  1/4  1/4  1/4  1/4 )
    ( 0   0   1/3  1/3  1/3 )
    ( 0   0    0   1/2  1/2 )
    ( 0   0    0    0    1  )
    ( 0   0    0    0    1  ).

Then the equations for the waiting time, that is, the number of attempts needed to blow out the candles, are

w_0 = 1 + (1/4)(w_1 + w_2 + w_3 + w_4)
w_1 = 1 + (1/3)(w_2 + w_3 + w_4)
w_2 = 1 + (1/2)(w_3 + w_4)
w_3 = 1 + w_4
w_4 = 0.

Solving this recursively, w_4 = 0, w_3 = 1, and w_2 = 1 + (1/2)(1 + 0), so w_2 = 1 + 1/2. Continuing to solve recursively,

w_1 = 1 + 1/2 + 1/3
w_0 = 1 + 1/2 + 1/3 + 1/4.

The inductive pattern is clear. The waiting time with N candles is the Nth harmonic number H_N:

w_0 = H_N = 1 + 1/2 + 1/3 + ⋯ + 1/N.

For the original problem with 30 candles, relabel the states by the number of candles remaining, so the transient states are 1, 2, 3, ..., 30 and the absorbing state is 0. Then the transition probability matrix has the canonical form (1).
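The recursion and the harmonic-number pattern are easy to check by machine. A Python sketch (illustrative; exact rational arithmetic via the fractions module) runs the backward recursion for n candles, with state k meaning k candles already blown out, and compares the answer with H_n.

```python
from fractions import Fraction as F

def candle_waiting_time(n):
    """Expected number of blows with n candles, by the backward recursion
    w_k = 1 + (average of w over the states reachable from k),
    where state k = number of candles already blown out."""
    w = [F(0)] * (n + 1)          # w[n] = 0: all candles are out
    for k in range(n - 1, -1, -1):
        remaining = n - k
        # From state k, blow out 1..remaining candles, each equally likely.
        w[k] = 1 + sum(w[k + m] for m in range(1, remaining + 1)) / F(remaining)
    return w[0]

def harmonic(n):
    """The nth harmonic number H_n = 1 + 1/2 + ... + 1/n, exactly."""
    return sum(F(1, j) for j in range(1, n + 1))

for n in (1, 2, 3, 4, 30):
    print(n, candle_waiting_time(n), harmonic(n))
    # the two columns agree exactly: w_0 = H_n
```

Because the arithmetic is exact, this verifies w_0 = H_N for these values of N, not merely to rounding error; for N = 4 it reproduces w_0 = 25/12 = 1 + 1/2 + 1/3 + 1/4.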
The quantity of interest is the expected waiting time until absorption into the state 0. Expressing the first-step analysis compactly in vector-matrix form as

(I − Q)w = 1,

substituting in the values from the transition matrix and solving with Octave, this is

w_30 = H_30 ≈ 3.9950.

Sources

This section is adapted from An Introduction to Stochastic Modeling by Taylor and Karlin [2] and Random Walks and Electric Networks by Doyle and Snell [1]. The example is adapted from the January 2017 Riddler problem from the webpage fivethirtyeight.com.

Algorithms, Scripts, Simulations

Scripts

The Octave commands are

Q = diag( 1 ./ [30:-1:1] ) * ( triu( ones(30,30) ) - eye(30) );
wait = inverse( eye(30) - Q ) * ones(30,1);
wait(1)
HN = sum( 1 ./ [30:-1:1] )
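For readers without Octave, here is a rough Python equivalent of those commands (an illustrative translation, not from the original notes). It builds the same 30 × 30 matrix Q with states ordered by candles remaining, 30 down to 1, solves (I − Q)w = 1 directly, and compares the first component with H_30.

```python
# State i (0-indexed) has remaining[i] candles left; from there the chain
# moves to each smaller positive count with equal probability, and the
# move to 0 candles (absorption) lives in R, not in Q.
n = 30
remaining = list(range(n, 0, -1))          # [30, 29, ..., 1]
Q = [[(1.0 / remaining[i]) if j > i else 0.0 for j in range(n)]
     for i in range(n)]

# Solve (I - Q) w = 1 by Gauss-Jordan elimination on the augmented matrix.
A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
     for i in range(n)]
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    A[col] = [x / A[col][col] for x in A[col]]
    for r in range(n):
        if r != col:
            f = A[r][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
w = [row[n] for row in A]

H30 = sum(1.0 / k for k in range(1, 31))
print(round(w[0], 4), round(H30, 4))   # both about 3.9950
```

As expected, the waiting time from 30 candles matches H_30 to floating-point accuracy, confirming the harmonic-number pattern for the full problem.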
Problems to Work for Understanding

Reading Suggestion:

References

[1] Peter G. Doyle and J. Laurie Snell. Random Walks and Electric Networks. The Mathematical Association of America, 1984.

[2] H. M. Taylor and Samuel Karlin. An Introduction to Stochastic Modeling. Academic Press, third edition, 1998.
Outside Readings and Links:

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don't guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.

Steve Dunbar's Home Page. Email to Steve Dunbar, sdunbar at unl dot edu

Last modified: Processed from LaTeX source on January 30, 2017
More information2.23 Combining Forces
2.23 Combining Forces The Product and Quotient Rules of Differentiation While the derivative of a sum is the sum of the derivatives, it turns out that the rules for computing derivatives of products and
More informationChapter 3: Markov Processes First hitting times
Chapter 3: Markov Processes First hitting times L. Breuer University of Kent, UK November 3, 2010 Example: M/M/c/c queue Arrivals: Poisson process with rate λ > 0 Example: M/M/c/c queue Arrivals: Poisson
More informationReadings: Finish Section 5.2
LECTURE 19 Readings: Finish Section 5.2 Lecture outline Markov Processes I Checkout counter example. Markov process: definition. -step transition probabilities. Classification of states. Example: Checkout
More information1 Basic Combinatorics
1 Basic Combinatorics 1.1 Sets and sequences Sets. A set is an unordered collection of distinct objects. The objects are called elements of the set. We use braces to denote a set, for example, the set
More informationUniversity of British Columbia Math 307, Final
1 University of British Columbia Math 37, Final April 29, 214 12.-2.3pm Name: Student Number: Signature: Instructor: Instructions: 1. No notes, books or calculators are allowed. A MATLAB/Octave formula
More informationSTOCHASTIC MODELING OF ENVIRONMENTAL TIME SERIES. Richard W. Katz LECTURE 5
STOCHASTIC MODELING OF ENVIRONMENTAL TIME SERIES Richard W Katz LECTURE 5 (1) Hidden Markov Models: Applications (2) Hidden Markov Models: Viterbi Algorithm (3) Non-Homogeneous Hidden Markov Model (1)
More informationAQR Unit 4: Using Recursion in Models and Decision Making Sequence Notes. Name: Date: Sequences
Name: Date: Sequences A number sequence is a set of numbers, usually separated by commas, arranged in an order. The first term is referred to as t 1, the second term as t 2, the third term as t 3 and so
More informationMATH 341 MIDTERM 2. (a) [5 pts] Demonstrate that A and B are row equivalent by providing a sequence of row operations leading from A to B.
11/01/2011 Bormashenko MATH 341 MIDTERM 2 Show your work for all the problems. Good luck! (1) Let A and B be defined as follows: 1 1 2 A =, B = 1 2 3 0 2 ] 2 1 3 4 Name: (a) 5 pts] Demonstrate that A and
More informationStatistics 992 Continuous-time Markov Chains Spring 2004
Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics
More informationMAT 1302B Mathematical Methods II
MAT 1302B Mathematical Methods II Alistair Savage Mathematics and Statistics University of Ottawa Winter 2015 Lecture 19 Alistair Savage (uottawa) MAT 1302B Mathematical Methods II Winter 2015 Lecture
More informationDiscrete Fourier transform (DFT)
Discrete Fourier transform (DFT) Alejandro Ribeiro January 19, 2018 Let x : [0, N 1] C be a discrete signal of duration N and having elements x(n) for n [0, N 1]. The discrete Fourier transform (DFT) of
More information