18.600: Lecture 32 Markov Chains
Scott Sheffield, MIT
Outline: Markov chains; Examples; Ergodicity and stationarity.
Markov chains

Consider a sequence of random variables X_0, X_1, X_2, ... each taking values in the same state space, which for now we take to be a finite set that we label by {0, 1, ..., M}. Interpret X_n as the state of the system at time n. The sequence is called a Markov chain if we have a fixed collection of numbers P_ij (one for each pair i, j in {0, 1, ..., M}) such that whenever the system is in state i, there is probability P_ij that the system will next be in state j. Precisely,

    P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_1 = i_1, X_0 = i_0} = P_ij.

This is a kind of almost-memoryless property: the probability distribution for the next state depends only on the current state (and not on the rest of the state history).
Simple example

For example, imagine a simple weather model with two states: rainy and sunny. If it's rainy one day, there's a .5 chance it will be rainy the next day and a .5 chance it will be sunny. If it's sunny one day, there's a .8 chance it will be sunny the next day and a .2 chance it will be rainy. In this climate, sun tends to last longer than rain.

Given that it is rainy today, how many days do I expect to have to wait to see a sunny day? Given that it is sunny today, how many days do I expect to have to wait to see a rainy day? Over the long haul, what fraction of days are sunny?
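The first two questions have a direct answer even before any matrix machinery: starting from a state, the number of days until the weather changes is geometric, so the expected wait is one over the probability of switching. A minimal numerical sketch of all three answers (numpy is assumed; the third answer is estimated by simulating the chain):

```python
import numpy as np

# Two-state weather chain: state 0 = rainy, state 1 = sunny.
A = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# From rainy, each next day is sunny with probability 0.5 regardless of the
# past, so the wait for the first sunny day is geometric with p = 0.5.
expected_wait_for_sun = 1 / A[0, 1]    # 2.0 days
# From sunny, each next day is rainy with probability 0.2.
expected_wait_for_rain = 1 / A[1, 0]   # 5.0 days

# Long-run fraction of sunny days, estimated by running the chain.
rng = np.random.default_rng(0)
state, sunny_days, n = 0, 0, 100_000
for _ in range(n):
    state = rng.choice(2, p=A[state])
    sunny_days += state
print(expected_wait_for_sun, expected_wait_for_rain, sunny_days / n)
# prints 2.0, 5.0, and a number close to 5/7 ≈ 0.714
```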
Matrix representation

To describe a Markov chain, we need to define P_ij for every pair i, j in {0, 1, ..., M}. It is convenient to represent the collection of transition probabilities P_ij as a matrix:

        ( P_00  P_01  ...  P_0M )
    A = ( P_10  P_11  ...  P_1M )
        (  ...   ...  ...   ... )
        ( P_M0  P_M1  ...  P_MM )

For this to make sense, we require P_ij ≥ 0 for all i, j and sum_{j=0}^{M} P_ij = 1 for each i. That is, the rows sum to one.
Transitions via matrices

Suppose that p_i is the probability that the system is in state i at time zero. What does the following product represent?

    ( p_0  p_1  ...  p_M ) A

Answer: the probability distribution at time one. How about the following product?

    ( p_0  p_1  ...  p_M ) A^n

Answer: the probability distribution at time n.
Powers of the transition matrix

We write P^(n)_ij for the probability to go from state i to state j over n steps. From the matrix point of view, the matrix whose (i, j) entry is P^(n)_ij is simply the n-th power of A:

    ( P^(n)_00  ...  P^(n)_0M )     ( P_00  ...  P_0M )^n
    (   ...     ...    ...    )  =  (  ...  ...   ... )
    ( P^(n)_M0  ...  P^(n)_MM )     ( P_M0  ...  P_MM )

If A is the one-step transition matrix, then A^n is the n-step transition matrix.
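A quick sketch of these facts for the weather chain above (numpy assumed): the n-step matrix is a matrix power, its rows still sum to one, powers compose (the Chapman-Kolmogorov relation in matrix form), and a row vector of initial probabilities times A^n gives the distribution at time n.

```python
import numpy as np

# Weather chain from earlier: state 0 = rainy, state 1 = sunny.
A = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# n-step transition matrix is the n-th matrix power of A.
A5 = np.linalg.matrix_power(A, 5)

# Rows of A^5 still sum to one, and A^(2+3) = A^2 A^3.
assert np.allclose(A5.sum(axis=1), 1.0)
assert np.allclose(A5, np.linalg.matrix_power(A, 2) @ np.linalg.matrix_power(A, 3))

# Distribution at time 5, starting from "rainy with probability 1":
# a row vector times the matrix, as in the slides.
p0 = np.array([1.0, 0.0])
p5 = p0 @ A5
print(p5)   # ≈ [0.28745, 0.71255]
```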
Questions

What does it mean if all of the rows are identical? Answer: the state sequence X_i consists of i.i.d. random variables. What if the matrix is the identity? Answer: states never change. What if each P_ij is either one or zero? Answer: state evolution is deterministic.
Outline: Markov chains; Examples; Ergodicity and stationarity.
Simple example

Consider the simple weather example: if it's rainy one day, there's a .5 chance it will be rainy the next day and a .5 chance it will be sunny. If it's sunny one day, there's a .8 chance it will be sunny the next day and a .2 chance it will be rainy.

Let rainy be state zero and sunny be state one, and write the transition matrix as

    A = ( .5  .5 )
        ( .2  .8 )

Note that

    A^2 = ( .35  .65 )
          ( .26  .74 )

and one can compute

    A^10 ≈ ( .2857  .7143 )
           ( .2857  .7143 )
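These powers are easy to check numerically; a short sketch (numpy assumed), which also shows that by the tenth step both rows are already very close to the same vector:

```python
import numpy as np

# Weather chain: state 0 = rainy, state 1 = sunny.
A = np.array([[0.5, 0.5],
              [0.2, 0.8]])

A2 = A @ A
A10 = np.linalg.matrix_power(A, 10)
print(A2)    # ≈ [[0.35, 0.65], [0.26, 0.74]]
print(A10)   # both rows ≈ (2/7, 5/7): the starting state hardly matters anymore
```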
Does relationship status have the Markov property?

States: Single, In a relationship, It's complicated, Engaged, Married. Can we assign a probability to each arrow? A Markov model implies that the time spent in any state (e.g., a marriage) before leaving is a geometric random variable. Not true... Can we make a better model with more states?
Outline: Markov chains; Examples; Ergodicity and stationarity.
Ergodic Markov chains

Say a Markov chain is ergodic if some power of the transition matrix has all non-zero entries. It turns out that if a chain has this property, then

    π_j := lim_{n→∞} P^(n)_ij

exists (independently of i), and the π_j are the unique non-negative solutions of π_j = sum_{k=0}^{M} π_k P_kj that sum to one. This means that the row vector

    π = ( π_0  π_1  ...  π_M )

is a left eigenvector of A with eigenvalue 1, i.e., πA = π. We call π the stationary distribution of the Markov chain.

One can solve the system of linear equations π_j = sum_{k=0}^{M} π_k P_kj to compute the values π_j. This is equivalent to considering A fixed and solving πA = π, i.e., π(A - I) = 0. This determines π up to a multiplicative constant, and the fact that sum_j π_j = 1 determines the constant.
Simple example

If

    A = ( .5  .5 )
        ( .2  .8 ),

then πA = π reads

    ( π_0  π_1 ) ( .5  .5 )  =  ( π_0  π_1 ).
                 ( .2  .8 )

This means that .5π_0 + .2π_1 = π_0 and .5π_0 + .8π_1 = π_1, and we also know that π_0 + π_1 = 1. Solving these equations gives π_0 = 2/7 and π_1 = 5/7, so π = ( 2/7  5/7 ). Indeed,

    πA = ( 2/7  5/7 ) ( .5  .5 )  =  ( 2/7  5/7 )  =  π.
                      ( .2  .8 )

Recall that

    A^10 ≈ ( 2/7  5/7 )  =  ( π )
           ( 2/7  5/7 )     ( π )
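The same hand computation can be phrased as a single linear solve. Since π(A - I) = 0 gives M + 1 equations of which one is redundant, a standard trick (sketched below, numpy assumed) is to replace one balance equation with the normalization constraint π_0 + ... + π_M = 1:

```python
import numpy as np

A = np.array([[0.5, 0.5],
              [0.2, 0.8]])
m = A.shape[0]

# Transpose so that pi appears as a column unknown in B @ pi = rhs.
B = (A - np.eye(m)).T
B[-1, :] = 1.0          # replace the last (redundant) equation by sum(pi) = 1
rhs = np.zeros(m)
rhs[-1] = 1.0

pi = np.linalg.solve(B, rhs)
print(pi)   # ≈ [0.2857, 0.7143], i.e. (2/7, 5/7) as computed by hand above
```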
More informationDiscrete Time Markov Chain (DTMC)
Discrete Time Markov Chain (DTMC) John Boccio February 3, 204 Sources Taylor & Karlin, An Introduction to Stochastic Modeling, 3rd edition. Chapters 3-4. Ross, Introduction to Probability Models, 8th edition,
More information6.842 Randomness and Computation March 3, Lecture 8
6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n
More informationHomework 2 will be posted by tomorrow morning, due Friday, October 16 at 5 PM.
Stationary Distributions: Application Monday, October 05, 2015 2:04 PM Homework 2 will be posted by tomorrow morning, due Friday, October 16 at 5 PM. To prepare to describe the conditions under which the
More information3F1 Information Theory, Lecture 3
3F1 Information Theory, Lecture 3 Jossy Sayir Department of Engineering Michaelmas 2013, 29 November 2013 Memoryless Sources Arithmetic Coding Sources with Memory Markov Example 2 / 21 Encoding the output
More informationCS168: The Modern Algorithmic Toolbox Lecture #8: PCA and the Power Iteration Method
CS168: The Modern Algorithmic Toolbox Lecture #8: PCA and the Power Iteration Method Tim Roughgarden & Gregory Valiant April 15, 015 This lecture began with an extended recap of Lecture 7. Recall that
More informationLecture 3: Markov chains.
1 BIOINFORMATIK II PROBABILITY & STATISTICS Summer semester 2008 The University of Zürich and ETH Zürich Lecture 3: Markov chains. Prof. Andrew Barbour Dr. Nicolas Pétrélis Adapted from a course by Dr.
More informationLecture 6: Entropy Rate
Lecture 6: Entropy Rate Entropy rate H(X) Random walk on graph Dr. Yao Xie, ECE587, Information Theory, Duke University Coin tossing versus poker Toss a fair coin and see and sequence Head, Tail, Tail,
More information. Find E(V ) and var(v ).
Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number
More informationChapter 11 Advanced Topic Stochastic Processes
Chapter 11 Advanced Topic Stochastic Processes CHAPTER OUTLINE Section 1 Simple Random Walk Section 2 Markov Chains Section 3 Markov Chain Monte Carlo Section 4 Martingales Section 5 Brownian Motion Section
More informationLecture 03. Math 22 Summer 2017 Section 2 June 26, 2017
Lecture 03 Math 22 Summer 2017 Section 2 June 26, 2017 Just for today (10 minutes) Review row reduction algorithm (40 minutes) 1.3 (15 minutes) Classwork Review row reduction algorithm Review row reduction
More informationLTCC. Exercises. (1) Two possible weather conditions on any day: {rainy, sunny} (2) Tomorrow s weather depends only on today s weather
1. Markov chain LTCC. Exercises Let X 0, X 1, X 2,... be a Markov chain with state space {1, 2, 3, 4} and transition matrix 1/2 1/2 0 0 P = 0 1/2 1/3 1/6. 0 0 0 1 (a) What happens if the chain starts in
More informationChapter 10. Finite-State Markov Chains. Introductory Example: Googling Markov Chains
Chapter 0 Finite-State Markov Chains Introductory Example: Googling Markov Chains Google means many things: it is an Internet search engine, the company that produces the search engine, and a verb meaning
More informationCDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes
CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv
More informationPlease simplify your answers to the extent reasonable without a calculator. Show your work. Explain your answers, concisely.
Please simplify your answers to the extent reasonable without a calculator. Show your work. Explain your answers, concisely. 1. Consider a game which involves flipping a coin: winning $1 when it lands
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationTMA 4265 Stochastic Processes Semester project, fall 2014 Student number and
TMA 4265 Stochastic Processes Semester project, fall 2014 Student number 730631 and 732038 Exercise 1 We shall study a discrete Markov chain (MC) {X n } n=0 with state space S = {0, 1, 2, 3, 4, 5, 6}.
More informationLecture 11: Hidden Markov Models
Lecture 11: Hidden Markov Models Cognitive Systems - Machine Learning Cognitive Systems, Applied Computer Science, Bamberg University slides by Dr. Philip Jackson Centre for Vision, Speech & Signal Processing
More information= P{X 0. = i} (1) If the MC has stationary transition probabilities then, = i} = P{X n+1
Properties of Markov Chains and Evaluation of Steady State Transition Matrix P ss V. Krishnan - 3/9/2 Property 1 Let X be a Markov Chain (MC) where X {X n : n, 1, }. The state space is E {i, j, k, }. The
More informationChapter 5. Continuous-Time Markov Chains. Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan
Chapter 5. Continuous-Time Markov Chains Prof. Shun-Ren Yang Department of Computer Science, National Tsing Hua University, Taiwan Continuous-Time Markov Chains Consider a continuous-time stochastic process
More information4.1 Markov Processes and Markov Chains
Chapter Markov Processes. Markov Processes and Markov Chains Recall the following example from Section.. Two competing Broadband companies, A and B, each currently have 0% of the market share. Suppose
More informationJordan normal form notes (version date: 11/21/07)
Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let
More informationFinite-Horizon Statistics for Markov chains
Analyzing FSDT Markov chains Friday, September 30, 2011 2:03 PM Simulating FSDT Markov chains, as we have said is very straightforward, either by using probability transition matrix or stochastic update
More informationPerron Frobenius Theory
Perron Frobenius Theory Oskar Perron Georg Frobenius (1880 1975) (1849 1917) Stefan Güttel Perron Frobenius Theory 1 / 10 Positive and Nonnegative Matrices Let A, B R m n. A B if a ij b ij i, j, A > B
More informationAPPM 4/5560 Markov Processes Fall 2019, Some Review Problems for Exam One
APPM /556 Markov Processes Fall 9, Some Review Problems for Exam One (Note: You will not have to invert any matrices on the exam and you will not be expected to solve systems of or more unknowns. There
More information8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains
8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States
More informationMarkov Chains Absorption (cont d) Hamid R. Rabiee
Markov Chains Absorption (cont d) Hamid R. Rabiee 1 Absorbing Markov Chain An absorbing state is one in which the probability that the process remains in that state once it enters the state is 1 (i.e.,
More informationAdvanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur
Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Lecture No. #07 Jordan Canonical Form Cayley Hamilton Theorem (Refer Slide Time:
More informationLecture 15, 16: Diagonalization
Lecture 15, 16: Diagonalization Motivation: Eigenvalues and Eigenvectors are easy to compute for diagonal matrices. Hence, we would like (if possible) to convert matrix A into a diagonal matrix. Suppose
More informationDaily Update. Math 290: Elementary Linear Algebra Fall 2018
Daily Update Math 90: Elementary Linear Algebra Fall 08 Lecture 7: Tuesday, December 4 After reviewing the definitions of a linear transformation, and the kernel and range of a linear transformation, we
More information