Introduction to Stochastic Processes

18.445 Introduction to Stochastic Processes
Lecture 1: Introduction to finite Markov chains
Hao Wu (MIT), 04 February 2015

About this course

Course description: This course is an introduction to Markov chains, random walks, and martingales.
Time and place: Monday and Wednesday, 11:00 am - 12:30 pm.
Bibliography: Markov Chains and Mixing Times, by David A. Levin, Yuval Peres, and Elizabeth L. Wilmer.

Grading

Grading: 5 homeworks (10% each), midterm (15%, April 1st), final (35%, May).
Homeworks: Homeworks will be collected at the end of the class on the due date. Due dates: Feb. 23rd, Mar. 9th, Apr. 6th, Apr. 22nd, May 4th. Collaboration on homework is encouraged; individually written solutions are required.
Exams: The midterm and the final are closed book, closed notes, no calculators.

Today's goal

1. Definitions
2. Gambler's ruin
3. Coupon collecting
4. Stationary distribution

Ω: finite state space. P: transition matrix on Ω × Ω.

Definition. A sequence of random variables (X_0, X_1, X_2, ...) is a Markov chain with state space Ω and transition matrix P if for all n ≥ 0 and all sequences (x_0, x_1, ..., x_n, x_{n+1}), we have that

P[X_{n+1} = x_{n+1} | X_0 = x_0, ..., X_n = x_n] = P[X_{n+1} = x_{n+1} | X_n = x_n] = P(x_n, x_{n+1}).
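As a quick illustration, the definition translates directly into a simulator: each step samples X_{n+1} from the row of P indexed by the current state. A minimal Python sketch, using a two-state transition matrix of my own choosing (the numbers are an assumption, not from the lecture):

```python
import random

# Illustrative two-state chain:
# P(0, 0) = 0.7, P(0, 1) = 0.3, P(1, 0) = 0.4, P(1, 1) = 0.6.
P = [[0.7, 0.3],
     [0.4, 0.6]]

def step(P, x):
    """One transition: sample X_{n+1} from row x of P."""
    return random.choices(range(len(P)), weights=P[x])[0]

def simulate(P, x0, n):
    """Return a sample path (X_0, X_1, ..., X_n)."""
    path = [x0]
    for _ in range(n):
        path.append(step(P, path[-1]))
    return path

path = simulate(P, 0, 10)   # a path of length 11 starting from state 0
```

The Markov property holds here by construction: each step depends only on the current state, never on the earlier history.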

Gambler's ruin

Consider a gambler betting on the outcomes of a sequence of independent fair coin tosses. If heads, he gains one dollar; if tails, he loses one dollar. If he reaches a fortune of N dollars, he stops. If his purse is ever empty, he stops.

Questions: What are the probabilities of the two possible fates? How long will it take for the gambler to arrive at one of the two possible fates?

Gambler's ruin

The gambler's situation can be modeled by a Markov chain on the state space {0, 1, ..., N}:
X_0: initial money in purse.
X_n: the gambler's fortune at time n.
P[X_{n+1} = X_n + 1 | X_n] = 1/2,  P[X_{n+1} = X_n - 1 | X_n] = 1/2.
The states 0 and N are absorbing.
τ: the time at which the gambler stops.

Answer to the questions.

Theorem. Assume that X_0 = k for some 0 ≤ k ≤ N. Then
P[X_τ = N] = k/N,  E[τ] = k(N - k).
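The theorem is easy to check by Monte Carlo simulation. The sketch below (my own code, with k = 3 and N = 10 chosen arbitrarily) estimates both P[X_τ = N] = k/N = 0.3 and E[τ] = k(N - k) = 21:

```python
import random

def gamblers_ruin(k, N):
    """Run one gambler's ruin chain from fortune k; return (reached N?, tau)."""
    x, t = k, 0
    while 0 < x < N:
        x += 1 if random.random() < 0.5 else -1   # fair coin toss
        t += 1
    return x == N, t

random.seed(0)
trials = [gamblers_ruin(3, 10) for _ in range(20000)]
p_win = sum(won for won, _ in trials) / len(trials)    # estimates k/N = 0.3
mean_tau = sum(t for _, t in trials) / len(trials)     # estimates k(N - k) = 21
```

Over 20,000 runs both estimates land close to the theorem's values.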

Coupon collecting

A company issues N different types of coupons. A collector desires a complete set.
Question: How many coupons must he obtain so that his collection contains all N types?
Assumption: each coupon is equally likely to be each of the N types.

Coupon collecting

The collector's situation can be modeled by a Markov chain on the state space {0, 1, ..., N}:
X_0 = 0.
X_n: the number of different types among the collector's first n coupons.
P[X_{n+1} = k + 1 | X_n = k] = (N - k)/N,  P[X_{n+1} = k | X_n = k] = k/N.
τ: the first time that the collector obtains all N types.

Coupon collecting

Answer to the question.

Theorem. E[τ] = N Σ_{k=1}^{N} 1/k ≈ N log N.

A more precise answer.

Theorem. For any c > 0, we have that
P[τ > N log N + cN] ≤ e^{-c}.
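The first theorem can be probed numerically. The sketch below (my own, with N = 20 chosen arbitrarily) compares the simulated mean of τ against the exact value N Σ_{k=1}^{N} 1/k:

```python
import random

def collect(N):
    """Draw uniform coupons until all N types appear; return the number of draws."""
    seen, draws = set(), 0
    while len(seen) < N:
        seen.add(random.randrange(N))   # each coupon uniform over the N types
        draws += 1
    return draws

N = 20
exact = N * sum(1 / k for k in range(1, N + 1))   # E[tau] = N * (1 + 1/2 + ... + 1/N)
random.seed(1)
avg = sum(collect(N) for _ in range(5000)) / 5000
```

For N = 20 the exact value is about 72 draws, and the simulated average agrees to within sampling error.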

Notations

Ω: state space. µ: measure on Ω. P, Q: transition matrices on Ω × Ω. f: function on Ω.

µP: measure on Ω. PQ: transition matrix on Ω × Ω. Pf: function on Ω.

Associativity: (µP)Q = µ(PQ), (PQ)f = P(Qf).
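Concretely, if µ is a row vector, f a column vector, and P, Q row-stochastic matrices, these operations are ordinary matrix products and associativity is inherited from matrix multiplication. A small numerical check (the matrices here are randomly generated, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_stochastic(n):
    """A random n x n transition matrix: nonnegative rows summing to 1."""
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

P, Q = rand_stochastic(3), rand_stochastic(3)
mu = np.array([0.2, 0.5, 0.3])     # a measure on Omega, as a row vector
f = np.array([1.0, -1.0, 2.0])     # a function on Omega, as a column vector

assert np.allclose((mu @ P) @ Q, mu @ (P @ Q))   # (mu P) Q = mu (P Q)
assert np.allclose((P @ Q) @ f, P @ (Q @ f))     # (P Q) f = P (Q f)
```

Note also that PQ is again a transition matrix: its rows are nonnegative and sum to 1.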

Notations

Consider a Markov chain with state space Ω and transition matrix P. Recall that P[X_{n+1} = y | X_n = x] = P(x, y).
µ_0: the distribution of X_0.
µ_n: the distribution of X_n.
Then we have that
µ_{n+1} = µ_n P,  µ_n = µ_0 P^n,  E[f(X_n)] = µ_0 P^n f.
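In code, iterating µ_{n+1} = µ_n P is one vector-matrix product per step. A sketch with an illustrative two-state chain (the matrix is my own example, not from the lecture):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # illustrative transition matrix
mu0 = np.array([1.0, 0.0])          # X_0 = state 0 with probability 1

mu = mu0
for _ in range(50):                 # mu_n = mu_0 P^n via repeated mu <- mu P
    mu = mu @ P
# For this chain, mu_50 is numerically indistinguishable from (4/7, 3/7).
```

Each µ_n remains a probability measure (entries sum to 1), and for this chain the iterates converge to the measure (4/7, 3/7), foreshadowing the stationary distribution below.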

Stationary distribution

Consider a Markov chain with state space Ω and transition matrix P. Recall that P[X_{n+1} = y | X_n = x] = P(x, y).
µ_0: the distribution of X_0.
µ_n: the distribution of X_n.

Definition. A probability measure π is stationary if π = πP.

If π is stationary and the initial measure µ_0 equals π, then µ_n = π for all n.
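For a finite chain, a stationary π can be computed as a left eigenvector of P with eigenvalue 1, normalized to sum to 1. A sketch on an illustrative two-state matrix (my own example, not from the lecture):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # illustrative transition matrix

# pi = pi P means pi^T is an eigenvector of P^T with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])   # eigenvector for the largest eigenvalue (= 1)
pi = pi / pi.sum()                          # normalize to a probability measure

assert np.allclose(pi @ P, pi)              # stationarity: pi = pi P
```

For this matrix the computation gives π = (4/7, 3/7): solving π = πP by hand yields 0.3 π(0) = 0.4 π(1), so π(0)/π(1) = 4/3.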

Random walks on graphs

Definition. A graph G = (V, E) consists of a vertex set V and an edge set E:
V: set of vertices.
E: set of pairs of vertices.
When (x, y) ∈ E, we write x ∼ y: x and y are joined by an edge, and we say y is a neighbor of x. For x ∈ V, deg(x) is the number of neighbors of x.

Definition. Given a graph G = (V, E), we define simple random walk on G to be the Markov chain with state space V and transition matrix
P(x, y) = 1/deg(x) if y ∼ x, and 0 otherwise.

Random walks on graphs

Recall that simple random walk on G = (V, E) is the Markov chain with state space V and transition matrix P(x, y) = 1/deg(x) if y ∼ x, and 0 otherwise.

Theorem. Define
π(x) = deg(x) / (2|E|),  x ∈ V.
Then π is a stationary distribution for the simple random walk on the graph.
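The theorem is easy to verify on a concrete graph by checking Σ_x π(x) P(x, y) = π(y) for every vertex y. A sketch on a small graph of my own choosing (a path 0-1-2-3 plus the extra edge 1-3):

```python
# Simple random walk on a small illustrative graph.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
adj = {x: [] for x in range(4)}
for x, y in edges:
    adj[x].append(y)
    adj[y].append(x)

deg = {x: len(adj[x]) for x in adj}
pi = {x: deg[x] / (2 * len(edges)) for x in adj}   # pi(x) = deg(x) / (2|E|)

# Stationarity: sum over neighbors x of y of pi(x) * P(x, y), with P(x, y) = 1/deg(x).
for y in adj:
    assert abs(sum(pi[x] / deg[x] for x in adj[y]) - pi[y]) < 1e-12
```

The check works because each term π(x) P(x, y) = deg(x)/(2|E|) · 1/deg(x) = 1/(2|E|), and y has exactly deg(y) neighbors, so the sum is deg(y)/(2|E|) = π(y).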

MIT OpenCourseWare
http://ocw.mit.edu

18.445 Introduction to Stochastic Processes, Spring 2015

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.