APPM 4/5560 Markov Processes Fall 2019, Some Review Problems for Exam One


(Note: You will not have to invert any matrices on the exam, and you will not be expected to solve systems of 3 or more unknowns. There are some problems like this on this review. If they appear on the exam, they will either be super trivial or you will be directed to set up but not solve the system.)

1. One card is selected from a deck of 52 cards and is discarded. A second card is then selected from the remaining 51 cards.
(a) What is the probability that the second card selected is an ace?
(b) Given that an ace was drawn as the second card, what is the probability that the discarded card was an ace?

2. Thirteen cards numbered 1, 2, ..., 13 are shuffled and dealt face up one at a time. We say that a match occurs if the kth card revealed is card number k. Let N be the total number of matches that occur in the thirteen cards. Find E[N]. (Write N as the sum of indicators I_1, I_2, ..., I_13, where I_i = 1 if the ith card dealt is a match.)

3. A Markov chain {X_n} on the state space {1, 2} has transition probability matrix

P =
( .8  .2 )
( .5  .5 )

(a) If, initially, we are equally likely to start in either state, find P(X_1 = __).
(b) Find P(X__ = __ | X_0 = __).
(c) What is the long run proportion of visits the chain will make to state __?

4. Find all the communication classes of the Markov chain with the given transition probability matrix and classify each class as either recurrent or transient. Is the chain irreducible?

P = [transition probability matrix not recovered]

5. An urn contains __ red balls and 1 yellow ball. At each point in time, a ball is selected at random from the urn and replaced with a yellow ball. The selection process continues until all of the red balls are removed from the urn. What is the expected duration of the process?
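A sketch for Problem 5, stated for a generic starting count r of red balls since the exact count did not survive in this copy: the urn always holds r + 1 balls, and while k red balls remain, each draw removes a red one with probability k/(r + 1), so the time to go from k to k - 1 red balls is geometric with mean (r + 1)/k. Summing,

$$
E[\text{duration}] \;=\; \sum_{k=1}^{r} \frac{r+1}{k} \;=\; (r+1)\left(1 + \frac{1}{2} + \cdots + \frac{1}{r}\right).
$$

For instance, with r = 2 this gives 3(1 + 1/2) = 4.5 draws on average.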

6. A rat with a bad cold (so he can't smell!) is put into compartment 1 of the maze shown in Figure 1. At each time step, the rat moves to another compartment. (He never stays where he is.) He chooses a departure door from each compartment at random. We want to know the probability that he finds the food before he gets shocked.
(a) Set up an appropriate transition matrix and the system of equations you would need to solve this. Do not solve the system!!!
(b) Is this system periodic? Aperiodic? Neither?

[Figure 1: Rat maze for Problem 6 (image not recovered)]

7. Consider the Markov chain on S = {1, 2, 3} running according to the transition probability matrix

P = [3 x 3 transition probability matrix not recovered]

(a) What is the long run percentage of time that the chain is in state __?
(b) If the chain starts in state __, what is the expected number of steps until the chain hits state __?
(c) If the chain starts in state __, and if T is the first time that it hits either one of the states __ or __, what is the probability that the chain will be in state __ at time T?
(d) Starting in state __, what is the mean time that the process spends in state __ prior to first hitting state __?
(e) Starting in state __, what is the mean time that the process spends in state __ prior to returning to state __?
(f) Find P(X_3 = __ | X_0 = __, X_1 = __, X_2 = __). Without computing it, do you expect P(X_3 = __ | X__ = __, X__ = __) to be smaller or larger? Explain.

8. Give an example of a transition probability matrix for a Markov chain having period __.

9. Suppose that the probability it rains today is .__ if neither of the last two days was rainy, but .6 if at least one of the last two days was rainy.

(a) Set this problem up as a four-state Markov chain.
(b) What is the probability that it will rain on Wednesday given that it did not rain on Sunday or Monday?
(c) Find the stationary distribution for this chain.

10. Let π be a stationary distribution of a Markov chain and let i and j be two states such that π_i > 0 and i → j (that is, j is accessible from i). Show that π_j > 0.

11. A Markov chain {X_n} on the state space {1, 2} has transition probability matrix

P =
( .6  .4 )
( .3  .7 )

(a) If, initially, we are equally likely to start in either state, find P(X_1 = __).
(b) What is the long run proportion of visits the chain will make to state __?

12. An urn contains two red bugs and one green bug. Bugs are chosen at random, one by one, from the urn. If a red bug is chosen, it is removed. If the green bug is chosen, it is returned to the urn. The selection process continues until all of the red bugs are removed from the urn. What is the mean duration of the game?

13. Consider the Markov chain whose transition probability matrix is given by

P = [transition probability matrix not recovered]

SET UP BUT DO NOT SOLVE systems of equations to answer the following two questions. (Define variables used and specify what needs to be solved for.)
(a) Starting in state __, determine the probability that the Markov chain ends in state __.
(b) Determine the mean time to absorption for this Markov chain.

14. A Markov chain on states {0, 1, 2, 3, 4, 5} has a transition probability matrix given by

P = [6 x 6 transition probability matrix not recovered]

Find all communication classes and specify whether the states in each are transient or recurrent.
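A fact worth keeping at hand for Problem 14 (and likewise Problem 4 above and Problem 18 below), stated in general form rather than for the specific unrecovered matrices: states i and j communicate when each is accessible from the other, and the communication classes partition the state space. For a chain with finitely many states, a class C is recurrent exactly when it is closed,

$$
C \text{ is recurrent} \iff \sum_{j \in C} p_{ij} = 1 \quad \text{for every } i \in C,
$$

and transient otherwise. The chain is irreducible when the whole state space is a single class.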

15. Consider the Markov chain on S = {1, 2, 3} running according to the transition probability matrix

P = [3 x 3 transition probability matrix not recovered]

Specify how to find P(X_5 = __ | X_0 = __, X_1 = __, X_2 = __, X_3 = __, X_4 = __). (You need not actually compute the answer.)

16. For the Markov chain given in problem __ above, SET UP BUT DO NOT SOLVE systems of equations to answer the following questions.
(a) Find the stationary distribution.
(b) If the chain starts in state __, find the expected number of times the chain hits state __ before hitting state __.

17. On a remote and mysterious island, a sunny day is followed by another sunny day with probability .9, whereas a rainy day is followed by another rainy day with probability .__. Suppose that there are only sunny or rainy days. In the long run, what fraction of days are sunny?

18. Find all communication classes of the Markov chain with the given transition probability matrix and classify each state as either recurrent or transient. Is the chain irreducible?

P = [transition probability matrix not recovered]

19. In front of you are four urns, painted red, green, yellow, and blue. The red urn contains 5 red balls, the green urn contains __ green balls, the yellow urn contains __ yellow balls and one ball each of red, green, and blue, and the blue urn contains one red, two green, and four yellow balls. A ball is drawn randomly from an urn, its color noted, and it is replaced in the urn. Then a ball is drawn from the urn whose color matches the previous ball. This process is repeated indefinitely.
(a) Set up the transition probability matrix for {X_n}, where X_n is the color of the urn drawn from on the nth trial.
(b) Set up, but DO NOT SOLVE, a system of equations to answer the following question: If your first draw is from the yellow urn, what is the probability you eventually wind up in the red one? (Identify all variables used and indicate what you need to solve for.)
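Problem 19(b), like 6(a) and 13(a) above, comes down to the same first-step-analysis template, sketched here in general notation since the states and matrices are each problem's own: to find the probability of reaching a target set A before a forbidden set B, set h(i) = P_i(hit A before B), so that

$$
h(i) = 1 \ \ (i \in A), \qquad h(i) = 0 \ \ (i \in B), \qquad h(i) = \sum_{j} p_{ij}\, h(j) \ \ (i \notin A \cup B),
$$

one linear equation per non-boundary state. "Set up but do not solve" means writing out exactly this system for the problem's states.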

20. Consider the Markov chain on S = {1, 2, 3} running according to the transition probability matrix

P = [3 x 3 transition probability matrix not recovered]

Specify how to find P(X_5 = __ | X_0 = __, X_1 = __, X_2 = __, X_3 = __, X_4 = __). (You need not actually compute the answer.)

21. A study of the strengths of Ivy League football teams shows that if a school has a strong team one year, it is equally likely to have a strong team or an average team next year; if it has an average team, half the time it is average next year, and if it changes it is just as likely to become strong as weak; if it is weak, it has probability __ of remaining so and __ of becoming average. A school has a weak team. Set up BUT DO NOT SOLVE a Markov chain and a system of equations to determine the mean number of years until the school has a strong team. (Identify all variables used and indicate what you need to solve for.)

22. A two-state Markov chain has transition probability matrix

P =
( 1 - α    α   )
(   β    1 - β )

(a) Find the first return distribution g^(n).
(b) Use part (a), as opposed to a first step analysis, to find, starting at __, the mean number of steps it will take to return to __.

23. Write down and interpret the Chapman-Kolmogorov equations.

24. Let {X_n} be a Markov chain on the state space S = {0, 1, 2, 3, ...}. Which of the following are stopping times for the chain? Whenever one is not, give a brief explanation as to why it is not.
(a) T = min{n : X_n < __}
(b) T = the third time X_n visits state __
(c) T = max{n : X_n = 5}
(d) Let A = {__, __, 7}. Define T = max{n ∈ A : X_n = 5}.
(e) Let A = {__, __, 7}. Define T = min{n ∈ A : X_n = 5}.
(f) W = T - __, where T is defined in part (a).
(g) The sum of two stopping times S and T.
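The definition that settles each case in Problem 24, stated in its standard form: T is a stopping time for {X_n} when deciding whether T = n requires looking only at the chain up to time n, that is,

$$
\{T = n\} \in \sigma(X_0, X_1, \ldots, X_n) \qquad \text{for every } n \ge 0.
$$

A rule that must peek into the future, such as the last time a state is ever visited, fails this test.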

25. Consider a Markov chain on S = {1, 2, 3} with the following transition probability matrix. Let T_i = min{n ≥ 1 : X_n = i}. Verify that π_i = 1/E_i[T_i] for i = 1, 2, 3.

P = [3 x 3 transition probability matrix not recovered]
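A general route for the verification in Problem 25, written symbolically since the matrix entries did not survive in this copy: first solve πP = π with π_1 + π_2 + π_3 = 1. Then, for each fixed i, let w_j denote the expected time to reach i starting from j, so that w_i = 0 and

$$
w_j = 1 + \sum_{k} p_{jk}\, w_k \quad (j \neq i), \qquad E_i[T_i] = 1 + \sum_{k} p_{ik}\, w_k,
$$

and check that π_i E_i[T_i] = 1 for each i. This reciprocal relation between the stationary distribution and the mean return times holds for any irreducible positive recurrent chain.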