Module 8. Lecture 3: Markov chain


A Markov chain is a stochastic process having the property that the value of the process $X_t$ at time $t$ depends only on its value at time $t-1$, $X_{t-1}$, and not on the sequence $X_{t-2}, X_{t-3}, \ldots, X_0$ that the process passed through to arrive at $X_{t-1}$. For a 1st-order (single-step) Markov chain:

$$\text{Prob}\left( X_t = a_j \mid X_{t-1} = a_i,\, X_{t-2} = a_k,\, X_{t-3} = a_l,\, \ldots,\, X_0 = a_q \right) = \text{Prob}\left( X_t = a_j \mid X_{t-1} = a_i \right)$$
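The Markov property can be made concrete with a short simulation. Below is a minimal Python sketch (all names and the 2-state matrix are illustrative, not from the lecture): the next state is sampled using only the current state's row of transition probabilities, so the earlier history $X_{t-2}, \ldots, X_0$ never enters the computation.

```python
import random

def next_state(current, tpm):
    # Sample the next state from the current state's row only (Markov property).
    r, cum = random.random(), 0.0
    for j, p in enumerate(tpm[current]):
        cum += p
        if r < cum:
            return j
    return len(tpm[current]) - 1  # guard against floating-point round-off

tpm = [[0.9, 0.1],   # hypothetical 2-state TPM: row = current state
       [0.5, 0.5]]
x, path = 0, [0]
for _ in range(10):
    x = next_state(x, tpm)
    path.append(x)
print(path)          # e.g. [0, 0, 0, 1, 1, 0, ...]
```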

Diagrammatically, it may be represented as

$$X_0 \to \cdots \to X_{t-2} \to X_{t-1} \to X_t$$

Between time periods $t-1$ and $t$, the process moves from $X_{t-1} = a_i$ to $X_t = a_j$, i.e., state $i$ transits to state $j$. We write this as

$$p_{ij}^t = P\left( X_t = a_j \mid X_{t-1} = a_i \right)$$

where $p_{ij}^t$ is the probability that the process goes into state $j$ at time $t$, starting from state $i$.

Transition probability: $p_{ij}^t$ is the probability that state $i$ will transit to state $j$ in the interval $t-1$ to $t$. If

$$p_{ij}^t = p_{ij}^{t+\tau} \quad \forall\, t, \tau$$

then the series is called a homogeneous Markov chain [i.e., the transition probabilities remain the same across time].

Here the analysis is done only for a single-step (1st-order) homogeneous Markov chain. If $t$ is a month, then $p_{ij}$ will not be homogeneous (seasonal change).

$p_{ij}$ = transition probability from $i$ to $j$, with $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, m$, where $m$ is the number of possible states that the process can occupy. The transition probability matrix (TPM) is

$$P = \left[ p_{ij} \right] = \begin{bmatrix} p_{11} & p_{12} & p_{13} & \cdots & p_{1m} \\ p_{21} & p_{22} & p_{23} & \cdots & p_{2m} \\ \vdots & \vdots & \vdots & & \vdots \\ p_{m1} & p_{m2} & p_{m3} & \cdots & p_{mm} \end{bmatrix}$$

Row $i$ lists the probabilities of going from state $i$ into each state $j$; the sum of each row = 1.
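A small sketch of this structure in Python (NumPy assumed; the 3-state values are made-up placeholders): entry `P[i, j]` holds $p_{ij}$, and since each row is a probability distribution over the next state, the row sums are checked against 1.

```python
import numpy as np

# Hypothetical 3-state TPM (placeholder values, not from the lecture).
P = np.array([[0.40, 0.30, 0.30],
              [0.25, 0.50, 0.25],
              [0.10, 0.30, 0.60]])

# Each row is a probability distribution, so every row sum must equal 1.
assert np.allclose(P.sum(axis=1), 1.0)
```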

For a homogeneous chain ($p_{ij}^t = p_{ij}^{t+\tau}\ \forall\, t, \tau$), each row must add to 1:

$$\sum_{j=1}^{m} p_{ij} = 1 \quad \forall\, i$$

Matrices whose individual rows add up to 1 are called stochastic matrices. Since each of the $m$ rows sums to 1, only $m^2 - m = m(m-1)$ probability values need to be estimated. The estimate

$$\hat{p}_{ij} = \frac{n_{ij}}{\sum_{j=1}^{m} n_{ij}}$$

is obtained from historical data, where $n_{ij}$ is the number of observed transitions from state $i$ to state $j$.

Historic data: a record of, say, 100 time periods, each classified into one of the states (e.g., states 1, 2, 3). Suppose the process was in state 1 for 50 of those periods. Out of these 50 times, the number of times it went into state 1 = 20, the number of times it transited to state 2 = 15, and the number of times it transited to state 3 = 15. Then

$$\hat{p}_{11} = \frac{20}{50}; \qquad \hat{p}_{12} = \frac{15}{50}; \qquad \hat{p}_{13} = \frac{15}{50}$$

Typical two-state classifications: deficit / non-deficit, or drought / non-drought.
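The counting-and-normalizing step is a one-line array operation. A minimal sketch with NumPy (assumed): row 1 uses the lecture's numbers (50 periods in state 1, with 20, 15 and 15 transitions to states 1, 2, 3), while rows 2 and 3 are made-up placeholders for illustration.

```python
import numpy as np

# Transition counts n_ij tallied from the historic record.
counts = np.array([[20, 15, 15],
                   [10, 25, 15],
                   [ 5, 10, 35]], dtype=float)

# p_hat_ij = n_ij / sum_j n_ij  (row-wise relative frequencies)
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(P_hat[0])  # [0.4 0.3 0.3]  ->  p11 = 20/50, p12 = 15/50, p13 = 15/50
```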

$p_j^{(n)}$: probability that the process will be in state $j$ after $n$ time steps.

$p_j^{(0)}$: initial probability of being in state $j$.

These are collected into a $1 \times m$ row vector

$$\mathbf{p}^{(n)} = \left( p_1^{(n)}, p_2^{(n)}, \ldots, p_m^{(n)} \right)$$

whose first component, $p_1^{(n)}$, is the probability of being in state 1 at time step $n$.

If $\mathbf{p}^{(0)}$ is the initial probability vector, the probability vector at $t = 1$ is

$$\mathbf{p}^{(1)} = \mathbf{p}^{(0)} P = \left( p_1^{(0)}, p_2^{(0)}, \ldots, p_m^{(0)} \right) \begin{bmatrix} p_{11} & p_{12} & p_{13} & \cdots & p_{1m} \\ p_{21} & p_{22} & p_{23} & \cdots & p_{2m} \\ \vdots & \vdots & \vdots & & \vdots \\ p_{m1} & p_{m2} & p_{m3} & \cdots & p_{mm} \end{bmatrix}$$

Its first component is

$$p_1^{(1)} = p_1^{(0)} p_{11} + p_2^{(0)} p_{21} + \cdots + p_m^{(0)} p_{m1}$$

where, in the term $p_2^{(0)} p_{21}$, $p_2^{(0)}$ is the probability that the process starts from state 2 and $p_{21}$ is the probability of transition from 2 to 1.

$$\mathbf{p}^{(1)} = \left( p_1^{(1)}, p_2^{(1)}, \ldots, p_m^{(1)} \right)$$

where $p_1^{(1)}$ is the probability that the state is 1 in time period 1, $p_2^{(1)}$ the probability that the state is 2 in time period 1, and so on up to $p_m^{(1)}$, the probability that the state is $m$ in time period 1.

Similarly, $\mathbf{p}^{(2)} = \mathbf{p}^{(1)} P = \mathbf{p}^{(0)} P^2$, and for any time period $n$,

$$\mathbf{p}^{(n)} = \mathbf{p}^{(0)} P^n$$

which gives the probability of being in a particular state $j$ after $n$ time steps, from the TPM $P$ and the initial probability vector $\mathbf{p}^{(0)}$.
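As a quick numerical sketch of this propagation (NumPy assumed; the TPM values anticipate the dry/wet example that follows, with state 1 = dry and state 2 = wet):

```python
import numpy as np

# p^(n) = p^(0) P^n
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
p0 = np.array([1.0, 0.0])  # start in state 1 (a dry day)

for n in (1, 2, 100):
    pn = p0 @ np.linalg.matrix_power(P, n)
    print(n, pn)  # n=1: [0.9 0.1], n=2: [0.86 0.14], n=100: ~[0.8333 0.1667]
```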

If, after a large $n$, $\mathbf{p}^{(n+m)} = \mathbf{p}^{(n)}$ for any further $m$ steps, then the steady-state probability condition is achieved. Once the steady state is reached,

$$\mathbf{p}^{(n+m)} = \mathbf{p}^{(n)} = \mathbf{p}, \quad \text{so} \quad \mathbf{p} = \mathbf{p}\, P$$

Example: a 2-state Markov chain for a sequence of wet and dry spells, with $i = 1$ dry (d) and $i = 2$ wet (w):

$$P = \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix}$$

(rows and columns ordered d, w).

(i) $P[\text{day 1 is wet} \mid \text{day 0 is dry}] = P[X_t = 2 \mid X_{t-1} = 1] = p_{12} = p_2^{(1)} = 0.1$ (Ans)

Example problem (ii): $P[\text{day 2 is wet} \mid \text{day 0 is dry}] = p_2^{(2)}$.

Because day 0 is dry, $\mathbf{p}^{(1)} = (0.9,\ 0.1)$, and

$$\mathbf{p}^{(2)} = \mathbf{p}^{(1)} P = \begin{pmatrix} 0.9 & 0.1 \end{pmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{pmatrix} 0.86 & 0.14 \end{pmatrix}$$

(components ordered dry, wet). The probability that day 2 will be wet is $p_2^{(2)} = 0.14$.

Example problem (iii): $P[\text{day 100 is wet} \mid \text{day 1 is dry}]$, i.e., $p_2^{(100)}$, with

$$P = \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix}$$

Here, the fact that day 1 was dry would not significantly affect the probability of rain on day 100. So $n$ can be assumed to be large, and the problem is solved using the steady-state probabilities.
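One way to obtain those steady-state probabilities is to solve $\mathbf{p} = \mathbf{p}P$ together with $\sum_j p_j = 1$ directly. A minimal sketch with NumPy (assumed), using its least-squares solver on the resulting overdetermined linear system:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
m = P.shape[0]

# p P = p  <=>  (P^T - I) p^T = 0; append the normalization row of ones.
A = np.vstack([P.T - np.eye(m), np.ones(m)])
b = np.concatenate([np.zeros(m), [1.0]])
p_steady, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p_steady)  # ~[0.8333 0.1667]  (dry, wet)
```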

Example problem (continued): to determine the steady state, raise $P$ to successive powers:

$$P^2 = P \cdot P = \begin{bmatrix} 0.86 & 0.14 \\ 0.70 & 0.30 \end{bmatrix} \qquad P^4 = P^2 \cdot P^2 = \begin{bmatrix} 0.8376 & 0.1624 \\ 0.8120 & 0.1880 \end{bmatrix}$$

$$P^8 = P^4 \cdot P^4 = \begin{bmatrix} 0.8334 & 0.1666 \\ 0.8328 & 0.1672 \end{bmatrix} \qquad P^{16} = P^8 \cdot P^8 = \begin{bmatrix} 0.8333 & 0.1667 \\ 0.8333 & 0.1667 \end{bmatrix}$$

All the rows are the same, so the steady-state probabilities are $\mathbf{p} = (0.8333,\ 0.1667)$ for (dry, wet).
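The same repeated-squaring table can be reproduced numerically; a short sketch with NumPy (assumed):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Square repeatedly: P^2, P^4, P^8, P^16.
Pn, k = P.copy(), 1
while k < 16:
    Pn, k = Pn @ Pn, 2 * k
    print(f"P^{k} =\n{np.round(Pn, 4)}")
# The rows converge to the steady-state vector (0.8333, 0.1667).
```

Squaring doubles the power at each step, so only four matrix multiplications are needed to reach $P^{16}$, by which point every row has already converged to the steady-state vector.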