ECE 6960: Adv. Random Processes & Applications Lecture Notes, Fall 2010
Lecture 16

Today: (1) Markov Processes, (2) Markov Chains, (3) State Classification

Intro

Please turn in HW 6 today. Read Chapter 11, sections 1-3, for Thursday. Read the Bianchi paper for next Tuesday.

1 Markov Processes

We're going to talk about random processes which have limited memory.

Def'n: Markov Process
A discrete-time random process X_n is Markov if it has the property that

P[X_{n+1} | X_n, X_{n-1}, X_{n-2}, ...] = P[X_{n+1} | X_n]

A continuous-time random process X(t) is Markov if it has the property that

P[X(t_{n+1}) | X(t_n), X(t_{n-1}), X(t_{n-2}), ...] = P[X(t_{n+1}) | X(t_n)]

If at time n you write a distribution for X_{n+1} given all past values of X, the distribution is no different from that using just the present value X_n. Given the present, the past does not matter. Note that how you define X_n is up to you.

Examples. For each one, write P[X(t_{n+1}) | X(t_n), X(t_{n-1}), ...] and P[X(t_{n+1}) | X(t_n)]:

- Brownian motion: The value of X_{n+1} is equal to X_n plus the random motion that occurs between time n and n+1. This motion is i.i.d. in a Brownian motion process.
- Any independent increments process (e.g., Poisson process).
- Gambling or investments.
- Digital systems. The state is described by what is in the computer's memory; the transitions may be non-random (described by a deterministic algorithm) or random. Randomness may arrive from input signals.
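As an illustrative sketch (not part of the notes), a plus/minus-1 random walk shows the independent-increments structure concretely: X_{n+1} is built from X_n plus an i.i.d. increment, so nothing before X_n ever enters the update.

```python
import random

def random_walk(n_steps, seed=0):
    """Simulate X_{n+1} = X_n + Z_n with i.i.d. increments Z_n in {-1, +1}.

    The update reads only the current state x[-1], never older values,
    which is exactly the Markov property for an independent-increments
    process."""
    rng = random.Random(seed)
    x = [0]
    for _ in range(n_steps):
        z = rng.choice([-1, +1])   # i.i.d. increment
        x.append(x[-1] + z)        # depends on the present state only
    return x

path = random_walk(10, seed=42)
```

The same skeleton covers the other examples: only the increment distribution changes.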
Notes:

- The value X_n is also called the state. The change from X_n to X_{n+1} is called the state transition.
- i.i.d. r.p.s are also Markov.
- The r.v. X_n can be either discrete-valued or continuous-valued and still have the Markov property. However, it must be discrete-valued in order to be represented in a Markov chain, which we will talk about next.

2 Markov Chains

When X_n is a Markov process and:

1. The r.v.s X_n are discrete-valued, and
2. The transition probabilities P[X_{n+1} | X_n] are not a function of n,

we can represent it as a Markov chain. Because the event space Ω is countable, we typically represent our range S_X as a set of integers. (If it wasn't, we could consider Y_i = g(X_i) to be a function which assigned a unique integer to each element of S_X.)

Def'n: Transition Probability
The probability of transition from state i to state j is denoted p_{i,j},

p_{i,j} = P[X_{n+1} = j | X_n = i]

2.1 Visualization

We make diagrams to show the possible progression of a Markov process. Each state is a circle; each transition is an arrow, labeled with the probability of that transition.

Example: Discrete Telegraph Wave r.p.
Let X_n be a Bernoulli r.p. with parameter p, and let

Y_n = prod_{i=1}^{n} (-1)^{X_i} = (-1)^{sum_{i=1}^{n} X_i} = Y_{n-1} (-1)^{X_n}

Each time a trial is a success, the r.p. Y_n switches from 1 to -1 or vice versa. See the state transition diagram drawn in Fig. 1.

Example: (Miller & Childers) Collect Them All
This is the fast food chain promotion with a series of toys for kids who are told to "Collect them all!". Let there be four toys, and
let X_n be the number out of four that you've collected after your nth visit to the chain. How many states are there? What are the transition probabilities?

Figure 1: A state transition diagram for the Discrete Telegraph Wave.

Figure 2: A state transition diagram for the "Collect Them All!" random process.

2.2 Single Step Transition Probability Matrices

This is Section 4.1. The transition probabilities satisfy:

1. p_{i,j} >= 0
2. sum_j p_{i,j} = 1

Note: sum_i p_{i,j} is not necessarily 1! Don't make this mistake. The probability of leaving state i for any state is equal to 1.

Def'n: State Transition Probability Matrix
The state transition probability matrix P of an N-state Markov chain is given by:

P = [ p_{1,1}  p_{1,2}  ...  p_{1,N}
      p_{2,1}  p_{2,2}  ...  p_{2,N}
      ...
      p_{N,1}  p_{N,2}  ...  p_{N,N} ]

Note: The rows sum to one; the columns may not. There may be N states, but they may not have values 1, 2, 3, ..., N. Thus if we don't have such values, we may create an
intermediate r.v. W_n which is equal to the rank of the value of X_n, or W_n = rank(X_n), for some arbitrary ranking system.

Example: Discrete telegraph wave
What is the TPM of the Discrete Telegraph Wave r.p.? Use: W_n = 1 for Y_n = -1, and W_n = 2 for Y_n = 1:

P = [ p_{-1,-1}  p_{-1,1} ]   [ 1-p   p  ]
    [ p_{1,-1}   p_{1,1}  ] = [  p   1-p ]

Example: Collect Them All
What is the TPM of the "Collect them all" example? Use W_n = X_n + 1. Assuming each visit yields one of the four toys with equal probability, a duplicate arrives with probability i/4 when i toys have been collected:

P = [ 0    1    0    0    0
      0   1/4  3/4   0    0
      0    0   2/4  2/4   0
      0    0    0   3/4  1/4
      0    0    0    0    1  ]

Example: Gambling $50
You start at a casino with 5 $10 chips. Each time n you bet one chip. You win with probability 0.45, and lose with probability 0.55. If you run out, you will stop betting. Also, you decide beforehand to stop if you double your money. What is the TPM for this random process?

Example: Waiting in a finite queue
A mail server (bank) can deliver one email (customer) at each
minute. But, X_n more emails (customers) arrive in minute n, where X_n is (i.i.d.) Poisson with parameter λ = 1 per minute. Emails (people) who can't be handled immediately are queued. But if the number in the queue, Y_n, is equal to 2, the queue is full, and emails will be dropped (customers won't stay and wait). Thus the number of emails in the queue (people in line) is given by

Y_{n+1} = min(2, max(0, Y_n - 1) + X_n)

What is P[X_n = k]?

P[X_n = k] = (λt)^k e^{-λt} / k! = 1/(e k!)

P[X_n = 0] = 1/e ≈ 0.37, P[X_n = 1] = 1/e ≈ 0.37, and P[X_n = 2] = 1/(2e) ≈ 0.18.

Since the min(2, ·) caps the queue, the last column collects all arrivals that would overflow:

P = [ p_{0,0}  p_{0,1}  p_{0,2} ]   [ 0.368  0.368  0.264 ]
    [ p_{1,0}  p_{1,1}  p_{1,2} ] ≈ [ 0.368  0.368  0.264 ]
    [ p_{2,0}  p_{2,1}  p_{2,2} ]   [ 0      0.368  0.632 ]

Example: Chute and Ladder
See Figure 3. You roll a die (a fair die) and move forward that number of squares. Then, if you land on top of a chute, you have to fall down to a lower square; if you land on the bottom of a ladder, you climb up to the higher square. The object is to land on Winner. You don't need to get there with an exact roll. This is a Markov Chain: your future square only depends on your present square and your roll. What are the states? They are

S_X = {1, 2, 4, 5, 7}

Since you'll never stay on 3 or 6, we don't need to include them as states. (We could, but there would just be 0 probability of landing on them, so why bother.) This is the transition probability matrix:

Figure 3: Playing board for the game, Chute and Ladder.
P = [ 0   1/6  2/6  2/6  1/6
      0    0   2/6  2/6  2/6
      0    0   1/6  1/6  4/6
      0    0    0   1/6  5/6
      0    0    0    0    1  ]

where the rows and columns correspond to states 1, 2, 4, 5, 7, in order.

Example: Countably Infinite Markov Chain
We can also have a countably infinite number of states. It is a discrete-valued r.p., after all; we might still have an infinite number of states. For example, if we didn't ever stop gambling at a fixed upper number. Or, if we allowed ourselves to get into arbitrary debts. Such an example, where we gamble $1 at each time, is shown in Figure 4.

Figure 4: Example of a Markov Chain with a countably infinite state space.

Example: Random Backoff
In medium access control (MAC) protocols for packet radio channels, a sender may transmit but have its packet collide with a packet from another sender who sent at the same time. If a collision occurs (which happens with probability p), each will wait a random back-off time prior to transmitting. This random back-off time is chosen to be uniform in {1, ..., W} for some maximum wait time W (ignoring the possible increase in W after multiple collisions). Figure 5 shows a transition diagram.

Figure 5: Markov Chain of the waiting time in a random back-off MAC protocol.

A TPM for this random process follows from the diagram.
2.3 Multi-step Markov Chain Dynamics

2.3.1 Initialization

We might not know in exactly which state the Markov chain will start. For example, for the bank queue example, we might have people lined up when the bank opens. Let's say we've measured over many days and found that at time zero, the number of people is uniformly distributed, i.e.,

P[X_0 = k] = 1/3 for k = 0, 1, 2, and 0 otherwise.

We represent this kind of information in a vector:

p(0) = [P[X_0 = 0], P[X_0 = 1], P[X_0 = 2]]

In general,

p(n) = [P[X_n = 0], P[X_n = 1], P[X_n = 2]]

The only requirement is that the sum of p(n) is 1 for any n.

2.3.2 Multiple-Step Transition Matrix

This is in Ross Section 4.2.

Def'n: n-step Transition Matrix
The n-step transition probability matrix P(n) of Markov chain X_n has (i,j)th element

p_{i,j}(n) = P[X_{n+m} = j | X_m = i]
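As a minimal sketch, the initial distribution vector from the bank-queue example can be pushed one step through a TPM by the sum p(n+1)[j] = sum_i p(n)[i] p_{i,j}. The TPM values here are the ones derived from the queue recursion, an assumption of this sketch rather than a matrix the notes print explicitly:

```python
import math

e = math.e
# Email-queue TPM (rows/columns are queue lengths 0, 1, 2), derived from
# Y_{n+1} = min(2, max(0, Y_n - 1) + X_n) with Poisson(1) arrivals; the
# last column absorbs all overflow arrivals.
P = [[1 / e, 1 / e, 1 - 2 / e],
     [1 / e, 1 / e, 1 - 2 / e],
     [0.0,   1 / e, 1 - 1 / e]]

def step(p, P):
    """One step of chain dynamics: p(n+1)[j] = sum_i p(n)[i] * P[i][j]."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

p0 = [1 / 3, 1 / 3, 1 / 3]   # uniform initial distribution at opening time
p1 = step(p0, P)             # queue-length distribution after one minute
```

Iterating `step` n times gives p(n); the vector stays a valid distribution at every step.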
Theorem: Chapman-Kolmogorov equations
For a Markov chain, the n-step transition matrix satisfies

P(n+m) = P(n) P(m)

Proof: Consider the (i,j) element of P(n+m), p_{i,j}(n+m):

p_{i,j}(n+m) = P[X_{n+m} = j | X_0 = i]
             = sum_k P[X_{n+m} = j, X_n = k | X_0 = i]

Why is this step true?

p_{i,j}(n+m) = sum_k P[X_{n+m} = j | X_n = k, X_0 = i] P[X_n = k | X_0 = i]
             = sum_k P[X_{n+m} = j | X_n = k] P[X_n = k | X_0 = i]
             = sum_k p_{k,j}(m) p_{i,k}(n)
             = sum_k p_{i,k}(n) p_{k,j}(m)

This latter form shows the matrix multiplication. When you have a sum of matrix elements, you should be able to recognize when that expression can be written as a matrix multiplication. Here, the dummy index is on the inside of the subscripts. This is how we can see that p_{i,j}(n+m) is equal to the sum of the products of row i of P(n) and column j of P(m). Thus

P(n+m) = P(n) P(m)

This means, to find the two-step transition matrix, you multiply (matrix multiply) P and P together. In general, the n-step transition matrix is

P(n) = [P(1)]^n

The state probabilities at time n can be found as

p(n) = p(0) [P(1)]^n

3 Markov Chain State Classification

This is in Leon-Garcia. There are quite a few definitions and terms which accompany Markov chains.

Def'n: Accessible
A state j is accessible from state i if p_{i,j}(n) > 0 for some n >= 0.
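A quick numerical check of the Chapman-Kolmogorov equations, using the discrete telegraph wave TPM with an illustrative switching probability p = 0.3 (the value is an assumption of this sketch):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def nstep(P, n):
    """n-step transition matrix P(n) = [P(1)]^n by repeated multiplication."""
    Q = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        Q = matmul(Q, P)
    return Q

p = 0.3                     # illustrative switching probability
P = [[1 - p, p],
     [p, 1 - p]]            # telegraph-wave TPM

lhs = nstep(P, 3)                        # P(2 + 1)
rhs = matmul(nstep(P, 2), nstep(P, 1))   # P(2) P(1)
```

The two sides agree element by element, and each n-step matrix remains row-stochastic.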
Notes:

- Note that a state always communicates with itself.
- "Accessible" also means that there is a positive probability that, starting at state i, state j will ever be entered.

Def'n: Communicate
States i and j communicate if:

- State j is accessible from state i, and
- State i is accessible from state j.

Notes:

- We write i <-> j if states i and j communicate. Of course it is a symmetric relation.
- "Communicates with" is also transitive, i.e., if i <-> j and j <-> k then i <-> k.

Def'n: Class
States which communicate with each other are in the same class.

Def'n: Irreducible
If all states in a Markov chain are in one class, then the chain is irreducible.

Example: Ross, 4.12
Consider the 4-state Markov chain with states {0, 1, 2, 3} and TPM P. Which states communicate? What class(es) exist? Is this MC irreducible?

Def'n: Absorbing
A state is absorbing if no other state is accessible from it.
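The accessibility and communication definitions translate directly into a reachability computation. A sketch using Warshall's transitive closure, with the Chute-and-Ladder chain (states {1, 2, 4, 5, 7}, indexed 0 to 4) as the example:

```python
def accessible(P):
    """reach[i][j] is True iff state j is accessible from state i,
    i.e. p_{i,j}(n) > 0 for some n >= 0 (n = 0 gives the diagonal)."""
    n = len(P)
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):                      # Warshall's transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

def classes(P):
    """Group states into communicating classes (mutual accessibility)."""
    reach = accessible(P)
    seen, out = set(), []
    for i in range(len(P)):
        if i in seen:
            continue
        cls = [j for j in range(len(P)) if reach[i][j] and reach[j][i]]
        seen.update(cls)
        out.append(cls)
    return out

# Chute-and-Ladder TPM over states {1, 2, 4, 5, 7}, indexed 0..4.
P = [[0, 1/6, 2/6, 2/6, 1/6],
     [0, 0, 2/6, 2/6, 2/6],
     [0, 0, 1/6, 1/6, 4/6],
     [0, 0, 0, 1/6, 5/6],
     [0, 0, 0, 0, 1]]
```

Here every state is its own class, since the board only moves forward; the chain is therefore not irreducible, and the Winner state is absorbing.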
Queuing Theory Richard Lockhart Simon Fraser University STAT 870 Summer 2011 Richard Lockhart (Simon Fraser University) Queuing Theory STAT 870 Summer 2011 1 / 15 Purposes of Today s Lecture Describe general
More informationMarkov Chains. INDER K. RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai , India
Markov Chains INDER K RANA Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai 400076, India email: ikrana@iitbacin Abstract These notes were originally prepared for a College
More informationLecture 20 : Markov Chains
CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called
More informationPart I Stochastic variables and Markov chains
Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)
More informationLectures on Probability and Statistical Models
Lectures on Probability and Statistical Models Phil Pollett Professor of Mathematics The University of Queensland c These materials can be used for any educational purpose provided they are are not altered
More informationHotelling s Two- Sample T 2
Chater 600 Hotelling s Two- Samle T Introduction This module calculates ower for the Hotelling s two-grou, T-squared (T) test statistic. Hotelling s T is an extension of the univariate two-samle t-test
More information18.440: Lecture 33 Markov Chains
18.440: Lecture 33 Markov Chains Scott Sheffield MIT 1 Outline Markov chains Examples Ergodicity and stationarity 2 Outline Markov chains Examples Ergodicity and stationarity 3 Markov chains Consider a
More informationMATHEMATICAL MODELLING OF THE WIRELESS COMMUNICATION NETWORK
Comuter Modelling and ew Technologies, 5, Vol.9, o., 3-39 Transort and Telecommunication Institute, Lomonosov, LV-9, Riga, Latvia MATHEMATICAL MODELLIG OF THE WIRELESS COMMUICATIO ETWORK M. KOPEETSK Deartment
More informationDiscrete Random Variable
Discrete Random Variable Outcome of a random experiment need not to be a number. We are generally interested in some measurement or numerical attribute of the outcome, rather than the outcome itself. n
More informationLecture 4a: Continuous-Time Markov Chain Models
Lecture 4a: Continuous-Time Markov Chain Models Continuous-time Markov chains are stochastic processes whose time is continuous, t [0, ), but the random variables are discrete. Prominent examples of continuous-time
More informationChater Matrix Norms and Singular Value Decomosition Introduction In this lecture, we introduce the notion of a norm for matrices The singular value de
Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A Dahleh George Verghese Deartment of Electrical Engineering and Comuter Science Massachuasetts Institute of Technology c Chater Matrix Norms
More informationDiscrete time Markov chains. Discrete Time Markov Chains, Definition and classification. Probability axioms and first results
Discrete time Markov chains Discrete Time Markov Chains, Definition and classification 1 1 Applied Mathematics and Computer Science 02407 Stochastic Processes 1, September 5 2017 Today: Short recap of
More informationCHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules
CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules. Introduction: The is widely used in industry to monitor the number of fraction nonconforming units. A nonconforming unit is
More informationChi-Squared Tests. Semester 1. Chi-Squared Tests
Semester 1 Goodness of Fit Up to now, we have tested hypotheses concerning the values of population parameters such as the population mean or proportion. We have not considered testing hypotheses about
More informationDr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur
Analysis of Variance and Design of Exeriment-I MODULE II LECTURE -4 GENERAL LINEAR HPOTHESIS AND ANALSIS OF VARIANCE Dr. Shalabh Deartment of Mathematics and Statistics Indian Institute of Technology Kanur
More information