Homework set 2 - Solutions


Math 495 - Renato Feres

Simulating a Markov chain in R

Generating sample sequences of a finite state Markov chain. The following is a simple program for generating sample sequences of a Markov chain. It consists of a function that takes in three arguments: N (the number of steps), π0 (the initial probability distribution), and P (the transition probabilities matrix), and produces a vector of length N, (X_1, ..., X_N), where each X_i lies in the state space {1,2,...,s}. Note that π0 must be a vector of length s and P must be an s-by-s matrix. Here is the program:

#It is assumed that the states are {1,2,...,s},
#where s is the length of pi0, and P is s-by-s.
#pi0 is the probability distribution of the initial step
#and P is the transition probabilities matrix.
Markov=function(N,pi0,P) {
  #Number of states
  s=length(pi0)
  X=matrix(0,1,N)
  a=sample(c(1:s),1,replace=TRUE,pi0)
  X[1]=a
  for (i in 2:N) {
    a=sample(c(1:s),1,replace=TRUE,P[a,])
    X[i]=a
  }
  X
}

To test that it works as expected, let us do the following experiment. Let π0 = (1,0) and

    P = [ 0.8  0.2
          0.3  0.7 ]

We generate N = 10^5 steps of the sequence with X=Markov(100000,pi0,P) and compute the relative frequency of occurrences of state 1 (the set of states being {1,2}). It is not difficult to compute the exact stationary distribution: π = (0.6, 0.4).
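As a quick check of this exact value, here is a minimal sketch (not part of the original solution) that computes π numerically as the normalized left eigenvector of P for the eigenvalue 1, the same device used in Problem 3(c) below:

#Hedged check: the stationary distribution is the left eigenvector of P
#for the eigenvalue 1, normalized so that its entries sum to 1.
P=matrix(c(0.8,0.2,0.3,0.7),2,2,byrow=TRUE)
eigen(t(P))$values          #confirm that the first eigenvalue is indeed 1
v=eigen(t(P))$vectors[,1]
v/sum(v)                    #prints 0.6 0.4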

Let us compute the approximate stationary probability of 1 using the Markov chain simulation:

#Define the probability distribution of X0
pi0=c(1,0)
#Define the transition probabilities matrix
P=matrix(0,2,2)
P[1,]=c(0.8,0.2)
P[2,]=c(0.3,0.7)
#Number of steps
N=10^5
> X=Markov(N,pi0,P)
> sum(X==1)/N

The result is sufficiently close to 0.6 to make me believe that the program has no bugs.

Here is a variant of the above program. We wish to find the number of steps, beginning at state i, till the chain with transition matrix P reaches a different state j. We count step 0 but not the step at which the chain is at j.

#This program gives the number of steps in one
#run of the Markov chain with transition probabilities
#matrix P, starting at i, till it reaches j. We count
#the 0th step but not the step when the chain is at j.
#Be careful: if the probability of eventually reaching j from
#i is 0, the program will not stop!
Number_of_steps=function(i,j,P) {
  T=0
  u=dim(P)
  s=u[1]
  X=i
  while (X!=j) {
    X=sample(c(1:s),1,replace=TRUE,P[X,])
    T=T+1
  }
  return(T)
}

Suppose we wish to find the expected number of steps it takes to get to state 2 starting at 1 for the two-state chain with transition matrix P given above. We can do the following:

P=matrix(0,2,2)
P[1,]=c(0.8,0.2)
P[2,]=c(0.3,0.7)
N=1000
T=0*c(1:N)
for (n in 1:N){
  T[n]=Number_of_steps(1,2,P)
}
sum(T)/N

Note that we are approximating the expected time by the mean number of steps over N sample values. One run of this program, for N = 1000, gave me a value close to the exact answer, which is 5; it is not difficult to show this using the material of section 1.4 of the textbook.
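The exact value can also be obtained by a one-line matrix computation. Here is a minimal sketch (not part of the original solution) using the fundamental matrix M = (I - Q)^{-1} that is derived in Problem 2(d) below; if we make state 2 absorbing, the transient block of this chain is the 1-by-1 matrix Q = (0.8):

#Hedged check: exact expected hitting time of state 2 starting from state 1.
Q=matrix(0.8,1,1)    #transient block once state 2 is made absorbing
M=solve(diag(1)-Q)   #fundamental matrix, here just 1/(1-0.8)
sum(M[1,])           #expected number of steps: 5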

Let us do one more experiment based on the chain of the first example above. Here it will be convenient to label the states as 0 or 1, so the state set is S = {0,1}. We are interested in the frequency over the long run of patterns of 4 consecutive symbols: 0010, 1001, etc. Each pattern corresponds to an integer expressed in base 2. So, for example, 0010 corresponds to 4, 1001 corresponds to 9, and so forth. In general, the correspondence between patterns and integers is given by

    a0 a1 a2 a3  <-->  a0 + a1*2 + a2*2^2 + a3*2^3.

This naturally defines another Markov chain with 16 states and state transitions

    a_i a_{i+1} a_{i+2} a_{i+3}  -->  a_{i+1} a_{i+2} a_{i+3} a_{i+4}.

#Define the probability distribution of X0
pi0=c(1,0)
#Define the transition probabilities matrix
P=matrix(0,2,2)
P[1,]=c(0.8,0.2)
P[2,]=c(0.3,0.7)
#Number of steps
N=10^5
X=Markov(N,pi0,P)
sum(X==1)/N
#Subtract 1 from each entry of X so that states are 0 and 1
X=X-1
#Initialize an array Y. It will have integer entries from 0 to 15.
#Make it all zeros initially:
Y=matrix(0,1,N-3)
#Now for each index i associate the integer from 0 to 15 corresponding
#to the binary string of 4 symbols given by X[i] ... X[i+3]
for (i in 1:(N-3)) {
  Y[i]=X[i]*1+X[i+1]*2+X[i+2]*4+X[i+3]*8
}
#We can get an idea of the relative frequencies of the various binary
#patterns by plotting a histogram:
hist(Y,freq=FALSE,breaks=seq(from=-0.5,to=15.5,by=1))
#We can also get the relative frequencies of the 16 patterns directly:
rel_freq = matrix(0,1,16)
for (k in 0:15) {
  rel_freq[k+1]=sum(Y==k)/(N-3)
}
#The same vector of relative frequencies could have been obtained
#from the histogram as follows:
h=hist(Y,freq=FALSE,breaks=seq(from=-0.5,to=15.5,by=1))
rel_freq_from_hist = h$density
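For comparison, the exact long-run pattern frequencies are easy to compute: by stationarity, the pattern a0 a1 a2 a3 occurs with limiting frequency π(a0)P(a0,a1)P(a1,a2)P(a2,a3), where π = (0.6,0.4) is the stationary distribution found earlier. A minimal sketch (not in the original; it assumes the matrix P defined above is still in the workspace):

#Hedged check: exact long-run frequencies of the 16 patterns, to compare
#with the simulated vector rel_freq.
pstat=c(0.6,0.4)                  #stationary distribution of the two-state chain
exact_freq=numeric(16)
for (k in 0:15) {
  a=(k %/% c(1,2,4,8)) %% 2 + 1   #the four binary digits of k, as states 1 and 2
  exact_freq[k+1]=pstat[a[1]]*P[a[1],a[2]]*P[a[2],a[3]]*P[a[3],a[4]]
}
round(exact_freq,4)               #these should match the histogram closely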

The histogram is shown next.

Figure 1: Histogram for the relative frequencies of patterns of 4 symbols for a two-state Markov chain.

Problems

1. (Similar to Exercise 1.5, page 36 in text.) Consider the Markov chain with state space S = {0,...,5} and transition matrix

    P = [ 0.1  0     0     0    0.9   0
          0    0.7   0.3   0    0     0
          0    0.5   0.5   0    0     0
          0    0.25  0.25  0    0.25  0.25
          0.7  0     0     0    0.3   0
          0    0.2   0     0.2  0.2   0.4 ]

(a) Draw a states and transitions graph.
(b) What are the communication classes?
(c) Which communication classes are recurrent and which are transient?
(d) Suppose the system starts in state 2. What is the probability that it will be in state 2 at some large time?
(e) Suppose the system starts in state 5. What is the probability that it will be in state 5 at some large time?
(f) Numerically obtain P^500. Are your answers to (d) and (e) supported by the values of the entries of this matrix?

Solution. (a) The transitions diagram is shown below in Figure 2.

Figure 2: Graph of Problem 1.

(b) It is clear from the diagram that the communication classes are {1,2}, {3,5}, and {0,4}.
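The classes can also be double-checked numerically. Here is a hedged sketch (not part of the original solution); it uses the fact that state j is accessible from i exactly when the (i,j) entry of (I + P)^(s-1) is positive, with s = 6 states, and it assumes P has been entered as in part (f) below:

#Hedged check of the communication classes.
library(expm)            #provides the matrix power operator %^%
A=(diag(6)+P) %^% 5      #entry (i,j) is positive iff j is reachable from i
comm=(A>0)&t(A>0)        #i and j communicate iff each is reachable from the other
comm                     #states with identical rows form one communication class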

(c) The classes {1,2} and {0,4} are recurrent while {3,5} is transient.

(d) The subset of states {1,2} is invariant. This means that if the Markov chain has initial probability distribution supported on this set, it will never leave the set. In this case, the chain behavior is determined by the matrix block

    [ 0.7  0.3
      0.5  0.5 ]

The stationary distribution is given by the left eigenvector associated to the eigenvalue 1. This is the row vector π that satisfies π(I - P) = 0. In detail,

    (π_1, π_2) [  0.3  -0.3
                 -0.5   0.5 ] = (0, 0),

which gives 0.3π_1 = 0.5π_2. Since π_i ≥ 0 and π_1 + π_2 = 1 we obtain

    π = (5/8, 3/8).

Thus in the long run the probability that the chain will be back at 2 is 3/8.

(e) Because 5 is a transient state, the probability that the state will be 5 at some large time approaches 0.

(f) I will do this in R. (The matrix power operator %^% used below is provided by the expm package.)

> library(expm)
> P=matrix(0,nrow=6,ncol=6)
> P[1,]=c(0.1,0,0,0,0.9,0)
> P[2,]=c(0,0.7,0.3,0,0,0)
> P[3,]=c(0,0.5,0.5,0,0,0)
> P[4,]=c(0,0.25,0.25,0,0.25,0.25)
> P[5,]=c(0.7,0,0,0,0.3,0)
> P[6,]=c(0,0.2,0,0.2,0.2,0.4)
> P%^%500

(The printed 6-by-6 matrix is not reproduced here; its entries in the columns corresponding to the transient states 3 and 5 are zero or of order 1e-151.)

> 3/8
[1] 0.375

The matrix P^500 confirms our conclusions: the probability of 2, having started at 2, seems to stabilize at 0.375, which is indeed 3/8. Regardless of where the chain begins, the probability of being at state 5 (this corresponds to the 6th column) at step 500 is negligibly small.

2. (Text, Exercise 1.19, page 40.) Suppose we flip a fair coin repeatedly until we have flipped four consecutive heads.

(a) Give a Markov chain (by drawing its transition diagram with the transition probabilities indicated next to the arrows) that represents this random experiment. (Hint: consider a Markov chain with state space S = {0,1,2,3,4}. The state at any given time is the number of consecutive heads since the last time a tail came up. The state is reset to 0 at each occurrence of a tail, and 4 is an absorbing state.)
(b) Write down the transitions matrix P.
(c) Describe the communication classes and indicate which are transient and which are recurrent.
(d) What is the expected number of flips that are needed?
(e) Do a computer simulation to verify that your answer is reasonable. I suggest the following: modify the Markov chain program given in the preliminaries section of this assignment so that it gives the number of steps till a given state is reached (in this case, the state 4) for each run of the chain. Then run the chain, starting at 0, a large number of times so as to get a large number of sample values of the random variable T_4. Then compute the sample mean of T_4. This mean should, by the law of large numbers, approximate the expected value we want.

Solution. (a) The transitions diagram is given in Figure 3.

Figure 3: Diagram for Problem 2.

(b) The transitions matrix is

    P = [ 1/2  1/2  0    0    0
          1/2  0    1/2  0    0
          1/2  0    0    1/2  0
          1/2  0    0    0    1/2
          0    0    0    0    1  ]

(c) It is clear from the diagram that {0,1,2,3} is a transient communication class and {4} is a recurrent communication class.

(d) My solution to this problem will seem longer than necessary because it will derive the relevant part of the theory given in section 1.4 of the textbook rather than use the results given there. Define the substochastic matrix

    Q = [ 1/2  1/2  0    0
          1/2  0    1/2  0
          1/2  0    0    1/2
          1/2  0    0    0   ]

Note that Q gives the transition probabilities among the transient states. Let N(k) be the number of times the chain passes through transient state k before reaching the recurrent state 4, and let 1_k be the indicator function of the state k. Thus 1_k(X_n) = 1 if at time step n the state of the chain is k, and 0 if not. Thus

    N(k) = 1_k(X_0) + 1_k(X_1) + ...

and the expected number of visits to k before arriving at 4 is

    E[N(k) | X_0 = 0] = Σ_{n=0}^∞ E[1_k(X_n) | X_0 = 0] = Σ_{n=0}^∞ (Q^n)_{0k}.

Also note that the number of steps before getting to state 4 (counting time 0 as one step) is T_4 = N(0) + N(1) + N(2) + N(3), and the expected value of T_4 is the sum of the expected values of the N(k), k ≠ 4. A key remark is

    I + Q + Q^2 + ... = (I - Q)^{-1} =: M.

Therefore,

    E[T_4 | X_0 = 0] = M_00 + M_01 + M_02 + M_03.

A matrix inversion exercise (you may use R or other software if you like) gives

    M = [ 16  8  4  2
          14  8  4  2
          12  6  4  2
           8  4  2  2 ]

The value we want is then E[T_4 | X_0 = 0] = 16 + 8 + 4 + 2 = 30.
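Here is the matrix inversion carried out in R, as a small sketch (my addition, not part of the original text), using the same solve(...) idiom that appears in Problem 3 below:

#Hedged sketch: compute M = (I-Q)^{-1} and the expected number of flips.
Q=matrix(0,4,4)
Q[,1]=1/2                           #a tail resets the count to state 0
Q[1,2]=1/2; Q[2,3]=1/2; Q[3,4]=1/2  #a head advances the count by one
M=solve(diag(4)-Q)                  #fundamental matrix
sum(M[1,])                          #expected number of flips: 30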

(e) See the example in the preliminaries to this homework set. The expected value we want can be computed as follows:

P=matrix(0,nrow=5,ncol=5)
P[1:4,1]=1/2
P[1,2]=1/2
P[2,3]=1/2
P[3,4]=1/2
P[4,5]=1/2
P[5,5]=1
N=10000
T=0*c(1:N)
for (n in 1:N){
  T[n]=Number_of_steps(1,5,P)
}
sum(T)/N

With N = 10000 sample values of T_4, here are a few of the values I obtained for the sample mean: 30.22, 29.51, ... This seems to confirm our solution.

3. (Text, Exercise 1.8, page 36.) Consider simple random walk on the graph below. (Recall that a simple random walk on a graph is the Markov chain which at each time moves to an adjacent vertex, each adjacent vertex having the same probability.)

(a) Draw the transitions diagram with the values of the transition probabilities next to each arrow.
(b) Write the transitions matrix P. (States A, B, etc. correspond to row and column indices 1, 2, etc.)
(c) In the long run, about what fraction of time is spent in vertex A?
(d) Suppose a walker starts in vertex A. What is the expected number of steps until the walker returns to A?
(e) Suppose a walker starts in vertex C. What is the expected number of visits to B before reaching A?
(f) Suppose a walker starts in vertex D. What is the probability that the walker reaches A before reaching B?
(g) Again assume the walker starts in C. What is the expected number of steps until the walker reaches A?

Figure 4: Graph of Problem 3 (vertices A, B, C, D, E).

Solution. (a) The transitions diagram is shown in Figure 5.

Figure 5: Transition probabilities diagram for the random walk of Problem 3.

(b) The matrix P is immediately read from the diagram:

    P = [ 0    1/3  1/3  1/3  0
          1/3  0    1/3  0    1/3
          1/2  1/2  0    0    0
          1/2  0    0    0    1/2
          0    1/2  0    1/2  0   ]

(c) This chain is clearly recurrent since it is both irreducible and finite. We know that, in the long run, the fraction of time spent in a given vertex equals the stationary probability of that state. Thus we need to compute the stationary probability vector π. Recall that, by definition, πP = π. This equation amounts to a linear system, which can be solved in several ways. In R you may use the command eigen(t(P))$vec[,1] to obtain the (unnormalized) right eigenvector of the transpose of P for the eigenvalue 1. (You will want to also look at the eigenvalues, eigen(t(P))$val, to be sure that the first eigenvalue is indeed 1.) It is also not hard to solve the system by the method of row reduction. In any event, you should get

    π = (1/4, 1/4, 1/6, 1/6, 1/6).

From this we immediately read off the fraction of time, 1/4, that the chain spends in A in the long run.

(d) We know that the expected number of steps back to A is 1/π(A), which is 4.
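There is also a quick way to double-check π (this aside is not in the original solution): for a simple random walk on a graph, the stationary probability of a vertex is its degree divided by the sum of all the degrees. A minimal sketch:

#Hedged check: for simple random walk on a graph, pi(v) = deg(v)/sum(deg).
deg=c(3,3,2,2,2)      #degrees of A, B, C, D, E read off the graph
deg/sum(deg)          #prints 0.25 0.25 0.1667 0.1667 0.1667, as claimed
1/(deg/sum(deg))[1]   #expected return time to A: 4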

(e) The approach here is similar to that of Problem 2(d). We first modify the matrix P so that state A becomes absorbing, calling the new transition matrix P̃:

    P̃ = [ 1    0    0    0    0
          1/3  0    1/3  0    1/3
          1/2  1/2  0    0    0
          1/2  0    0    0    1/2
          0    1/2  0    1/2  0   ]

The problem is now to find the total number of visits to B for a chain that starts at C, having transition matrix P̃. Recall that the key is to study the (southeast) block submatrix

    Q = [ 0    1/3  0    1/3
          1/2  0    0    0
          0    0    0    1/2
          1/2  0    1/2  0   ]

Let M = (I - Q)^{-1}. Then by the same argument used above in Problem 2(d), the expected number of times the walker visits B starting at C is given by the entry M_21. I will calculate M using R:

P=matrix(0,5,5)
P[1,2:4]=1/3
P[2,c(1,3,5)]=1/3
P[3,1:2]=1/2
P[4,c(1,5)]=1/2
P[5,c(2,4)]=1/2
Q=P[2:5,2:5]
solve(diag(4)-Q)

          [,1]      [,2]      [,3]      [,4]
[1,] 1.6363636 0.5454545 0.3636364 0.7272727
[2,] 0.8181818 1.2727273 0.1818182 0.3636364
[3,] 0.5454545 0.1818182 1.4545455 0.9090909
[4,] 1.0909091 0.3636364 0.9090909 1.8181818

This gives the value M_21 = 0.8181818. With some guesswork based on this matrix we can find the exact inverse, and the value M_21 = 9/11.

(f) The basic idea is explained on page 29 of the textbook. Let us change the matrix P so that states A and B are now absorbing. I will again call the new transition matrix P̃. So

    P̃ = [ 1    0    0    0    0
          0    1    0    0    0
          1/2  1/2  0    0    0
          1/2  0    0    0    1/2
          0    1/2  0    1/2  0   ]

Thus P̃ has the block form

    P̃ = [ I  0
          S  Q ]

where

    S = [ 1/2  1/2        Q = [ 0  0    0
          1/2  0                0  0    1/2
          0    1/2 ],           0  1/2  0   ].

Define the matrix α as the solution to the equation α = S + Qα. Therefore, α = (I - Q)^{-1} S. Then the probability that the chain starting at D will reach A before B is α_21. It is now an easy computation to obtain

    (I - Q)^{-1} = [ 1  0    0
                     0  4/3  2/3
                     0  2/3  4/3 ]

and

    α = (I - Q)^{-1} S = [ 1/2  1/2
                           2/3  1/3
                           1/3  2/3 ].

We conclude that the probability of reaching A before B, starting at D, is 2/3.

(g) As we saw in Problem 2, this is given by M_21 + M_22 + M_23 + M_24 = 9/11 + 14/11 + 2/11 + 4/11 = 29/11 ≈ 2.64.
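Parts (f) and (g) can be double-checked numerically as well. A hedged sketch (my addition, not in the original), reusing the 5-by-5 matrix P entered in part (e); the names Pt, Qf, and alpha are mine:

#Hedged check of parts (f) and (g).
Pt=P
Pt[1,]=c(1,0,0,0,0)          #make A absorbing
Pt[2,]=c(0,1,0,0,0)          #make B absorbing
S=Pt[3:5,1:2]                #transient-to-absorbing block
Qf=Pt[3:5,3:5]               #transient-to-transient block
alpha=solve(diag(3)-Qf)%*%S
alpha[2,1]                   #part (f): from D, P(reach A before B) = 2/3
M=solve(diag(4)-P[2:5,2:5])  #the M of part (e), with only A absorbing
sum(M[2,])                   #part (g): expected steps from C to A, 29/11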

4. (From somewhere on the internet.) I assume for simplicity that you are the only player of this board game. Start at square 1 and move at each step one, two, or three squares ahead according to the outcome of spinning a roulette wheel. If you land at the bottom of a ladder, immediately climb it. If you land on the head of a snake, slide down to the tip of the snake's tail. The game ends upon reaching square 12. The only way to reach 12 from squares 10 and 11 is by drawing 2 or 1, respectively; if the roulette gives a higher number than necessary, stay where you are. We wish to run a stochastic simulation of this game by using the Markov chain program.

Figure 6: The snakes-and-ladders game.

(a) What is the expected number of steps to go from 1 to 12? What is its standard deviation?
(b) Draw a histogram of the distribution of the number of steps. (Be careful with the breaks!)

Before writing your program, convince yourself that my transition probabilities diagram given below correctly interprets the rules of the game. The simulation consists of running the game to the finish a large number of times and obtaining the statistics asked for in the problem.

Figure 7: Markov transitions diagram for the snakes-and-ladders game.

Remarks about this problem: First note that our original program Markov needs some changes. In that program we have a deterministic stopping time N, but now this time is random and we cannot tell in advance what it is. The following modifications should take care of this problem. The input variable TerminalStates is an array giving the states at which the chain should stop.

In this example we have the single-element array c(8), since 8 is the index of state 12. (The mathematical chain simply goes on forever inside this terminal set; once a sample path reaches 12 it stays at 12 forever, but in the program we wish to interrupt it at the first occurrence of 12.)

################################
#We modify the first Markov program to allow
#for a set of terminal states, at which the
#chain must stop. Let TerminalStates be the array of
#such states.
Markov_stopped=function(pi0,P,TerminalStates) {
  #Number of states
  s=length(pi0)
  X=matrix(0,1,1000)
  a=sample(c(1:s),1,replace=TRUE,pi0)
  X[1]=a
  i=1
  while (X[i] %in% TerminalStates == FALSE) {
    a=sample(c(1:s),1,replace=TRUE,P[a,])
    i=i+1
    X[i]=a
  }
  U=X[1:i]
  return(U)
}
###############################
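As a quick sanity check (my addition, not part of the original), we can try Markov_stopped on the two-state chain from the preliminaries, stopping the first time state 2 is reached:

#Hedged test of Markov_stopped on the two-state chain used earlier.
pi0=c(1,0)
P=matrix(c(0.8,0.2,0.3,0.7),2,2,byrow=TRUE)
path=Markov_stopped(pi0,P,c(2))
length(path)-1   #number of steps taken; over many runs this should average 5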

Because the states and their indices do not coincide, it could be helpful to translate the output of the previous program (a string of indices) into a string of actual states. This is a minor point, but the following script does the translation:

###############################
#This function associates to each of the indices
#1, 2, 3, 4, 5, 6, 7, 8 the corresponding states:
#1, 2, 3, 5, 8, 10, 11, 12
Substitute_state = function(x) {
  a1 = which(x==4)
  a2 = which(x==5)
  a3 = which(x==6)
  a4 = which(x==7)
  a5 = which(x==8)
  u = x
  u[a1] = 5
  u[a2] = 8
  u[a3] = 10
  u[a4] = 11
  u[a5] = 12
  return(u)
}
################################

To test the program, here's what one trial run would look like (you'll need to define pi0=c(1, 0, 0, 0, 0, 0, 0, 0) and P, the latter being the matrix of transition probabilities):

> Substitute_state(Markov_stopped(pi0,P,c(8)))

(The output is one sample path of board squares, ending at the first occurrence of 12.)

Having a program to simulate the Markov chain with a stopping time, we can now answer the questions posed. For example, for part (a), run the chain from state 1 till it stops a large number of times (say, n = 10000). Collect the number of steps in each run into a vector U, then find mean(U) and sd(U).

Solution. (a) Before running the new Markov chain program, we need the transition probabilities matrix and the initial probability vector. They are:

P = matrix(0,8,8,byrow=TRUE)
a = 1/3
b = 2/3
P[1,]=c(0, a, a, a, 0, 0, 0, 0)
P[2,]=c(0, 0, a, b, 0, 0, 0, 0)
P[3,]=c(0, 0, 0, b, 0, 0, a, 0)
P[4,]=c(0, a, 0, 0, a, 0, a, 0)
P[5,]=c(0, 0, 0, 0, a, a, a, 0)
P[6,]=c(0, 0, 0, 0, 0, a, a, a)
P[7,]=c(0, 0, 0, 0, 0, 0, b, a)
P[8,]=c(0, 0, 0, 0, 0, 0, 0, 1)
pi0=c(1, 0, 0, 0, 0, 0, 0, 0)
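Before running the experiment it is worth a one-line sanity check (my addition, not in the original):

#Hedged sanity check: every row of a transition matrix must sum to 1.
rowSums(P)   #should print eight 1's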

We then run the experiment:

#Part (a): number of steps to reach terminal state 12.
#The number of trial runs (sample paths) of the Markov chain is
N = 10000
#For each trial we keep a record of the number of steps in a vector U.
#I'll initialize U as a vector of zeros:
U = 0*c(1:N)
#We can now run the experiment:
for (i in 1:N) {
  X=Markov_stopped(pi0, P, c(8))
  U[i]=length(X)
}
#The values we want are
> mean(U)
> sd(U)

Therefore, if U is the random variable giving the number of steps till reaching the terminal state, the two numbers just printed estimate the mean value of U and the standard deviation of U, respectively.

(b) The histogram is obtained with the R command

hist(U,breaks=seq(from=0.5, to=35.5, by=1),freq=FALSE)

Figure 8: Histogram for the number of steps till reaching the terminal state; part (b) of Problem 4.
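Finally, the simulated mean can be checked against an exact computation, in the style of Problem 2(d). A hedged sketch (my addition, not part of the original solution); note that U[i] = length(X) counts the starting square as well as the moves, so mean(U) should come out about one unit larger than the expected number of moves:

#Hedged check: exact expected number of moves from square 1 to square 12.
Q=P[1:7,1:7]         #transient block (indices 1..7 of the 8-state chain)
M=solve(diag(7)-Q)   #fundamental matrix (I-Q)^{-1}
sum(M[1,])           #expected number of moves: 271/38, about 7.13 by this computation
sum(M[1,])+1         #what mean(U) above estimates, about 8.13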
