Quantitative Verification


1 Quantitative Verification
Chapter 3: Markov chains
Jan Křetínský, Technical University of Munich, Winter 2017/18

2 Motivation

3 Example: Simulation of a die by coins (Knuth & Yao)
Simulating a fair die by a fair coin: starting from state s0, each coin flip branches with probability 1/2 until one of the outcomes 1, ..., 6 is reached.
[Figure: the Knuth-Yao coin-flipping tree with internal states s0, ..., s6 and the six die outcomes as leaves.]
Quiz: What is the probability of obtaining 2? Is the probability of obtaining 3 equal to 1/6?

4 DTMC - Graph-based Definition
Definition: A discrete-time Markov chain (DTMC) is a tuple (S, P, π0) where
- S is the set of states,
- P : S × S → [0, 1] with Σ_{s'∈S} P(s, s') = 1 for every s ∈ S is the transition matrix, and
- π0 ∈ [0, 1]^S with Σ_{s∈S} π0(s) = 1 is the initial distribution.
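The definition maps directly onto a data representation with well-formedness checks. A minimal sketch in Python; the two-state weather chain and its numbers are hypothetical, chosen only for illustration:

```python
# A DTMC (S, P, pi0) as plain Python data: P maps each state to a
# distribution over successor states.  The weather chain below and its
# numbers are illustrative assumptions, not from the slides.
P = {
    "sun":  {"sun": 0.8, "rain": 0.2},
    "rain": {"sun": 0.4, "rain": 0.6},
}
pi0 = {"sun": 1.0, "rain": 0.0}

# Well-formedness from the definition: every row of P and the initial
# vector pi0 must be a probability distribution (entries summing to 1).
for s, row in P.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9, s
assert abs(sum(pi0.values()) - 1.0) < 1e-9
print("valid DTMC over", sorted(P))
```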

5 Example: Craps
Modelling the two-dice game as a Markov chain:
- First roll: {7, 11} win, {2, 3, 12} lose, otherwise the point is set to s = result.
- Next rolls: rolling s wins, rolling 7 loses, otherwise roll again.
[Figure, built up over three slides: a chain with states init, won, lost and one state per point value, with transition probabilities such as 2/9 from init to won and 1/9 from init to lost.]

8 Example: Zero Configuration Networking (Zeroconf)
Previously: manual assignment of IP addresses.
Zeroconf: dynamic configuration of local IPv4 addresses.
Advantage: simple devices able to communicate automatically.
Automatic Private IP Addressing (APIPA), RFC 3927; used when DHCP is configured but unavailable.
- Pick an address at random from the link-local range.
- Find out whether anybody else uses this address (by sending several ARP requests).
Model: Randomly pick an address among the K = 65024 addresses. With m hosts in the network, the collision probability is q = m/K. Send 4 ARP requests. In case of a collision, the probability of no answer to an ARP request is p (due to the lossy channel).

9 Example: Zero Configuration Networking (Zeroconf)
[Figure: Zeroconf as a Markov chain, with states start, s0, ..., s8 and error; the initial branch has probabilities q and 1−q, and each ARP probe either fails with probability p or succeeds with probability 1−p.]
For 100 hosts and p = 0.001, the probability of error is …

10 Probabilistic Model Checking
What is probabilistic model checking?
- Probabilistic specifications, e.g. the probability of reaching bad states shall be smaller than 0.01.
- Probabilistic model checking is an automatic verification technique for this purpose.
Why quantities?
- Randomized algorithms
- Faults, e.g. due to the environment, lossy channels
- Performance analysis, e.g. reliability, availability

11 Basics of Probability Theory (Recap)

12 What are probabilities? - Intuition
Throwing a fair coin: The outcome head has a probability of 0.5. The outcome tail has a probability of 0.5.

13 What are probabilities? - Intuition
Throwing a fair coin: The outcome head has a probability of 0.5. The outcome tail has a probability of 0.5.
But... [Bertrand's Paradox] Draw a random chord on the unit circle. What is the probability that its length exceeds the length of a side of the equilateral triangle inscribed in the circle?
[Shown over three slides: depending on how "random chord" is made precise, one obtains different answers, e.g. 1/3, 1/2 or 1/4 — which is why probabilities must be defined over a precisely specified probability space.]

16 Probability Theory - Probability Space
Definition: Probability Function
Given sample space Ω and σ-algebra F, a probability function P : F → [0, 1] satisfies:
- P(A) ≥ 0 for A ∈ F,
- P(Ω) = 1, and
- P(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i) for pairwise disjoint A_i ∈ F.
Definition: Probability Space
A probability space is a tuple (Ω, F, P) with a sample space Ω, σ-algebra F ⊆ 2^Ω and probability function P.
Example: A random real number taken uniformly from the interval [0, 1].
- Sample space: Ω = [0, 1].
- σ-algebra: F is the minimal superset of {[a, b] | 0 ≤ a ≤ b ≤ 1} closed under complementation and countable union.
- Probability function: P([a, b]) = b − a; by Carathéodory's extension theorem there is a unique way to extend it to all elements of F.

18 Random Variables

19 Random Variables - Introduction
Definition: Random Variable
A random variable X is a measurable function X : Ω → I to some I. Elements of I are called random elements. Often I = R.
Example (Bernoulli Trials)
Throwing a coin 3 times: Ω3 = {hhh, hht, hth, htt, thh, tht, tth, ttt}. We define 3 random variables X_i : Ω → {h, t}. For all x, y, z ∈ {h, t}: X_1(xyz) = x, X_2(xyz) = y, X_3(xyz) = z.

20 Stochastic Processes and Markov Chains

21 Stochastic Processes - Definition
Definition: Given a probability space (Ω, F, P), a stochastic process is a family of random variables {X_t | t ∈ T} defined on (Ω, F, P). For each X_t we assume X_t : Ω → S where S = {s_1, s_2, ...} is a finite or countable set called the state space.
A stochastic process {X_t | t ∈ T} is called discrete-time if T = N and continuous-time if T = R≥0. For the following lectures we focus on discrete time.

22 Discrete-time Stochastic Processes - Construction (1)
Example: Weather Forecast
- S = {sun, rain}, we model time as discrete,
- a random variable for each day: X_0 is the weather today, X_i is the weather in i days,
- how can we set up the probability space to measure e.g. P(X_i = sun)?

23 Discrete-time Stochastic Processes - Construction (2)
Let us fix a state space S. How can we construct the probability space (Ω, F, P)?
Definition: Sample Space Ω
We define Ω = S^ω, the set of infinite sequences over S. Then each X_n maps a sample ω = ω_0 ω_1 ... onto the respective state at time n, i.e., X_n(ω) = ω_n ∈ S.

24 Discrete-time Stochastic Processes - Construction (3)
Definition: Cylinder Set
For s_0 ... s_n ∈ S^{n+1}, we set the cylinder C(s_0 ... s_n) = {s_0 ... s_n ω | ω ∈ S^ω}.
[Figure: for S = {s_1, s_2, s_3}, the cylinder C(s_1 s_3) as the set of all trajectories that start s_1 s_3.]
Definition: σ-algebra F
We define F to be the smallest σ-algebra that contains all cylinder sets, i.e., {C(s_0 ... s_n) | n ∈ N, s_i ∈ S} ⊆ F.
Check: Is each X_i measurable? (On the discrete set S we assume the full σ-algebra 2^S.)

25 Discrete-time Stochastic Processes - Construction (4)
How to specify the probability function P? We only need to specify P(C(s_0 ... s_n)) for each s_0 ... s_n ∈ S^{n+1}. This amounts to specifying
1. P(C(s_0)) for each s_0 ∈ S, and
2. P(C(s_0 ... s_i) | C(s_0 ... s_{i−1})) for each s_0 ... s_i ∈ S^{i+1},
since
P(C(s_0 ... s_n)) = P(C(s_0 ... s_n) | C(s_0 ... s_{n−1})) · P(C(s_0 ... s_{n−1}))
                 = P(C(s_0)) · Π_{i=1}^{n} P(C(s_0 ... s_i) | C(s_0 ... s_{i−1}))
Still, lots of possibilities...

27 Discrete-time Stochastic Processes - Construction (5)
Weather Example: Option 1 - statistics of days of a year
- the forecast starts on Jan 1,
- a distribution p_j over {sun, rain} for each 1 ≤ j ≤ 365,
- for each i ∈ N and s_0 ... s_i ∈ S^{i+1}: P(C(s_0 ... s_i) | C(s_0 ... s_{i−1})) = p_{i mod 365}(s_i)
Weather Example: Option 2 - two past days
- a distribution p_{s s'} over {sun, rain} for each s, s' ∈ S,
- for each i ≥ 2 and s_0 ... s_i ∈ S^{i+1}: P(C(s_0 ... s_i) | C(s_0 ... s_{i−1})) = p_{s_{i−2} s_{i−1}}(s_i)
Not time-homogeneous. Not Markovian. Here: time-homogeneous Markovian stochastic processes.

29 Stochastic Processes - Restrictions
Definition: Markov
A discrete-time stochastic process {X_n | n ∈ N} is Markov if
P(X_n = s_n | X_{n−1} = s_{n−1}, ..., X_0 = s_0) = P(X_n = s_n | X_{n−1} = s_{n−1})
for all n ≥ 1 and s_0, ..., s_n ∈ S with P(X_{n−1} = s_{n−1}, ..., X_0 = s_0) > 0.
Definition: Time-homogeneous
A discrete-time Markov process {X_n | n ∈ N} is time-homogeneous if
P(X_{n+1} = s' | X_n = s) = P(X_1 = s' | X_0 = s)
for all n ≥ 1 and s, s' ∈ S with P(X_0 = s) > 0.

31 Discrete-time Stochastic Processes - Construction (6)
Weather Example: Option 3 - one past day
- a distribution p_s over {sun, rain} for each s ∈ S,
- for each i ≥ 1 and s_0 ... s_i ∈ S^{i+1}: P(C(s_0 ... s_i) | C(s_0 ... s_{i−1})) = p_{s_{i−1}}(s_i),
- a distribution π over {sun, rain} such that P(C(s_0)) = π(s_0).
Overly restrictive, isn't it? Not really — one only needs to extend the state space:
- S = {1, ..., 365} × {sun, rain} × {sun, rain},
- now each state encodes the current day of the year, the current weather, and yesterday's weather,
- over this S we can define a time-homogeneous Markov process based on both Options 1 & 2 given earlier.

34 Discrete-time Markov Chains (DTMC)

35 DTMC - Relation of Definitions
Stochastic process → graph based:
Given a discrete-time homogeneous Markov process {X_n | n ∈ N} with state space S, defined on a probability space (Ω, F, P), we take over the state space S and define P(s, s') = P(X_n = s' | X_{n−1} = s) for an arbitrary n ∈ N, and π_0(s) = P(X_0 = s).
Graph based → stochastic process:
Given a DTMC (S, P, π_0), we set Ω = S^ω, F the smallest σ-algebra containing all cylinder sets, and
P(C(s_0 ... s_n)) = π_0(s_0) · Π_{1≤i≤n} P(s_{i−1}, s_i),
which uniquely defines the probability function P on F.
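The product formula for cylinder sets is easy to compute directly. A short sketch; the weather chain and its numbers are illustrative assumptions:

```python
# Probability of a cylinder set C(s0 ... sn) in the process induced by a
# DTMC: pi0(s0) times the product of the one-step probabilities
# P(s_{i-1}, s_i).  Chain and numbers are illustrative assumptions.
P = {
    "sun":  {"sun": 0.8, "rain": 0.2},
    "rain": {"sun": 0.4, "rain": 0.6},
}
pi0 = {"sun": 0.5, "rain": 0.5}

def cylinder_prob(path):
    """P(C(path)) = pi0(path[0]) * prod P(path[i-1], path[i])."""
    p = pi0[path[0]]
    for prev, cur in zip(path, path[1:]):
        p *= P[prev][cur]
    return p

print(cylinder_prob(["sun", "sun", "rain"]))   # 0.5 * 0.8 * 0.2
```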

36 DTMC - Conditional Probability and Expectation
Let (S, P, π_0) be a DTMC. We denote by
- P_s the probability function of the DTMC (S, P, δ_s), where δ_s(s') = 1 if s' = s and 0 otherwise,
- E_s the expectation with respect to P_s.

37 Analysis questions
- Transient analysis
- Steady-state analysis
- Rewards
- Reachability
- Probabilistic logics

38 DTMC - Transient Analysis

39 DTMC - Transient Analysis - Example (1)
Example: Gambling with a Limit
[Figure: a random walk over capital levels (0, 10, 20, ...) moving up or down with probability 1/2 each, with absorbing boundary states.]
What is the probability of being in state 0 after 3 steps?

40 DTMC - Transient Analysis - n-step Probabilities
Definition: Given a DTMC (S, P, π_0), we assume w.l.o.g. S = {0, 1, ...} and write p_ij = P(i, j). Further,
- P^(1) = P = (p_ij) is the 1-step transition matrix,
- P^(n) = (p^(n)_ij) denotes the n-step transition matrix with p^(n)_ij = P(X_n = j | X_0 = i) (= P(X_{k+n} = j | X_k = i)).
How can we compute these probabilities?

41 DTMC - Transient Analysis - Chapman-Kolmogorov
Definition: Chapman-Kolmogorov Equation
Application of the law of total probability to the n-step transition probabilities p^(n)_ij results in the Chapman-Kolmogorov equation
p^(n)_ij = Σ_{h∈S} p^(m)_ih · p^(n−m)_hj   for 0 < m < n.
Consequently, we have P^(n) = P · P^(n−1) = ... = P^n.
Definition: Transient Probability Distribution
The transient probability distribution at time n > 0 is defined by π_n = π_0 P^n = π_{n−1} P.
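The recursion π_n = π_{n−1} P gives a direct algorithm for transient analysis. A sketch in Python; the weather chain and its numbers are illustrative assumptions:

```python
# Transient analysis: iterate pi <- pi P, n times.
# Chain and numbers are illustrative assumptions.
P = {
    "sun":  {"sun": 0.8, "rain": 0.2},
    "rain": {"sun": 0.4, "rain": 0.6},
}

def step(pi):
    """One application of pi -> pi P (vector-matrix product)."""
    out = {t: 0.0 for t in P}
    for s, mass in pi.items():
        for t, p in P[s].items():
            out[t] += mass * p
    return out

def transient(pi0, n):
    """pi_n = pi_0 P^n."""
    pi = dict(pi0)
    for _ in range(n):
        pi = step(pi)
    return pi

pi3 = transient({"sun": 1.0, "rain": 0.0}, 3)
print(pi3)                                 # {'sun': 0.688, 'rain': 0.312}
assert abs(sum(pi3.values()) - 1.0) < 1e-9  # still a distribution
```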

43 DTMC - Transient Analysis - Example (2)
Example: [Figure: a small chain whose transition matrix P has entries 1/2; the matrices P and P² are not recoverable from the transcription.]
For one initial distribution π_0, π_2 = π_0 P² differs from π_0; for another, π_2 = π_0 P² = π_0, and in fact π_n = π_0 for all n ∈ N — the distribution is stable!
Are there other stable distributions?

45 DTMC - Steady State Analysis

46 DTMC - Steady State Analysis - Definitions
Definition: Stationary Distribution
A distribution π is stationary if π = πP. A stationary distribution is generally not unique.
Definition: Limiting Distribution
π_∞ := lim_{n→∞} π_n = lim_{n→∞} π_0 P^n = π_0 · lim_{n→∞} P^n = π_0 P^∞.
The limit can depend on π_0 and does not need to exist.
Connection between stationary and limiting?

49 DTMC - Steady-State Analysis - Periodicity
Example: Gambling with Social Guarantees
[Figure: a chain with 1/2-transitions in which every return to a state takes an even number of steps.]
What are the stationary and limiting distributions?
Definition: Periodicity
The period of a state i is defined as d_i = gcd{n | p^(n)_ii > 0}. A state i is called aperiodic if d_i = 1 and periodic with period d_i otherwise. A Markov chain is aperiodic if all states are aperiodic.
Lemma: In a finite aperiodic Markov chain, the limiting distribution exists.

51 DTMC - Steady-State Analysis - Irreducibility (1)
Example: [Figure: a reducible chain with 1/2-transitions consisting of several mutually unreachable parts.]

52 DTMC - Steady-State Analysis - Irreducibility (2)
Definition: A DTMC is called irreducible if for all states i, j ∈ S we have p^(n)_ij > 0 for some n.
Lemma: In an aperiodic and irreducible Markov chain, the limiting distribution exists and does not depend on π_0.

53 DTMC - Steady-State Analysis - Irreducibility (3)
[Figure: a chain with 1/2-transitions, shown twice across the build-up slides.]
What is the stationary / limiting distribution?
Lemma: In a finite, aperiodic and irreducible Markov chain, the limiting distribution exists, does not depend on π_0, and equals the unique stationary distribution.

56 DTMC - Steady-State Analysis - Recurrence (1)
Definition: Let f^(n)_ij = P(X_n = j ∧ ∀k < n : X_k ≠ j | X_0 = i) for n ≥ 1 be the n-step hitting probability. The hitting probability is defined as
f_ij = Σ_{n=1}^∞ f^(n)_ij,
and a state i is called transient if f_ii < 1 and recurrent if f_ii = 1.

57 DTMC - Steady-State Analysis - Recurrence (2)
Definition: Denoting the expected return time m_ij = Σ_{n=1}^∞ n · f^(n)_ij, a recurrent state i is called positive recurrent (or recurrent non-null) if m_ii < ∞ and recurrent null if m_ii = ∞.
Lemma: The states of an irreducible DTMC are all of the same type, i.e., all periodic, or all aperiodic and transient, or all aperiodic and recurrent null, or all aperiodic and recurrent non-null.

58 DTMC - Steady-State Analysis - Ergodicity
Definition: Ergodicity
A DTMC is ergodic if it is irreducible and all its states are aperiodic and recurrent non-null.
Theorem: In an ergodic Markov chain, the limiting distribution exists, does not depend on π_0, and equals the unique stationary distribution. As a consequence, the steady-state distribution can be computed by solving the equation system π = πP, Σ_{s∈S} π(s) = 1.
Note: The Lemma for finite DTMCs follows from the theorem, as every irreducible finite DTMC is positive recurrent.
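For an ergodic chain the stationary distribution can be approximated by simply iterating π ← πP until the change is negligible (the power method mentioned later among the solution techniques). A sketch; the chain and its numbers are illustrative assumptions:

```python
# Steady-state of an ergodic DTMC via the power method: iterate
# pi <- pi P until convergence.  For an ergodic chain this converges
# to the unique stationary distribution from any initial distribution.
# Chain and numbers are illustrative assumptions.
P = {
    "sun":  {"sun": 0.8, "rain": 0.2},
    "rain": {"sun": 0.4, "rain": 0.6},
}

def stationary(P, tol=1e-12):
    pi = {s: 1.0 / len(P) for s in P}       # any initial distribution
    while True:
        nxt = {t: 0.0 for t in P}
        for s, mass in pi.items():
            for t, p in P[s].items():
                nxt[t] += mass * p
        if max(abs(nxt[s] - pi[s]) for s in P) < tol:
            return nxt
        pi = nxt

pi = stationary(P)
print(pi)                                    # approx. sun: 2/3, rain: 1/3
# Stationarity check: pi = pi P up to tolerance.
assert all(abs(sum(pi[s] * P[s][t] for s in P) - pi[t]) < 1e-9 for t in P)
```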

59 DTMC - Steady-State Analysis - Ergodicity
Example: Unbounded Gambling with House Edge
[Figure: a random walk on the natural numbers moving up with probability p and down with probability 1−p.]
The DTMC is only ergodic for p ∈ [0, 0.5).

60 DTMC - Rewards

61 DTMC - Rewards - Definitions
Definition: A reward Markov chain is a tuple (S, P, π_0, r) where (S, P, π_0) is a Markov chain and r : S → Z is a reward function.
Every run ρ = s_0 s_1 ... induces a sequence of values r(s_0), r(s_1), ... The value of the whole run can be defined as:
- total reward Σ_{i=1}^T r(s_i). But what if T = ∞?
- discounted reward Σ_{i=1}^∞ λ^i · r(s_i) for some 0 < λ < 1,
- average reward lim_{n→∞} (1/n) Σ_{i=0}^{n−1} r(s_i), also called long-run average or mean payoff.
Definition: The expected average reward is
EAR := lim_{n→∞} (1/n) Σ_{i=0}^{n−1} E[r(X_i)].

68 DTMC - Rewards - Solution Sketch
Definition: Time-average Distribution
π̂ = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} π_i.
π̂(s) expresses the ratio of time spent in s in the long run.
Lemma:
1. E[r(X_i)] = Σ_{s∈S} π_i(s) · r(s).
2. If π̂ exists then EAR = Σ_{s∈S} π̂(s) · r(s).
3. If the limiting distribution exists, it coincides with π̂.
Algorithm:
1. Compute π̂ (or the limiting distribution if possible).
2. Return Σ_{s∈S} π̂(s) · r(s).
More details later for Markov decision processes.
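For an ergodic chain the algorithm above reduces to: compute the stationary distribution, then take the weighted sum of rewards. A sketch; the chain, the reward function, and all numbers are illustrative assumptions:

```python
# Expected average reward: EAR = sum_s pihat(s) * r(s), where for an
# ergodic chain pihat equals the stationary distribution.
# Chain, rewards and numbers are illustrative assumptions.
P = {
    "sun":  {"sun": 0.8, "rain": 0.2},
    "rain": {"sun": 0.4, "rain": 0.6},
}
r = {"sun": 3, "rain": 0}                    # reward function r : S -> Z

def stationary(P, tol=1e-12):
    """Power method: iterate pi <- pi P until convergence."""
    pi = {s: 1.0 / len(P) for s in P}
    while True:
        nxt = {t: 0.0 for t in P}
        for s, mass in pi.items():
            for t, p in P[s].items():
                nxt[t] += mass * p
        if max(abs(nxt[s] - pi[s]) for s in P) < tol:
            return nxt
        pi = nxt

pihat = stationary(P)
ear = sum(pihat[s] * r[s] for s in P)
print(ear)                                   # 3 * 2/3 + 0 * 1/3 = 2
```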

71 DTMC - Reachability

72 DTMC - Reachability
Definition: Reachability
Given a DTMC (S, P, π_0), what is the probability of eventually reaching a set of goal states B ⊆ S?
Let x(s) denote P_s(◇B) where ◇B = {s_0 s_1 ... | ∃i : s_i ∈ B}. Then
- for s ∈ B: x(s) = 1,
- for s ∈ S \ B: x(s) = Σ_{t∈S\B} P(s, t) · x(t) + Σ_{u∈B} P(s, u).

74 DTMC - Reachability
Lemma (Reachability, Matrix Form)
Given a DTMC (S, P, π_0), the column vector x = (x(s))_{s∈S\B} of probabilities x(s) = P_s(◇B) satisfies the constraint x = Ax + b, where the matrix A is the submatrix of P for the states S \ B and b = (b(s))_{s∈S\B} is the column vector with b(s) = Σ_{u∈B} P(s, u).
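The system x = Ax + b is equivalent to (I − A)x = b and can be solved exactly, e.g. by Gaussian elimination over rationals. A sketch; the 2×2 instance (two non-goal states plus a goal set) is a hypothetical example, not the chain from the slides:

```python
from fractions import Fraction as F

# Solve x = A x + b, i.e. (I - A) x = b, by Gaussian elimination.
# The instance below is a hypothetical 2-state example: A is P
# restricted to the non-goal states, b holds the one-step
# probabilities into the goal set B.
A = [[F(0), F(1, 2)],
     [F(1, 4), F(1, 4)]]
b = [F(1, 2), F(1, 4)]

def solve(A, b):
    n = len(b)
    # Augmented matrix M = [I - A | b].
    M = [[(F(1) if i == j else F(0)) - A[i][j] for j in range(n)] + [b[i]]
         for i in range(n)]
    for col in range(n):                      # Gauss-Jordan elimination
        piv = next(i for i in range(col, n) if M[i][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for i in range(n):
            if i != col and M[i][col] != 0:
                M[i] = [u - M[i][col] * v for u, v in zip(M[i], M[col])]
    return [M[i][n] for i in range(n)]

x = solve(A, b)
print(x)                                      # [4/5, 3/5]
```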

75 DTMC - Reachability
Example: [Figure: a four-state chain s_0, s_1, s_2, s_3 with transition probabilities 0.5 and 0.25; B = {s_3}. The matrix P, its submatrix A and the vector b are not recoverable from the transcription.]
The vector x = [x_0 x_1 x_2]^T satisfies the equation system x = Ax + b.
Is it the only solution? No! Other vectors satisfy it as well, e.g. vectors assigning positive values to states from which B cannot be reached. While reaching the goal, such bad states need to be avoided. We first generalise such avoiding.

78 DTMC - Conditional Reachability
Definition: Let B, C ⊆ S. The (unbounded) probability of reaching B from state s under the condition that C is not left before is defined as P_s(C U B) where
C U B = {s_0 s_1 ... | ∃i : s_i ∈ B ∧ ∀j < i : s_j ∈ C}.
The probability of reaching B from state s within n steps under the condition that C is not left before is defined as P_s(C U^≤n B) where
C U^≤n B = {s_0 s_1 ... | ∃i ≤ n : s_i ∈ B ∧ ∀j < i : s_j ∈ C}.
What is the equation system for these probabilities?

80 DTMC - Conditional Reachability - Solution
Let S_=0 = {s | P_s(C U B) = 0} and S_? = S \ (S_=0 ∪ B).
Theorem: The column vector x = (x(s))_{s∈S_?} of probabilities x(s) = P_s(C U B) is the unique solution of the equation system x = Ax + b, where A = (P(s, t))_{s,t∈S_?} and b = (b(s))_{s∈S_?} with b(s) = Σ_{u∈B} P(s, u).
Furthermore, for x_0 = (0)_{s∈S_?} and x_{i+1} = A x_i + b for any i ≥ 0:
1. x_n(s) = P_s(C U^≤n B) for s ∈ S_?,
2. x_i is increasing, and
3. x = lim_{n→∞} x_n.
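The iteration x_{i+1} = A x_i + b from x_0 = 0 is exactly value iteration: the i-th iterate is the step-bounded probability, and the sequence increases to the fixpoint. A sketch; the 2×2 instance is a hypothetical example with the same numbers as the linear-system sketch above, so the fixpoint is [4/5, 3/5]:

```python
# Value iteration for x = A x + b, starting from x_0 = 0.  The i-th
# iterate equals the step-bounded probability P_s(C U^<=i B), and the
# iterates increase monotonically to the unbounded probability.
# The instance is a hypothetical 2-state example.
A = [[0.0, 0.5],
     [0.25, 0.25]]
b = [0.5, 0.25]

def value_iteration(A, b, steps):
    x = [0.0] * len(b)
    for _ in range(steps):                   # x <- A x + b
        x = [sum(A[i][j] * x[j] for j in range(len(x))) + b[i]
             for i in range(len(x))]
    return x

prev = value_iteration(A, b, 10)
curr = value_iteration(A, b, 100)
print(curr)                                  # approaches [0.8, 0.6]
assert all(p <= c + 1e-12 for p, c in zip(prev, curr))   # monotone
```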

82 DTMC - Conditional Reachability - Proof
Proof Sketch:
- (x(s))_{s∈S_?} is a solution: by inserting into the definition.
- Unique solution: by contradiction. Assume y is another solution; then x − y = A(x − y). One can show that A − I is invertible,² thus (A − I)(x − y) = 0 yields x − y = (A − I)^{−1} · 0 = 0 and finally x = y.
Furthermore:
1. From the definitions, by straightforward induction.
2. From 1., since C U^≤n B ⊆ C U^≤n+1 B.
3. Since C U B = ∪_{n∈N} C U^≤n B.
² cf. page 766 of Principles of Model Checking

83 Algorithmic aspects

84 Algorithmic Aspects - Summary of Equation Systems
Equation systems:
- Transient analysis: π_n = π_0 P^n = π_{n−1} P
- Steady-state analysis: πP = π, Σ_{s∈S} π(s) = 1 (ergodic)
- Reachability: x = Ax + b (with x = (x(s))_{s∈S_?})
Solution techniques:
1. Analytic solution, e.g. by Gaussian elimination
2. Iterative power method (π_n → π and x_n → x for n → ∞)
3. Iterative methods for solving large systems of linear equations, e.g. Jacobi, Gauss-Seidel
Missing pieces:
a. finding out whether a DTMC is ergodic,
b. computing S_? = S \ (B ∪ {s | P_s(◇B) = 0}),
c. efficient representation of P.

85 Algorithmic Aspects: a. Ergodicity of finite DTMC (1)
Ergodicity = Irreducibility + Aperiodicity + Positive Recurrence
- A DTMC is called irreducible if for all states i, j ∈ S we have p^(n)_ij > 0 for some n.
- A state i is called aperiodic if gcd{n | p^(n)_ii > 0} = 1.
- A state i is called positive recurrent if f_ii = 1 and m_ii < ∞.
How do we tell that a finite DTMC is ergodic? By analysis of the induced graph!
For a DTMC (S, P, π_0) we define the induced directed graph (S, E) with E = {(s, s') | P(s, s') > 0}.
Recall: A directed graph is called strongly connected if there is a path from each vertex to every other vertex. Strongly connected components (SCCs) are its maximal strongly connected subgraphs. An SCC T is bottom (BSCC) if no s ∉ T is reachable from T.

87 Algorithmic Aspects: a. Ergodicity of finite DTMC (2)
Theorem: For finite DTMCs, it holds that:
- The DTMC is irreducible iff the induced graph is strongly connected.
- A state in a BSCC is aperiodic iff the BSCC is aperiodic, i.e. the greatest common divisor of the lengths of all its cycles is 1.
- A state is positive recurrent iff it belongs to a BSCC; otherwise it is transient.

91 Algorithmic Aspects: a. Ergodicity of finite DTMC (3)
How to check: is the gcd of the lengths of all cycles of a strongly connected graph equal to 1? I.e., is gcd{n | ∃s : P^n(s, s) > 0} = 1 — in time O(n + m)?
By the following DFS-based procedure:
Algorithm: PERIOD(vertex v, unsigned level : init 0)
1  global period : init 0;
2  if period = 1 then
3      return
4  end
5  if v is unmarked then
6      mark v;
7      v.level = level;
8      for v' ∈ out(v) do
9          PERIOD(v', level + 1)
10     end
11 else
12     period = gcd(period, level − v.level);
13 end
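The DFS-based procedure above can be sketched in Python as follows. The two example graphs are illustrative assumptions; the running gcd starts at 0 (since gcd(0, x) = x) and an edge to an already-marked vertex contributes the difference of DFS levels:

```python
import math

def period(graph, root):
    """Gcd of all cycle lengths of a strongly connected graph,
    following the slide's DFS-based PERIOD procedure: compare DFS
    levels along edges that hit already-marked vertices."""
    level = {}          # DFS level of each marked vertex
    g = 0               # running gcd; 0 means "no cycle seen yet"

    def visit(v, lvl):
        nonlocal g
        if g == 1:                   # gcd already 1: graph is aperiodic
            return
        if v not in level:           # v unmarked: record level, recurse
            level[v] = lvl
            for w in graph[v]:
                visit(w, lvl + 1)
        else:                        # edge closes a cycle (mod period)
            g = math.gcd(g, abs(lvl - level[v]))

    visit(root, 0)
    return g

# Cycle of length 3: period 3.
c3 = {0: [1], 1: [2], 2: [0]}
print(period(c3, 0))                 # -> 3

# Adding a chord creates cycles of lengths 2 and 3: gcd 1, aperiodic.
c3_chord = {0: [1], 1: [2, 0], 2: [0]}
print(period(c3_chord, 0))           # -> 1
```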

95 Algorithmic Aspects: b. Computing the set S_?
We have S_? = S \ (B ∪ S_=0) where S_=0 = {s | P_s(◇B) = 0}. Hence, s ∈ S_=0 iff p^(n)_{s s'} = 0 for all n and all s' ∈ B.
This can again be easily checked on the induced graph:
Lemma: We have s ∈ S_=0 iff there is no path from s to any state in B.
Proof: Easy from the fact that p^(n)_{s s'} > 0 iff there is a path of length n from s to s'.
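By the lemma, S_=0 is computable by a backward graph search from B. A sketch; the graph and goal set are illustrative assumptions:

```python
from collections import deque

# S_=0 = states with no path to B, computed by a backward BFS over the
# induced graph.  Graph and goal set are illustrative assumptions.
succ = {0: [1], 1: [0, 2], 2: [2], 3: [3]}   # induced graph (S, E)
B = {2}

def states_prob_zero(succ, B):
    pred = {s: [] for s in succ}             # reverse the edges
    for s, ts in succ.items():
        for t in ts:
            pred[t].append(s)
    can_reach = set(B)
    queue = deque(B)
    while queue:                             # backward BFS from B
        t = queue.popleft()
        for s in pred[t]:
            if s not in can_reach:
                can_reach.add(s)
                queue.append(s)
    return set(succ) - can_reach             # states that never reach B

print(states_prob_zero(succ, B))             # -> {3}
```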

97 Algorithmic Aspects: c. Efficient Representations
[Figure: the Zeroconf Markov chain from earlier, with states start, s_0, ..., s_8 and error. Slide credit: Zhang (Saarland University, Germany), Quantitative Model Checking, August 24th, 2009.]
1. There are many 0 entries in the transition matrix. Sparse matrices offer a more concise storage.
2. There are many similar entries in the transition matrix. Multi-terminal binary decision diagrams offer a more concise storage, using automata theory.

100 DTMC - Probabilistic Temporal Logics for Specifying Complex Properties

101 Logics - Adding Labels to DTMC
Definition: A labeled DTMC is a tuple D = (S, P, π_0, L) with L : S → 2^AP, where AP is a set of atomic propositions and L is a labeling function; L(s) specifies which propositions hold in state s ∈ S.

102 Logics - Examples of Properties
[Figure: a board game with red and blue players.]
States and transitions: state = configuration of the game; transition = rolling the dice and acting (randomly) based on the result.
State labels: init, rwin, bwin, rkicked, bkicked, ..., r30, r2, ..., b30, b2, ...
Examples of properties:
- the game cannot return back to start at any time,
- the game eventually ends with probability 1,
- at any time, the game ends within 100 dice rolls with probability 0.5,
- the probability of winning without ever being kicked out is 0.3.
How to specify them formally?

103 Logics - Temporal Logics - non-probabilistic (1)
Linear-time view:
- corresponds to our (human) perception of time,
- can specify properties of one concrete linear execution of the system.
Example: eventually the red player is kicked out, followed immediately by the blue player being kicked out.
Branching-time view:
- views the future as a set of all possibilities,
- can specify properties of all executions from a given state; specifies execution trees.
Example: in every computation it is always possible to return to the initial state.

104 Logics - Temporal Logics - non-probabilistic (2) Linear Temporal Logic (LTL) Syntax for formulae specifying executions: ψ ::= true | a | ¬ψ | ψ ∧ ψ | X ψ | ψ U ψ | F ψ | G ψ Example: eventually red player is kicked out followed immediately by blue player being kicked out: F(rkicked ∧ X bkicked) Question: do all executions satisfy the given LTL formula? Computation Tree Logic (CTL) Syntax for specifying states: φ ::= true | a | ¬φ | φ ∧ φ | A ψ | E ψ Syntax for specifying executions: ψ ::= X φ | φ U φ | F φ | G φ Example: in all computations it is always possible to return to the initial state: A G E F init Question: does the given state satisfy the given CTL state formula? 65 / 84

105–106 Logics - LTL Syntax: ψ ::= true | a | ¬ψ | ψ ∧ ψ | X ψ | ψ U ψ. Semantics (for a path ω = s_0 s_1 …): ω ⊨ true (always); ω ⊨ a iff a ∈ L(s_0); ω ⊨ ψ_1 ∧ ψ_2 iff ω ⊨ ψ_1 and ω ⊨ ψ_2; ω ⊨ ¬ψ iff ω ⊭ ψ; ω ⊨ X ψ iff s_1 s_2 … ⊨ ψ; ω ⊨ ψ_1 U ψ_2 iff ∃ i ≥ 0 : s_i s_{i+1} … ⊨ ψ_2 and ∀ j < i : s_j s_{j+1} … ⊨ ψ_1. Syntactic sugar: F ψ ≡ true U ψ, G ψ ≡ ¬(true U ¬ψ) ≡ ¬F ¬ψ. 66 / 84
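The path semantics above can be executed directly on ultimately periodic ("lasso") words of the form prefix · cycle^ω: every suffix of such a word revisits a position within |prefix| + |cycle| steps, which makes U checkable by a bounded scan. A minimal sketch — the tuple-based formula encoding is an assumption of this example, not notation from the slides:

```python
# Formulas as nested tuples (assumed encoding): ("true",), ("ap", a),
# ("not", f), ("and", f, g), ("X", f), ("U", f, g).

def holds(word, i, phi):
    """Does the suffix of `word` starting at position i satisfy phi?"""
    prefix, cycle = word
    seq = prefix + cycle                       # all distinct positions
    n = len(seq)
    succ = lambda j: j + 1 if j + 1 < n else len(prefix)
    op = phi[0]
    if op == "true":
        return True
    if op == "ap":
        return phi[1] in seq[i]
    if op == "not":
        return not holds(word, i, phi[1])
    if op == "and":
        return holds(word, i, phi[1]) and holds(word, i, phi[2])
    if op == "X":
        return holds(word, succ(i), phi[1])
    if op == "U":
        j = i
        for _ in range(n + 1):                 # every position repeats within n steps
            if holds(word, j, phi[2]):
                return True
            if not holds(word, j, phi[1]):
                return False
            j = succ(j)
        return False                           # phi_2 never holds on the cycle
    raise ValueError(f"unknown operator {op}")

def F(phi):                                    # F ψ ≡ true U ψ
    return ("U", ("true",), phi)

def G(phi):                                    # G ψ ≡ ¬(true U ¬ψ)
    return ("not", F(("not", phi)))

word = ([{"a"}], [{"b"}, set()])               # the word {a} ({b} {})^ω
assert holds(word, 0, F(("ap", "b")))          # eventually b
assert not holds(word, 0, G(("ap", "b")))      # but not always b
```

The bounded scan in the U case is sound only because the word is a lasso; on an arbitrary infinite word, U is not decidable by any finite prefix.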

107 Logics - CTL Syntax. State formulae: φ ::= true | a | ¬φ | φ ∧ φ | A ψ | E ψ, where ψ is a path formula. Semantics, for a state s: s ⊨ true (always); s ⊨ a iff a ∈ L(s); s ⊨ φ_1 ∧ φ_2 iff s ⊨ φ_1 and s ⊨ φ_2; s ⊨ ¬φ iff s ⊭ φ; s ⊨ Aψ iff ω ⊨ ψ for all paths ω = s_0 s_1 … with s_0 = s; s ⊨ Eψ iff ω ⊨ ψ for some path ω = s_0 s_1 … with s_0 = s. Path formulae: ψ ::= X φ | φ U φ, where φ is a state formula. For a path ω = s_0 s_1 …: ω ⊨ X φ iff s_1 ⊨ φ; ω ⊨ φ_1 U φ_2 iff ∃ i : s_i ⊨ φ_2 and ∀ j < i : s_j ⊨ φ_1. 67 / 84


109–110 Logics - Temporal Logics - probabilistic Linear Temporal Logic (LTL) + probabilities Syntax for formulae specifying executions: ψ ::= true | a | ¬ψ | ψ ∧ ψ | X ψ | ψ U ψ | F ψ | G ψ Example: with prob. ≥ 0.8, eventually red player is kicked out followed immediately by blue player being kicked out: P(F(rkicked ∧ X bkicked)) ≥ 0.8 Question: is the formula satisfied by executions of the given probability? Probabilistic Computation Tree Logic (PCTL) Syntax for specifying states: φ ::= true | a | ¬φ | φ ∧ φ | P_J(ψ) Syntax for specifying executions: ψ ::= X φ | φ U φ | φ U^{≤k} φ | F φ | G φ Example: with prob. at least 0.5 the probability to return to the initial state is always at least 0.1: P_{≥0.5}(G P_{≥0.1}(F init)) Question: does the given state satisfy the given PCTL state formula? 69 / 84

111 Logics - PCTL - Examples Syntactic sugar: φ_1 ∨ φ_2 ≡ ¬(¬φ_1 ∧ ¬φ_2), φ_1 ⇒ φ_2 ≡ ¬φ_1 ∨ φ_2, etc.; ≤ 0.5 denotes the interval [0, 0.5], = 1 denotes [1, 1], etc. Examples: A fair die: ⋀_{i ∈ {1,…,6}} P_{=1/6}(F i). The probability of winning Who wants to be a millionaire without using any joker should be negligible: P_{<1e-10}((¬J_50% ∧ ¬J_audience ∧ ¬J_telephone) U win). 70 / 84

112 Logics - PCTL - Semantics Semantics, for a state s: s ⊨ true (always); s ⊨ a iff a ∈ L(s); s ⊨ φ_1 ∧ φ_2 iff s ⊨ φ_1 and s ⊨ φ_2; s ⊨ ¬φ iff s ⊭ φ; s ⊨ P_J(ψ) iff P_s(Paths(ψ)) ∈ J. For a path ω = s_0 s_1 …: ω ⊨ X φ iff s_1 ⊨ φ; ω ⊨ φ_1 U φ_2 iff ∃ i : s_i ⊨ φ_2 and ∀ j < i : s_j ⊨ φ_1; ω ⊨ φ_1 U^{≤n} φ_2 iff ∃ i ≤ n : s_i ⊨ φ_2 and ∀ j < i : s_j ⊨ φ_1. 71 / 84

113–114 Logics - Examples of Properties [Figure: game board] Examples of Properties: 1. the game cannot return back to start 2. at any time, the game eventually ends with prob. 1 3. at any time, the game ends within 100 dice rolls with prob. 0.5 4. the probability of winning without ever being kicked out is 0.3 Formally: 1. P(X G ¬init) = 1 (LTL + prob.), or P_{=1}(X P_{=0}(F init)) (PCTL) 2. P_{=1}(G P_{=1}(F (rwin ∨ bwin))) (PCTL) 3. P_{=1}(G P_{≥0.5}(F^{≤100} (rwin ∨ bwin))) (PCTL) 4. P((¬rkicked ∧ ¬bkicked) U (rwin ∨ bwin)) ≥ 0.3 (LTL + prob.) 72 / 84

115 PCTL Model Checking Algorithm 73 / 84

116 PCTL Model Checking Definition: PCTL Model Checking Let D = (S, P, π_0, L) be a DTMC, Φ a PCTL state formula and s ∈ S. The model checking problem is to decide whether s ⊨ Φ. Theorem The PCTL model checking problem can be decided in time polynomial in |D|, linear in |Φ|, and linear in the maximum step bound n. 74 / 84

117 PCTL Model Checking - Algorithm - Outline (1) Algorithm: Consider the bottom-up traversal of the parse tree of Φ: The leaves are a ∈ AP or true, and the inner nodes are: unary, labelled with the operator ¬ or P_J(X ·); binary, labelled with an operator ∧, P_J(· U ·), or P_J(· U^{≤n} ·). Example: ¬a ∧ P_{≤0.2}(¬b U P_{≥0.9}(F ¬c)) [Figure: parse tree of the formula, with leaves a, b, true, c] Compute Sat(Ψ) = {s ∈ S | s ⊨ Ψ} for each node Ψ of the tree in a bottom-up fashion. Then s ⊨ Φ iff s ∈ Sat(Φ). 75 / 84
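The bottom-up traversal can be sketched as a post-order recursion over the parse tree. Only the propositional cases are shown; the P_J cases would plug in the numeric procedures of the following slides. The tuple encoding of formulas is an assumption of this example.

```python
# Bottom-up Sat() computation for the propositional fragment of PCTL.
# Formulas (assumed encoding): ("true",), ("ap", a), ("not", f), ("and", f, g).

def sat(states, labels, phi):
    """Sat(phi) = {s | s ⊨ phi}, computed by post-order recursion:
    the recursive calls compute Sat() of the children first."""
    op = phi[0]
    if op == "true":
        return set(states)
    if op == "ap":
        return {s for s in states if phi[1] in labels[s]}
    if op == "not":
        return set(states) - sat(states, labels, phi[1])
    if op == "and":
        return sat(states, labels, phi[1]) & sat(states, labels, phi[2])
    raise ValueError(f"{op}: probabilistic operators not sketched here")

labels = {"s0": {"a"}, "s1": set()}
states = set(labels)
assert sat(states, labels, ("ap", "a")) == {"s0"}
assert sat(states, labels, ("not", ("ap", "a"))) == {"s1"}
```

A real checker would extend the same recursion with a P_J case that first computes Sat() of the state sub-formulas and then runs the numeric analysis for X, U, and U^{≤n}.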

118–121 PCTL Model Checking - Algorithm - Outline (2) Base of the algorithm: We need a procedure to compute Sat(Ψ) for Ψ of the form a or true: Lemma Sat(true) = S, Sat(a) = {s | a ∈ L(s)}. Induction step of the algorithm: We need a procedure to compute Sat(Ψ) for Ψ given the sets Sat(Ψ') for all state sub-formulas Ψ' of Ψ: Lemma Sat(Φ_1 ∧ Φ_2) = Sat(Φ_1) ∩ Sat(Φ_2); Sat(¬Φ) = S \ Sat(Φ); Sat(P_J(ψ)) = {s | P_s(Paths(ψ)) ∈ J}, discussed on the next slide. 76 / 84

122–124 PCTL Model Checking - Algorithm - Path Operator Lemma Next: P_s(Paths(X Φ)) = Σ_{s' ∈ Sat(Φ)} P(s, s') Bounded Until: P_s(Paths(Φ_1 U^{≤n} Φ_2)) = P_s(Sat(Φ_1) U^{≤n} Sat(Φ_2)) Unbounded Until: P_s(Paths(Φ_1 U Φ_2)) = P_s(Sat(Φ_1) U Sat(Φ_2)) As before: can be reduced to transient analysis and to unbounded reachability. 77 / 84
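The three cases of the lemma translate directly into code. The sketch below handles unbounded until by plain value iteration — the approximate method the complexity slide notes is common in practice — rather than by solving the linear equation system exactly; the dict-of-dicts chain encoding is an assumption of this example.

```python
# Next is a single weighted sum per state; bounded until is n
# matrix-vector products; unbounded until is approximated by iteration.

def prob_next(P, sat_phi):
    """P_s(X Φ) = Σ_{s' ∈ Sat(Φ)} P(s, s') for every state s."""
    return {s: sum(p for t, p in row.items() if t in sat_phi)
            for s, row in P.items()}

def prob_until(P, sat1, sat2, n=None, iters=10_000):
    """P_s(Φ1 U^{≤n} Φ2) if n is given, else an approximation of
    P_s(Φ1 U Φ2) by value iteration (converges from below)."""
    x = {s: 1.0 if s in sat2 else 0.0 for s in P}
    for _ in range(n if n is not None else iters):
        x = {s: 1.0 if s in sat2
             else sum(p * x[t] for t, p in P[s].items()) if s in sat1
             else 0.0
             for s in P}
    return x

# toy chain: from s, reach goal with prob. 0.3, get stuck with prob. 0.7
P = {"s": {"goal": 0.3, "sink": 0.7},
     "goal": {"goal": 1.0}, "sink": {"sink": 1.0}}
assert prob_next(P, {"goal"})["s"] == 0.3
assert abs(prob_until(P, {"s"}, {"goal"})["s"] - 0.3) < 1e-9
```

The iteration mirrors the fixed-point characterization: states in Sat(Φ2) get value 1, states outside Sat(Φ1) ∪ Sat(Φ2) get 0, and the rest average their successors' values.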

125 PCTL Model Checking - Algorithm - Complexity Precise algorithm: computation for every node in the parse tree and for every state. All node types except for the path operators are trivial. Next: trivial. Until: solving the equation system can be done with polynomially many elementary arithmetic operations. Bounded until: matrix-vector multiplications can be done with polynomially many elementary arithmetic operations as well. Overall complexity: polynomial in |D|, linear in |Φ| and the maximum step bound n. In practice: the until and bounded-until probabilities are computed approximately, rounding off probabilities in matrix-vector multiplication and using approximate iterative methods without error guarantees. 78 / 84

126 pLTL Model Checking Algorithm 79 / 84

127–130 LTL Model Checking - Overview Definition: LTL Model Checking Let D = (S, P, π_0, L) be a DTMC, Ψ an LTL formula, s ∈ S, and p ∈ [0, 1]. The model checking problem is to decide whether P^D_s(Paths(Ψ)) ≥ p. Theorem LTL model checking can be decided in time polynomial in |D| and doubly exponential in |Ψ|. Algorithm Outline 1. Construct from Ψ a deterministic Rabin automaton A recognizing the words satisfying Ψ, i.e. Lang(A) = {w ∈ (2^AP)^ω | w ⊨ Ψ}. 2. Construct a product DTMC D ⊗ A that embeds the deterministic execution of A into the Markov chain. 3. Compute in D ⊗ A the probability of paths where A satisfies the acceptance condition. 80 / 84

131–133 LTL Model Checking - ω-automata (1.) Deterministic Rabin automaton (DRA): A = (Q, Σ, δ, q_0, Acc), a DFA with a different acceptance condition, Acc = {(E_i, F_i) | 1 ≤ i ≤ k}: each accepting infinite run must, for some i, visit all states of E_i at most finitely often and some state of F_i infinitely often. Example: Give some automata recognizing the languages of the formulas (a ∧ X b), a U c, FG a, GF a. Lemma (Vardi & Wolper '86, Safra '88) For any LTL formula Ψ there is a DRA A recognizing the words satisfying Ψ with |A| ≤ 2^{2^{O(|Ψ|)}}. 81 / 84
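On an ultimately periodic run, inf(ω) is exactly the set of states on the cycle, so the Rabin condition reduces to a finite check over the acceptance pairs. A small sketch; the encoding of Acc as a list of (E, F) set pairs is an assumption of this example.

```python
# Rabin acceptance for a lasso-shaped run: the states visited infinitely
# often are precisely the states on the cycle part of the lasso.

def rabin_accepts(inf_states, acc):
    """Accept iff some pair (E, F) has inf(ω) ∩ E = ∅ and inf(ω) ∩ F ≠ ∅."""
    return any(not (inf_states & E) and bool(inf_states & F)
               for E, F in acc)

acc = [({"e"}, {"f"})]                  # one pair: avoid e, hit f
assert rabin_accepts({"f", "x"}, acc)   # cycle hits f and avoids e
assert not rabin_accepts({"e", "f"}, acc)  # cycle touches e infinitely often
assert not rabin_accepts({"x"}, acc)       # cycle never hits f
```

The same check reappears in step 3 of the algorithm, where it is applied to whole bottom strongly connected components instead of single runs.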

134–136 LTL Model Checking - Product DTMC (2.) For a labelled DTMC D = (S, P, π_0, L) and a DRA A = (Q, 2^AP, δ, q_0, {(E_i, F_i) | 1 ≤ i ≤ k}) we define 1. a DTMC D ⊗ A = (S × Q, P', π'_0) with P'((s, q), (s', q')) = P(s, s') if δ(q, L(s')) = q', and 0 otherwise; π'_0((s, q_s)) = π_0(s) if δ(q_0, L(s)) = q_s, and 0 otherwise; and 2. pairs {(E'_i, F'_i) | 1 ≤ i ≤ k} where for each i: E'_i = {(s, q) | q ∈ E_i, s ∈ S}, F'_i = {(s, q) | q ∈ F_i, s ∈ S}. Lemma The construction preserves the probability of accepting: P^D_s(Lang(A)) = P^{D⊗A}_{(s,q_s)}({ω | ∃ i : inf(ω) ∩ E'_i = ∅ and inf(ω) ∩ F'_i ≠ ∅}), where inf(ω) is the set of states visited in ω infinitely often. Proof sketch. We have a one-to-one correspondence between executions of D and of D ⊗ A (as A is deterministic), mapping paths whose trace is in Lang(A) to accepting runs, and preserving probabilities. 82 / 84
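The product construction can be sketched directly from the definition: the successor of (s, q) on a D-move to s' is (s', δ(q, L(s'))), and the probability is copied from D unchanged. The dict encodings of δ, the labeling, and the chain are assumptions of this sketch.

```python
# Product chain D ⊗ A over S × Q.  Since A is deterministic, each row of
# the product has the same probabilities as the corresponding row of D.

def product(P, labels, Q, delta):
    """delta: dict mapping (q, frozenset_of_APs) -> q'."""
    return {(s, q): {(t, delta[(q, frozenset(labels[t]))]): p
                     for t, p in row.items()}
            for s, row in P.items() for q in Q}

# toy DTMC and a 2-state DRA tracking whether a has been seen
P = {"s": {"s": 0.5, "t": 0.5}, "t": {"t": 1.0}}
labels = {"s": set(), "t": {"a"}}
delta = {("q0", frozenset()): "q0", ("q0", frozenset({"a"})): "q1",
         ("q1", frozenset()): "q1", ("q1", frozenset({"a"})): "q1"}
Pp = product(P, labels, {"q0", "q1"}, delta)
assert Pp[("s", "q0")] == {("s", "q0"): 0.5, ("t", "q1"): 0.5}
```

Determinism of A is what makes this well-defined: a given successor state s' yields exactly one automaton successor q', so rows of the product are again distributions.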

137–140 LTL Model Checking - Computing Acceptance Pr. (3.) How to check the probability of accepting in D ⊗ A? Identify the BSCCs (C_j)_j of D ⊗ A that, for some 1 ≤ i ≤ k, 1. contain no state from E'_i and 2. contain some state from F'_i. Lemma P^{D⊗A}_{(s,q_s)}({ω | ∃ i : inf(ω) ∩ E'_i = ∅, inf(ω) ∩ F'_i ≠ ∅}) = P^{D⊗A}_{(s,q_s)}(F ⋃_j C_j). Proof sketch. Note that: some BSCC of each finite DTMC is reached with probability 1 (short paths with probability bounded from below); the Rabin acceptance condition does not depend on any finite prefix of the infinite word; every state of a finite irreducible DTMC is visited infinitely often with probability 1, regardless of the choice of initial state. Corollary P^D_s(Lang(A)) = P^{D⊗A}_{(s,q_s)}(F ⋃_j C_j). 83 / 84
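Identifying the accepting BSCCs can be sketched as follows. The quadratic reachability closure below stands in for a linear-time SCC algorithm such as Tarjan's, and the set encodings of the Rabin pairs are assumptions of this example.

```python
# Find the BSCCs of a chain and keep those satisfying some Rabin pair.

def bsccs(P):
    """Bottom SCCs: SCCs that no transition leaves."""
    reach = {s: {s} | set(P[s]) for s in P}      # reflexive successor sets
    changed = True
    while changed:                               # transitive closure
        changed = False
        for s in P:
            new = set().union(reach[s], *(reach[t] for t in P[s]))
            if new != reach[s]:
                reach[s], changed = new, True
    out, seen = [], set()
    for s in P:
        if s in seen:
            continue
        C = frozenset(t for t in reach[s] if s in reach[t])  # SCC of s
        seen |= C
        if all(set(P[t]) <= C for t in C):       # no edge leaves C
            out.append(C)
    return out

def accepting_bsccs(P, acc):
    """BSCCs C with C ∩ E' = ∅ and C ∩ F' ≠ ∅ for some pair (E', F')."""
    return [C for C in bsccs(P)
            if any(not (C & E) and bool(C & F) for E, F in acc)]

P = {"s": {"c1": 0.5, "c2": 0.5}, "c1": {"c1": 1.0}, "c2": {"c2": 1.0}}
acc = [(set(), {"c1"})]                          # one Rabin pair (E', F')
assert accepting_bsccs(P, acc) == [frozenset({"c1"})]
```

Once the accepting BSCCs are known, the acceptance probability is the probability of reaching their union, an unbounded reachability problem of the kind solved for PCTL's until operator.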

141 LTL Model Checking - Algorithm - Complexity Doubly exponential in |Ψ| and polynomial in |D| (for the algorithm presented here): 1. A, and hence also D ⊗ A, is of size 2^{2^{O(|Ψ|)}}. 2. BSCC computation: Tarjan's algorithm, linear in |D ⊗ A| (number of states + transitions). 3. Unbounded reachability: system of linear equations of size |D ⊗ A|; exact solution cubic in the size of the system, approximate solution efficient in practice. 84 / 84


MAS275 Probability Modelling Exercises MAS75 Probability Modelling Exercises Note: these questions are intended to be of variable difficulty. In particular: Questions or part questions labelled (*) are intended to be a bit more challenging.

More information

Probabilistic model checking with PRISM

Probabilistic model checking with PRISM Probabilistic model checking with PRISM Marta Kwiatkowska Department of Computer Science, University of Oxford IMT, Lucca, May 206 Lecture plan Course slides and lab session http://www.prismmodelchecker.org/courses/imt6/

More information

Automata-Theoretic LTL Model-Checking

Automata-Theoretic LTL Model-Checking Automata-Theoretic LTL Model-Checking Arie Gurfinkel arie@cmu.edu SEI/CMU Automata-Theoretic LTL Model-Checking p.1 LTL - Linear Time Logic (Pn 77) Determines Patterns on Infinite Traces Atomic Propositions

More information

Lecture 4 An Introduction to Stochastic Processes

Lecture 4 An Introduction to Stochastic Processes Lecture 4 An Introduction to Stochastic Processes Prof. Massimo Guidolin Prep Course in Quantitative Methods for Finance August-September 2017 Plan of the lecture Motivation and definitions Filtrations

More information

What is Probability? Probability. Sample Spaces and Events. Simple Event

What is Probability? Probability. Sample Spaces and Events. Simple Event What is Probability? Probability Peter Lo Probability is the numerical measure of likelihood that the event will occur. Simple Event Joint Event Compound Event Lies between 0 & 1 Sum of events is 1 1.5

More information

2 DISCRETE-TIME MARKOV CHAINS

2 DISCRETE-TIME MARKOV CHAINS 1 2 DISCRETE-TIME MARKOV CHAINS 21 FUNDAMENTAL DEFINITIONS AND PROPERTIES From now on we will consider processes with a countable or finite state space S {0, 1, 2, } Definition 1 A discrete-time discrete-state

More information

Probabilistic Aspects of Computer Science: Markovian Models

Probabilistic Aspects of Computer Science: Markovian Models Probabilistic Aspects of Computer Science: Markovian Models S. Haddad September 29, 207 Professor at ENS Cachan, haddad@lsv.ens-cachan.fr, http://www.lsv.ens-cachan.fr/ haddad/ Abstract These lecture notes

More information

Essentials on the Analysis of Randomized Algorithms

Essentials on the Analysis of Randomized Algorithms Essentials on the Analysis of Randomized Algorithms Dimitris Diochnos Feb 0, 2009 Abstract These notes were written with Monte Carlo algorithms primarily in mind. Topics covered are basic (discrete) random

More information

Lecture notes for Analysis of Algorithms : Markov decision processes

Lecture notes for Analysis of Algorithms : Markov decision processes Lecture notes for Analysis of Algorithms : Markov decision processes Lecturer: Thomas Dueholm Hansen June 6, 013 Abstract We give an introduction to infinite-horizon Markov decision processes (MDPs) with

More information

Markov Chains. Chapter 16. Markov Chains - 1

Markov Chains. Chapter 16. Markov Chains - 1 Markov Chains Chapter 16 Markov Chains - 1 Why Study Markov Chains? Decision Analysis focuses on decision making in the face of uncertainty about one future event. However, many decisions need to consider

More information

SDS 321: Introduction to Probability and Statistics

SDS 321: Introduction to Probability and Statistics SDS 321: Introduction to Probability and Statistics Lecture 2: Conditional probability Purnamrita Sarkar Department of Statistics and Data Science The University of Texas at Austin www.cs.cmu.edu/ psarkar/teaching

More information

CS4705. Probability Review and Naïve Bayes. Slides from Dragomir Radev

CS4705. Probability Review and Naïve Bayes. Slides from Dragomir Radev CS4705 Probability Review and Naïve Bayes Slides from Dragomir Radev Classification using a Generative Approach Previously on NLP discriminative models P C D here is a line with all the social media posts

More information

Powerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature.

Powerful tool for sampling from complicated distributions. Many use Markov chains to model events that arise in nature. Markov Chains Markov chains: 2SAT: Powerful tool for sampling from complicated distributions rely only on local moves to explore state space. Many use Markov chains to model events that arise in nature.

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

CISC 1100/1400 Structures of Comp. Sci./Discrete Structures Chapter 7 Probability. Outline. Terminology and background. Arthur G.

CISC 1100/1400 Structures of Comp. Sci./Discrete Structures Chapter 7 Probability. Outline. Terminology and background. Arthur G. CISC 1100/1400 Structures of Comp. Sci./Discrete Structures Chapter 7 Probability Arthur G. Werschulz Fordham University Department of Computer and Information Sciences Copyright Arthur G. Werschulz, 2017.

More information

Introduction and basic definitions

Introduction and basic definitions Chapter 1 Introduction and basic definitions 1.1 Sample space, events, elementary probability Exercise 1.1 Prove that P( ) = 0. Solution of Exercise 1.1 : Events S (where S is the sample space) and are

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

Strategy Synthesis for Markov Decision Processes and Branching-Time Logics

Strategy Synthesis for Markov Decision Processes and Branching-Time Logics Strategy Synthesis for Markov Decision Processes and Branching-Time Logics Tomáš Brázdil and Vojtěch Forejt Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno, Czech Republic. {brazdil,forejt}@fi.muni.cz

More information

Automata, Logic and Games: Theory and Application

Automata, Logic and Games: Theory and Application Automata, Logic and Games: Theory and Application 2 Parity Games, Tree Automata, and S2S Luke Ong University of Oxford TACL Summer School University of Salerno, 14-19 June 2015 Luke Ong S2S 14-19 June

More information

MARKOV PROCESSES. Valerio Di Valerio

MARKOV PROCESSES. Valerio Di Valerio MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some

More information

Alternating Time Temporal Logics*

Alternating Time Temporal Logics* Alternating Time Temporal Logics* Sophie Pinchinat Visiting Research Fellow at RSISE Marie Curie Outgoing International Fellowship * @article{alur2002, title={alternating-time Temporal Logic}, author={alur,

More information

Chapter 16 focused on decision making in the face of uncertainty about one future

Chapter 16 focused on decision making in the face of uncertainty about one future 9 C H A P T E R Markov Chains Chapter 6 focused on decision making in the face of uncertainty about one future event (learning the true state of nature). However, some decisions need to take into account

More information

Probabilistic Model Checking and Strategy Synthesis for Robot Navigation

Probabilistic Model Checking and Strategy Synthesis for Robot Navigation Probabilistic Model Checking and Strategy Synthesis for Robot Navigation Dave Parker University of Birmingham (joint work with Bruno Lacerda, Nick Hawes) AIMS CDT, Oxford, May 2015 Overview Probabilistic

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

Section Notes 9. Midterm 2 Review. Applied Math / Engineering Sciences 121. Week of December 3, 2018

Section Notes 9. Midterm 2 Review. Applied Math / Engineering Sciences 121. Week of December 3, 2018 Section Notes 9 Midterm 2 Review Applied Math / Engineering Sciences 121 Week of December 3, 2018 The following list of topics is an overview of the material that was covered in the lectures and sections

More information

On the coinductive nature of centralizers

On the coinductive nature of centralizers On the coinductive nature of centralizers Charles Grellois INRIA & University of Bologna Séminaire du LIFO Jan 16, 2017 Charles Grellois (INRIA & Bologna) On the coinductive nature of centralizers Jan

More information

http://www.math.uah.edu/stat/markov/.xhtml 1 of 9 7/16/2009 7:20 AM Virtual Laboratories > 16. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 1. A Markov process is a random process in which the future is

More information

Chapter 2. Conditional Probability and Independence. 2.1 Conditional Probability

Chapter 2. Conditional Probability and Independence. 2.1 Conditional Probability Chapter 2 Conditional Probability and Independence 2.1 Conditional Probability Probability assigns a likelihood to results of experiments that have not yet been conducted. Suppose that the experiment has

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1 MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter

More information

Alan Bundy. Automated Reasoning LTL Model Checking

Alan Bundy. Automated Reasoning LTL Model Checking Automated Reasoning LTL Model Checking Alan Bundy Lecture 9, page 1 Introduction So far we have looked at theorem proving Powerful, especially where good sets of rewrite rules or decision procedures have

More information

Automata on Infinite words and LTL Model Checking

Automata on Infinite words and LTL Model Checking Automata on Infinite words and LTL Model Checking Rodica Condurache Lecture 4 Lecture 4 Automata on Infinite words and LTL Model Checking 1 / 35 Labeled Transition Systems Let AP be the (finite) set of

More information

Computing rewards in probabilistic pushdown systems

Computing rewards in probabilistic pushdown systems Computing rewards in probabilistic pushdown systems Javier Esparza Software Reliability and Security Group University of Stuttgart Joint work with Tomáš Brázdil Antonín Kučera Richard Mayr 1 Motivation

More information

Bounded Synthesis. Sven Schewe and Bernd Finkbeiner. Universität des Saarlandes, Saarbrücken, Germany

Bounded Synthesis. Sven Schewe and Bernd Finkbeiner. Universität des Saarlandes, Saarbrücken, Germany Bounded Synthesis Sven Schewe and Bernd Finkbeiner Universität des Saarlandes, 66123 Saarbrücken, Germany Abstract. The bounded synthesis problem is to construct an implementation that satisfies a given

More information

CS256/Spring 2008 Lecture #11 Zohar Manna. Beyond Temporal Logics

CS256/Spring 2008 Lecture #11 Zohar Manna. Beyond Temporal Logics CS256/Spring 2008 Lecture #11 Zohar Manna Beyond Temporal Logics Temporal logic expresses properties of infinite sequences of states, but there are interesting properties that cannot be expressed, e.g.,

More information

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +

Note that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < + Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n

More information

Some Definition and Example of Markov Chain

Some Definition and Example of Markov Chain Some Definition and Example of Markov Chain Bowen Dai The Ohio State University April 5 th 2016 Introduction Definition and Notation Simple example of Markov Chain Aim Have some taste of Markov Chain and

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

6 Markov Chain Monte Carlo (MCMC)

6 Markov Chain Monte Carlo (MCMC) 6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution

More information

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities

More information

Intelligent Systems (AI-2)

Intelligent Systems (AI-2) Intelligent Systems (AI-2) Computer Science cpsc422, Lecture 11 Oct, 3, 2016 CPSC 422, Lecture 11 Slide 1 422 big picture: Where are we? Query Planning Deterministic Logics First Order Logics Ontologies

More information

From Stochastic Processes to Stochastic Petri Nets

From Stochastic Processes to Stochastic Petri Nets From Stochastic Processes to Stochastic Petri Nets Serge Haddad LSV CNRS & ENS Cachan & INRIA Saclay Advanced Course on Petri Nets, the 16th September 2010, Rostock 1 Stochastic Processes and Markov Chains

More information

From Liveness to Promptness

From Liveness to Promptness From Liveness to Promptness Orna Kupferman Hebrew University Nir Piterman EPFL Moshe Y. Vardi Rice University Abstract Liveness temporal properties state that something good eventually happens, e.g., every

More information

Probability: from intuition to mathematics

Probability: from intuition to mathematics 1 / 28 Probability: from intuition to mathematics Ting-Kam Leonard Wong University of Southern California EPYMT 2017 2 / 28 Probability has a right and a left hand. On the right is the rigorous foundational

More information

An Introduction to Entropy and Subshifts of. Finite Type

An Introduction to Entropy and Subshifts of. Finite Type An Introduction to Entropy and Subshifts of Finite Type Abby Pekoske Department of Mathematics Oregon State University pekoskea@math.oregonstate.edu August 4, 2015 Abstract This work gives an overview

More information

EE 178 Lecture Notes 0 Course Introduction. About EE178. About Probability. Course Goals. Course Topics. Lecture Notes EE 178

EE 178 Lecture Notes 0 Course Introduction. About EE178. About Probability. Course Goals. Course Topics. Lecture Notes EE 178 EE 178 Lecture Notes 0 Course Introduction About EE178 About Probability Course Goals Course Topics Lecture Notes EE 178: Course Introduction Page 0 1 EE 178 EE 178 provides an introduction to probabilistic

More information