Quantitative Verification


Quantitative Verification. Chapter 3: Markov Chains. Jan Křetínský, Technical University of Munich, Winter 2017/18.

Motivation

Example: Simulating a Fair Die by a Fair Coin (Knuth & Yao)
(Diagram: a Markov chain starting in s0; every fair coin flip moves with probability 1/2 between states labelled by the die outcomes still possible, e.g. s123 and s456, until one of the outcomes 1, ..., 6 is reached.)
Quiz: What is the probability of obtaining 2? Is the probability of obtaining 3 equal to 1/6?
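The quiz can be checked mechanically. The sketch below pushes the initial distribution through one common encoding of the Knuth & Yao die (the one used in the PRISM "die" case study; the state names and exact transition layout are our assumption, not read off the diagram above) and reads off the mass absorbed in each face.

```python
from fractions import Fraction

# One common encoding of the Knuth & Yao die (as in the PRISM "die"
# case study; state names are our own). Every edge has probability 1/2;
# the integer outcomes 1..6 are absorbing.
half = Fraction(1, 2)
P = {
    "s0": [("s1", half), ("s2", half)],
    "s1": [("s3", half), ("s4", half)],
    "s2": [("s5", half), ("s6", half)],
    "s3": [("s1", half), (1, half)],   # may loop back to s1
    "s4": [(2, half), (3, half)],
    "s5": [(4, half), (5, half)],
    "s6": [("s2", half), (6, half)],   # may loop back to s2
}

def outcome_probs(steps=200):
    """Push the initial distribution through the chain; the mass
    absorbed in the integer outcomes approximates the exact values."""
    dist = {"s0": Fraction(1)}
    for _ in range(steps):
        nxt = {}
        for state, mass in dist.items():
            if isinstance(state, int):              # absorbing die face
                nxt[state] = nxt.get(state, Fraction(0)) + mass
            else:
                for succ, p in P[state]:
                    nxt[succ] = nxt.get(succ, Fraction(0)) + mass * p
        dist = nxt
    return {s: m for s, m in dist.items() if isinstance(s, int)}

probs = outcome_probs()
print(float(probs[2]), float(probs[3]))  # both approach 1/6
```

Both quiz answers come out as 1/6: the construction simulates a fair die exactly, even though some runs take arbitrarily many coin flips.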

DTMC - Graph-based Definition
Definition: A discrete-time Markov chain (DTMC) is a tuple (S, P, π_0) where
- S is the set of states,
- P : S × S → [0, 1] with Σ_{s′∈S} P(s, s′) = 1 for every s ∈ S is the transition matrix, and
- π_0 ∈ [0, 1]^S with Σ_{s∈S} π_0(s) = 1 is the initial distribution.

Example: Modelling Craps as a Markov Chain
Two-dice game: on the first roll, {7, 11} wins, {2, 3, 12} loses; otherwise the rolled sum s becomes the point. On each subsequent roll, rolling s again wins, rolling 7 loses; otherwise iterate.
(Diagram: states init, won, lost and one state per point 4, 5, 6, 8, 9, 10. From init: probability 2/9 to won, 1/9 to lost, and 1/12, 1/9, 5/36, 5/36, 1/9, 1/12 to the points 4, 5, 6, 8, 9, 10. Each point state loops with probability 3/4, 13/18, 25/36, 25/36, 13/18, 3/4 respectively, moves to won with the probability of re-rolling the point, and to lost with probability 1/6.)
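As a sanity check on the chain above, the overall winning probability can be computed with exact arithmetic (this computation is ours, not on the slide). From a point state, every roll that is neither the point nor 7 just loops, so P(win | point established) = p / (p + P(7)) where p is the point's roll probability.

```python
from fractions import Fraction

# Distribution of the sum of two fair dice: P(s) = (6 - |s - 7|) / 36.
roll = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}

win = roll[7] + roll[11]                 # immediate win on {7, 11}
for point in (4, 5, 6, 8, 9, 10):
    # From a point state we eventually either re-roll the point (win)
    # or roll a 7 (lose); all other sums loop, hence the ratio below.
    p = roll[point]
    win += p * p / (p + roll[7])

print(win, float(win))  # 244/495, about 0.493
```

So craps is slightly unfavourable for the player, but remarkably close to fair.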

Example: Zero Configuration Networking (Zeroconf)
Previously: manual assignment of IP addresses.
Zeroconf: dynamic configuration of local IPv4 addresses.
Advantage: simple devices able to communicate automatically.
Automatic Private IP Addressing (APIPA), RFC 3927: used when DHCP is configured but unavailable. Pick randomly an address from 169.254.1.0 - 169.254.254.255 and find out whether anybody else uses this address (by sending several ARP requests).
Model: Randomly pick an address among the K = 65024 addresses. With m hosts in the network, the collision probability is q = m/K. Send 4 ARP requests. In case of collision, the probability of no answer to an ARP request is p (due to the lossy channel).

Example: Zero Configuration Networking (Zeroconf)
(Diagram: Zeroconf as a Markov chain with a start state, states s0, ..., s8 and an error state; with probability q the picked address collides, then the four ARP probes follow, each lost with probability p; any answer (probability 1 − p) leads back to picking a fresh address.)
For 100 hosts and p = 0.001, the probability of error is 1.55 · 10^−5.

Probabilistic Model Checking
What is probabilistic model checking? Probabilistic specifications, e.g. "the probability of reaching bad states shall be smaller than 0.01". Probabilistic model checking is an automatic verification technique for this purpose.
Why quantities?
- Randomized algorithms
- Faults, e.g. due to the environment, lossy channels
- Performance analysis, e.g. reliability, availability

Basics of Probability Theory (Recap)

What are probabilities? - Intuition
Throwing a fair coin: the outcome head has a probability of 0.5, the outcome tail has a probability of 0.5.
But... [Bertrand's Paradox] Draw a random chord on the unit circle. What is the probability that its length exceeds the length of a side of the equilateral triangle inscribed in the circle? Depending on how the random chord is drawn, the answer can be 1/3, 1/2, or 1/4.
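The three classic answers correspond to three different sampling methods, which a short Monte Carlo experiment makes concrete (this simulation is our own illustration; the method names are ours):

```python
import math
import random

random.seed(0)
SIDE = math.sqrt(3)   # side of the equilateral triangle in the unit circle
N = 100_000

def chord_random_endpoints():
    # Fix one endpoint, pick the other uniformly on the circle.
    a = random.uniform(0, 2 * math.pi)
    return 2 * math.sin(a / 2)

def chord_random_radius():
    # Pick the chord's distance from the centre uniformly in [0, 1].
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def chord_random_midpoint():
    # Pick the chord's midpoint uniformly in the disk (rejection sampling).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

results = {}
for name, sample, theory in [("endpoints", chord_random_endpoints, 1 / 3),
                             ("radius", chord_random_radius, 1 / 2),
                             ("midpoint", chord_random_midpoint, 1 / 4)]:
    results[name] = sum(sample() > SIDE for _ in range(N)) / N
    print(f"{name}: {results[name]:.3f} (theory {theory:.3f})")
```

All three samplers are "uniform", yet the empirical frequencies differ, which is exactly Bertrand's point: probabilities are only defined relative to a probability space.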

Probability Theory - Probability Space
Definition: Probability Function
Given a sample space Ω and a σ-algebra F, a probability function P : F → [0, 1] satisfies:
- P(A) ≥ 0 for all A ∈ F,
- P(Ω) = 1, and
- P(⋃_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i) for pairwise disjoint A_i ∈ F.
Definition: Probability Space
A probability space is a tuple (Ω, F, P) with a sample space Ω, a σ-algebra F ⊆ 2^Ω and a probability function P.
Example
A random real number taken uniformly from the interval [0, 1].
- Sample space: Ω = [0, 1].
- σ-algebra: F is the minimal superset of {[a, b] | 0 ≤ a ≤ b ≤ 1} closed under complementation and countable union.
- Probability function: P([a, b]) = b − a; by Carathéodory's extension theorem there is a unique way to extend it to all elements of F.

Random Variables

Random Variables - Introduction
Definition: Random Variable
A random variable X is a measurable function X : Ω → I to some set I. Elements of I are called random elements. Often I = R.
(Diagram: X mapping samples from Ω to real values such as 0 and 42.)
Example (Bernoulli Trials)
Throwing a coin 3 times: Ω_3 = {hhh, hht, hth, htt, thh, tht, tth, ttt}. We define 3 random variables X_i : Ω_3 → {h, t}. For all x, y, z ∈ {h, t}: X_1(xyz) = x, X_2(xyz) = y, X_3(xyz) = z.

Stochastic Processes and Markov Chains

Stochastic Processes - Definition
Definition: Given a probability space (Ω, F, P), a stochastic process is a family of random variables {X_t | t ∈ T} defined on (Ω, F, P). For each X_t we assume X_t : Ω → S, where S = {s_1, s_2, ...} is a finite or countable set called the state space.
A stochastic process {X_t | t ∈ T} is called discrete-time if T = N and continuous-time if T = R_{≥0}. For the following lectures we focus on discrete time.

Discrete-time Stochastic Processes - Construction (1)
Example: Weather Forecast
- S = {sun, rain}; we model time as discrete,
- a random variable for each day: X_0 is the weather today, X_i is the weather in i days,
- how can we set up the probability space to measure e.g. P(X_i = sun)?

Discrete-time Stochastic Processes - Construction (2)
Let us fix a state space S. How can we construct the probability space (Ω, F, P)?
Definition: Sample Space Ω
We define Ω = S^ω, the set of infinite sequences over S. Then each X_n maps a sample ω = ω_0 ω_1 ... onto the respective state at time n, i.e., X_n(ω) = ω_n ∈ S.

Discrete-time Stochastic Processes - Construction (3)
Definition: Cylinder Set
For s_0 ... s_n ∈ S^{n+1}, we set the cylinder C(s_0 ... s_n) = {s_0 ... s_n ω | ω ∈ S^ω} ⊆ Ω.
Example: S = {s_1, s_2, s_3} and the cylinder C(s_1 s_3). (Diagram: trajectories over time t branching through s_1, s_2, s_3; C(s_1 s_3) collects all trajectories starting with s_1 s_3.)
Definition: σ-algebra F
We define F to be the smallest σ-algebra that contains all cylinder sets, i.e., {C(s_0 ... s_n) | n ∈ N, s_i ∈ S} ⊆ F.
Check: Is each X_i measurable? (On the discrete set S we assume the full σ-algebra 2^S.)

Discrete-time Stochastic Processes - Construction (4)
How to specify the probability function P? We only need to specify P(C(s_0 ... s_n)) for each s_0 ... s_n ∈ S^{n+1}. This amounts to specifying
1. P(C(s_0)) for each s_0 ∈ S, and
2. P(C(s_0 ... s_i) | C(s_0 ... s_{i−1})) for each s_0 ... s_i ∈ S^{i+1},
since
P(C(s_0 ... s_n)) = P(C(s_0 ... s_n) | C(s_0 ... s_{n−1})) · P(C(s_0 ... s_{n−1})) = P(C(s_0)) · ∏_{i=1}^{n} P(C(s_0 ... s_i) | C(s_0 ... s_{i−1}))
Still, lots of possibilities...

Discrete-time Stochastic Processes - Construction (5)
Weather Example: Option 1 - statistics of days of a year
- the forecast starts on Jan 1,
- a distribution p_j over {sun, rain} for each 1 ≤ j ≤ 365,
- for each i ∈ N and s_0 ... s_i ∈ S^{i+1}: P(C(s_0 ... s_i) | C(s_0 ... s_{i−1})) = p_{i % 365}(s_i)
Weather Example: Option 2 - two past days
- a distribution p_{s s′} over {sun, rain} for each s, s′ ∈ S,
- for each i ≥ 2 and s_0 ... s_i ∈ S^{i+1}: P(C(s_0 ... s_i) | C(s_0 ... s_{i−1})) = p_{s_{i−2} s_{i−1}}(s_i)
Not time-homogeneous. Not Markovian. Here: time-homogeneous Markovian stochastic processes.

Stochastic Processes - Restrictions
Definition: Markov
A discrete-time stochastic process {X_n | n ∈ N} is Markov if
P(X_n = s_n | X_{n−1} = s_{n−1}, ..., X_0 = s_0) = P(X_n = s_n | X_{n−1} = s_{n−1})
for all n ≥ 1 and s_0, ..., s_n ∈ S with P(X_{n−1} = s_{n−1}, ..., X_0 = s_0) > 0.
Definition: Time-homogeneous
A discrete-time Markov process {X_n | n ∈ N} is time-homogeneous if
P(X_{n+1} = s′ | X_n = s) = P(X_1 = s′ | X_0 = s)
for all n ≥ 1 and s, s′ ∈ S with P(X_n = s) > 0 and P(X_0 = s) > 0.

Discrete-time Stochastic Processes - Construction (6)
Weather Example: Option 3 - one past day
- a distribution p_s over {sun, rain} for each s ∈ S,
- for each i ≥ 1 and s_0 ... s_i ∈ S^{i+1}: P(C(s_0 ... s_i) | C(s_0 ... s_{i−1})) = p_{s_{i−1}}(s_i),
- a distribution π over {sun, rain} such that P(C(s_0)) = π(s_0).
Overly restrictive, isn't it? Not really:
- one only needs to extend the state space to S′ = {1, ..., 365} × {sun, rain} × {sun, rain},
- now each state encodes the current day of the year, the current weather, and the weather yesterday,
- over S′ we can define a time-homogeneous Markov process based on both Options 1 & 2 given earlier.

Discrete-time Markov Chains (DTMC)

DTMC - Relation of Definitions
Stochastic process → graph-based: Given a discrete-time homogeneous Markov process {X_n | n ∈ N} with state space S, defined on a probability space (Ω, F, P), we take over the state space S and define P(s, s′) = P(X_n = s′ | X_{n−1} = s) for an arbitrary n ∈ N, and π_0(s) = P(X_0 = s).
Graph-based → stochastic process: Given a DTMC (S, P, π_0), we set Ω to S^ω, F to the smallest σ-algebra containing all cylinder sets, and
P(C(s_0 ... s_n)) = π_0(s_0) · ∏_{1 ≤ i ≤ n} P(s_{i−1}, s_i),
which uniquely defines the probability function P on F.
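The product formula for cylinder probabilities is easy to implement. A minimal sketch with a toy weather chain (the transition numbers are our own illustration, not from the slides):

```python
from fractions import Fraction

# Toy weather DTMC (numbers are our own illustration).
P = {"sun":  {"sun": Fraction(4, 5), "rain": Fraction(1, 5)},
     "rain": {"sun": Fraction(1, 2), "rain": Fraction(1, 2)}}
pi0 = {"sun": Fraction(1, 2), "rain": Fraction(1, 2)}

def cylinder_prob(path):
    """P(C(s0 ... sn)) = pi0(s0) * prod_i P(s_{i-1}, s_i)."""
    prob = pi0[path[0]]
    for prev, cur in zip(path, path[1:]):
        prob *= P[prev][cur]
    return prob

print(cylinder_prob(["sun", "sun", "rain"]))  # 1/2 * 4/5 * 1/5 = 2/25
```

A quick consistency check: the cylinders of any fixed length partition Ω, so their probabilities sum to 1.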

DTMC - Conditional Probability and Expectation
Let (S, P, π_0) be a DTMC. We denote by
- P_s the probability function of the DTMC (S, P, δ_s), where δ_s(s′) = 1 if s′ = s and 0 otherwise,
- E_s the expectation with respect to P_s.

Analysis questions
- Transient analysis
- Steady-state analysis
- Rewards
- Reachability
- Probabilistic logics

DTMC - Transient Analysis

DTMC - Transient Analysis - Example (1)
Example: Gambling with a Limit
(Diagram: states 0, 10, 20, 30, 40; from each of 10, 20, 30 the chain moves one step up or down with probability 1/2 each; 0 and 40 are absorbing.)
What is the probability of being in state 0 after 3 steps?

DTMC - Transient Analysis - n-step Probabilities
Definition: Given a DTMC (S, P, π_0), we assume w.l.o.g. S = {0, 1, ...} and write p_ij = P(i, j). Further, we have
- P^(1) = P = (p_ij), the 1-step transition matrix,
- P^(n) = (p^(n)_ij), the n-step transition matrix with p^(n)_ij = P(X_n = j | X_0 = i) (= P(X_{k+n} = j | X_k = i)).
How can we compute these probabilities?

DTMC - Transient Analysis - Chapman-Kolmogorov
Definition: Chapman-Kolmogorov Equation
Application of the law of total probability to the n-step transition probabilities p^(n)_ij results in the Chapman-Kolmogorov equation
p^(n)_ij = Σ_{h ∈ S} p^(m)_ih · p^(n−m)_hj for 0 < m < n.
Consequently, we have P^(n) = P · P^(n−1) = ... = P^n.
Definition: Transient Probability Distribution
The transient probability distribution at time n > 0 is defined by π_n = π_0 P^n = π_{n−1} P.

DTMC - Transient Analysis - Example (2)
Example: the gambling chain with states 0, 10, 20, 30, 40 and absorbing ends.
P =
[ 1    0    0    0    0   ]
[ 0.5  0    0.5  0    0   ]
[ 0    0.5  0    0.5  0   ]
[ 0    0    0.5  0    0.5 ]
[ 0    0    0    0    1   ]
P^2 =
[ 1    0     0    0     0    ]
[ 0.5  0.25  0    0.25  0    ]
[ 0.25 0     0.5  0     0.25 ]
[ 0    0.25  0    0.25  0.5  ]
[ 0    0     0    0     1    ]
For π_0 = [ 0 0 1 0 0 ]: π_2 = π_0 P^2 = [ 0.25 0 0.5 0 0.25 ].
For π_0 = [ 0.4 0 0 0 0.6 ]: π_2 = π_0 P^2 = [ 0.4 0 0 0 0.6 ]. Actually, π_n = [ 0.4 0 0 0 0.6 ] for all n ∈ N!
Are there other stable distributions?
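The two computations on this slide can be replayed with a few lines of Python (plain lists, no external libraries):

```python
# Transition matrix of the bounded gambling chain (states 0, 10, 20, 30, 40).
P = [
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
]

def step(pi, P):
    """One transient step: pi_{n+1} = pi_n P (vector-matrix product)."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

pi = [0.0, 0.0, 1.0, 0.0, 0.0]   # start with all mass in state 20
for _ in range(2):
    pi = step(pi, P)
print(pi)  # [0.25, 0.0, 0.5, 0.0, 0.25]

pi = [0.4, 0.0, 0.0, 0.0, 0.6]   # a stationary distribution
print(step(pi, P))  # unchanged: [0.4, 0.0, 0.0, 0.0, 0.6]
```

Since all the probabilities here are powers of 1/2, the floating-point results are even exact.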

DTMC - Steady State Analysis

DTMC - Steady State Analysis - Definitions
Definition: Stationary Distribution
A distribution π is stationary if π = πP. The stationary distribution is in general not unique.
Definition: Limiting Distribution
π_∞ := lim_{n→∞} π_n = lim_{n→∞} π_0 P^n = π_0 · lim_{n→∞} P^n = π_0 P^∞.
The limit can depend on π_0 and does not need to exist.
What is the connection between stationary and limiting distributions?

DTMC - Steady-State Analysis - Periodicity
Example: Gambling with Social Guarantees
(Diagram: states 0, 10, 20, 30, 40, moving one step up or down with probability 1/2, with reflecting boundaries instead of absorbing ones.)
What are the stationary and limiting distributions?
Definition: Periodicity
The period of a state i is defined as d_i = gcd{n | p^(n)_ii > 0}. A state i is called aperiodic if d_i = 1 and periodic with period d_i otherwise. A Markov chain is aperiodic if all states are aperiodic.
Lemma: In a finite aperiodic Markov chain, the limiting distribution exists.

DTMC - Steady-State Analysis - Irreducibility (1)
Example
(Diagram: the gambling chain with a limit again: states 0, 10, 20, 30, 40, probability-1/2 steps in between, absorbing ends.)

DTMC - Steady-State Analysis - Irreducibility (2)
Definition: A DTMC is called irreducible if for all states i, j ∈ S we have p^(n)_ij > 0 for some n.
Lemma: In an aperiodic and irreducible Markov chain, the limiting distribution exists and does not depend on π_0.
(Diagram: a small example chain.)

DTMC - Steady-State Analysis - Irreducibility (3)
(Diagram: a variant of the gambling chain on states 0, 10, 20, 30, 40 with probability-1/2 transitions in which every state can reach every other state.)
What is the stationary / limiting distribution?
(Diagram: the same chain made infinite: states 0, 10, 20, 30, ... with probability-1/2 transitions.)
Lemma: In a finite aperiodic and irreducible Markov chain, the limiting distribution exists, does not depend on π_0, and equals the unique stationary distribution.
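The lemma can be observed numerically with the power method: for a finite aperiodic irreducible chain, iterating π_{n+1} = π_n P from any initial distribution converges to the same stationary distribution (the two-state chain below is our own toy example, not from the slides).

```python
# Toy aperiodic irreducible chain (numbers are our own example).
P = [[0.8, 0.2],
     [0.5, 0.5]]

def limiting(pi, P, steps=200):
    """Iterate pi_{n+1} = pi_n P; for finite aperiodic irreducible
    chains this converges to the unique stationary distribution."""
    n = len(P)
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

a = limiting([1.0, 0.0], P)
b = limiting([0.0, 1.0], P)
print(a, b)  # both close to [5/7, 2/7]
```

Both starting points end up at the same distribution, illustrating the independence from π_0.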

DTMC - Steady-State Analysis - Recurrence (1)
Definition: Let f^(n)_ij = P(X_n = j ∧ ∀k < n : X_k ≠ j | X_0 = i) for n ≥ 1 be the n-step hitting probability. The hitting probability is defined as
f_ij = Σ_{n=1}^∞ f^(n)_ij,
and a state i is called transient if f_ii < 1 and recurrent if f_ii = 1.

DTMC - Steady-State Analysis - Recurrence (2)
Definition: Denoting the expected return time m_ij = Σ_{n=1}^∞ n · f^(n)_ij, a recurrent state i is called positive recurrent (or recurrent non-null) if m_ii < ∞ and recurrent null if m_ii = ∞.
Lemma: The states of an irreducible DTMC are all of the same type, i.e., all periodic, or all aperiodic and transient, or all aperiodic and recurrent null, or all aperiodic and recurrent non-null.

DTMC - Steady-State Analysis - Ergodicity
Definition: Ergodicity
A DTMC is ergodic if it is irreducible and all its states are aperiodic and recurrent non-null.
Theorem: In an ergodic Markov chain, the limiting distribution exists, does not depend on π_0, and equals the unique stationary distribution. As a consequence, the steady-state distribution can be computed by solving the equation system π = πP, Σ_{s∈S} π(s) = 1.
Note: The Lemma for finite DTMCs follows from the theorem, as every irreducible finite DTMC is positive recurrent.
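The equation system π = πP, Σ_s π(s) = 1 can also be solved exactly. A sketch with Gaussian elimination over rationals (the 3-state chain is our own toy example): we take n − 1 balance equations plus the normalisation equation.

```python
from fractions import Fraction

# Toy ergodic 3-state chain (numbers are our own illustration).
F = Fraction
P = [[F(0), F(1, 2), F(1, 2)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(1, 2), F(1, 2), F(0)]]
n = len(P)

# Build A x = b: n-1 balance equations sum_i pi_i (P_ij - delta_ij) = 0
# plus the normalisation equation sum_j pi_j = 1.
A = [[P[i][j] - (F(1) if i == j else F(0)) for i in range(n)] for j in range(n - 1)]
A.append([F(1)] * n)
b = [F(0)] * (n - 1) + [F(1)]

def solve(A, b):
    """Gauss-Jordan elimination with exact rational arithmetic."""
    n = len(A)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

pi = solve(A, b)
print(pi)  # [1/4, 1/2, 1/4]
```

For this chain the unique stationary distribution is [1/4, 1/2, 1/4], which one can verify by substituting into π = πP.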

DTMC - Steady-State Analysis - Ergodicity
Example: Unbounded Gambling with House Edge
(Diagram: states 0, 10, 20, 30, ...; from each state the chain moves one step up with probability p and down with probability 1 − p.)
The DTMC is only ergodic for p ∈ [0, 0.5).

DTMC - Rewards

DTMC - Rewards - Definitions
Definition
A reward Markov chain is a tuple (S, P, π_0, r) where (S, P, π_0) is a Markov chain and r : S → Z is a reward function.
Every run ρ = s_0, s_1, ... induces a sequence of values r(s_0), r(s_1), ... The value of the whole run can be defined as
- total reward Σ_{i=1}^{T} r(s_i). But what if T = ∞?
- discounted reward Σ_{i=1}^{∞} λ^i · r(s_i) for some 0 < λ < 1,
- average reward lim_{n→∞} (1/n) · Σ_{i=0}^{n−1} r(s_i), also called long-run average or mean payoff.
Definition
The expected average reward is EAR := lim_{n→∞} (1/n) · Σ_{i=0}^{n−1} E[r(X_i)].

DTMC - Rewards - Solution Sketch
Definition: Time-average Distribution
ˆπ = lim_{n→∞} (1/n) · Σ_{i=0}^{n−1} π_i. ˆπ(s) expresses the ratio of time spent in s in the long run.
Lemma
1. E[r(X_i)] = Σ_{s∈S} π_i(s) · r(s).
2. If ˆπ exists then EAR = Σ_{s∈S} ˆπ(s) · r(s).
3. If the limiting distribution exists, it coincides with ˆπ.
Algorithm
1. Compute ˆπ (or the limiting distribution if possible).
2. Return Σ_{s∈S} ˆπ(s) · r(s).
More details later for Markov decision processes.
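A sketch of this algorithm for an ergodic chain, approximating ˆπ by a finite average of the transient distributions (the two-state chain and its rewards are our own toy numbers, not from the slides):

```python
# Toy ergodic chain with rewards (numbers are our own illustration).
P = [[0.9, 0.1],
     [0.6, 0.4]]
r = [0, 10]                       # reward earned per visit of each state

def time_average(pi, P, steps=5000):
    """Approximate pihat = lim (1/n) sum_i pi_i by a finite average."""
    n = len(P)
    acc = [0.0] * n
    for _ in range(steps):
        acc = [a + x for a, x in zip(acc, pi)]
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return [a / steps for a in acc]

pihat = time_average([1.0, 0.0], P)
ear = sum(pihat[s] * r[s] for s in range(len(r)))
print(pihat, ear)  # pihat close to [6/7, 1/7], EAR close to 10/7
```

Since this chain is ergodic, ˆπ coincides with the limiting distribution, so the same answer could be obtained by solving π = πP directly.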

DTMC - Reachability

DTMC - Reachability
Definition: Reachability
Given a DTMC (S, P, π_0), what is the probability of eventually reaching a set of goal states B ⊆ S?
(Diagram: state space S with a state s, its value x(s), and the goal set B.)
Let x(s) denote P_s(◇B) where ◇B = {s_0 s_1 ... | ∃i : s_i ∈ B}. Then
- for s ∈ B: x(s) = 1,
- for s ∈ S \ B: x(s) = Σ_{t ∈ S\B} P(s, t) · x(t) + Σ_{u ∈ B} P(s, u).

DTMC - Reachability
Lemma (Reachability, Matrix Form)
Given a DTMC (S, P, π_0), the column vector x = (x(s))_{s ∈ S\B} of probabilities x(s) = P_s(◇B) satisfies the constraint x = Ax + b, where the matrix A is the submatrix of P for the states S \ B, and b = (b(s))_{s ∈ S\B} is the column vector with b(s) = Σ_{u ∈ B} P(s, u).

DTMC - Reachability
Example: (Diagram: states s0, s1, s2, s3; s0 moves to s1 and s2 with probability 0.5 each; s1 loops with probability 0.5 and moves to s2 and s3 with probability 0.25 each; s2 and s3 are absorbing.) B = {s3}.
P =
[ 0  0.5  0.5   0    ]
[ 0  0.5  0.25  0.25 ]
[ 0  0    1     0    ]
[ 0  0    0     1    ]
A is the upper-left 3×3 submatrix of P (states s0, s1, s2), and b = [ 0 0.25 0 ]^T.
The vector x = [ x_0 x_1 x_2 ]^T = [ 0.25 0.5 0 ]^T satisfies the equation system x = Ax + b. Is it the only solution? No! Consider, e.g., [ 0.55 0.7 0.4 ]^T or [ 1 1 1 ]^T. The spurious solutions assign positive values to bad states such as the absorbing state s2, from which B is unreachable; while reaching the goal, such bad states need to be avoided. We first generalise such avoiding.
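Starting value iteration from the zero vector singles out the least fixed point of x = Ax + b, which is exactly the reachability probability (A and b below are taken from the example above):

```python
# A and b from the example: states s0, s1, s2 outside B, with B = {s3}.
A = [[0.0, 0.5, 0.5],
     [0.0, 0.5, 0.25],
     [0.0, 0.0, 1.0]]
b = [0.0, 0.25, 0.0]

x = [0.0, 0.0, 0.0]                 # least-fixed-point iteration from 0
for _ in range(100):
    x = [sum(A[i][j] * x[j] for j in range(3)) + b[i] for i in range(3)]
print(x)  # approaches [0.25, 0.5, 0.0]
```

The spurious solutions such as [0.55, 0.7, 0.4] are never reached: the iteration increases monotonically from 0 and converges to the least solution, with x(s2) pinned to 0.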

DTMC - Conditional Reachability
Definition: Let B, C ⊆ S. The (unbounded) probability of reaching B from state s under the condition that C is not left before is defined as P_s(C U B) where
C U B = {s_0 s_1 ... | ∃i : s_i ∈ B ∧ ∀j < i : s_j ∈ C}.
The probability of reaching B from state s within n steps under the condition that C is not left before is defined as P_s(C U^n B) where
C U^n B = {s_0 s_1 ... | ∃i ≤ n : s_i ∈ B ∧ ∀j < i : s_j ∈ C}.
What is the equation system for these probabilities?

DTMC - Conditional Reachability - Solution
Let S_{=0} = {s | P_s(C U B) = 0} and S_? = S \ (S_{=0} ∪ B).
Theorem: The column vector x = (x(s))_{s∈S_?} of probabilities x(s) = P_s(C U B) is the unique solution of the equation system x = Ax + b, where A = (P(s, t))_{s,t∈S_?} and b = (b(s))_{s∈S_?} with b(s) = Σ_{u∈B} P(s, u).
Furthermore, for x_0 = (0)_{s∈S_?} and x_{i+1} = Ax_i + b for all i ≥ 0:
1. x_n(s) = P_s(C U^n B) for s ∈ S_?,
2. the sequence (x_i) is increasing, and
3. x = lim_{n→∞} x_n.

DTMC - Conditional Reachability - Proof
Proof Sketch:
(x(s))_{s∈S_?} is a solution: by inserting into the definition.
Unique solution: by contradiction. Assume y is another solution; then x − y = A(x − y). One can show that A − I is invertible (cf. page 766 of Principles of Model Checking), thus (A − I)(x − y) = 0 yields x − y = (A − I)^{−1} · 0 = 0 and finally x = y.
Furthermore:
1. From the definitions, by straightforward induction.
2. From 1., since C U^n B ⊆ C U^{n+1} B.
3. Since C U B = ⋃_{n∈N} C U^n B.

Algorithmic aspects

Algorithmic Aspects - Summary of Equation Systems
Equation Systems
- Transient analysis: π_n = π_0 P^n = π_{n−1} P
- Steady-state analysis: πP = π, Σ_{s∈S} π(s) = 1 (ergodic)
- Reachability: x = Ax + b (with x = (x(s))_{s∈S_?})
Solution Techniques
1. Analytic solution, e.g. by Gaussian elimination
2. Iterative power method (π_n → π_∞ and x_n → x for n → ∞)
3. Iterative methods for solving large systems of linear equations, e.g. Jacobi, Gauss-Seidel
Missing pieces
a. finding out whether a DTMC is ergodic,
b. computing S_? = S \ {s | P_s(◇B) = 0},
c. efficient representation of P.
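Missing piece b. is pure graph reachability: a state satisfies P_s(◇B) = 0 iff it has no path to B in the induced graph, so these states can be found by a backward BFS from B. A sketch on the induced graph of the earlier reachability example (state names as above; an edge exists iff the transition probability is positive):

```python
from collections import deque

# Induced graph of the earlier reachability example.
succ = {"s0": {"s1", "s2"}, "s1": {"s1", "s2", "s3"},
        "s2": {"s2"}, "s3": {"s3"}}
B = {"s3"}

# Invert the edges, then BFS backwards from B.
pred = {s: set() for s in succ}
for s, ts in succ.items():
    for t in ts:
        pred[t].add(s)

can_reach = set(B)
queue = deque(B)
while queue:
    t = queue.popleft()
    for s in pred[t]:
        if s not in can_reach:
            can_reach.add(s)
            queue.append(s)

S_zero = set(succ) - can_reach      # states with P_s(reach B) = 0
print(S_zero)  # {'s2'}
```

This identifies exactly the absorbing bad state s2, whose reachability value must be fixed to 0 before solving the linear system.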

Algorithmic Aspects: a. Ergodicity of finite DTMC (1)

Ergodicity = Irreducibility + Aperiodicity + Positive Recurrence
  A DTMC is called irreducible if for all states i, j ∈ S we have p_ij^(n) > 0 for some n.
  A state i is called aperiodic if gcd{n | p_ii^(n) > 0} = 1.
  A state i is called positive recurrent if f_ii = 1 and m_ii < ∞.

How do we tell that a finite DTMC is ergodic? By analysis of the induced graph!

For a DTMC (S, P, π(0)) we define the induced directed graph (S, E) with E = {(s, s′) | P(s, s′) > 0}.

Recall: A directed graph is called strongly connected if there is a path from each vertex to every other vertex. Strongly connected components (SCCs) are its maximal strongly connected subgraphs. An SCC T is bottom (BSCC) if no s ∉ T is reachable from T.

Algorithmic Aspects: a. Ergodicity of finite DTMC (2)

Ergodicity = Irreducibility + Aperiodicity + Positive Recurrence
  A DTMC is called irreducible if for all states i, j ∈ S we have p_ij^(n) > 0 for some n.
  A state i is called aperiodic if gcd{n | p_ii^(n) > 0} = 1.
  A state i is called positive recurrent if f_ii = 1 and m_ii < ∞.

Theorem: For finite DTMCs, it holds that:
  The DTMC is irreducible iff the induced graph is strongly connected.
  A state in a BSCC is aperiodic iff the BSCC is aperiodic, i.e. the greatest common divisor of the lengths of all its cycles is 1.
  A state is positive recurrent iff it belongs to a BSCC; otherwise it is transient.

Algorithmic Aspects: a. Ergodicity of finite DTMC (3)

How to check: is 1 the gcd of the lengths of all cycles of a strongly connected graph, i.e. is

  gcd{n | ∃ s : P^n(s, s) > 0} = 1,

in time O(n + m)? By the following DFS-based procedure:

Algorithm: PERIOD(vertex v, unsigned level : init 0)
  global period : init 0;
  if period = 1 then
      return
  end
  if v is unmarked then
      mark v;
      v.level = level;
      for v′ ∈ out(v) do
          PERIOD(v′, level + 1)
      end
  else
      period = gcd(period, |level − v.level|);
  end
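A runnable sketch of this DFS-based period computation (iterative rather than recursive, without the early exit on period 1; the graphs are made-up toy inputs):

```python
# Sketch: period of a strongly connected graph as the gcd of level
# differences over non-tree edges, following the PERIOD procedure above.
from math import gcd

def period(graph, start):
    """gcd of cycle lengths in a strongly connected graph (0 if acyclic)."""
    level = {}
    g = 0
    stack = [(start, 0)]
    while stack:                              # iterative DFS
        v, lvl = stack.pop()
        if v in level:
            g = gcd(g, abs(lvl - level[v]))   # edge closing a cycle
            continue
        level[v] = lvl                        # tree edge: fix the level
        for w in graph[v]:
            stack.append((w, lvl + 1))
    return g

c3 = {0: [1], 1: [2], 2: [0]}               # a 3-cycle: period 3
mixed = {0: [1, 2], 1: [2], 2: [0]}         # cycles of length 3 and 2: gcd 1
```

With `mixed`, the chord makes the graph aperiodic, which is the condition needed for ergodicity of a BSCC.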

Algorithmic Aspects: b. Computing the set S_?

We have S_? = S \ (B ∪ S_{=0}) where S_{=0} = {s | P_s(◇ B) = 0}. Hence, s ∈ S_{=0} iff p_{ss′}^(n) = 0 for all n and all s′ ∈ B.

This can again be easily checked from the induced graph:

Lemma: We have s ∈ S_{=0} iff there is no path from s to any state from B.

Proof: Easy from the fact that p_{ss′}^(n) > 0 iff there is a path of length n from s to s′.
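The lemma suggests a simple backward search over the induced graph; a sketch on a made-up toy graph (function and variable names are illustrative):

```python
# Sketch: S_=0 via a backward fixpoint from B on the induced graph -
# a state is in S_=0 iff it has no path to any state of B.

def states_never_reaching(states, edges, B):
    """Return S_=0; edges is the set of pairs (s, t) with P(s, t) > 0."""
    can_reach = set(B)
    changed = True
    while changed:                    # backward reachability fixpoint
        changed = False
        for (s, t) in edges:
            if t in can_reach and s not in can_reach:
                can_reach.add(s)
                changed = True
    return set(states) - can_reach

# Toy graph: state 3 is a sink that cannot reach the target state 2.
S = {0, 1, 2, 3}
E = {(0, 1), (1, 0), (1, 2), (0, 3), (3, 3)}
S_eq0 = states_never_reaching(S, E, B={2})
```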

Algorithmic Aspects: c. Efficient Representations

[Figure: Zeroconf as a Markov chain - states start, s_1, ..., s_8, error, with transitions labelled p, q, 1−p, 1−q. From: Zhang (Saarland University, Germany), Quantitative Model Checking, August 24th, 2009.]

1. There are many 0 entries in the transition matrix. Sparse matrices offer a more concise storage.
2. There are many similar entries in the transition matrix. Multi-terminal binary decision diagrams offer a more concise storage, using automata theory.
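Point 1 can be illustrated with a minimal dictionary-based sparse representation (a sketch with made-up transition probabilities, not the Zeroconf chain itself):

```python
# Sketch: storing only the non-zero entries of P as a dict of dicts.
# Sparse algorithms then iterate over actual successors only, instead of
# scanning a full row of mostly-zero matrix entries.
P_sparse = {
    's0': {'s1': 0.5, 'error': 0.5},
    's1': {'s0': 0.3, 'ok': 0.7},
    'ok': {'ok': 1.0},
    'error': {'error': 1.0},
}

def prob(P, s, t):
    """P(s, t), defaulting to 0 for entries that are not stored."""
    return P.get(s, {}).get(t, 0.0)

def successors(P, s):
    """Iterate only over the non-zero transitions out of s."""
    return P.get(s, {}).items()
```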

DTMC - Probabilistic Temporal Logics for Specifying Complex Properties

Logics - Adding Labels to DTMC

Definition: A labeled DTMC is a tuple D = (S, P, π_0, L) with a labeling function L : S → 2^AP, where AP is a set of atomic propositions and L(s) specifies which atomic propositions hold in state s ∈ S.

Logics - Examples of Properties

[Figure: a board game with red (H) and blue (O) pieces.]

States and transitions
  state = configuration of the game; transition = rolling the dice and acting (randomly) based on the result.

State labels
  init, rwin, bwin, rkicked, bkicked, ..., r30, r21, ..., b30, b21, ...

Examples of Properties
  the game cannot return back to start
  at any time, the game eventually ends with prob. 1
  at any time, the game ends within 100 dice rolls with prob. ≥ 0.5
  the probability of winning without ever being kicked out is ≥ 0.3

How to specify them formally?

Logics - Temporal Logics - non-probabilistic (1)

Linear-time view
  corresponds to our (human) perception of time
  can specify properties of one concrete linear execution of the system
  Example: eventually the red player is kicked out, followed immediately by the blue player being kicked out.

Branching-time view
  views the future as a set of all possibilities
  can specify properties of all executions from a given state
  specifies execution trees
  Example: in every computation it is always possible to return to the initial state.

Logics - Temporal Logics - non-probabilistic (2)

Linear Temporal Logic (LTL)
Syntax for formulae specifying executions:
  ψ ::= true | a | ψ ∧ ψ | ¬ψ | X ψ | ψ U ψ | F ψ | G ψ
Example: eventually the red player is kicked out, followed immediately by the blue player being kicked out: F (rkicked ∧ X bkicked)
Question: do all executions satisfy the given LTL formula?

Computation Tree Logic (CTL)
Syntax for specifying states:
  φ ::= true | a | φ ∧ φ | ¬φ | A ψ | E ψ
Syntax for specifying executions:
  ψ ::= X φ | φ U φ | F φ | G φ
Example: in all computations it is always possible to return to the initial state: A G E F init
Question: does the given state satisfy the given CTL state formula?

Logics - LTL

Syntax
  ψ ::= true | a | ψ ∧ ψ | ¬ψ | X ψ | ψ U ψ

Semantics (for a path ω = s_0 s_1 ...)
  ω ⊨ true (always),
  ω ⊨ a iff a ∈ L(s_0),
  ω ⊨ ψ_1 ∧ ψ_2 iff ω ⊨ ψ_1 and ω ⊨ ψ_2,
  ω ⊨ ¬ψ iff ω ⊭ ψ,
  ω ⊨ X ψ iff s_1 s_2 ... ⊨ ψ,
  ω ⊨ ψ_1 U ψ_2 iff ∃ i ≥ 0 : s_i s_{i+1} ... ⊨ ψ_2 and ∀ j < i : s_j s_{j+1} ... ⊨ ψ_1.

Syntactic sugar
  F ψ ≡ true U ψ
  G ψ ≡ ¬(true U ¬ψ) ≡ ¬ F ¬ψ
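The until clause can be sketched as a checker over a finite path prefix, with each position represented by its set of atomic propositions (a toy illustration; a witness for U must occur within the given prefix):

```python
# Sketch: psi_1 U psi_2 on a finite prefix - there must exist a position i
# satisfying psi_2 such that all earlier positions satisfy psi_1.
# The path and the predicates are made-up toy data.

def holds_until(path, psi1, psi2):
    """Is psi1 U psi2 witnessed within this finite prefix?"""
    for i in range(len(path)):
        if psi2(path[i]):                               # the witness position
            return all(psi1(path[j]) for j in range(i)) # psi1 before it
    return False

path = [{'a'}, {'a'}, {'b'}, set()]
r1 = holds_until(path, lambda l: 'a' in l, lambda l: 'b' in l)
r2 = holds_until(path, lambda l: 'c' in l, lambda l: 'b' in l)
```

Checking only the first psi_2 position suffices: if psi_1 fails before it, it also fails before every later witness.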

Logics - CTL

Syntax
State formulae: φ ::= true | a | φ ∧ φ | ¬φ | A ψ | E ψ, where ψ is a path formula.
Path formulae: ψ ::= X φ | φ U φ, where φ is a state formula.

Semantics
For a state s:
  s ⊨ true (always),
  s ⊨ a iff a ∈ L(s),
  s ⊨ φ_1 ∧ φ_2 iff s ⊨ φ_1 and s ⊨ φ_2,
  s ⊨ ¬φ iff s ⊭ φ,
  s ⊨ A ψ iff ω ⊨ ψ for all paths ω = s_0 s_1 ... with s_0 = s,
  s ⊨ E ψ iff ω ⊨ ψ for some path ω = s_0 s_1 ... with s_0 = s.
For a path ω = s_0 s_1 ...:
  ω ⊨ X φ iff s_1 ⊨ φ,
  ω ⊨ φ_1 U φ_2 iff ∃ i : s_i ⊨ φ_2 and ∀ j < i : s_j ⊨ φ_1.

Logics - Temporal Logics - probabilistic

Linear Temporal Logic (LTL) + probabilities
Syntax for formulae specifying executions:
  ψ ::= true | a | ψ ∧ ψ | ¬ψ | X ψ | ψ U ψ | F ψ | G ψ
Example: with prob. ≥ 0.8, eventually the red player is kicked out, followed immediately by the blue player being kicked out: P(F (rkicked ∧ X bkicked)) ≥ 0.8
Question: is the formula satisfied by executions of the given probability?

Probabilistic Computation Tree Logic (PCTL)
Syntax for specifying states:
  φ ::= true | a | φ ∧ φ | ¬φ | P_J(ψ)
Syntax for specifying executions:
  ψ ::= X φ | φ U φ | φ U^{≤k} φ | F φ | G φ
Example: with prob. at least 0.5 the probability to return to the initial state is always at least 0.1: P_{≥0.5}(G P_{≥0.1}(F init))
Question: does the given state satisfy the given PCTL state formula?

Logics - PCTL - Examples

Syntactic sugar:
  φ_1 ∨ φ_2 ≡ ¬(¬φ_1 ∧ ¬φ_2), φ_1 → φ_2 ≡ ¬φ_1 ∨ φ_2, etc.
  ≤ 0.5 denotes the interval [0, 0.5], = 1 denotes [1, 1], etc.

Examples:
  A fair die: ⋀_{i ∈ {1,...,6}} P_{=1/6}(F i).
  The probability of winning Who Wants to Be a Millionaire without using any joker should be negligible: P_{<e^{−10}}(¬(J_50% ∨ J_audience ∨ J_telephone) U win).

Logics - PCTL - Semantics

Semantics
For a state s:
  s ⊨ true (always),
  s ⊨ a iff a ∈ L(s),
  s ⊨ φ_1 ∧ φ_2 iff s ⊨ φ_1 and s ⊨ φ_2,
  s ⊨ ¬φ iff s ⊭ φ,
  s ⊨ P_J(ψ) iff P_s(Paths(ψ)) ∈ J.
For a path ω = s_0 s_1 ...:
  ω ⊨ X φ iff s_1 ⊨ φ,
  ω ⊨ φ_1 U φ_2 iff ∃ i : s_i ⊨ φ_2 and ∀ j < i : s_j ⊨ φ_1,
  ω ⊨ φ_1 U^{≤n} φ_2 iff ∃ i ≤ n : s_i ⊨ φ_2 and ∀ j < i : s_j ⊨ φ_1.

Logics - Examples of Properties

[Figure: a board game with red (H) and blue (O) pieces.]

Examples of Properties
1. the game cannot return back to start
2. at any time, the game eventually ends with prob. 1
3. at any time, the game ends within 100 dice rolls with prob. ≥ 0.5
4. the probability of winning without ever being kicked out is ≥ 0.3

Formally
1. P(X G ¬init) = 1 (LTL + prob.)   P_{=1}(X P_{=0}(F init)) (PCTL)
2. P_{=1}(G P_{=1}(F (rwin ∨ bwin))) (PCTL)
3. P_{=1}(G P_{≥0.5}(F^{≤100} (rwin ∨ bwin))) (PCTL)
4. P((¬rkicked ∧ ¬bkicked) U (rwin ∨ bwin)) ≥ 0.3 (LTL + prob.)

PCTL Model Checking Algorithm

PCTL Model Checking

Definition: PCTL Model Checking
Let D = (S, P, π_0, L) be a DTMC, Φ a PCTL state formula and s ∈ S. The model checking problem is to decide whether s ⊨ Φ.

Theorem
The PCTL model checking problem can be decided in time polynomial in |D|, linear in |Φ|, and linear in the maximum step bound n.

PCTL Model Checking - Algorithm - Outline (1)

Algorithm: Consider the bottom-up traversal of the parse tree of Φ:
  The leaves are a ∈ AP or true.
  The inner nodes are:
    unary, labelled with the operator ¬ or P_J(X ·);
    binary, labelled with the operator ∧, P_J(· U ·), or P_J(· U^{≤n} ·).

Example: a ∧ P_{≤0.2}(b U (a ∧ P_{≥0.9}(true U ¬c)))

Compute Sat(Ψ) = {s ∈ S | s ⊨ Ψ} for each node Ψ of the tree in a bottom-up fashion. Then s ⊨ Φ iff s ∈ Sat(Φ).

PCTL Model Checking - Algorithm - Outline (2)

Base of the algorithm: We need a procedure to compute Sat(Ψ) for Ψ of the form a or true:

Lemma
  Sat(true) = S
  Sat(a) = {s | a ∈ L(s)}

Induction step of the algorithm: We need a procedure to compute Sat(Ψ) given the sets Sat(Ψ′) for all state sub-formulae Ψ′ of Ψ:

Lemma
  Sat(Φ_1 ∧ Φ_2) = Sat(Φ_1) ∩ Sat(Φ_2)
  Sat(¬Φ) = S \ Sat(Φ)
  Sat(P_J(ψ)) = {s | P_s(Paths(ψ)) ∈ J} - discussed on the next slide.
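A sketch of the bottom-up Sat computation for this fragment, with P_J(X φ) resolved via the Next lemma of the following slide (the formula encoding, the toy chain, and all names are made-up):

```python
# Sketch: recursive Sat computation for true, atoms, negation, conjunction,
# and P_J(X phi); formulas are nested tuples, J is given as [lo, hi].

def sat(phi, S, P, L):
    """Return Sat(phi) as a set of states."""
    if phi == 'true':
        return set(S)
    if isinstance(phi, str):                       # atomic proposition a
        return {s for s in S if phi in L[s]}
    op = phi[0]
    if op == 'not':
        return set(S) - sat(phi[1], S, P, L)       # Sat(not phi) = S \ Sat(phi)
    if op == 'and':
        return sat(phi[1], S, P, L) & sat(phi[2], S, P, L)
    if op == 'P_next':                             # ('P_next', lo, hi, phi')
        _, lo, hi, sub = phi
        target = sat(sub, S, P, L)                 # one-step prob. into target
        return {s for s in S
                if lo <= sum(P[s].get(t, 0.0) for t in target) <= hi}
    raise ValueError(op)

S = ['s0', 's1', 's2']
P = {'s0': {'s0': 0.2, 's1': 0.8},
     's1': {'s1': 1.0},
     's2': {'s1': 0.3, 's2': 0.7}}
L = {'s0': set(), 's1': {'goal'}, 's2': set()}
result = sat(('P_next', 0.5, 1.0, 'goal'), S, P, L)   # P_{[0.5,1]}(X goal)
```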

PCTL Model Checking - Algorithm - Path Operator

Lemma
  Next: P_s(Paths(X Φ)) = Σ_{s′ ∈ Sat(Φ)} P(s, s′)
  Bounded Until: P_s(Paths(Φ_1 U^{≤n} Φ_2)) = P_s(Sat(Φ_1) U^{≤n} Sat(Φ_2))
  Unbounded Until: P_s(Paths(Φ_1 U Φ_2)) = P_s(Sat(Φ_1) U Sat(Φ_2))

As before: can be reduced to transient analysis and to unbounded reachability.
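The bounded-until case amounts to n matrix-vector-style steps; a sketch on a made-up two-state chain (names and numbers are illustrative):

```python
# Sketch: P_s(S1 U^{<=n} S2) by n iterations - states in S2 stay at 1,
# states in S1 average over their successors, all others stay at 0.

def bounded_until(P, S1, S2, n):
    """Return x with x[s] = P_s(S1 U^{<=n} S2)."""
    x = {s: (1.0 if s in S2 else 0.0) for s in P}
    for _ in range(n):
        x = {s: (1.0 if s in S2 else
                 sum(p * x[t] for t, p in P[s].items()) if s in S1 else 0.0)
             for s in P}
    return x

P = {'a': {'a': 0.5, 'b': 0.5}, 'b': {'b': 1.0}}
x = bounded_until(P, S1={'a'}, S2={'b'}, n=3)
# P_a({a} U^{<=3} {b}) = 1 - 0.5^3 = 0.875
```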

PCTL Model Checking - Algorithm - Complexity

Precise algorithm
Computation for every node in the parse tree and for every state:
  All node types except for the path operators: trivial.
  Next: trivial.
  Until: solving the equation systems can be done by polynomially many elementary arithmetic operations.
  Bounded until: the matrix-vector multiplications can be done by polynomially many elementary arithmetic operations as well.
Overall complexity: polynomial in |D|, linear in |Φ| and in the maximum step bound n.

In practice
The until and bounded-until probabilities are computed approximately:
  rounding off probabilities in the matrix-vector multiplication,
  using approximative iterative methods without error guarantees.

pLTL Model Checking Algorithm

LTL Model Checking - Overview

Definition: LTL Model Checking
Let D = (S, P, π_0, L) be a DTMC, Ψ an LTL formula, s ∈ S, and p ∈ [0, 1]. The model checking problem is to decide whether s ⊨ P_{≥p}(Ψ), i.e. whether P^D_s(Paths(Ψ)) ≥ p.

Theorem
The LTL model checking problem can be decided in time polynomial in |D| and doubly exponential in |Ψ|.

Algorithm Outline
1. Construct from Ψ a deterministic Rabin automaton A recognizing the words satisfying Ψ, i.e. Paths(Ψ) := {L(ω) ∈ (2^AP)^ω | ω ⊨ Ψ}.
2. Construct a product DTMC D ⊗ A that embeds the deterministic execution of A into the Markov chain.
3. Compute in D ⊗ A the probability of paths where A satisfies the acceptance condition.

LTL Model Checking - ω-automata (1.)

Deterministic Rabin automaton (DRA): (Q, Σ, δ, q_0, Acc)
  a DFA with a different acceptance condition, Acc = {(E_i, F_i) | 1 ≤ i ≤ k}
  each accepting infinite run must, for some i, visit all states of E_i at most finitely often and some state of F_i infinitely often.

Example
Give some automata recognizing the languages of the formulae
  a ∧ X b,   a U c,   F G a,   G F a

Lemma (Vardi & Wolper '86, Safra '88)
For any LTL formula Ψ there is a DRA A recognizing Paths(Ψ) with |A| ≤ 2^{2^{O(|Ψ|)}}.

LTL Model Checking - Product DTMC (2.)

For a labelled DTMC D = (S, P, π_0, L) and a DRA A = (Q, 2^AP, δ, q_0, {(E_i, F_i) | 1 ≤ i ≤ k}) we define
1. a DTMC D ⊗ A = (S × Q, P′, π′_0):
     P′((s, q), (s′, q′)) = P(s, s′) if δ(q, L(s′)) = q′, and 0 otherwise;
     π′_0((s, q_s)) = π_0(s) if δ(q_0, L(s)) = q_s, and 0 otherwise; and
2. acceptance sets {(E′_i, F′_i) | 1 ≤ i ≤ k} where for each i:
     E′_i = {(s, q) | q ∈ E_i, s ∈ S},
     F′_i = {(s, q) | q ∈ F_i, s ∈ S}.

Lemma
The construction preserves the probability of accepting:
  P^D_s(Lang(A)) = P^{D⊗A}_{(s,q_s)}({ω | ∃ i : inf(ω) ∩ E′_i = ∅, inf(ω) ∩ F′_i ≠ ∅}),
where inf(ω) is the set of states visited in ω infinitely often.

Proof sketch
We have a one-to-one correspondence between the executions of D and of D ⊗ A (as A is deterministic), mapping Lang(A) to the accepted product executions and preserving probabilities.
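The transition part of the product construction can be sketched directly from the definition (the DTMC, the DRA transition table, and all names are made-up toy data; the lifted acceptance sets are omitted):

```python
# Sketch: building P' and pi_0' of the product D (x) A. The DRA delta is a
# dict from (state, frozenset-of-labels) to the successor DRA state.

def product(S, P, pi0, L, delta, q0):
    """Return the non-zero entries of P' and pi_0' (all others are 0)."""
    Q = {q for (q, _) in delta}
    P2, pi02 = {}, {}
    for s in S:                                   # initial distribution
        qs = delta[(q0, frozenset(L[s]))]
        pi02[(s, qs)] = pi0.get(s, 0.0)
    for s in S:                                   # transitions
        for t, p in P[s].items():
            for q in Q:                           # A reads the label of t
                q2 = delta[(q, frozenset(L[t]))]
                P2[((s, q), (t, q2))] = p
    return P2, pi02

# Toy DRA remembering whether 'a' has been seen (state q1 once it has):
delta = {('q0', frozenset()): 'q0', ('q0', frozenset({'a'})): 'q1',
         ('q1', frozenset()): 'q1', ('q1', frozenset({'a'})): 'q1'}
S = ['s0', 's1']
P = {'s0': {'s0': 0.5, 's1': 0.5}, 's1': {'s1': 1.0}}
L = {'s0': set(), 's1': {'a'}}
P2, pi02 = product(S, P, {'s0': 1.0}, L, delta, 'q0')
```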

LTL Model Checking - Computing Acceptance Pr. (3.)

How to check the probability of accepting in D ⊗ A?

Identify the BSCCs (C_j)_j of D ⊗ A that, for some 1 ≤ i ≤ k,
1. contain no state from E′_i and
2. contain some state from F′_i.

Lemma
  P^{D⊗A}_{(s,q_s)}({ω | ∃ i : inf(ω) ∩ E′_i = ∅, inf(ω) ∩ F′_i ≠ ∅}) = P^{D⊗A}_{(s,q_s)}(◇ ⋃_j C_j).

Proof sketch
Note that
  some BSCC of each finite DTMC is reached with probability 1 (short paths with prob. bounded from below),
  the Rabin acceptance condition does not depend on any finite prefix of the infinite word,
  every state of a finite irreducible DTMC is visited infinitely often with probability 1, regardless of the choice of the initial state.

Corollary
  P^D_s(Lang(A)) = P^{D⊗A}_{(s,q_s)}(◇ ⋃_j C_j).
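The BSCC identification can be sketched on a made-up product graph, using naive mutual reachability instead of Tarjan's algorithm (adequate only for tiny examples; all names and numbers are illustrative):

```python
# Sketch: find BSCCs (SCCs without outgoing edges) and keep the accepting
# ones, i.e. those avoiding E' and containing a state of F'.

def bsccs(states, edges):
    """Return the bottom SCCs of the graph as a list of frozensets."""
    reach = {s: {s} for s in states}
    changed = True
    while changed:                     # naive transitive-closure fixpoint
        changed = False
        for (u, v) in edges:
            new = reach[v] - reach[u]
            if new:
                reach[u] |= new
                changed = True
    sccs = {frozenset(t for t in states if s in reach[t] and t in reach[s])
            for s in states}
    return [C for C in sccs
            if all(v in C for (u, v) in edges if u in C)]   # bottom: no exit

# Toy product chain with a single Rabin pair E' = {2}, F' = {3}:
states = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 3), (3, 1), (2, 2)]
accepting = [C for C in bsccs(states, edges) if 2 not in C and 3 in C]
```

The acceptance probability is then the probability of reaching the union of the accepting BSCCs, an unbounded reachability problem as on the earlier slides.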

LTL Model Checking - Algorithm - Complexity

Doubly exponential in |Ψ| and polynomial in |D| (for the algorithm presented here):
1. |A|, and hence also |D ⊗ A|, is of size 2^{2^{O(|Ψ|)}}.
2. BSCC computation: Tarjan's algorithm - linear in |D ⊗ A| (number of states + transitions).
3. Unbounded reachability: system of linear equations (of size |D ⊗ A|):
     exact solution: cubic in the size of the system,
     approximative solution: efficient in practice.