Fluid analysis of Markov Models


Fluid analysis of Markov Models
Jeremy Bradley, Richard Hayden, Anton Stefanek
Imperial College London
Tutorial, SFM:DS, June 2013


How can we... scale, resource, provision, design... to meet... while minimising...?
[Figures omitted.]

We want to be able to engineer complex systems
We want to be able to reason about performance
We want to be able to optimise key cost functions
...at the same time

Process modelling with stochastic systems

Process Algebra
A process algebra model consists of agents which engage in actions.
    α.P    (α: action label; P: agent or component)
The semantics of the language define an underlying state space by way of a labelled transition system:
    process algebra model + semantic rules = labelled transition system
Based on a slide by Jane Hillston

Stochastic Process Algebra
A stochastic process algebra model also consists of agents which engage in actions, but with each action having a random duration associated with it.
    (α, λ).P    (α: action label; λ: action rate; P: agent or component)
where the action duration is exponentially distributed with rate λ.
The semantics of the language define an underlying state space and also a performance model in terms of a CTMC:
    SPA model + semantics = LTS, filtered to a CTMC
Based on a slide by Jane Hillston

PEPA: Stochastic process algebra
Many SPAs exist and capture performance and behavioural features in different ways, e.g. IGSMPA [1], IMC [2], sFSP [3], EMPA [4], TIPP [5].
PEPA [6] is useful because:
it is a formal, algebraic description of a system
it is compositional
it is parsimonious (succinct)
it is easy to learn!
it is used in research and in industry
[1] Mario Bravetti and Roberto Gorrieri. Interactive Generalized Semi-Markov Processes. In: Process Algebra and Performance Modelling Workshop. Ed. by Jane Hillston and Manuel Silva. Centro Politécnico Superior de la Universidad de Zaragoza. Prensas Universitarias de Zaragoza, 1999. [2] Holger Hermanns. Interactive Markov Chains. PhD thesis. Universität Erlangen-Nürnberg. [3] Thomas Ayles et al. Adding Performance Evaluation to the LTSA Tool. In: Proceedings of the 13th International Conference on Computer Performance Evaluation: Modelling Techniques and Tools. [4] Marco Bernardo and Roberto Gorrieri. Extended Markovian Process Algebra. In: CONCUR'96, Proceedings of the 7th International Conference on Concurrency Theory. Ed. by Ugo Montanari and Vladimiro Sassone. Lecture Notes in Computer Science. Springer-Verlag, 1996. [5] Norbert Götz, Ulrich Herzog, and Michael Rettelbach. TIPP: A Stochastic Process Algebra. In: Process Algebra and Performance Modelling. Ed. by Jane Hillston and Faron Moller. CSR Technical Report. Department of Computer Science, University of Edinburgh, 1993. [6] Jane Hillston. A Compositional Approach to Performance Modelling. Vol. 12. Distinguished Dissertations in Computer Science. Cambridge University Press, 1996.

What can you do with PEPA?
It allows you to answer key performance questions:
Steady-state analysis: what is the long-run average behaviour of my system?
Transient analysis: what is the behaviour of my system at time t?
Passage-time analysis: how long does it take my system to complete a key transaction?
[Figures: steady-state and transient probability plots omitted.]

Tool Support
PEPA has several methods of execution and analysis, through comprehensive tool support:
PEPA Eclipse plugin: Edinburgh [7]
Möbius: Urbana-Champaign, Illinois [8]
PRISM: Birmingham [9]
ipc: Imperial College London [10]
gpa: Imperial College London [11]
[7] Mirco Tribastone. The PEPA Plug-in Project. In: QEST'07, Proceedings of the 4th International Conference on the Quantitative Evaluation of Systems. IEEE Computer Society, 2007. [8] Graham Clark et al. The Möbius Modeling Tool. In: Proceedings of the 9th International Workshop on Petri Nets and Performance Models. Ed. by B. Haverkort and R. German. IEEE Computer Society Press, 2001. [9] Marta Kwiatkowska, Gethin Norman, and David Parker. PRISM: Probabilistic Symbolic Model Checker. In: TOOLS'02, Proceedings of the 12th International Conference on Modelling Techniques and Tools for Computer Performance Evaluation. Ed. by A. J. Field et al. Lecture Notes in Computer Science. London: Springer-Verlag, 2002. [10] Jeremy T. Bradley and William J. Knottenbelt. The ipc/Hydra Tool Chain for the Analysis of PEPA Models. In: QEST'04, Proceedings of the 1st IEEE Conference on the Quantitative Evaluation of Systems. Ed. by Boudewijn Haverkort et al. University of Twente, Enschede: IEEE Computer Society, 2004. [11] Anton Stefanek, Richard A. Hayden, and Jeremy T. Bradley. A new tool for the performance analysis of massively parallel computer systems. In: Eighth Workshop on Quantitative Aspects of Programming Languages (QAPL 2010), March 27-28, 2010, Paphos, Cyprus. Electronic Proceedings in Theoretical Computer Science, Mar. 2010.

PEPA Syntax
Syntax:
    P ::= (a, λ).P | P + P | P ⋈_L P | P/L | A where A = def P
Action prefix: (a, λ).P
Competitive choice: P1 + P2
Cooperation: P1 ⋈_L P2
Action hiding: P/L
Constant label: A = def P

Why exponential?
Memorylessness: what is P(X > t + u | X > u) if X ~ exp(λ)?
P(X > t + u | X > u) = P(X > t + u) / P(X > u) = e^(−λ(t+u)) / e^(−λu) = e^(−λt) = P(X > t)
So the distribution of the remaining duration does not depend on how long the action has already been running.
[Figure: probability density of X ~ exp(1.25) omitted.]
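The memoryless property P(X > t + u | X > u) = P(X > t) can be checked empirically; a quick simulation sketch (the values of λ, t, and u are illustrative):

```python
import math
import random

random.seed(42)
lam = 1.25
n = 200_000
samples = [random.expovariate(lam) for _ in range(n)]

t, u = 0.5, 0.8
# Empirical P(X > t + u | X > u): condition on the samples that exceed u.
cond = [x for x in samples if x > u]
p_cond = sum(1 for x in cond if x > t + u) / len(cond)
# Empirical P(X > t), and the closed form exp(-lam * t).
p_uncond = sum(1 for x in samples if x > t) / n
theory = math.exp(-lam * t)
```

Both estimates should agree with exp(−λt) up to sampling noise; no other continuous distribution has this property.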

Why exponential?
1. Described by a single parameter
2. Memorylessness
3. Ability to describe other distributions using phase-type combinations
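As an illustration of point 3, an Erlang distribution (one of the simplest phase-types) arises as the time to pass through k consecutive exponential phases; a small simulation sketch with illustrative parameters:

```python
import random

random.seed(1)
lam, k = 1.5, 3          # k exponential phases, each with rate lam
n = 100_000

# Sample an Erlang(k, lam) variate as a sum of k exponential delays.
samples = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

mean = sum(samples) / n  # theory: k / lam
```

More general phase-type distributions combine exponential phases with probabilistic branching, which is why a purely Markovian formalism can still approximate a wide class of durations.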

Prefix: (a, λ).A
Prefix is used to describe a process that evolves from one state to another by emitting or performing an action.
Example: P = def (a, λ).Q
...means that the process P evolves with rate λ to become process Q, by emitting an a-action.
λ is an exponential rate parameter.
As a labelled transition system, this becomes:
    Prefix: P −(a,λ)→ Q

Choice: P1 + P2
PEPA uses a type of choice known as competitive choice.
Example: P = def (a, λ).P1 + (b, µ).P2
...means that P can evolve either to produce an a-action with rate λ or to produce a b-action with rate µ.
As a labelled transition system:
    Choice: P −(a,λ)→ P1 and P −(b,µ)→ P2

Choice: P1 + P2
P = def (a, λ).P1 + (b, µ).P2
This is competitive choice since:
P1 and P2 are in a race condition
the first one to perform an a or a b will dictate the direction of choice for P1 + P2
What is the probability that we see an a-action? For a race between two exponential delays, it is λ/(λ + µ).
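For two competing exponential delays, the a-action wins the race with probability λ/(λ + µ); a quick simulation check with illustrative rates:

```python
import random

random.seed(7)
lam, mu = 2.0, 3.0       # rates of the competing a- and b-actions
n = 200_000

# In each trial, sample both exponential delays; the smaller one wins.
a_wins = sum(1 for _ in range(n)
             if random.expovariate(lam) < random.expovariate(mu))

frac = a_wins / n        # theory: lam / (lam + mu)
```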

Cooperation: P1 ⋈_L P2
P1 ⋈_L P2 defines concurrency and communication within PEPA.
The L in P1 ⋈_L P2 defines the set of actions over which the two components are to cooperate.
Any other actions that P1 and P2 can do, not mentioned in L, can happen independently.
If a ∈ L and P1 enables an a, then P1 has to wait for P2 to enable an a before the cooperation can proceed.
An easy source of deadlock!

Cooperation: P1 ⋈_L P2
If P1 −(a,λ)→ P1' and P2 −(a,⊤)→ P2' then:
    P1 ⋈_{a} P2 −(a,λ)→ P1' ⋈_{a} P2'
⊤ represents a passive rate which, in the cooperation, inherits the λ-rate from P1.
If both rates are specified and the only a-evolutions allowed are P1 −(a,λ)→ P1' and P2 −(a,µ)→ P2', then:
    P1 ⋈_{a} P2 −(a,min(λ,µ))→ P1' ⋈_{a} P2'

Cooperation: P1 ⋈_L P2
The general cooperation case is where:
P1 enables m a-actions
P2 enables n a-actions
at the moment of cooperation
...in which case there are m × n possible transitions for P1 ⋈_{a} P2, with the mn a-actions having cumulative rate R:
    P1 ⋈_{a} P2 −(a,R)→ ···  where R = min(r_a(P1), r_a(P2))
r_a(P) = Σ_{i : P −(a,r_i)→} r_i is the apparent rate of an action a: the total rate at which P can do a.
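The apparent-rate construction can be made concrete. A sketch computing the m·n joint transition rates under the standard PEPA cooperation rule, where each joint rate is the product of the conditional probabilities of the two individual transitions, scaled by R = min of the apparent rates (the rate values here are illustrative):

```python
def apparent_rate(rates):
    """r_a(P): total rate at which a component can perform action a."""
    return sum(rates)

def cooperation_rates(p1_rates, p2_rates):
    """Rates of the m*n joint a-transitions of P1 cooperating with P2 on a."""
    ra1, ra2 = apparent_rate(p1_rates), apparent_rate(p2_rates)
    R = min(ra1, ra2)
    # Each joint transition gets a share of R proportional to how likely
    # its two constituent transitions are within their own components.
    return [(r1 / ra1) * (r2 / ra2) * R
            for r1 in p1_rates for r2 in p2_rates]

# P1 enables m=2 a-actions, P2 enables n=3; apparent rates 3.0 and 1.5.
rates = cooperation_rates([1.0, 2.0], [0.5, 0.5, 0.5])
```

By construction the mn joint rates sum to min(r_a(P1), r_a(P2)), matching the cumulative rate on the slide.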

Hiding: P/L
Used to turn observable actions in P into hidden or silent actions in P/L.
L defines the set of actions to hide.
If P −(a,λ)→ P', then: P/{a} −(τ,λ)→ P'/{a}
τ is the silent action.
Used to hide complexity and create a component interface.
Cooperation on τ is not allowed.

PEPA: A Transmitter-Receiver System
System = def (Transmitter ∥ Receiver) ⋈_L Network
Transmitter = def (transmit, λ1).(t_recover, λ2).Transmitter
Receiver = def (receive, ⊤).(r_recover, µ).Receiver
Network = def (transmit, ⊤).(delay, ν1).(receive, ν2).Network
where L = {transmit, receive}.
A simple model of a transmitter-receiver over a network.

TR example: Labelled transition system
with
    X1 ≡ (Transmitter ∥ Receiver) ⋈_L Network
    X2 ≡ (Transmitter' ∥ Receiver) ⋈_L Network
and so on.

Voting Example I
Voters vote and Pollers record those votes. Pollers can break individually and recover individually. If all Pollers break then they are all repaired in unison.
System = def (Voter ∥ Voter ∥ Voter) ⋈_{vote} ((Poller ⋈_L Poller) ⋈_{L'} Poller_group_0)
where L = {recover_all}, L' = {recover, break, recover_all}

Voting Example II
Voter = def (vote, λ).(pause, µ).Voter
Poller = def (vote, ⊤).(register, γ).Poller + (break, ν).Poller_broken
Poller_broken = def (recover, τ).Poller + (recover_all, ⊤).Poller

Voting Example III
Poller_group_0 = def (break, ⊤).Poller_group_1
Poller_group_1 = def (break, ⊤).Poller_group_2 + (recover, ⊤).Poller_group_0
Poller_group_2 = def (recover_all, δ).Poller_group_0

An Overview of Model-based Fluid Analysis

Mean field/fluid analysis
Addresses the state-space explosion problem for discrete-state Markov models of computer and communication systems.
Derives tractable systems of differential equations approximating the mean number of components in each local state, for example:
fluid analysis of process algebra models [12]
mean-field analysis of systems of interacting objects [13,14]
These techniques can be developed to capture key performance measures of interest from large CTMCs, e.g. passage-time measures and reward-based measures.
[12] Jane Hillston. Fluid flow approximation of PEPA models. In: Second International Conference on the Quantitative Evaluation of Systems (QEST). IEEE, Sept. 2005. [13] Michel Benaïm and Jean-Yves Le Boudec. A class of mean field interaction models for computer and communication systems. In: Performance Evaluation (Nov. 2008). [14] Marco Gribaudo. Analysis of Large Populations of Interacting Objects with Mean Field and Markovian Agents. In: 6th European Performance Engineering Workshop (EPEW).

A simple agent, replicated
Fluid/mean field analysis works best when you have many replicated parallel agents or groups of replicated parallel agents. Agent groups can synchronise.

GPEPA Syntax
GPEPA, or Grouped PEPA, has a syntax that is suspiciously similar to that of PEPA.
A sequential agent, P, can have the following syntax:
    P ::= (a, λ).P    SPA Markovian prefix
        | P + P       SPA competitive choice
        | C           agent name, where C = def P
Sequential agents allow a modeller to define behaviour with associated exponential delays.

GPEPA Syntax
For parallelism and communication between sequential agents, we need compositional agents.
A compositional agent, Q, can have the following syntax:
    Q ::= Q ⋈_L Q         group cooperation
        | Group{P[n]}     parallel grouping
where P[n] represents a parallel group of n sequential agents P, and Group is a group label used to identify the parts of the model that are going to be approximated using fluid analysis.

GPEPA Example
Client = def (req, r_req).Client_waiting                                (count C(t))
Client_waiting = def (data, r_data).Client_think                        (count C_w(t))
Client_think = def (think, r_think).Client                              (count C_t(t))
Server = def (req, r_req).Server_get + (break, r_break).Server_broken   (count S(t))
Server_get = def (data, r_data).Server                                  (count S_g(t))
Server_broken = def (reset, r_reset).Server                             (count S_b(t))
CS(c, s) = Clients{Client[c]} ⋈_{req,data} Servers{Server[s]}

ODEs: Means
Ideally, we want the distribution of, say, C(t) for each t.
This can be too expensive.
We can instead derive ODEs approximating the means:
    dE[C(t)]/dt = …      dE[S(t)]/dt = …
    dE[C_w(t)]/dt = …    dE[S_g(t)]/dt = …
    dE[C_t(t)]/dt = …    dE[S_b(t)]/dt = …
These can be solved numerically, more cheaply than simulation.
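The right-hand sides of these mean ODEs are not written out here. A plausible reconstruction for the client/server model, following the standard fluid-flow interpretation of PEPA cooperation (shared actions proceed at the minimum of the two apparent rates), integrated by forward Euler; the rate values are illustrative assumptions, not taken from the slides:

```python
# Forward-Euler integration of the mean-field ODEs for the client/server
# model. The right-hand sides follow the fluid-flow interpretation of
# cooperation (min of apparent rates); rates are illustrative.
r_req, r_data, r_think = 2.0, 1.0, 0.5
r_break, r_reset = 0.1, 0.2

def derivs(C, Cw, Ct, S, Sg, Sb):
    req = min(r_req * C, r_req * S)       # shared 'req' action
    data = min(r_data * Cw, r_data * Sg)  # shared 'data' action
    dC  = -req + r_think * Ct
    dCw = req - data
    dCt = data - r_think * Ct
    dS  = -req + data - r_break * S + r_reset * Sb
    dSg = req - data
    dSb = r_break * S - r_reset * Sb
    return dC, dCw, dCt, dS, dSg, dSb

# Initial populations: 100 clients, 60 servers.
state = [100.0, 0.0, 0.0, 60.0, 0.0, 0.0]
dt = 0.001
for _ in range(int(10 / dt)):             # integrate to t = 10
    d = derivs(*state)
    state = [x + dt * dx for x, dx in zip(state, d)]

C, Cw, Ct, S, Sg, Sb = state
```

Note that the derivatives in each group sum to zero, so the total client and server populations are conserved exactly; this is a useful sanity check on any reconstructed ODE system.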

ODEs: Means
[Figure: ODE solutions for the mean counts E[C(t)], E[C_w(t)], E[C_t(t)] and E[S(t)], E[S_g(t)], E[S_b(t)] against time t.]

ODEs: Higher moments
The ODEs for the means can be extended with ODEs for higher moments:
    dE[S_g(t)²]/dt = …    dE[C(t)S(t)]/dt = …
E.g. we can then obtain the variance as Var[C(t)] = E[C(t)²] − E[C(t)]².

[Figure: mean component counts against time t, now with the spread obtained from the higher-moment ODEs.]

Accumulated rewards
How do we analyse quantities accumulated over time, e.g. energy consumption?
Servers consume energy in the Server_get state.
The total energy consumption is the process ∫₀ᵗ S_g(u) du.
[Figure: a single server's state over time (Server, Server_get, Server_broken), and the aggregate count S_g(t) against time t.]

150 39/60 ODEs moments of rewards
Can extend the ODEs for moments of counts

  dE[C(t)]/dt, dE[S(t)²]/dt, dE[C_w(t)S_g(t)]/dt, dE[S_g(t)³]/dt, ...

with ODEs for the mean accumulated rewards

  dE[∫₀ᵗ S_g(u) du]/dt = E[S_g(t)]
  dE[∫₀ᵗ S(u)C(u) du]/dt = E[S(t)C(t)]

and ODEs for higher moments of accumulated rewards

  dE[(∫₀ᵗ S_g(u) du)²]/dt = 2 E[S_g(t) ∫₀ᵗ S_g(u) du]
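The ODE for the second moment of an accumulated reward rests on the chain-rule identity d/dt (∫₀ᵗ f(u) du)² = 2 f(t) ∫₀ᵗ f(u) du. A quick numerical sanity check, with an arbitrary nonnegative integrand standing in for S_g(t):

```python
# Verify d/dt (F(t))^2 = 2 f(t) F(t) for F(t) = integral of f over [0, t],
# by integrating both sides with forward Euler and comparing.
import math

dt = 1e-4
F = 0.0          # running integral of f
G = 0.0          # integrates 2 f F; should track F**2
t = 0.0
for _ in range(int(2.0 / dt)):
    f = math.sin(t) ** 2 + 1.0   # arbitrary nonnegative integrand
    G += dt * 2.0 * f * F        # left-endpoint rule, matching F below
    F += dt * f
    t += dt
```

After the loop, G agrees with F² up to the O(dt) discretisation error.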

152 40/60 ODEs moments of rewards
[Figure: mean accumulated reward for E[C(t)] against time t.]

153 GPA Grouped PEPA Analyser 41/60

154 42/60 Why tool?
The auto-generated system of moment ODEs for the client/server model runs to dozens of coupled equations, one contribution per moment and transition, for example

  dE[C(t)S_g(t)]/dt += −1.0 · min(E[C(t)C_w(t)] · r_data, E[C(t)S_g(t)] · r_data)
  dE[S_g(t)²]/dt    += −2.0 · min(E[S_g(t)C_w(t)] · r_data, E[S_g(t)²] · r_data)
  dE[S(t)S_b(t)]/dt += E[S_b(t)²] · r_reset
  dE[C(t)²]/dt      += min(E[C(t)] · r_req, E[S(t)] · r_req)
  ...

and so on for every pair of counts and every transition class.

162 43/60 GPA
[Diagram:]
  GPEPA model --generates--> ODEs --numerically solves--> Moments (approximate) --> Measures, passage times, etc.
  GPEPA model --generates--> Simulation --simulates--> Moments --> Measures, passage times, etc.
Comparing the two paths gives error estimates.

163 44/60 Grouped PEPA analyser
Convenient syntax

  rreq = 2.0; rthink = 0.2; ...
  c = 100.0; s = 50.0;

  Client         = (request,rreq).Client_waiting;
  Client_waiting = (data,rdata).Client_think;
  Client_think   = (think,rthink).Client;

  Server         = (request,rreq).Server_get + (break,rbreak).Server_broken;
  Server_get     = (data,rdata).Server;
  Server_broken  = (reset,rreset).Server;

  Clients{Client[c]}<request,data>Servers{Server[s]}

167 45/60 GPA commands
Analyses

  odes(stoptime=5.0, stepsize=0.01, density=10){...}
  simulation(stoptime=5.0, stepsize=0.01, repl.=1000){...}
  comparison(odes(...){...}, simulation(...){...}){...}

Plot commands, counts specified with Group:Component

  plot(E[Clients:Client], E[acc(Clients:Client)]);
  plot(E[acc(Clients:Client) Servers:Server_get^2]);
  plot(Var[Clients:Client]);
  plot(E[Clients:Client]^2.0 + Var[Servers:Server]/s);
  plotswitchpoints(1);

170 46/60 GPA passage times
Allows general PEPA components

  NotPassed = (think,rthink).Passed;
  Passed    = (think,rthink).Passed;
  ObservedClient = Client<think>NotPassed;

For the CDF of the first passage of a client

  E[C^P(t) + C_w^P(t) + C_t^P(t)]/c

where the superscript P marks client states whose attached observer has reached Passed.
Can use command

  plot(E[Clients:_<*>Passed]/c);

For an upper bound on the CDF of the first passage of 1/10-th of the clients

  plot(Var[Clients:_<*>Passed]
       /(Var[Clients:_<*>Passed]+(E[Clients:_<*>Passed]-c/10.0)^2.0));
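The upper-bound expression in the last plot command has the shape of the one-sided Chebyshev (Cantelli) inequality: P(X ≥ k) ≤ Var[X]/(Var[X] + (E[X] − k)²) whenever k > E[X]. A small sketch, checked against an exact tail probability (the uniform test distribution is an arbitrary choice for illustration):

```python
# Cantelli (one-sided Chebyshev) upper bound on a tail probability,
# the same shape as the moment-derived CDF bound in the plot command.
def cantelli_upper(m, v, k):
    """Upper bound on P(X >= k) given mean m, variance v, threshold k > m."""
    assert k > m and v >= 0.0
    return v / (v + (m - k) ** 2)

# Exact check against the uniform distribution on {0, ..., 9}:
xs = list(range(10))
m = sum(xs) / 10.0                            # 4.5
v = sum((x - m) ** 2 for x in xs) / 10.0      # 8.25
p_tail = sum(1 for x in xs if x >= 8) / 10.0  # exact P(X >= 8) = 0.2
```

Here the bound cantelli_upper(m, v, 8.0) is guaranteed to dominate the exact tail probability p_tail.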

171 47/60 GPA passage times
[Figure: (a) individual passage-time CDF for a client's first passage; (b) global passage time until c/10 first passages, showing upper bound, exact CDF and lower bound, probability against time t.]

174 48/60 GPA completion times

  bounds(acc(Servers:Server_get), 100.0, 2, 4, 6);

[Figure: bounds on the completion-time CDF of ∫₀ᵗ S_g(u) du reaching 100, tightening as 2, 4 and 6 moments are used.]

178 49/60 Rapid analysis of very large scale parallel systems
Summary
  large numbers of identical components described in GPEPA
  ODEs, efficient implementation
  moments of: counts, rewards, passage and completion times
  error estimates
  split-free / splitting models
  applied to real systems
Current work
  parallel solvers, hybrid solutions
  impulse rewards, time-correlated moments
  optimisation, nature of error
  new formalism: complex synch., guards, functional rates, phase type

179 50/60 GPA: Download for free
GPA tool [11]:
[11] Anton Stefanek, Richard A. Hayden, and Jeremy T. Bradley. A new tool for the performance analysis of massively parallel computer systems. In: Eighth Workshop on Quantitative Aspects of Programming Languages (QAPL 2010), March 27-28, 2010, Paphos, Cyprus. Electronic Proceedings in Theoretical Computer Science, March 2010.

180 Fluid ODE generation using Population CTMCs 51/60

183 52/60 Population CTMCs
A population continuous time Markov chain (PCTMC) consists of a finite set of components {1,...,N} and a set T of transition classes.
Each state in a PCTMC is expressed as an integer vector X = (X_1,...,X_N) ∈ Z^N, where X_i represents the current population level of component i.

187 53/60 PCTMCs: Transition classes
A transition class c = (r_c, e_c) ∈ T describes a stochastic event

  Event c: X(t) → X(t′) at rate r_c

1. with exponentially distributed duration D at rate r_c(X(t)), where r_c : Z^N → R is a rate function
2. which changes the current population vector according to the change vector e_c

This gives us the following population dynamics:

  Event c: X(t + D) = X(t) + e_c,   D ~ Exp(r_c(X(t)))

189 54/60 PCTMCs: Chemical reactions
Similar to a chemical reaction:

  s_1 + ··· + s_k → t_1 + ··· + t_l   at rate r(X)

The change vector for this reaction would involve k entries of −1 (the consumed species) followed by l entries of +1 (the produced species):

  e_c = (−1, ..., −1, 1, ..., 1, 0, ..., 0)

191 PCTMCs: Mean dynamics
An important aspect of PCTMC models is that we can easily generate approximations to the evolution of the underlying stochastic process. [15]
In particular, the equation for the mean of population X_i(t) is:

  dE[X_i(t)]/dt = Σ_{(r_j, e_j) ∈ T} e_{ji} E[r_j(X(t))]

[15] Anton Stefanek, Richard A. Hayden, and Jeremy T. Bradley. Fluid analysis of energy consumption using rewards in massively parallel Markov models. In: 2nd ACM/SPEC International Conference on Performance Engineering (ICPE), 2011. 55/60
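This mean equation can be stepped directly from a list of transition classes. A minimal sketch follows, using the common mean-field simplification of evaluating each rate function at the mean (r_j(E[X(t)]) in place of E[r_j(X(t))]); the two-state birth-death model and its rates are hypothetical, chosen so the fixed point is known.

```python
# Step the mean-field approximation dE[X_i]/dt = sum_j e_{ji} r_j(E[X(t)])
# for a PCTMC given as a list of (rate function, change vector) pairs.
def step_means(x, transitions, dt):
    dx = [0.0] * len(x)
    for rate, change in transitions:
        r = rate(x)
        for i, e in enumerate(change):
            dx[i] += e * r
    return [xi + dt * di for xi, di in zip(x, dx)]

# Hypothetical two-state birth-death model:
transitions = [
    (lambda x: 2.0 * x[0], (-1, +1)),   # X0 -> X1 at rate 2 X0
    (lambda x: 1.0 * x[1], (+1, -1)),   # X1 -> X0 at rate X1
]
x = [30.0, 0.0]
for _ in range(10000):                  # integrate to t = 10
    x = step_means(x, transitions, 1e-3)
# at equilibrium 2 x0 = x1 with x0 + x1 = 30, so x -> (10, 20)
```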

193 56/60 ODE-based dynamics
More generally, PCTMCs permit the derivation of moments of the underlying stochastic process, i.e. moments of population levels:

  dE[M(X(t))]/dt = E[f_M(X(t))]

where M(X) defines the moment to be calculated.

  Mean of component 1:                     M(X) = X_1
  2nd moment of component 1:               M(X) = X_1²
  2nd joint moment of components 1 and 2:  M(X) = X_1 X_2

196 Higher moments
The higher moment function is defined as: [16]

  f_M(X(t)) = Σ_{c ∈ T} (M(X(t) + e_c) − M(X(t))) r_c(X(t))

Key issue: achieving a closed set of equations, with each quantity on the right-hand side of the ODEs having a corresponding ODE.
Leads to different dynamics: mean-field, mass action, min-closure, log-normal closure.
[16] Anton Stefanek. Efficient Computation of Performance Energy Trade-offs in Large Scale Markov Models. PhD thesis. Department of Computing, Imperial College London. 57/60

205 58/60 Worked example: GPEPA

  Client         = def (req, r_req).Client_waiting              [count C(t)]
  Client_waiting = def (data, r_data).Client_think              [count C_w(t)]
  Client_think   = def (think, r_think).Client                  [count C_t(t)]

  Server         = def (req, r_req).Server_get
                       + (break, r_break).Server_broken         [count S(t)]
  Server_get     = def (data, r_data).Server                    [count S_g(t)]
  Server_broken  = def (reset, r_reset).Server                  [count S_b(t)]

  CS(c, s) = Clients{Client[c]} <req,data> Servers{Server[s]}

212 59/60 Worked example: PCTMC
In total, there are 5 transition classes:

  req:   C(t) + S(t) → C_w(t) + S_g(t)    at r_req min(C(t), S(t))
  data:  C_w(t) + S_g(t) → C_t(t) + S(t)  at r_data min(C_w(t), S_g(t))
  think: C_t(t) → C(t)                    at r_think C_t(t)
  break: S(t) → S_b(t)                    at r_break S(t)
  reset: S_b(t) → S(t)                    at r_reset S_b(t)

Then apply the PCTMC ODE generation rules to get a fluid GPEPA model.
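The same five transition classes also drive an exact stochastic simulation of the PCTMC, which is what GPA's simulation path numerically averages. A minimal Gillespie-style (SSA) sketch, with illustrative rate constants (not values from the slides):

```python
# Gillespie simulation of the five transition classes of the
# client/server PCTMC.  Rate constants are illustrative assumptions.
import random

def gillespie(c=100, s=50, r_req=2.0, r_data=1.0, r_think=0.2,
              r_break=0.1, r_reset=1.0, t_end=10.0, seed=0):
    rng = random.Random(seed)
    C, Cw, Ct, S, Sg, Sb = c, 0, 0, s, 0, 0
    t = 0.0
    while True:
        rates = [r_req * min(C, S), r_data * min(Cw, Sg),
                 r_think * Ct, r_break * S, r_reset * Sb]
        total = sum(rates)
        if total == 0.0:
            break                      # no enabled transition
        t += rng.expovariate(total)    # exponential holding time
        if t > t_end:
            break
        u, k = rng.random() * total, 0 # pick a class proportionally to rate
        while k < len(rates) - 1 and u > rates[k]:
            u -= rates[k]
            k += 1
        if k == 0:   C, S, Cw, Sg = C - 1, S - 1, Cw + 1, Sg + 1  # req
        elif k == 1: Cw, Sg, Ct, S = Cw - 1, Sg - 1, Ct + 1, S + 1  # data
        elif k == 2: Ct, C = Ct - 1, C + 1                          # think
        elif k == 3: S, Sb = S - 1, Sb + 1                          # break
        else:        Sb, S = Sb - 1, S + 1                          # reset
    return C, Cw, Ct, S, Sg, Sb

state = gillespie()
```

Every change vector preserves the client and server totals, so the two subpopulations remain at their initial sizes throughout a run.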

213 Even more exciting fluid analysis 60/60

214 Scalable passage-time analysis 12/30

221 13/30 Scalable passage-time analysis
Passage-time distributions are key for specifying service level agreements (SLAs), e.g.: [7,8]
  a file should be transferred within 2 seconds, 95% of the time
  a connection should be established within 0.25 seconds, 99% of the time
We consider two classes of passage-time query:
  Individual passage times: track the time taken for an individual to complete a task (direct approximation to the entire CDF)
  Global passage times: track the time taken for all of a large number of individuals to complete a task (moment-derived bounds on the CDF)
[7] Richard A. Hayden, Anton Stefanek, and Jeremy T. Bradley. Fluid computation of passage time distributions in large Markov models. In: Theoretical Computer Science (2012).
[8] Richard A. Hayden, Jeremy T. Bradley, and Allan Clark. Performance Specification and Evaluation with Unified Stochastic Probes and Fluid Analysis. In: IEEE Transactions on Software Engineering (2012).

229 14/30 Individual passage times [7]
[Diagram: N_C clients (states Client, Client_wait, Client_proc, actions req/res/proc) cooperating with N_S servers (states Server, Server_fail, actions req/res/fail/reset); one client is tagged and tracked as C(t).]
How long does it take a single client to make a request, receive a response and process it?
Track the tagged client's state C(t), with primed copies Client′, Client′_wait, Client′_proc of its states once the passage has completed:

  T := inf{t ≥ 0 : C(t) = Client′}, given that C(0) = Client

  P{T ≤ t} = P{C(t) ∈ {Client′, Client′_wait, Client′_proc}}
           = E[1_{C(t)=Client′}] + E[1_{C(t)=Client′_wait}] + E[1_{C(t)=Client′_proc}]
           = E[N_Client′(t)] + E[N_Client′_wait(t)] + E[N_Client′_proc(t)]
  P{T ≤ t} ≈ v_Client′(t) + v_Client′_wait(t) + v_Client′_proc(t)
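The tagged client's state probabilities satisfy forward (Kolmogorov) ODEs, and the passage CDF is the probability mass in the passed states. A toy sketch with a hypothetical two-step chain Client → Waiting → Passed at a constant rate a, whose passage time has a known (Erlang-2) CDF to compare against:

```python
# Forward-equation computation of an individual passage-time CDF
# for a toy chain Client -> Waiting -> Passed at constant rate a.
import math

a = 1.5
dt = 1e-4
p = [1.0, 0.0, 0.0]            # probabilities of [Client, Waiting, Passed]
for _ in range(int(2.0 / dt)):
    flow01 = a * p[0]          # Client -> Waiting
    flow12 = a * p[1]          # Waiting -> Passed (absorbing)
    p[0] += dt * (-flow01)
    p[1] += dt * (flow01 - flow12)
    p[2] += dt * flow12
# P{T <= 2} is p[2]; the closed form is the Erlang-2 CDF:
exact = 1.0 - math.exp(-a * 2.0) * (1.0 + a * 2.0)
```

With constant rates the comparison is exact up to discretisation error; in the fluid setting the tagged client's rates would instead depend on the ODE solution for the rest of the population.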

230 15/30 Example individual passage time
[Figure: individual passage-time CDFs for N_C/N_S = 10/6, 20/12, 50/30, 100/60, 200/120, together with the ODE approximation, probability against time t.]

236 16/30 Global passage times [7,9]
How long does it take for half of the clients to make a request, receive a response and process it?

  T := inf{t ≥ 0 : N_Client′(t) + N_Client′_wait(t) + N_Client′_proc(t) ≥ N_C/2}

Point-mass approximation:

  T ≈ inf{t ≥ 0 : v_Client′(t) + v_Client′_wait(t) + v_Client′_proc(t) ≥ N_C/2}

[Figure: point-mass approximation against simulated global passage-time CDFs for N_C/N_S = 10/6, 20/12, 50/30, 300/180, 500/300, probability against time t.]
The approximation is very coarse, and cannot be applied directly to the same question for all clients.
[9] Rena Bakhshi et al. Mean-Field Analysis for the Evaluation of Gossip Protocols. In: QEST 09, Proceedings of the 6th IEEE Conference on the Quantitative Evaluation of Systems. IEEE Computer Society, 2009.

Global passage times: moment bounds
Moment approximations to the component counts contain information about the distribution of T [7].
Reduced moment problem: find maximum and minimum bounding distributions subject to limited moment information [10].
[10] Arpad Tari, Miklos Telek, and Peter Buchholz. A Unified Approach to the Moments-based Distribution Estimation, Unbounded Support. In: EPEW '05, European Performance Engineering Workshop, Lecture Notes in Computer Science. Springer, 2005.
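The reduced moment problem of [10] computes tight bounds; a much cruder illustration of the same idea uses only the first two moments of T and the one-sided Chebyshev (Cantelli) inequality to sandwich the passage-time CDF. This is a sketch of the principle, not the method of [10]:

```python
def cantelli_cdf_bounds(mu, m2, t):
    """Bound F(t) = P(T <= t) given only E[T] = mu and E[T^2] = m2,
    via the one-sided Chebyshev (Cantelli) inequality."""
    var = m2 - mu * mu
    if t >= mu:
        # P(T > t) <= var / (var + (t - mu)^2)  =>  lower bound on F(t)
        return 1.0 - var / (var + (t - mu) ** 2), 1.0
    # P(T <= t) <= var / (var + (mu - t)^2)  =>  upper bound on F(t)
    return 0.0, var / (var + (mu - t) ** 2)
```

For example, a passage time with E[T] = 1, E[T^2] = 2 must satisfy F(3) >= 0.8, whatever its distribution; adding higher moments, as on the next slides, tightens such bounds considerably.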

Global passage bounds from first moments [7]
[figures: upper and lower CDF bounds on the passage time for half of the clients, three quarters of the clients, and all of the clients]

Global passage bounds from higher moments [7]
[figure: CDF bounds tightening as 1st-, 2nd- and 4th-order moment information is added]

Scalable analysis of accumulated reward measures

Accumulated reward measures [11]
Cost, energy, heat, ... accumulate at a constant rate per component state: a server accrues reward at rate r_S while idle (Server) and at rate r_Sp while processing (Server_proc).
[figure: a sample path of a single server's state, and the counts N_S(t) and N_Sp(t), against time]
total energy(t) = r_S Int_0^t N_S(u) du + r_Sp Int_0^t N_Sp(u) du
[11] Anton Stefanek, Richard A. Hayden, and Jeremy T. Bradley. Fluid analysis of energy consumption using rewards in massively parallel Markov models. In: 2nd ACM/SPEC International Conference on Performance Engineering (ICPE), 2011.
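A sketch of how such a reward integral is evaluated along a fluid trajectory, for a hypothetical two-state (idle/processing) server fluid model. The rates a, b and power draws r_s, r_sp are invented for illustration; the accumulated energy is the running integral of the instantaneous power:

```python
def fluid_energy(n_s=20, a=1.0, b=2.0, r_s=1.0, r_sp=3.0, t_end=10.0, dt=1e-3):
    """total_energy(t) ~ r_S * Int v_S + r_Sp * Int v_Sp, accumulated
    alongside an idle <-> processing fluid server model."""
    v_s, v_sp, energy = float(n_s), 0.0, 0.0
    for _ in range(int(t_end / dt)):
        power = r_s * v_s + r_sp * v_sp    # instantaneous power draw
        energy += power * dt               # accumulate the reward integral
        flow = (a * v_s - b * v_sp) * dt   # net idle -> processing flow
        v_s -= flow
        v_sp += flow
    return energy
```

Because the total server count is conserved, the result is bracketed by the all-idle and all-processing extremes r_s * n_s * t_end and r_sp * n_s * t_end.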

Moment approximations of accumulated rewards [11]
[figure: E[Int_0^t N_Sp(u) du] against time]
Simulation is also very costly for rewards. The ODE system for count moments can be extended with ODEs for moments of the accumulated counts, e.g. for d/dt E[Int_0^t N_Sp(u) du * Int_0^t N_S(u) du].

Moment approximations of accumulated rewards [11]
First-order moments:
  d/dt E[Int_0^t N_S(u) du] = E[N_S(t)]
coupled to the existing count-moment ODE for d/dt E[N_S(t)].
Second-order moments:
  d/dt E[(Int_0^t N_S(u) du)^2] = 2 E[N_S(t) Int_0^t N_S(u) du]
Combined moments:
  d/dt E[N_S(t) Int_0^t N_S(u) du] = E[(drift of N_S) * Int_0^t N_S(u) du] + E[N_S^2(t)]
coupled to second-order count-moment ODEs such as d/dt E[N_S(t) N_Sp(t)].
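For a process with constant drift this system closes exactly and can be checked against a known answer. Taking N(t) to be a Poisson process of rate lam (a stand-in chosen because the closed form E[(Int_0^t N(u) du)^2] = lam^2 t^4 / 4 + lam t^3 / 3 follows from Campbell's theorem), Euler integration of the five coupled moment ODEs reproduces it:

```python
def accumulated_moment_odes(lam=2.0, t_end=1.0, dt=1e-4):
    """Integrate the coupled count/accumulation moment ODEs for a Poisson
    process (constant drift lam, so moment closure is exact) and return
    E[(Int_0^t N(u) du)^2] at t_end."""
    n1 = n2 = m1 = c = m2 = 0.0  # E[N], E[N^2], E[Int N], E[N Int N], E[(Int N)^2]
    for _ in range(int(round(t_end / dt))):
        dn1 = lam                  # d/dt E[N] = lam
        dn2 = 2 * lam * n1 + lam   # d/dt E[N^2] for a Poisson process
        dm1 = n1                   # d/dt E[Int N du] = E[N(t)]
        dc = lam * m1 + n2         # d/dt E[N Int N] = E[drift * Int N] + E[N^2]
        dm2 = 2 * c                # d/dt E[(Int N)^2] = 2 E[N Int N]
        n1 += dn1 * dt
        n2 += dn2 * dt
        m1 += dm1 * dt
        c += dc * dt
        m2 += dm2 * dt
    return m2
```

For general models the drift term couples these ODEs to the count-moment system, which is where closure approximations enter.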

Trade-off between energy and performance
[client/server model as before, with N_C clients and N_S servers]
[figures: probability that a client is serviced before t, with an SLA marker at (7 s, 0.99); expected total energy E[total energy(t)] against time]

Trade-off between energy and performance
The server component is extended with a sleep state: servers move between Server and Server_sleep via sleep/wakeup transitions to save energy.
[figures: client service probability and E[total energy(t)] for the extended model]

Trade-off between energy and performance
Scalable analysis allows exploration of many configurations (N_S, sleep rate): minimise energy consumption while satisfying SLAs.
Individual passage-time SLA: clients must finish in at most 7 s, 99% of the time.
[figure: energy consumption over the (N_S, r_sleep) grid, with the region where the SLA is met]

Trade-off between energy and performance
Tightening the SLA: clients must finish in at most 7 s, 99.5% of the time.
[figure: energy consumption over the (N_S, r_sleep) grid, with the region where the tightened SLA is met]
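The configuration sweep itself is just an optimisation over the fluid-analysis outputs: evaluate each (N_S, r_sleep) pair, keep those meeting the SLA, and take the cheapest. The surrogate energy and SLA functions below are invented stand-ins for the real fluid results, used only to make the sweep runnable:

```python
def best_config(configs, energy_of, sla_of, target=0.99):
    """Exhaustive sweep: cheapest configuration whose individual
    passage-time SLA probability meets the target."""
    feasible = [c for c in configs if sla_of(*c) >= target]
    return min(feasible, key=lambda c: energy_of(*c)) if feasible else None

def energy_of(n_s, r_sleep):
    # hypothetical surrogate: more servers cost more, sleeping saves energy
    return 10.0 * n_s / (1.0 + r_sleep)

def sla_of(n_s, r_sleep):
    # hypothetical surrogate: more servers help the SLA, sleeping hurts it
    return min(1.0, 0.9 + 0.01 * n_s - 0.05 * r_sleep)

grid = [(n_s, r) for n_s in (4, 6, 8, 10, 12) for r in (0.0, 0.5, 1.0, 2.0)]
print(best_config(grid, energy_of, sla_of))  # -> (12, 0.5)
```

With a scalable analysis each grid point costs one ODE solve, so such sweeps stay cheap even for fine grids.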


Non-Markovian models

Non-Markovian models
Distributions more general than the exponential are required to construct realistic models, for example:
Deterministic timeouts in protocols or hardware
Heavy-tailed service-time distributions
Phase-type approximation is one approach, but it can lead to a significant increase in a component's local state-space size: a 100-phase Erlang approximation to a deterministic distribution of duration 1 still places about 32% of its probability mass outside [0.9, 1.1].
In the case of deterministic distributions, the mean-field approach can instead be generalised using delay differential equations.
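The 32% figure can be checked with nothing more than the identity P(Erlang(k, rate) <= x) = P(Poisson(rate * x) >= k):

```python
import math

def erlang_cdf(k, rate, x):
    """P(Erlang(k, rate) <= x): the k-th arrival of a Poisson process of
    the given rate has occurred by x iff at least k arrivals fell in [0, x]."""
    mu = rate * x
    term = math.exp(-mu)   # Poisson i = 0 term
    acc = term
    for i in range(1, k):  # accumulate P(Poisson(mu) < k)
        term *= mu / i
        acc += term
    return 1.0 - acc

# Mass a 100-phase Erlang of mean 1 (rate 100) puts outside [0.9, 1.1]:
outside = erlang_cdf(100, 100.0, 0.9) + (1.0 - erlang_cdf(100, 100.0, 1.1))
print(round(outside, 3))
```

The standard deviation of an Erlang with k phases and mean 1 is 1/sqrt(k), so even 100 phases leave roughly the two-sided one-sigma Gaussian tail mass (about 0.32) outside a 10% window, which is the point of the slide.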

Software update model with deterministic timeouts [12]
[model diagram: old nodes (states c, d, e) and updated nodes (states a, b); transitions with rates rho, lambda and beta N_a(t)/N, plus a deterministic timeout of duration gamma]
[12] Richard A. Hayden. Mean-field approximations for performance models with generally-timed transitions. ACM SIGMETRICS Performance Evaluation Review 39.3 (2011).

The exact equation for the expected count in state c:
  d/dt E[N_c(t)] = -rho E[N_c(t)] - (beta/N) E[N_c(t) N_a(t)] + lambda E[N_e(t)]
                   - 1{t >= gamma} E[ lambda N_e(t - gamma) exp( -Int_{t-gamma}^t beta N_a(s)/N ds ) ] exp(-rho gamma)
Here lambda N_e(t - gamma) is the rate of deterministic clocks starting at t - gamma, and the exponential factors give the probability that the timeout occurs before the node is updated or goes off.

Applying mean-field approximations E[N_c N_a] ~ E[N_c] E[N_a] and moving the expectation inside the exponential:
  d/dt E[N_c(t)] ~ -rho E[N_c(t)] - (beta/N) E[N_c(t)] E[N_a(t)] + lambda E[N_e(t)]
                   - 1{t >= gamma} lambda E[N_e(t - gamma)] exp( -Int_{t-gamma}^t beta E[N_a(s)]/N ds ) exp(-rho gamma)

This yields a closed system of delay differential equations for the fluid approximations v:
  d/dt v_c(t) = -rho v_c(t) - (beta/N) v_c(t) v_a(t) + lambda v_e(t)
                - 1{t >= gamma} lambda v_e(t - gamma) exp( -Int_{t-gamma}^t beta v_a(s)/N ds ) exp(-rho gamma)
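Delayed terms such as v_e(t - gamma) make this a delay differential equation, which a fixed-step Euler scheme handles by keeping a history of past values. The sketch below integrates a hypothetical scalar analogue (the rates rho, lam and the delayed self-feeding term are invented), not the full five-state model:

```python
import math

def euler_dde(f, v0, gamma, t_end, dt):
    """Fixed-step Euler for v'(t) = f(v(t), v(t - gamma)),
    with constant history v(t) = v0 for t <= 0."""
    lag = int(round(gamma / dt))
    hist = [v0]                    # hist[i] = v(i * dt)
    v = v0
    for step in range(int(round(t_end / dt))):
        delayed = hist[step - lag] if step >= lag else v0
        v += f(v, delayed) * dt
        hist.append(v)
    return v

# Scalar analogue of the delayed-timeout structure: exponential decay fed
# by its own gamma-delayed past, discounted by exp(-rho * gamma).
rho, lam, gamma = 1.0, 0.5, 2.0
v_end = euler_dde(lambda v, vd: -rho * v + lam * vd * math.exp(-rho * gamma),
                  v0=1.0, gamma=gamma, t_end=10.0, dt=1e-3)
```

Because the delayed feed-in rate lam * exp(-rho * gamma) is smaller than the decay rate rho here, the trajectory decays monotonically, mirroring the stable behaviour seen in the software update plots.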

Software update model with deterministic timeouts
[figures: rescaled component counts of nodes in states c, d, e and in states a, b against time]

Summary
Fluid analysis provides a scalable analysis framework for massively-parallel performance models that is able to capture:
Arbitrary moments of component counts
Passage-time measures
Accumulated reward measures
Certain forms of non-Markovian timing
with an implementation in the freely-available GPA tool.

Thank you!
Many thanks to Richard Hayden and Anton Stefanek for their expertise with pgf and pgfplots and their help with this presentation. They also did a substantial portion of the research!


More information

A tool for the numerical solution of cooperating Markov chains in product-form

A tool for the numerical solution of cooperating Markov chains in product-form HET-NETs 2010 ISBN XXX XXX pp. xx xx A tool for the numerical solution of cooperating Markov chains in product-form SIMONETTA BALSAMO GIAN-LUCA DEI ROSSI ANDREA MARIN a a Università Ca Foscari di Venezia

More information

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K "

Queueing Theory I Summary! Little s Law! Queueing System Notation! Stationary Analysis of Elementary Queueing Systems  M/M/1  M/M/m  M/M/1/K Queueing Theory I Summary Little s Law Queueing System Notation Stationary Analysis of Elementary Queueing Systems " M/M/1 " M/M/m " M/M/1/K " Little s Law a(t): the process that counts the number of arrivals

More information

Modelling Non-linear Crowd Dynamics in Bio-PEPA

Modelling Non-linear Crowd Dynamics in Bio-PEPA Modelling Non-linear Crowd Dynamics in Bio-PEPA Mieke Massink 1, Diego Latella 1, Andrea Bracciali 1,3, and Jane Hillston 2 Istituto di Scienza e Tecnologie dell Informazione A. Faedo, CNR, Italy 1 School

More information

A Markov Reward Model for Software Reliability

A Markov Reward Model for Software Reliability A Markov Reward Model for Software Reliability YoungMin Kwon and Gul Agha Open Systems Laboratory Department of Computer Science University of Illinois at Urbana Champaign {ykwon4, agha}@cs.uiuc.edu ABSTRACT

More information

The purpose of this paper is to demonstrate the exibility of the SPA

The purpose of this paper is to demonstrate the exibility of the SPA TIPP and the Spectral Expansion Method I. Mitrani 1, A. Ost 2, and M. Rettelbach 2 1 University of Newcastle upon Tyne 2 University of Erlangen-Nurnberg Summary. Stochastic Process Algebras (SPA) like

More information

Performance Modelling of Computer Systems

Performance Modelling of Computer Systems Performance Modelling of Computer Systems Mirco Tribastone Institut für Informatik Ludwig-Maximilians-Universität München Fundamentals of Queueing Theory Tribastone (IFI LMU) Performance Modelling of Computer

More information

Q = (c) Assuming that Ricoh has been working continuously for 7 days, what is the probability that it will remain working at least 8 more days?

Q = (c) Assuming that Ricoh has been working continuously for 7 days, what is the probability that it will remain working at least 8 more days? IEOR 4106: Introduction to Operations Research: Stochastic Models Spring 2005, Professor Whitt, Second Midterm Exam Chapters 5-6 in Ross, Thursday, March 31, 11:00am-1:00pm Open Book: but only the Ross

More information

Estimation of mean response time of multi agent systems

Estimation of mean response time of multi agent systems Estimation of mean response time of multi agent systems Tomasz Babczy ski and Jan Magott Institute of Computer Engineering, Control and Robotics Wroc aw University of Technology Tomasz.Babczynski@pwr.wroc.pl

More information

Modelling in Systems Biology

Modelling in Systems Biology Modelling in Systems Biology Maria Grazia Vigliotti thanks to my students Anton Stefanek, Ahmed Guecioueur Imperial College Formal representation of chemical reactions precise qualitative and quantitative

More information

Stochastic Transition Systems for Continuous State Spaces and Non-determinism

Stochastic Transition Systems for Continuous State Spaces and Non-determinism Stochastic Transition Systems for Continuous State Spaces and Non-determinism Stefano Cattani 1, Roberto Segala 2, Marta Kwiatkowska 1, and Gethin Norman 1 1 School of Computer Science, The University

More information

ADVANCED ROBOTICS. PLAN REPRESENTATION Generalized Stochastic Petri nets and Markov Decision Processes

ADVANCED ROBOTICS. PLAN REPRESENTATION Generalized Stochastic Petri nets and Markov Decision Processes ADVANCED ROBOTICS PLAN REPRESENTATION Generalized Stochastic Petri nets and Markov Decision Processes Pedro U. Lima Instituto Superior Técnico/Instituto de Sistemas e Robótica September 2009 Reviewed April

More information

Continuous-time Markov Chains

Continuous-time Markov Chains Continuous-time Markov Chains Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ October 23, 2017

More information

Formal Methods in Software Engineering

Formal Methods in Software Engineering Formal Methods in Software Engineering Modeling Prof. Dr. Joel Greenyer October 21, 2014 Organizational Issues Tutorial dates: I will offer two tutorial dates Tuesdays 15:00-16:00 in A310 (before the lecture,

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

Model Checking Infinite-State Markov Chains

Model Checking Infinite-State Markov Chains Model Checking Infinite-State Markov Chains Anne Remke, Boudewijn R. Haverkort, and Lucia Cloth University of Twente Faculty for Electrical Engineering, Mathematics and Computer Science [anne,brh,lucia]@cs.utwente.nl

More information

Probabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford

Probabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford Probabilistic Model Checking Michaelmas Term 2011 Dr. Dave Parker Department of Computer Science University of Oxford Overview CSL model checking basic algorithm untimed properties time-bounded until the

More information

Part I Stochastic variables and Markov chains

Part I Stochastic variables and Markov chains Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)

More information

Introduction Probabilistic Programming ProPPA Inference Results Conclusions. Embedding Machine Learning in Stochastic Process Algebra.

Introduction Probabilistic Programming ProPPA Inference Results Conclusions. Embedding Machine Learning in Stochastic Process Algebra. Embedding Machine Learning in Stochastic Process Algebra Jane Hillston Joint work with Anastasis Georgoulas and Guido Sanguinetti, School of Informatics, University of Edinburgh 16th August 2017 quan col....

More information

Stochastic Abstraction of Programs: Towards Performance-Driven Development

Stochastic Abstraction of Programs: Towards Performance-Driven Development Stochastic Abstraction of Programs: Towards Performance-Driven Development Michael James Andrew Smith E H U N I V E R S I T Y T O H F R G E D I N B U Doctor of Philosophy Laboratory for Foundations of

More information

Stochastic Model Checking

Stochastic Model Checking Stochastic Model Checking Marta Kwiatkowska, Gethin Norman, and David Parker School of Computer Science, University of Birmingham Edgbaston, Birmingham B15 2TT, United Kingdom Abstract. This tutorial presents

More information

Lecture 4 The stochastic ingredient

Lecture 4 The stochastic ingredient Lecture 4 The stochastic ingredient Luca Bortolussi 1 Alberto Policriti 2 1 Dipartimento di Matematica ed Informatica Università degli studi di Trieste Via Valerio 12/a, 34100 Trieste. luca@dmi.units.it

More information

A framework for simulation and symbolic state space analysis of non-markovian models

A framework for simulation and symbolic state space analysis of non-markovian models A framework for simulation and symbolic state space analysis of non-markovian models Laura Carnevali, Lorenzo Ridi, Enrico Vicario SW Technologies Lab (STLab) - Dip. Sistemi e Informatica (DSI) - Univ.

More information

Dependable Computer Systems

Dependable Computer Systems Dependable Computer Systems Part 3: Fault-Tolerance and Modelling Contents Reliability: Basic Mathematical Model Example Failure Rate Functions Probabilistic Structural-Based Modeling: Part 1 Maintenance

More information

2 Conceptual Framework Before introducing the probabilistic concurrent constraint (PCCP) language we have to discuss a basic question: What is a proba

2 Conceptual Framework Before introducing the probabilistic concurrent constraint (PCCP) language we have to discuss a basic question: What is a proba On Probabilistic CCP Alessandra Di Pierro and Herbert Wiklicky fadp,herbertg@cs.city.ac.uk City University London, Northampton Square, London EC1V OHB Abstract This paper investigates a probabilistic version

More information

IEOR 6711, HMWK 5, Professor Sigman

IEOR 6711, HMWK 5, Professor Sigman IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.

More information

Stochastic models in product form: the (E)RCAT methodology

Stochastic models in product form: the (E)RCAT methodology Stochastic models in product form: the (E)RCAT methodology 1 Maria Grazia Vigliotti 2 1 Dipartimento di Informatica Università Ca Foscari di Venezia 2 Department of Computing Imperial College London Second

More information

Some investigations concerning the CTMC and the ODE model derived from Bio-PEPA

Some investigations concerning the CTMC and the ODE model derived from Bio-PEPA FBTC 2008 Some investigations concerning the CTMC and the ODE model derived from Bio-PEPA Federica Ciocchetta 1,a, Andrea Degasperi 2,b, Jane Hillston 3,a and Muffy Calder 4,b a Laboratory for Foundations

More information

Composition of product-form Generalized Stochastic Petri Nets: a modular approach

Composition of product-form Generalized Stochastic Petri Nets: a modular approach Composition of product-form Generalized Stochastic Petri Nets: a modular approach Università Ca Foscari di Venezia Dipartimento di Informatica Italy October 2009 Markov process: steady state analysis Problems

More information

Markovian Agents: A New Quantitative Analytical Framework for Large-Scale Distributed Interacting Systems

Markovian Agents: A New Quantitative Analytical Framework for Large-Scale Distributed Interacting Systems Markovian Agents: A New Quantitative Analytical Framework for Large-Scale Distributed Interacting Systems Andrea Bobbio, Dipartimento di Informatica, Università del Piemonte Orientale, 15121 Alessandria,

More information

Modelling protein trafficking: progress and challenges

Modelling protein trafficking: progress and challenges Modelling protein trafficking: progress and challenges Laboratory for Foundations of Computer Science School of Informatics University of Edinburgh 21 May 2012 Outline Protein Trafficking Modelling Biology

More information

Roberto Bruni, Ugo Montanari DRAFT. Models of Computation. Monograph. May 20, Springer

Roberto Bruni, Ugo Montanari DRAFT. Models of Computation. Monograph. May 20, Springer Roberto Bruni, Ugo Montanari Models of Computation Monograph May 20, 2016 Springer Mathematical reasoning may be regarded rather schematically as the exercise of a combination of two facilities, which

More information

Moment-based Availability Prediction for Bike-Sharing Systems

Moment-based Availability Prediction for Bike-Sharing Systems Moment-based Availability Prediction for Bike-Sharing Systems Jane Hillston Joint work with Cheng Feng and Daniël Reijsbergen LFCS, School of Informatics, University of Edinburgh http://www.quanticol.eu

More information

Topics in Concurrency

Topics in Concurrency Topics in Concurrency Lecture 3 Jonathan Hayman 18 October 2016 Towards a more basic language Aim: removal of variables to reveal symmetry of input and output Transitions for value-passing carry labels

More information

SFM-11:CONNECT Summer School, Bertinoro, June 2011

SFM-11:CONNECT Summer School, Bertinoro, June 2011 SFM-:CONNECT Summer School, Bertinoro, June 20 EU-FP7: CONNECT LSCITS/PSS VERIWARE Part 3 Markov decision processes Overview Lectures and 2: Introduction 2 Discrete-time Markov chains 3 Markov decision

More information

Evaluating the Reliability of NAND Multiplexing with PRISM

Evaluating the Reliability of NAND Multiplexing with PRISM Evaluating the Reliability of NAND Multiplexing with PRISM Gethin Norman, David Parker, Marta Kwiatkowska and Sandeep Shukla Abstract Probabilistic model checking is a formal verification technique for

More information

Formal Verification via MCMAS & PRISM

Formal Verification via MCMAS & PRISM Formal Verification via MCMAS & PRISM Hongyang Qu University of Sheffield 1 December 2015 Outline Motivation for Formal Verification Overview of MCMAS Overview of PRISM Formal verification It is a systematic

More information

PRISM An overview. automatic verification of systems with stochastic behaviour e.g. due to unreliability, uncertainty, randomisation,

PRISM An overview. automatic verification of systems with stochastic behaviour e.g. due to unreliability, uncertainty, randomisation, PRISM An overview PRISM is a probabilistic model checker automatic verification of systems with stochastic behaviour e.g. due to unreliability, uncertainty, randomisation, Construction/analysis of probabilistic

More information

Continuous time Markov chains

Continuous time Markov chains Continuous time Markov chains Alejandro Ribeiro Dept. of Electrical and Systems Engineering University of Pennsylvania aribeiro@seas.upenn.edu http://www.seas.upenn.edu/users/~aribeiro/ October 16, 2017

More information

Publications. Refereed Journal Publications

Publications. Refereed Journal Publications Publications Refereed Journal Publications [A1] [A2] [A3] [A4] [A5] [A6] [A7] [A8] [A9] C. Baier, J.-P. Katoen, H. Hermanns, and V. Wolf. Comparative branching-time semantics for Markov chains. In: Information

More information

Analysing distributed Internet worm attacks using continuous state-space approximation of process algebra models

Analysing distributed Internet worm attacks using continuous state-space approximation of process algebra models Analysing distributed Internet worm attacks using continuous state-space approximation of process algebra models Jeremy T. Bradley Stephen T. Gilmore Jane Hillston April 30, 2007 Abstract Internet worms

More information

14 Random Variables and Simulation

14 Random Variables and Simulation 14 Random Variables and Simulation In this lecture note we consider the relationship between random variables and simulation models. Random variables play two important roles in simulation models. We assume

More information

Spatial and stochastic equivalence relations for PALOMA. Paul Piho

Spatial and stochastic equivalence relations for PALOMA. Paul Piho Spatial and stochastic equivalence relations for PALOMA Paul Piho Master of Science by Research CDT in Pervasive Parallelism School of Informatics University of Edinburgh 2016 Abstract This dissertation

More information

A FRAMEWORK FOR PERFORMABILITY MODELLING USING PROXELS. Sanja Lazarova-Molnar, Graham Horton

A FRAMEWORK FOR PERFORMABILITY MODELLING USING PROXELS. Sanja Lazarova-Molnar, Graham Horton A FRAMEWORK FOR PERFORMABILITY MODELLING USING PROXELS Sanja Lazarova-Molnar, Graham Horton University of Magdeburg Department of Computer Science Universitaetsplatz 2, 39106 Magdeburg, Germany sanja@sim-md.de

More information

Data analysis and stochastic modeling

Data analysis and stochastic modeling Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt

More information

Stochastic Simulation.

Stochastic Simulation. Stochastic Simulation. (and Gillespie s algorithm) Alberto Policriti Dipartimento di Matematica e Informatica Istituto di Genomica Applicata A. Policriti Stochastic Simulation 1/20 Quote of the day D.T.

More information

A Graph Rewriting Semantics for the Polyadic π-calculus

A Graph Rewriting Semantics for the Polyadic π-calculus A Graph Rewriting Semantics for the Polyadic π-calculus BARBARA KÖNIG Fakultät für Informatik, Technische Universität München Abstract We give a hypergraph rewriting semantics for the polyadic π-calculus,

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Automated Performance and Dependability Evaluation using Model Checking

Automated Performance and Dependability Evaluation using Model Checking Automated Performance and Dependability Evaluation using Model Checking Christel Baier a, Boudewijn Haverkort b, Holger Hermanns c,, and Joost-Pieter Katoen c a Institut für Informatik I, University of

More information

Homework 4 due on Thursday, December 15 at 5 PM (hard deadline).

Homework 4 due on Thursday, December 15 at 5 PM (hard deadline). Large-Time Behavior for Continuous-Time Markov Chains Friday, December 02, 2011 10:58 AM Homework 4 due on Thursday, December 15 at 5 PM (hard deadline). How are formulas for large-time behavior of discrete-time

More information

Probabilistic verification and approximation schemes

Probabilistic verification and approximation schemes Probabilistic verification and approximation schemes Richard Lassaigne Equipe de Logique mathématique, CNRS-Université Paris 7 Joint work with Sylvain Peyronnet (LRDE/EPITA & Equipe de Logique) Plan 1

More information

ON THE APPROXIMATION OF STOCHASTIC CONCURRENT CONSTRAINT PROGRAMMING BY MASTER EQUATION

ON THE APPROXIMATION OF STOCHASTIC CONCURRENT CONSTRAINT PROGRAMMING BY MASTER EQUATION ON THE APPROXIMATION OF STOCHASTIC CONCURRENT CONSTRAINT PROGRAMMING BY MASTER EQUATION Luca Bortolussi 1,2 1 Department of Mathematics and Informatics University of Trieste luca@dmi.units.it 2 Center

More information

INFAMY: An Infinite-State Markov Model Checker

INFAMY: An Infinite-State Markov Model Checker INFAMY: An Infinite-State Markov Model Checker Ernst Moritz Hahn, Holger Hermanns, Björn Wachter, and Lijun Zhang Universität des Saarlandes, Saarbrücken, Germany {emh,hermanns,bwachter,zhang}@cs.uni-sb.de

More information

Semantics and Verification

Semantics and Verification Semantics and Verification Lecture 2 informal introduction to CCS syntax of CCS semantics of CCS 1 / 12 Sequential Fragment Parallelism and Renaming CCS Basics (Sequential Fragment) Nil (or 0) process

More information

Qualitative and Quantitative Analysis of a Bio-PEPA Model of the Gp130/JAK/STAT Signalling Pathway

Qualitative and Quantitative Analysis of a Bio-PEPA Model of the Gp130/JAK/STAT Signalling Pathway Qualitative and Quantitative Analysis of a Bio-PEPA Model of the Gp13/JAK/STAT Signalling Pathway Maria Luisa Guerriero Laboratory for Foundations of Computer Science, The University of Edinburgh, UK Abstract.

More information

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued Chapter 3 sections Chapter 3 - continued 3.1 Random Variables and Discrete Distributions 3.2 Continuous Distributions 3.3 The Cumulative Distribution Function 3.4 Bivariate Distributions 3.5 Marginal Distributions

More information

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes

CDA6530: Performance Models of Computers and Networks. Chapter 3: Review of Practical Stochastic Processes CDA6530: Performance Models of Computers and Networks Chapter 3: Review of Practical Stochastic Processes Definition Stochastic process X = {X(t), t2 T} is a collection of random variables (rvs); one rv

More information

Model Checking CSL Until Formulae with Random Time Bounds

Model Checking CSL Until Formulae with Random Time Bounds Model Checking CSL Until Formulae with Random Time Bounds Marta Kwiatkowska 1, Gethin Norman 1 and António Pacheco 2 1 School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT,

More information

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks

Recap. Probability, stochastic processes, Markov chains. ELEC-C7210 Modeling and analysis of communication networks Recap Probability, stochastic processes, Markov chains ELEC-C7210 Modeling and analysis of communication networks 1 Recap: Probability theory important distributions Discrete distributions Geometric distribution

More information

Probabilistic Model Checking of Security Protocols without Perfect Cryptography Assumption

Probabilistic Model Checking of Security Protocols without Perfect Cryptography Assumption Our Model Checking of Security Protocols without Perfect Cryptography Assumption Czestochowa University of Technology Cardinal Stefan Wyszynski University CN2016 Our 1 2 3 Our 4 5 6 7 Importance of Security

More information

Majority Rule with Differential Latency: An Absorbing Markov Chain to Model Consensus

Majority Rule with Differential Latency: An Absorbing Markov Chain to Model Consensus Université Libre de Bruxelles Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle Majority Rule with Differential Latency: An Absorbing Markov Chain to Model Consensus

More information

Multi-State Availability Modeling in Practice

Multi-State Availability Modeling in Practice Multi-State Availability Modeling in Practice Kishor S. Trivedi, Dong Seong Kim, Xiaoyan Yin Depart ment of Electrical and Computer Engineering, Duke University, Durham, NC 27708 USA kst@ee.duke.edu, {dk76,

More information

Analysing Biochemical Oscillation through Probabilistic Model Checking

Analysing Biochemical Oscillation through Probabilistic Model Checking Analysing Biochemical Oscillation through Probabilistic Model Checking Paolo Ballarini 1 and Radu Mardare 1 and Ivan Mura 1 Abstract Automated verification of stochastic models has been proved to be an

More information

Queueing Theory and Simulation. Introduction

Queueing Theory and Simulation. Introduction Queueing Theory and Simulation Based on the slides of Dr. Dharma P. Agrawal, University of Cincinnati and Dr. Hiroyuki Ohsaki Graduate School of Information Science & Technology, Osaka University, Japan

More information

IEOR 4106: Introduction to Operations Research: Stochastic Models Spring 2011, Professor Whitt Class Lecture Notes: Tuesday, March 1.

IEOR 4106: Introduction to Operations Research: Stochastic Models Spring 2011, Professor Whitt Class Lecture Notes: Tuesday, March 1. IEOR 46: Introduction to Operations Research: Stochastic Models Spring, Professor Whitt Class Lecture Notes: Tuesday, March. Continuous-Time Markov Chains, Ross Chapter 6 Problems for Discussion and Solutions.

More information