Probabilistic Model Checking and [Program] Analysis (CO469)
1 Probabilistic Model Checking and Program Analysis (CO469). Program Analysis. Herbert Wiklicky, Spring 2018.
Overview. Topics we will cover in this part include:
1. Language WHILE
2. Data Flow Analysis
3. Program Properties
4. Lattice Theory
5. Monotone Frameworks
6. Abstract Interpretation
2 Program Analysis. Program analysis is an automated technique for finding out properties of programs without having to execute them. Static Analysis vs Dynamic Testing. Compiler Optimisation. Program Verification. Security Analysis. Flemming Nielson, Hanne Riis Nielson and Chris Hankin: Principles of Program Analysis. Springer Verlag, 1999/2005.
Model Checking vs Program Analysis. [Diagram: model checking relates a model S to a property Φ (S ⊨ Φ); program analysis works directly on code such as: m := 1; while n > 0 do m := m*n; n := n-1; endwhile]
3 A First Example. Consider the following fragment in some procedural language:
1: m := 2;
2: while n > 0 do
3:   m := m * n;
4:   n := n - 1
5: end while
6: stop
In labelled form:
[m := 2]^1; while [n > 0]^2 do ([m := m*n]^3; [n := n-1]^4) end while; [stop]^5
We annotate a program such that it becomes clear which program point we are talking about.
A Parity Analysis. Claim: This program fragment always returns an even m, independently of the initial values of m and n. We can statically determine that in any circumstances the value of m at the last statement will be even, for any input n. A program analysis, a so-called parity analysis, can determine this by propagating the even/odd (parity) information forwards from the start of the program.
4 Properties. We will assign to each variable one of three properties:
even — the value is known to be even
odd — the value is known to be odd
unknown — the parity of the value is unknown
For both variables m and n we record the parity at each stage of the computation (at the beginning of each statement).
A First Example. Executing the program with abstract values (parities) for m and n:
1: m := 2;          unknown(m) unknown(n)
2: while n > 0 do   even(m) unknown(n)
3:   m := m * n;    even(m) unknown(n)
4:   n := n - 1     even(m) unknown(n)
5: end while        even(m) unknown(n)
6: stop             even(m) unknown(n)
Important: We can restart the loop!
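The parity propagation sketched above can be phrased as a small fixed-point computation. The following is a minimal sketch (the encoding of the abstract values and all function names are my own, not part of the lecture): it iterates the loop body abstractly, joining the parity at the loop head with the parity after one more iteration, until nothing changes.

```python
# Abstract values of the parity analysis: 'even', 'odd', and 'top'
# ('top' plays the role of "unknown" in the slides).
TOP = 'top'

def join(p, q):
    """Least upper bound in the three-element parity lattice."""
    return p if p == q else TOP

def mul(p, q):
    """Abstract multiplication: even * anything = even."""
    if p == 'even' or q == 'even':
        return 'even'
    if p == 'odd' and q == 'odd':
        return 'odd'
    return TOP

def sub1(p):
    """Abstract n - 1: flips a known parity, leaves 'top' alone."""
    return {'even': 'odd', 'odd': 'even'}.get(p, TOP)

def analyse(m0):
    """Parities at the loop head of: m := m0; while n > 0 do m := m*n; n := n-1."""
    m, n = m0, TOP                    # n's initial parity is unknown
    while True:
        m2 = join(m, mul(m, n))       # merge "before loop" with "after one body"
        n2 = join(n, sub1(n))
        if (m2, n2) == (m, n):
            return m, n               # fixed point reached
        m, n = m2, n2
```

With `m := 2` (parity even) the analysis keeps `m` even at the loop head, matching the slide; with `m := 1` (parity odd) the join with the loop body immediately loses all information, matching the second example.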
5 A First Example. The first program computes 2 times the factorial for any positive value of n. Replacing 2 by 1 in the first statement gives:
1: m := 1;          unknown(m) unknown(n)
2: while n > 0 do   unknown(m) unknown(n)
3:   m := m * n;    unknown(m) unknown(n)
4:   n := n - 1     unknown(m) unknown(n)
5: end while        unknown(m) unknown(n)
6: stop             unknown(m) unknown(n)
i.e. the factorial, but then the program analysis is unable to tell us anything about the parity of m at the end of the execution.
The WHILE Language
6 Abstract Syntax of WHILE. We use the following syntactic categories:
a ∈ AExp  arithmetic expressions
b ∈ BExp  boolean expressions
S ∈ Stmt  statements
The language WHILE has the following abstract syntax:
a ::= x | n | a1 opa a2
b ::= true | false | not b | b1 opb b2 | a1 opr a2
S ::= x := a | skip | S1; S2 | if b then S1 else S2 | while b do S
Syntactical Categories. We assume some countable/finite set of variables is given:
x, y, z, ... ∈ Var  variables
n, m, ...    ∈ Num  numerals
l, ...       ∈ Lab  labels
Numerals (integer constants) will not be further defined, and neither will the operators:
opa ∈ Opa  arithmetic operators, e.g. +, -, *, ...
opb ∈ Opb  boolean operators, e.g. ∧, ∨, ...
opr ∈ Opr  relational operators, e.g. =, <, ≤, ...
7 Labelled Syntax of WHILE. The labelled syntax of the language WHILE is given by the following abstract syntax:
a ::= x | n | a1 opa a2
b ::= true | false | not b | b1 opb b2 | a1 opr a2
S ::= [x := a]^l | [skip]^l | S1; S2 | if [b]^l then S1 else S2 | while [b]^l do S
An Example in WHILE. An example of a program written in this WHILE language is the following one, which computes the factorial of the number stored in x and leaves the result in z:
[y := x]^1; [z := 1]^2; while [y > 1]^3 do ([z := z * y]^4; [y := y - 1]^5); [y := 0]^6
Note the use of meta-symbols, brackets, to group statements.
8 Concrete Syntax of WHILE. To avoid using brackets (as meta-symbols) we could also use a concrete syntax for the language WHILE, as follows:
a ::= x | n | a1 opa a2
b ::= true | false | not b | b1 opb b2 | a1 opr a2
S ::= x := a | skip | S1; S2 | if b then S1 else S2 fi | while b do S od
Alternatives could be: if b then ... else ... endif and while b do ... endwhile, or using begin ... end, etc.
A Formal Semantics. Memory is modelled by an abstract state, i.e. functions of type State = Var → Z. For boolean and arithmetic expressions we assume that we know what they evaluate to in a state s ∈ State. Then the semantics of AExp is a total function
[[.]]A : AExp → (State → Z)
and the semantics of boolean expressions is given by
[[.]]B : BExp → (State → {tt, ff})
9 Evaluating Expressions. Let us look at a program with two variables, Var = {x, y}. Two possible states in this case could be, for example:
s0 = [x ↦ 0, y ↦ 1] and s1 = [x ↦ 1, y ↦ 1]
We can evaluate an expression like x + y ∈ AExp:
[[x + y]]A s0 = 0 + 1 = 1
[[x + y]]A s1 = 1 + 1 = 2
or a boolean expression like x + y ≤ 1 ∈ BExp:
[[x + y ≤ 1]]B s0 = (1 ≤ 1) = tt
[[x + y ≤ 1]]B s1 = (2 ≤ 1) = ff
Execution and Transitions. The configurations describe the current state of the execution:
⟨S, s⟩ ... S is to be executed in state s,
s ... a terminal state (i.e. the computation has terminated in s).
The transition relation specifies the (possible) computational steps during the execution, starting from a certain configuration:
⟨S, s⟩ → ⟨S′, s′⟩
and at the end of the computation:
⟨S, s⟩ → s′
10 SOS Rules (Structured Operational Semantics)
(ass)   ⟨[x := a]^l, s⟩ → s[x ↦ [[a]]A s]
(skip)  ⟨[skip]^l, s⟩ → s
(sq1)   if ⟨S1, s⟩ → ⟨S1′, s′⟩ then ⟨S1; S2, s⟩ → ⟨S1′; S2, s′⟩
(sq2)   if ⟨S1, s⟩ → s′ then ⟨S1; S2, s⟩ → ⟨S2, s′⟩
(if_tt) ⟨if [b]^l then S1 else S2, s⟩ → ⟨S1, s⟩ if [[b]]B s = tt
(if_ff) ⟨if [b]^l then S1 else S2, s⟩ → ⟨S2, s⟩ if [[b]]B s = ff
(wh_tt) ⟨while [b]^l do S, s⟩ → ⟨S; while [b]^l do S, s⟩ if [[b]]B s = tt
(wh_ff) ⟨while [b]^l do S, s⟩ → s if [[b]]B s = ff
An SOS Example. Consider a (perhaps rather vacuous) program like:
S ≡ [z := x + y]^l1; while [true]^l2 do [skip]^l3
s0 = [x ↦ 0, y ↦ 1, z ↦ 0] and s1 = [x ↦ 0, y ↦ 1, z ↦ 1]
Then ⟨S, s0⟩ executes as follows:
⟨S, s0⟩ → ⟨while [true]^l2 do [skip]^l3, s1⟩
        → ⟨[skip]^l3; while [true]^l2 do [skip]^l3, s1⟩
        → ⟨while [true]^l2 do [skip]^l3, s1⟩
        → ⟨[skip]^l3; while [true]^l2 do [skip]^l3, s1⟩ → ...
With a unique labelling it is enough to keep track of labels, e.g.
⟨l1, s0⟩ → ⟨l2, s1⟩ → ⟨l3, s1⟩ → ⟨l2, s1⟩ → ...
11 Data Flow Analysis. The starting point for data flow analysis is a representation of the control flow graph of the program: the nodes of such a graph may represent individual statements, as in a flowchart, or sequences of statements; arcs specify how control may be passed during program execution. The data flow analysis is usually specified as a set of equations which associate analysis information with program points corresponding to the nodes in the control flow graph. This information may be propagated forwards through the program (e.g. parity analysis) or backwards. When the control flow graph is not explicitly given, we need a preliminary control flow analysis.
12 Initial Label. When presenting examples of data flow analyses we will use a number of operations on programs and labels. The first of these is init : Stmt → Lab, which returns the initial label of a statement:
init([x := a]^l) = l
init([skip]^l) = l
init(S1; S2) = init(S1)
init(if [b]^l then S1 else S2) = l
init(while [b]^l do S) = l
Final Labels. We will also need a function which returns the set of final labels in a statement; whereas a sequence of statements has a single entry, it may have multiple exits (e.g. in the conditional):
final : Stmt → P(Lab)
final([x := a]^l) = {l}
final([skip]^l) = {l}
final(S1; S2) = final(S2)
final(if [b]^l then S1 else S2) = final(S1) ∪ final(S2)
final(while [b]^l do S) = {l}
The while-loop terminates immediately after the test fails.
13 Blocks. To access the statements or tests associated with a label in a program we use the function blocks : Stmt → P(Block):
blocks([x := a]^l) = {[x := a]^l}
blocks([skip]^l) = {[skip]^l}
blocks(S1; S2) = blocks(S1) ∪ blocks(S2)
blocks(if [b]^l then S1 else S2) = {[b]^l} ∪ blocks(S1) ∪ blocks(S2)
blocks(while [b]^l do S) = {[b]^l} ∪ blocks(S)
Blocks and Labels. The building blocks of our analysis are given by Block, i.e. the set of statements, or elementary blocks, of the form [x := a]^l or [skip]^l, as well as tests of the form [b]^l. Then the set of labels occurring in a program is given by labels : Stmt → P(Lab), where
labels(S) = {l | [B]^l ∈ blocks(S)}
Clearly init(S) ∈ labels(S) and final(S) ⊆ labels(S).
14 Flow. flow : Stmt → P(Lab × Lab) maps statements to sets of flows:
flow([x := a]^l) = ∅
flow([skip]^l) = ∅
flow(S1; S2) = flow(S1) ∪ flow(S2) ∪ {(l, init(S2)) | l ∈ final(S1)}
flow(if [b]^l then S1 else S2) = flow(S1) ∪ flow(S2) ∪ {(l, init(S1)), (l, init(S2))}
flow(while [b]^l do S) = flow(S) ∪ {(l, init(S))} ∪ {(l′, l) | l′ ∈ final(S)}
An Example Flow. Consider the following program, power, computing the x-th power of the number stored in y:
[z := 1]^1; while [x > 0]^2 do ([z := z * y]^3; [x := x - 1]^4)
We have labels(power) = {1, 2, 3, 4}, init(power) = 1, and final(power) = {2}. The function flow produces the set:
flow(power) = {(1, 2), (2, 3), (3, 4), (4, 2)}
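The defining equations for init, final and flow translate almost verbatim into code. Below is a sketch over a tiny tuple-encoded WHILE AST (the encoding itself is my own); running it on the power program reproduces the set of flows given above.

```python
# Statements are tuples: ('assign', l), ('skip', l), ('seq', S1, S2),
# ('if', l, S1, S2), ('while', l, S_body).  Expressions are irrelevant
# for flow, so assignments and tests carry only their label.

def init(S):
    kind = S[0]
    if kind in ('assign', 'skip', 'if', 'while'):
        return S[1]                     # the block's own label
    return init(S[1])                   # ('seq', S1, S2): entry of S1

def final(S):
    kind = S[0]
    if kind in ('assign', 'skip', 'while'):
        return {S[1]}                   # a while-loop exits at its test
    if kind == 'seq':
        return final(S[2])
    return final(S[2]) | final(S[3])    # if: exits of both branches

def flow(S):
    kind = S[0]
    if kind in ('assign', 'skip'):
        return set()
    if kind == 'seq':
        return flow(S[1]) | flow(S[2]) | {(l, init(S[2])) for l in final(S[1])}
    if kind == 'if':
        return flow(S[2]) | flow(S[3]) | {(S[1], init(S[2])), (S[1], init(S[3]))}
    # while: body flow, edge into the body, and back-edges to the test
    return flow(S[2]) | {(S[1], init(S[2]))} | {(l, S[1]) for l in final(S[2])}

# power: [z:=1]^1; while [x>0]^2 do ([z:=z*y]^3; [x:=x-1]^4)
power = ('seq', ('assign', 1),
         ('while', 2, ('seq', ('assign', 3), ('assign', 4))))
```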
15 Flow Graph. [Figure: the flow graph of power: [z := 1]^1 flows into the test [x > 0]^2; the "yes" branch leads to [z := z*y]^3 and then [x := x - 1]^4, which flows back to the test; the "no" branch exits.]
Forward Analysis. The function flow is used in the formulation of forward analyses. Clearly init(S) is the (unique) entry node for the flow graph with nodes labels(S) and edges flow(S). Also
labels(S) = {init(S)} ∪ {l | (l, l′) ∈ flow(S)} ∪ {l′ | (l, l′) ∈ flow(S)}
and for composite statements (meaning those not simply of the form [B]^l) the equation remains true when removing the {init(S)} component.
16 Reverse Flow. In order to formulate backward analyses we require a function that computes reverse flows:
flowR : Stmt → P(Lab × Lab)
flowR(S) = {(l′, l) | (l, l′) ∈ flow(S)}
For the power program, flowR produces {(2, 1), (2, 4), (3, 2), (4, 3)}.
Backward Analysis. In case final(S) contains just one element, that will be the unique entry node for the flow graph with nodes labels(S) and edges flowR(S). Also
labels(S) = final(S) ∪ {l | (l, l′) ∈ flowR(S)} ∪ {l′ | (l, l′) ∈ flowR(S)}
17 Notation. We will use the notation S⋆ to represent the program we are analysing (the "top-level" statement) and furthermore:
Lab⋆ to represent the labels (labels(S⋆)) appearing in S⋆,
Var⋆ to represent the variables (FV(S⋆)) appearing in S⋆,
Block⋆ to represent the elementary blocks (blocks(S⋆)) occurring in S⋆, and
AExp⋆ to represent the set of non-trivial arithmetic subexpressions in S⋆,
as well as AExp(a) and AExp(b) to refer to the set of non-trivial arithmetic subexpressions of a given arithmetic, respectively boolean, expression. An expression is trivial if it is a single variable or constant.
Label Consistency. A statement S is label consistent if and only if:
[B1]^l, [B2]^l ∈ blocks(S) implies B1 = B2
Clearly, if all blocks in S are uniquely labelled (meaning that each label occurs only once), then S is label consistent. When S is label consistent, the clause [B]^l ∈ blocks(S) is unambiguous in defining a partial function from labels to elementary blocks; we shall then say that l labels the block B.
18 Classical Analysis. Data Flow Analysis. The general approach for determining program properties for procedural languages via a data flow analysis:
Extract Data Flow Information
Formulate Data Flow Equations
Update Local Information
Collect Global Information
Construct Solution(s) of Equations
19 Available Expressions. The Available Expressions Analysis will determine: For each program point, which expressions must (are guaranteed to) have already been computed, and not later modified, on all paths to that program point. This information can be used to avoid the re-computation of an expression. For clarity, we will concentrate on arithmetic expressions.
Example. Consider the following simple program:
[x := a + b]^1; [y := a * b]^2; while [y > a + b]^3 do ([a := a + 1]^4; [x := a + b]^5)
It should be clear that the expression a + b is available every time the execution reaches the test (label 3) in the loop; as a consequence, the expression need not be recomputed.
20 AE Analysis.
killAE : Block⋆ → P(AExp⋆)
genAE : Block⋆ → P(AExp⋆)
AEentry : Lab⋆ → P(AExp⋆)
AEexit : Lab⋆ → P(AExp⋆)
AE Auxiliary Functions.
killAE([x := a]^l) = {a′ ∈ AExp⋆ | x ∈ FV(a′)}
killAE([skip]^l) = ∅
killAE([b]^l) = ∅
genAE([x := a]^l) = {a′ ∈ AExp(a) | x ∉ FV(a′)}
genAE([skip]^l) = ∅
genAE([b]^l) = AExp(b)
21 AE Equation Schemes.
AEentry(l) = ∅ if l = init(S⋆), and ⋂{AEexit(l′) | (l′, l) ∈ flow(S⋆)} otherwise
AEexit(l) = (AEentry(l) \ killAE([B]^l)) ∪ genAE([B]^l), where [B]^l ∈ blocks(S⋆)
Largest Solution. The analysis is a forward analysis, and we are interested in the largest sets satisfying the equations for AEentry and AEexit. For
[z := x + y]^l1; while [true]^l2 do [skip]^l3
we get:
AEentry(l1) = ∅
AEentry(l2) = AEexit(l1) ∩ AEexit(l3)
AEentry(l3) = AEexit(l2)
AEexit(l1) = AEentry(l1) ∪ {x + y}
AEexit(l2) = AEentry(l2)
AEexit(l3) = AEentry(l3)
22 Obtaining Solutions. [Figure: the flow graph of [z := x + y]^l1; while [true]^l2 do [skip]^l3, with the "yes" branch of the test looping through the skip block.]
After some simplification, we find that:
AEentry(l2) = {x + y} ∩ AEentry(l2)
This equation has more than one solution, and we take the largest one.
AE Example.
[x := a + b]^1; [y := a * b]^2; while [y > a + b]^3 do ([a := a + 1]^4; [x := a + b]^5)
l | killAE(l)              | genAE(l)
1 | ∅                      | {a + b}
2 | ∅                      | {a * b}
3 | ∅                      | {a + b}
4 | {a + b, a * b, a + 1}  | ∅
5 | ∅                      | {a + b}
23 AE Example: Equations.
[x := a + b]^1; [y := a * b]^2; while [y > a + b]^3 do ([a := a + 1]^4; [x := a + b]^5)
AEexit(1) = AEentry(1) ∪ {a + b}
AEexit(2) = AEentry(2) ∪ {a * b}
AEexit(3) = AEentry(3) ∪ {a + b}
AEexit(4) = AEentry(4) \ {a + b, a * b, a + 1}
AEexit(5) = AEentry(5) ∪ {a + b}
AE Example: Equations.
AEentry(1) = ∅
AEentry(2) = AEexit(1)
AEentry(3) = AEexit(2) ∩ AEexit(5)
AEentry(4) = AEexit(3)
AEentry(5) = AEexit(4)
24 AE Example: Equations.
AEentry(1) = ∅
AEentry(2) = AEexit(1)
AEentry(3) = AEexit(2) ∩ AEexit(5)
AEentry(4) = AEexit(3)
AEentry(5) = AEexit(4)
AEexit(1) = AEentry(1) ∪ {a + b}
AEexit(2) = AEentry(2) ∪ {a * b}
AEexit(3) = AEentry(3) ∪ {a + b}
AEexit(4) = AEentry(4) \ {a + b, a * b, a + 1}
AEexit(5) = AEentry(5) ∪ {a + b}
AE Example: Solutions.
l | AEentry(l) | AEexit(l)
1 | ∅          | {a + b}
2 | {a + b}    | {a + b, a * b}
3 | {a + b}    | {a + b}
4 | {a + b}    | ∅
5 | ∅          | {a + b}
Note that, even though a is redefined in the loop, the expression a + b is re-evaluated in the loop and so it is always available on entry to the loop. On the other hand, a * b is available on the first entry to the loop but is killed before the next iteration.
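The solution table can be obtained by iterating the equations downwards from the full set of non-trivial expressions (we want the largest solution). A sketch, with the kill/gen tables of the example hard-coded (the dict encoding is my own):

```python
# Available Expressions for:
#   [x:=a+b]^1; [y:=a*b]^2; while [y>a+b]^3 do ([a:=a+1]^4; [x:=a+b]^5)
AEXP = {'a+b', 'a*b', 'a+1'}
kill = {1: set(), 2: set(), 3: set(), 4: {'a+b', 'a*b', 'a+1'}, 5: set()}
gen  = {1: {'a+b'}, 2: {'a*b'}, 3: {'a+b'}, 4: set(), 5: {'a+b'}}
FLOW = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 3)}
LABELS, INIT = {1, 2, 3, 4, 5}, 1

# Start from the largest sets everywhere (except the entry of the program).
entry = {l: set() if l == INIT else set(AEXP) for l in LABELS}
exit_ = {l: set(AEXP) for l in LABELS}

changed = True
while changed:                       # naive round-robin iteration to the
    changed = False                  # greatest fixed point
    for l in sorted(LABELS):
        e = set() if l == INIT else set.intersection(
            *[exit_[lp] for (lp, lq) in FLOW if lq == l])
        x = (e - kill[l]) | gen[l]
        if (e, x) != (entry[l], exit_[l]):
            entry[l], exit_[l], changed = e, x, True
```

After convergence, `entry` and `exit_` reproduce the solution table above.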
25 Property Lattices. Partially Ordered Set. A partial ordering is a relation ⊑ on a set L, i.e. ⊑ : L × L → {tt, ff} or ⊑ ⊆ L × L, that is:
reflexive: ∀l : l ⊑ l,
transitive: ∀l1, l2, l3 : l1 ⊑ l2 ∧ l2 ⊑ l3 ⟹ l1 ⊑ l3, and
anti-symmetric: ∀l1, l2 : l1 ⊑ l2 ∧ l2 ⊑ l1 ⟹ l1 = l2.
A partially ordered set (L, ⊑) is a set L equipped with a partial ordering ⊑ (sometimes written ⊑L). We shall write l2 ⊒ l1 for l1 ⊑ l2, and l1 ⊏ l2 for l1 ⊑ l2 ∧ l1 ≠ l2.
26 Examples of POSs.
Example: Integers. The integers Z ordered in the usual way, i.e. for i1, i2 ∈ Z: i1 ⊑ i2 iff i1 ≤ i2.
Example: Power-Set. Take a (finite) set X and consider the set of all subsets of X, i.e. its power set P(X). A partial ordering on P(X) is given by inclusion, i.e. for two subsets S1, S2 ∈ P(X): S1 ⊑ S2 iff S1 ⊆ S2.
Upper/Lower Bounds. Given a partially ordered set (L, ⊑), a subset Y of L has l ∈ L as an upper bound if ∀l′ ∈ Y : l′ ⊑ l, and as a lower bound if ∀l′ ∈ Y : l ⊑ l′.
27 Least Upper/Greatest Lower Bounds. Given a partially ordered set (L, ⊑) and Y ⊆ L:
A least upper bound l of Y is an upper bound of Y that satisfies l ⊑ l0 whenever l0 is another upper bound of Y.
Similarly, a greatest lower bound l of Y is a lower bound of Y satisfying l0 ⊑ l whenever l0 is another lower bound of Y.
Note that subsets Y of a partially ordered set L need not have least upper bounds nor greatest lower bounds, but when they exist they are unique (since ⊑ is anti-symmetric) and they are denoted ⊔Y and ⊓Y, respectively. Sometimes ⊔ is called the join operator and ⊓ the meet operator, and we shall write l1 ⊔ l2 for ⊔{l1, l2} and similarly l1 ⊓ l2 for ⊓{l1, l2}.
Complete Lattice. A complete lattice L = (L, ⊑) = (L, ⊑, ⊔, ⊓, ⊥, ⊤) is a partially ordered set (L, ⊑) such that all subsets have least upper bounds as well as greatest lower bounds. Furthermore, ⊥ = ⊔∅ = ⊓L is the least element and ⊤ = ⊓∅ = ⊔L is the greatest element.
28 Power-Set Lattice. Take a (finite) set X and look again at its power set P(X). A partial ordering on P(X) is given, as above, by inclusion. The meet and join operators are given by (set) intersection S1 ⊓ S2 = S1 ∩ S2 and (set) union S1 ⊔ S2 = S1 ∪ S2. The least and greatest elements in P(X) are given by ⊥ = ∅ and ⊤ = X.
Power-Set: Hasse Diagrams. [Figure: two Hasse diagrams of the subsets of {1, 2, 3}: one ordered by ⊆ with ∅ at the bottom and {1, 2, 3} at the top, and its dual with the order reversed.]
29 Properties of Functions I. A function f : L1 → L2 between two partially ordered sets L1 = (L1, ⊑1) and L2 = (L2, ⊑2) is monotone (or isotone, or order-preserving) if
∀l, l′ ∈ L1 : l ⊑1 l′ ⟹ f(l) ⊑2 f(l′)
A function f : L1 → L2 is an additive function (or a join morphism, sometimes called a distributive function) if
∀l1, l2 ∈ L1 : f(l1 ⊔1 l2) = f(l1) ⊔2 f(l2)
and it is called a multiplicative function (or a meet morphism) if
∀l1, l2 ∈ L1 : f(l1 ⊓1 l2) = f(l1) ⊓2 f(l2)
Properties of Functions II. The function f : L1 → L2 is a completely additive function (or a complete join morphism) if for all Y ⊆ L1:
f(⊔1 Y) = ⊔2{f(l′) | l′ ∈ Y} whenever ⊔1 Y exists
and it is completely multiplicative (or a complete meet morphism) if for all Y ⊆ L1:
f(⊓1 Y) = ⊓2{f(l′) | l′ ∈ Y} whenever ⊓1 Y exists
30 Cartesian Product L1 × L2. Let L1 = (L1, ⊑1) and L2 = (L2, ⊑2) be partially ordered sets. Define L = (L, ⊑) by
L = L1 × L2 = {(l1, l2) | l1 ∈ L1 ∧ l2 ∈ L2}
(l11, l21) ⊑ (l12, l22) iff l11 ⊑1 l12 ∧ l21 ⊑2 l22
If additionally each Li = (Li, ⊑i, ⊔i, ⊓i, ⊥i, ⊤i) is a complete lattice then so is L = (L, ⊑, ⊔, ⊓, ⊥, ⊤), and furthermore
⊔Y = (⊔1{l1 | ∃l2 : (l1, l2) ∈ Y}, ⊔2{l2 | ∃l1 : (l1, l2) ∈ Y})
and ⊥ = (⊥1, ⊥2), and similarly for ⊓Y and ⊤.
Total Function Space S → L′. Let L′ = (L′, ⊑′) be a partially ordered set and let S be a set. Define L = (L, ⊑) by
L = {f : S → L′ | f is a total function}
f ⊑ f′ iff ∀s ∈ S : f(s) ⊑′ f′(s)
If additionally L′ = (L′, ⊑′, ⊔′, ⊓′, ⊥′, ⊤′) is a complete lattice then so is L = (L, ⊑, ⊔, ⊓, ⊥, ⊤), and furthermore
⊔Y = λs. ⊔′{f(s) | f ∈ Y} and ⊥ = λs. ⊥′
and similarly for ⊓Y and ⊤.
31 Chains. A subset Y ⊆ L of a partially ordered set L = (L, ⊑) is a chain if
∀l1, l2 ∈ Y : (l1 ⊑ l2) ∨ (l2 ⊑ l1)
Thus a chain is a (possibly empty) subset of L that is totally ordered. We shall say that it is a finite chain if it is a finite subset of L.
Ascending and Descending Chains. A sequence (ln)n = (ln)n∈N of elements in L is an ascending chain if
n ≤ m ⟹ ln ⊑ lm
Writing (ln)n also for {ln | n ∈ N}, it is clear that an ascending chain is also a chain. Similarly, a sequence (ln)n is a descending chain if
n ≤ m ⟹ ln ⊒ lm
32 Stabilising Chains. We shall say that a sequence (ln)n eventually stabilises if and only if
∃n0 ∈ N : ∀n ∈ N : n ≥ n0 ⟹ ln = ln0
For the sequence (ln)n we write ⊔n ln for ⊔{ln | n ∈ N} and similarly we write ⊓n ln for ⊓{ln | n ∈ N}.
ACC & DCC. We shall say that a partially ordered set L = (L, ⊑) has finite height if and only if all chains are finite. It has finite height at most h if all chains contain at most h + 1 elements; it has finite height h if additionally there is a chain with h + 1 elements. A partially ordered set L satisfies the Ascending Chain Condition (ACC) if and only if all ascending chains eventually stabilise. A partially ordered set L satisfies the Descending Chain Condition (DCC) if and only if all descending chains eventually stabilise.
33 Chain Examples. [Figure: examples of chains in various partial orders.]
Reductive and Extensive Functions. Consider a monotone function f : L → L on a complete lattice L. A fixed point of f is an element l ∈ L such that f(l) = l; we write
Fix(f) = {l | f(l) = l}
for the set of fixed points. The function f is reductive at l if and only if f(l) ⊑ l, and we write
Red(f) = {l | f(l) ⊑ l}
for the set of elements upon which f is reductive; we shall say that f itself is reductive if Red(f) = L. Similarly, the function f is extensive at l if and only if f(l) ⊒ l, and we write
Ext(f) = {l | f(l) ⊒ l}
34 Fixed Points. Since L is a complete lattice, it is always the case that the set Fix(f) will have a greatest lower bound in L, and we denote it by lfp(f):
lfp(f) = ⊓Fix(f) = ⊓Red(f) ∈ Fix(f) ⊆ Red(f)
Similarly, the set Fix(f) will have a least upper bound in L, and we denote it by gfp(f):
gfp(f) = ⊔Fix(f) = ⊔Ext(f) ∈ Fix(f) ⊆ Ext(f)
Existence of Fixed Points. If L satisfies the Ascending Chain Condition then there exists n such that f^n(⊥) = f^(n+1)(⊥), and hence lfp(f) = f^n(⊥). If L satisfies the Descending Chain Condition then there exists n such that f^n(⊤) = f^(n+1)(⊤), and hence gfp(f) = f^n(⊤). Indeed, any monotone function f over a partially ordered set satisfying the Ascending Chain Condition is continuous.
35 Fixed Points etc. [Figure: the lattice L with ⊥ at the bottom and ⊤ at the top; Ext(f) is the region above ⊥ with gfp(f) = ⊔Ext(f) at its top, Red(f) the region below ⊤ with lfp(f) = ⊓Red(f) at its bottom, and Fix(f) lies between lfp(f) and gfp(f); the iterates f^n(⊥) ascend towards ⊔n f^n(⊥) ⊑ lfp(f), and the iterates f^n(⊤) descend towards ⊓n f^n(⊤) ⊒ gfp(f).]
Fixed Points and Solutions. Given equations over some domain, e.g. the integers:
6x³ - 3x² - x = 7
We look at it as a recursive equation:
6x³ - 3x² - 7 = x
or simply: f(x) = x. If x is a fixed point of f then it is a solution to the equation.
36 Lattice Equations. Given a system of equations with unknowns x1, ..., xn over a complete lattice L (fulfilling ACC/DCC):
x1 = f1(x1, ..., xn)
...
xn = fn(x1, ..., xn)
Consider the equations as defining a function F : L^n → L^n:
F(x1, ..., xn) = (f1(x1, ..., xn), ..., fn(x1, ..., xn))
In our case we start with a recursive set of equations: Analysis(i) = fi(Analysis(1), ..., Analysis(n)).
Chaotic Iteration. Iteration: construct iteratively the smallest or largest solution/fixed point, i.e. lfp(F) or gfp(F), by starting with xi = xi⁰ = ⊥ or xi = xi⁰ = ⊤ and constructing a sequence of approximations like:
xi⁰ = ⊥
xi¹ = fi(x1⁰, ..., xn⁰)
...
xiᵏ = fi(x1ᵏ⁻¹, ..., xnᵏ⁻¹)
until we converge, i.e. the sequence stabilises.
37 An Example. Look at the complete lattice P(X) = P({a, b, c, d}). Construct solutions to the following set equations:
S1 = {a} ∪ S4
S2 = S1 ∪ S3
S3 = S4 ∩ {b}
S4 = S2 ∪ {b, c}
Two Solutions. Starting from ⊥ = ∅ gives:
S1 = {a}, {a, b, c}, {a, b, c}, {a, b, c}, ...
S2 = {a}, {a, b, c}, {a, b, c}, ...
S3 = {b}, {b}, {b}, ...
S4 = {b, c}, {b, c}, {a, b, c}, {a, b, c}, ...
Starting from ⊤ = {a, b, c, d} gives:
S1 = {a, b, c, d}, {a, b, c, d}, {a, b, c, d}, ...
S2 = {a, b, c, d}, {a, b, c, d}, {a, b, c, d}, ...
S3 = {a, b, c, d}, {b}, {b}, ...
S4 = {a, b, c, d}, {a, b, c, d}, {a, b, c, d}, ...
38 Knaster-Tarski Fixed Point Theorem. The mathematics literature is full of fixed point theorems, e.g.
Theorem (Knaster-Tarski). Let L be a complete lattice and assume that f : L → L is an order-preserving map. Then ⊔{x ∈ L | x ⊑ f(x)} ∈ Fix(f).
B.A. Davey and H.A. Priestley: Introduction to Lattices and Order, Cambridge, 1990.
G. Grätzer: Lattice Theory: Foundation, Birkhäuser, 2011.
Monotone Frameworks
39 Classical Analyses. Each of the four classical analyses considers equations for a label consistent program S⋆, and they take the form:
Analysis∘(l) = ι if l ∈ E, and ⨆{Analysis•(l′) | (l′, l) ∈ F} otherwise
Analysis•(l) = fl(Analysis∘(l))
where ⨆ is ⋂ or ⋃ (and ⊔ is ∩ or ∪), F is either flow(S⋆) or flowR(S⋆), E is {init(S⋆)} or final(S⋆), ι specifies the initial or final analysis information, and fl is the transfer function associated with [B]^l ∈ blocks(S⋆).
Forward vs Backward Analysis. The forward analyses have F = flow(S⋆), and then Analysis∘ concerns entry conditions and Analysis• concerns exit conditions; also, the equation system presupposes that S⋆ has isolated entries. The backward analyses have F = flowR(S⋆), and then Analysis∘ concerns exit conditions and Analysis• concerns entry conditions; also, the equation system presupposes that S⋆ has isolated exits.
40 Must vs May Analysis. When ⨆ is ⋂ we require the greatest sets that solve the equations, and we are able to detect properties satisfied by all paths of execution reaching (or leaving) the entry (or exit) of a label; these analyses are often called must analyses. When ⨆ is ⋃ we require the least sets that solve the equations, and we are able to detect properties satisfied by at least one execution path to (or from) the entry (or exit) of a label; these analyses are often called may analyses.
Transition Functions. The view that we take here is that a program is a transition system: the nodes represent blocks, and each block has a transfer function associated with it that specifies how the block acts on the input state. Note that for forward analyses the input state is the entry state, and for backward analyses it is the exit state.
41 Monotone & Distributive Frameworks. A Monotone Framework consists of:
a complete lattice, L, that satisfies the Ascending Chain Condition, where we write ⊔ for the least upper bound operator; and
a set F of monotone functions from L to L that contains the identity function and that is closed under function composition.
A Distributive Framework is a Monotone Framework where additionally all functions f in F are required to be distributive:
f(l1 ⊔ l2) = f(l1) ⊔ f(l2)
Instance of a Framework. An instance, Analysis, of a Monotone or Distributive Framework consists of:
the complete lattice, L, of the framework;
the space of transfer functions, F, of the framework;
a finite flow, F, that typically is flow(S⋆) or flowR(S⋆);
a finite set of so-called extremal labels, E, that typically is {init(S⋆)} or final(S⋆);
an extremal value, ι ∈ L, for the extremal labels; and
a mapping, f, from the labels Lab of F to transfer functions in F.
42 Equations. An instance gives rise to a set of equations, Analysis=, of the form considered earlier:
Analysis∘(l) = ⨆{Analysis•(l′) | (l′, l) ∈ F} ⊔ ι_E(l), where ι_E(l) = ι if l ∈ E and ι_E(l) = ⊥ if l ∉ E
Analysis•(l) = fl(Analysis∘(l))
A Non-Distributive Example. The Constant Propagation Analysis (CP) will determine: For each program point, whether or not a variable has a constant value whenever execution reaches that point. Such information can be used as the basis for an optimisation known as Constant Folding: all uses of the variable may be replaced by the constant value.
43 CP State: Z⊤. The (abstract) states for the CP Analysis are given by:
State_CP = ((Var⋆ → Z⊤)⊥, ⊑, ⊔, ⊓, ⊥, λx.⊤)
where Var⋆ is the set of variables appearing in the program. Z⊤ = Z ∪ {⊤} is partially ordered as follows:
∀z ∈ Z⊤ : z ⊑ ⊤
∀z1, z2 ∈ Z : (z1 ⊑ z2) ⟺ (z1 = z2)
CP State: Lattice. To capture the case where no information is available we extend Var⋆ → Z⊤ with a least element ⊥, written (Var⋆ → Z⊤)⊥. The partial ordering on State_CP = (Var⋆ → Z⊤)⊥ is:
∀σ̂ ∈ (Var⋆ → Z⊤)⊥ : ⊥ ⊑ σ̂
∀σ̂1, σ̂2 ∈ Var⋆ → Z⊤ : σ̂1 ⊑ σ̂2 iff ∀x : σ̂1(x) ⊑ σ̂2(x)
and the binary least upper bound operation is then:
∀σ̂ ∈ (Var⋆ → Z⊤)⊥ : ⊥ ⊔ σ̂ = σ̂ = σ̂ ⊔ ⊥
∀σ̂1, σ̂2 ∈ Var⋆ → Z⊤ : ∀x : (σ̂1 ⊔ σ̂2)(x) = σ̂1(x) ⊔ σ̂2(x)
44 CP State Evaluation.
A_CP : AExp → (State_CP → Z⊤⊥)
A_CP[[x]]σ̂ = ⊥ if σ̂ = ⊥, and σ̂(x) otherwise
A_CP[[n]]σ̂ = ⊥ if σ̂ = ⊥, and n otherwise
A_CP[[a1 opa a2]]σ̂ = A_CP[[a1]]σ̂ ôpa A_CP[[a2]]σ̂
The operations on Z are lifted to Z⊤⊥ = Z ∪ {⊥, ⊤} by taking z1 ôpa z2 = z1 opa z2 if z1, z2 ∈ Z (where opa is the corresponding arithmetic operation on Z), z1 ôpa z2 = ⊥ if z1 = ⊥ or z2 = ⊥, and z1 ôpa z2 = ⊤ otherwise.
CP Transfer Functions.
F_CP = {f | f is a monotone function on State_CP}
[x := a]^l : fl(σ̂) = ⊥ if σ̂ = ⊥, and σ̂[x ↦ A_CP[[a]]σ̂] otherwise
[skip]^l : fl(σ̂) = σ̂
[b]^l : fl(σ̂) = σ̂
45 CP Flow. Constant Propagation (CP) is a forward analysis, so for the program S⋆ we take the flow, F, to be flow(S⋆). The extremal labels, E, are given by {init(S⋆)}, and the extremal value, ι_CP, is λx.⊤. The property lattice L and the transfer functions F_CP are as above.
Lemma: Constant Propagation is a Monotone Framework that is not a Distributive Framework.
Distributive Framework. To show that it is not a Distributive Framework, consider the transfer function fl for [y := x * x]^l, and let σ̂1 and σ̂2 be such that σ̂1(x) = 1 and σ̂2(x) = -1. Then σ̂1 ⊔ σ̂2 maps x to ⊤, and thus fl(σ̂1 ⊔ σ̂2) maps y to ⊤ and hence fails to record that y has the constant value 1. However, both fl(σ̂1) and fl(σ̂2) map y to 1, and so does fl(σ̂1) ⊔ fl(σ̂2).
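The non-distributivity argument can be checked concretely. A minimal sketch (representing an abstract CP state as a dict from variable names to an integer or the token 'T' for ⊤; all names are my own):

```python
TOP = 'T'                               # stands for the lattice top element

def join_val(a, b):
    """Join in Z ∪ {T}: equal constants stay, disagreement goes to T."""
    return a if a == b else TOP

def join_state(s1, s2):
    """Pointwise join of two abstract states over the same variables."""
    return {x: join_val(s1[x], s2[x]) for x in s1}

def times(a, b):
    """Lifted multiplication on Z ∪ {T}."""
    return TOP if TOP in (a, b) else a * b

def f(s):
    """Transfer function for the block [y := x * x]."""
    return {**s, 'y': times(s['x'], s['x'])}

s1 = {'x': 1, 'y': TOP}
s2 = {'x': -1, 'y': TOP}
lhs = f(join_state(s1, s2))             # analyse after joining: y becomes T
rhs = join_state(f(s1), f(s2))          # join after analysing: y = 1 on both
```

`lhs` loses the fact that y is constantly 1, while `rhs` keeps it, so f(σ̂1 ⊔ σ̂2) ≠ f(σ̂1) ⊔ f(σ̂2).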
46 Constructing Solutions. The MFP Solution (1) [Not for Exam]
INPUT: An instance of a Monotone Framework: (L, F, F, E, ι, f)
OUTPUT: MFP∘, MFP•
Step 1: Initialisation (of W and Analysis)
W := nil;
for all (l, l′) in F do W := cons((l, l′), W);
for all l in F or E do
  if l ∈ E then Analysis[l] := ι else Analysis[l] := ⊥_L;
47 The MFP Solution (2&3) [Not for Exam]
Step 2: Iteration (updating W and Analysis)
while W ≠ nil do
  l := fst(head(W)); l′ := snd(head(W)); W := tail(W);
  if f_l(Analysis[l]) ⋢ Analysis[l′] then
    Analysis[l′] := Analysis[l′] ⊔ f_l(Analysis[l]);
    for all (l′, l″) in F do W := cons((l′, l″), W);
Step 3: Presenting the result (MFP∘ and MFP•)
for all l in F or E do
  MFP∘(l) := Analysis[l];
  MFP•(l) := f_l(Analysis[l])
MFP Termination. Given an instance of a Monotone Framework (L, F, F, E, ι, f) with a property lattice L fulfilling the ACC/DCC. Starting from ⊥ and using iterative (approximation) methods like Chaotic Iteration or the Worklist Algorithm (which optimises the iterations by only considering updates when necessary) we can compute solutions Analysis∘ and Analysis•.
Lemma: The iterative construction of a solution (using chaotic iteration or the worklist algorithm) always terminates, and it computes the least MFP solution (more precisely MFP∘ and MFP•) of the instance of the framework.
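The three steps above can be sketched generically and then instantiated. In the sketch below (parameter names are my own) the lattice is passed in as its bottom element together with join and order functions, so the same code serves must and may analyses; it is instantiated for the Available Expressions example, where ⊑ is ⊇, ⊔ is ∩ and ⊥ is the full set of expressions.

```python
def mfp(flow, E, iota, bottom, join, leq, transfer, labels):
    """Worklist algorithm for an instance (L, F, flow, E, iota, f)."""
    analysis = {l: iota if l in E else bottom for l in labels}
    W = list(flow)                              # Step 1: all edges queued
    while W:                                    # Step 2: iterate to stability
        l, l2 = W.pop()
        new = transfer(l, analysis[l])
        if not leq(new, analysis[l2]):          # f_l(A[l]) not below A[l2]
            analysis[l2] = join(analysis[l2], new)
            W.extend(e for e in flow if e[0] == l2)
    circ = dict(analysis)                       # Step 3: MFP∘ and MFP•
    bullet = {l: transfer(l, circ[l]) for l in labels}
    return circ, bullet

# Instance: AE for [x:=a+b]^1;[y:=a*b]^2;while [y>a+b]^3 do ([a:=a+1]^4;[x:=a+b]^5)
AEXP = frozenset({'a+b', 'a*b', 'a+1'})
KILL = {4: frozenset({'a+b', 'a*b', 'a+1'})}
GEN = {1: {'a+b'}, 2: {'a*b'}, 3: {'a+b'}, 5: {'a+b'}}
circ, bullet = mfp(
    flow={(1, 2), (2, 3), (3, 4), (4, 5), (5, 3)},
    E={1}, iota=frozenset(), bottom=AEXP,
    join=lambda a, b: a & b,                    # join is intersection (must)
    leq=lambda a, b: a >= b,                    # a below b iff a is a superset
    transfer=lambda l, v: frozenset((v - KILL.get(l, frozenset()))
                                    | GEN.get(l, set())),
    labels={1, 2, 3, 4, 5})
```

The result reproduces the AE solution table: `circ` gives the entry information and `bullet` the exit information.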
48 MOP Solution: Paths. Consider an instance (L, F, F, E, ι, f) of a Monotone Framework. We shall use the notation l⃗ = [l1, ..., ln] for a sequence of n ≥ 0 labels. The paths up to but not including l are:
path∘(l) = {[l1, ..., ln-1] | n ≥ 1 ∧ ∀i < n : (li, li+1) ∈ F ∧ ln = l ∧ l1 ∈ E}
The paths up to and including l are:
path•(l) = {[l1, ..., ln] | n ≥ 1 ∧ ∀i < n : (li, li+1) ∈ F ∧ ln = l ∧ l1 ∈ E}
MOP Solutions. For a path l⃗ = [l1, ..., ln] we define the transfer function
f_l⃗ = f_ln ∘ ... ∘ f_l1 ∘ id
so that for the empty path we have f_[] = id, where id is the identity function. The MOP solutions are then given by:
MOP∘(l) = ⨆{f_l⃗(ι) | l⃗ ∈ path∘(l)}
MOP•(l) = ⨆{f_l⃗(ι) | l⃗ ∈ path•(l)}
49 MOP Solution: Termination. Unfortunately, the MOP solution is sometimes not computable (meaning that it is undecidable what the solution is), even though the MFP solution is always easily computable (because the property space satisfies the Ascending Chain Condition); the following establishes one such result:
Lemma: The MOP solution for the Constant Propagation Analysis is undecidable.
MFP and MOP Solutions.
Lemma: Consider the MFP and the MOP solutions to an instance (L, F, F, E, ι, f) of a Monotone Framework; then:
MFP∘ ⊒ MOP∘ and MFP• ⊒ MOP•
If the framework is a Distributive Framework and if path∘(l) ≠ ∅ for all l in E and F, then:
MFP∘ = MOP∘ and MFP• = MOP•
It is always possible to formulate the MOP solution as an MFP solution over a different property space (like P(L)), and therefore little is lost by focusing on the fixed point approach to Monotone Frameworks.
50 Abstract Interpretation. Model Checking vs Program Analysis. [Diagram: model checking relates a model S to a property Φ (S ⊨ Φ); program analysis works directly on code such as: m := 1; while n > 0 do m := m*n; n := n-1; endwhile]
51 Concrete Semantics vs Abstract Semantics. [Diagram: the concrete model of the code (m := 1; while n > 0 do m := m*n; n := n-1; endwhile) is related to an abstract model via the abstraction and concretisation maps α and γ; a concrete property corresponds to an abstract property #.]
Cast-out-of-Nines. A plausibility check for arithmetic calculations: perform the operations on n mod 9 (it is enough to consider the digit sum) and compare with the claimed result mod 9. This holds because of elementary facts like:
(a ± b) mod 9 = ((a mod 9) ± (b mod 9)) mod 9
(a * b) mod 9 = ((a mod 9) * (b mod 9)) mod 9
(10a ± b) mod 9 = (a ± b) mod 9
Note that there are false positives; cf. also [1] and [2].
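Casting out nines is itself a small abstract interpretation: the digit sum mod 9 abstracts an integer, and the identities above say the abstraction commutes with the arithmetic. A sketch (the concrete numbers used below are my own illustration, not the lecture's worked example):

```python
def digit_root(n):
    """n mod 9, computed via the digit sum (the 'cast out nines' value)."""
    return sum(int(d) for d in str(abs(n))) % 9

def plausible_product(a, b, claimed):
    """Check a claimed value of a*b against the mod-9 abstraction."""
    return (digit_root(a) * digit_root(b)) % 9 == digit_root(claimed)
```

For instance, 123 * 456 = 56088 passes the check, while the off-by-one 56089 fails it; a result that is wrong by a multiple of 9 would still pass, which is exactly the false-positive phenomenon mentioned above.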
52 Approximation and Correctness. Data flow analyses can be re-formulated in a different scenario where correctness is guaranteed by construction. Classically, the theory of Abstract Interpretation allows us to:
construct a simplified (computable) abstract semantics,
construct approximate solutions, and
obtain the correctness of the approximate solutions.
Abstract Interpretation also uses other techniques, like widening/narrowing, which we will not cover here.
Notions of Approximation. Assume that we have a solution s to a problem. What counts as a (good) approximation s′ to s? In order-theoretic structures we are looking for Safe Approximations:
s′ ⊑ s or s ⊑ s′
In quantitative, vector space structures we want Close Approximations:
‖s - s′‖ = min_x ‖s - x‖
53 Example: Function Approximation

Concrete and abstract domain are step-functions on [a, b]. The set of (real-valued) step-functions T_n is based on the sub-division of the interval into n sub-intervals.

105 / 164

Close Approximations

106 / 164
54 Close vs Correct Approximations

107 / 164

Abstract Interpretation

Often problems have too costly solutions or are uncomputable on a concrete space (complete lattice).

Aim: Find abstract descriptions on which computations are easier; then relate the concrete and abstract solutions.

Definition: Let C = (C, ⊑_C) and D = (D, ⊑_D) be two partially ordered sets. If there are two functions α : C → D and γ : D → C such that for all c ∈ C and all d ∈ D:

c ⊑_C γ(d) iff α(c) ⊑_D d,

then (C, α, γ, D) form a Galois connection.

108 / 164
55 Galois Connections

Definition: Let C = (C, ⊑_C) and D = (D, ⊑_D) be two partially ordered sets with two order-preserving functions α : C → D and γ : D → C. Then (C, α, γ, D) form a Galois connection iff
(i) α ∘ γ is reductive, i.e. ∀d ∈ D : α(γ(d)) ⊑_D d,
(ii) γ ∘ α is extensive, i.e. ∀c ∈ C : c ⊑_C γ(α(c)).

Proposition: Let (C, α, γ, D) be a Galois connection. Then α and γ are quasi-inverse, i.e.
(i) α ∘ γ ∘ α = α and (ii) γ ∘ α ∘ γ = γ.

109 / 164

Uniqueness and Duality

Given an abstraction α there is a unique concretisation γ (and vice versa).

Proposition: Let (C, α, γ, D) be a Galois connection; then
(i) α uniquely determines γ by γ(d) = ⊔{c | α(c) ⊑_D d}, and γ uniquely determines α via α(c) = ⊓{d | c ⊑_C γ(d)};
(ii) α is completely additive and γ is completely multiplicative; in particular α(⊥) = ⊥ and γ(⊤) = ⊤.

For a proof see e.g. [3].

110 / 164
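For a finite toy domain the defining adjunction can be checked exhaustively. The sketch below is our own construction (the finite sample V and all names are illustrative, not from the slides): it uses the parity extraction η to build α and γ between P(V), for a small V ⊆ Z, and P({even, odd}), then verifies c ⊆ γ(d) ⇔ α(c) ⊆ d together with the reductive/extensive characterisation.

```python
from itertools import combinations

V = set(range(-3, 4))                      # finite stand-in for the concrete values

def eta(n):                                # extraction function: parity of n
    return 'even' if n % 2 == 0 else 'odd'

def alpha(c):                              # α(C) = {η(v) | v ∈ C}
    return frozenset(eta(v) for v in c)

def gamma(d):                              # γ(D') = {v ∈ V | η(v) ∈ D'}
    return frozenset(v for v in V if eta(v) in d)

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Cs, Ds = powerset(V), powerset({'even', 'odd'})

# The adjunction: c ⊆ γ(d) iff α(c) ⊆ d, for all c and d.
assert all((c <= gamma(d)) == (alpha(c) <= d) for c in Cs for d in Ds)
# Equivalently: α∘γ is reductive and γ∘α is extensive.
assert all(alpha(gamma(d)) <= d for d in Ds)
assert all(c <= gamma(alpha(c)) for c in Cs)
```

On a small finite universe this brute-force check is feasible (2^7 · 2^2 = 512 pairs here) and makes the equivalence of the two definitions on the slide concrete.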
56 Correctness and Optimality

Proposition: Let α : P(Z) → D and γ : D → P(Z) form a Galois connection, with D some property lattice. Consider an operation op : Z → Z on Z which is lifted to ôp : P(Z) → P(Z) via ôp(X) = {op(x) | x ∈ X}. Then op# : D → D defined as op# = α ∘ ôp ∘ γ is the most precise function on D satisfying, for all X ⊆ Z:

α(ôp(X)) ⊑ op#(α(X))

It is enough to consider so-called Galois Insertions. See [1].

111 / 164

General Construction

The general construction of correct (and optimal) abstractions f# of a concrete function f : A → B relates A and B to abstract domains A# and B# via α and γ:

1. Correct approximation: α ∘ f ⊑ f# ∘ α.
2. Induced semantics: f# = α ∘ f ∘ γ.

112 / 164
57 Abstract Multiplication

How can we justify or obtain correct abstract versions of various operations, e.g. multiplication?

·#    | even | odd
------+------+------
even  | even | even
odd   | even | odd

Abstract Interpretation, introduced by Patrick Cousot and Radhia Cousot in 1977, even allows us to compute abstractions which are correct by construction.

113 / 164

Concrete Semantics [Not for Exam]

We will give a language- and semantics-independent treatment of correctness. To set the scene, imagine some programming language, e.g. WHILE. Its semantics identifies some set V of values (like states, pointers, double precision reals) and specifies how a program S transforms one value v1 to another v2; we may write this as

S ⊢ v1 ⇝ v2

114 / 164
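The multiplication table above can be derived rather than guessed: with the parity Galois connection, the induced operation mult# = α ∘ m̂ult ∘ γ from the previous slide reproduces exactly these entries. A Python sketch (the finite sample V and all names are our own illustration, not from the slides):

```python
V = set(range(-5, 6))                       # finite stand-in for Z

def eta(n):                                 # parity of n
    return 'even' if n % 2 == 0 else 'odd'

def alpha(c):                               # α(C) = {η(v) | v ∈ C}
    return frozenset(eta(v) for v in c)

def gamma(d):                               # γ(D') = {v ∈ V | η(v) ∈ D'}
    return frozenset(v for v in V if eta(v) in d)

def lifted_mult(X, Y):                      # the lifted concrete multiplication
    return {x * y for x in X for y in Y}

def abstract_mult(d1, d2):                  # mult# = α ∘ lifted_mult ∘ (γ × γ)
    return alpha(lifted_mult(gamma(d1), gamma(d2)))

EVEN, ODD = frozenset({'even'}), frozenset({'odd'})
assert abstract_mult(EVEN, EVEN) == EVEN
assert abstract_mult(EVEN, ODD) == EVEN
assert abstract_mult(ODD, EVEN) == EVEN
assert abstract_mult(ODD, ODD) == ODD
```

The point of the construction is that the abstract operation needs no separate soundness proof: it is correct (and optimal on the chosen domain) by construction.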
58 Abstract Analysis [Not for Exam]

In a similar way, a program analysis identifies the set L of properties (like shapes of states, abstract closures, lower and upper bounds for reals) and specifies how a program S transforms one property l1 to another l2; we may write

S ⊢ l1 ▷ l2

Unlike for general semantics, it is customary to require ▷ to be deterministic and thereby define a function; this will allow us to write f_S(l1) = l2 to mean S ⊢ l1 ▷ l2.

115 / 164

Situation in While [Not for Exam]

We have SOS transitions ⟨S, s⟩ ⇒ ⟨S′, s′⟩ with S and S′ programs and s, s′ ∈ State = (Var → Z), e.g.

⟨z := 2*z, [z ↦ 2]⟩ ⇒ [z ↦ 4]

translates to just an evaluation of the state:

z := 2*z ⊢ [z ↦ 2] ⇝ [z ↦ 4]

The fact that this also holds for the (abstract) parity means:

z := 2*z ⊢ even(z) ▷ even(z) and also z := 2*z ⊢ odd(z) ▷ even(z).

116 / 164
59 Correctness Relations [Not for Exam]

Every program analysis should be correct with respect to the semantics. For a class of (so-called first-order) program analyses this is established by directly relating properties to values using a correctness relation:

R : V × L → {tt, ff} or R ⊆ V × L

The intention is that v R l formalises our claim that the value v is described by the property l.

117 / 164

Preservation of Correctness [Not for Exam]

To be useful one has to prove that the correctness relation R is preserved under computation: if the relation holds between the initial value and the initial property then it also holds between the final value and the final property. This may be formulated as the implication:

v1 R l1 ∧ S ⊢ v1 ⇝ v2 ∧ S ⊢ l1 ▷ l2 ⇒ v2 R l2

118 / 164
60 Preservation of Correctness [Not for Exam]

This property is also expressed by the following diagram:

v1 --S⇝--> v2
 |R         |R
l1 --S▷--> l2

119 / 164

Correctness of Parity [Not for Exam]

0 R even, 1 R odd, 2 R even, 3 R odd, 4 R even, 5 R odd, ...

z := 2*z ⊢ [z ↦ 1] ⇝ [z ↦ 2] and odd(z) ▷ even(z): from 1 R odd we get 2 R even.
z := 2*z ⊢ [z ↦ 2] ⇝ [z ↦ 4] and even(z) ▷ even(z): from 2 R even we get 4 R even.
z := 2*z ⊢ [z ↦ 3] ⇝ [z ↦ 6] and odd(z) ▷ even(z): from 3 R odd we get 6 R even.
...

It is therefore correct to say: p ≡ z := 2*z always produces an even z.

120 / 164
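The lock-step argument above can be checked mechanically for p ≡ z := 2*z: relate each value to its parity via R, run one concrete step and one abstract step, and confirm that R still holds afterwards. A minimal Python sketch (all function names are our own, not from the slides):

```python
def parity(n):
    """The best parity property describing n."""
    return 'even' if n % 2 == 0 else 'odd'

def R(v, l):
    """Correctness relation: value v is described by parity property l."""
    return parity(v) == l

def concrete_step(z):
    """Concrete semantics of z := 2*z."""
    return 2 * z

def abstract_step(l):
    """Parity transfer function of z := 2*z: doubling yields an even value."""
    return 'even'

# Preservation: if v R l and both steps are taken, then the new value is
# related to the new property.
assert all(R(concrete_step(v), abstract_step(parity(v))) for v in range(-20, 21))
```

This is of course only a finite spot check, not a proof; the slide's diagram states the property for all values and all programs of interest.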
61 Abstract Interpretation and Correctness [Not for Exam]

The theory of Abstract Interpretation comes to life when we augment the set of properties L with a preorder (better: lattice) structure and relate this to the correctness relation R. The most common scenario is when L = (L, ⊑, ⊔, ⊓, ⊥, ⊤) is a complete lattice with partial ordering ⊑. We then impose the following relationship between R and L:

v R l1 ∧ l1 ⊑ l2 ⇒ v R l2   (1)
(∀l ∈ L′ ⊆ L : v R l) ⇒ v R (⊓L′)   (2)

121 / 164

Condition (1) [Not for Exam]

Consider the first of these conditions:

v R l1 ∧ l1 ⊑ l2 ⇒ v R l2

The condition says that the smaller the property is with respect to the partial order, the better (i.e. more precise) it is. This is an arbitrary decision in the sense that we could instead have decided that the larger the property is, the better it is, as is indeed the case in much of the literature on Data Flow Analysis; luckily the principle of duality from lattice theory tells us that this difference is only cosmetic.

122 / 164
62 Condition (2) [Not for Exam]

Looking at the second condition describing correctness:

(∀l ∈ L′ ⊆ L : v R l) ⇒ v R (⊓L′)

The second condition says that there is always a best property for describing a value. This is important for having to perform only one analysis (using the best property, i.e. the greatest lower bound of the candidates) instead of several analyses (one for each of the candidates). The condition has two immediate consequences:

v R ⊤
v R l1 ∧ v R l2 ⇒ v R (l1 ⊓ l2)

123 / 164

Again: Parity Example [Not for Exam]

The abstract properties even and odd do not themselves form a lattice L, but we can use as usual:

L = P({even, odd}),

where {even} represents the definite fact even and {odd} the precise property odd, while the empty set ∅ = ⊥ represents an undefined parity and ⊤ = {even, odd} stands for any parity.

The conditions imposed on the correctness relation R and the property lattice L mean in this case:

(1) Any weaker parity property is also a valid description, e.g. 2 R {even} ∧ {even} ⊑ ⊤ ⇒ 2 R ⊤.
(2) The most precise parity is valid, e.g. (2 R {even} ∧ 2 R ⊤) ⇒ 2 R ({even} ⊓ ⊤), i.e. (2 R {even} ∧ 2 R ⊤) ⇒ 2 R {even}.

124 / 164
63 Representation Functions

An alternative approach to the use of a correctness relation R : V × L → {tt, ff} between values and properties is to use a representation function:

β : V → L

The idea is that the function β maps a value v to the best property l describing it. The correctness criterion for the analysis is then formulated as follows:

β(v1) ⊑ l1 ∧ S ⊢ v1 ⇝ v2 ∧ S ⊢ l1 ▷ l2 ⇒ β(v2) ⊑ l2

125 / 164

Correctness Criterion [Not for Exam]

This is also expressed by the following diagram:

v1 --S⇝--> v2
 |β         |β
l1 --S▷--> l2

Thus the idea is that if the initial value v1 is safely described by l1 then the final value v2 will be safely described by the result l2 of the analysis.

126 / 164
64 Representation and Extraction Functions

We can use a representation function β : V → L to induce a Galois connection (P(V), α, γ, L) via

α(V′) = ⊔{β(v) | v ∈ V′}
γ(l) = {v ∈ V | β(v) ⊑ l}

For L = P(D) with D being some set of abstract values we can also use an extraction function η : V → D, defined via

α(V′) = {η(v) | v ∈ V′}
γ(D′) = {v | η(v) ∈ D′}

in order to construct a Galois connection.

127 / 164

Example: Parity

A representation function β : Z → P({even, odd}) is easily defined by:

β(n) = {even} if ∃k ∈ Z s.t. n = 2k
β(n) = {odd} otherwise

Correctness implies that the abstract properties are dominated by the actual ones, e.g. β(4) = {even} ⊑ {even, odd} is acceptable. This means that we could also use as a representation function β(n) = ⊤ = {even, odd} for all n ∈ Z. Though this would be valid, it would also be rather imprecise.

128 / 164
65 Extending Correctness Relations [Not for Exam]

Given a property lattice L and a correctness relation R ⊆ V × L, let (L, α, γ, M) be a Galois connection. Define a relation S ⊆ V × M via

v S m ⇔ v R γ(m)

This is also a correctness relation, because

v S m1 ∧ m1 ⊑ m2 ⇒ v R γ(m1) ∧ γ(m1) ⊑ γ(m2) ⇒ v R γ(m2) ⇒ v S m2

and

∀m ∈ M′ ⊆ M : v S m ⇒ ∀m ∈ M′ : v R γ(m) ⇒ v R ⊓{γ(m) | m ∈ M′} ⇒ v R γ(⊓M′) ⇒ v S ⊓M′

129 / 164

References: Abstract Interpretation

[1] Neil D. Jones and Flemming Nielson: Abstract Interpretation: A semantics-based tool for program analysis. In: Handbook of Logic in Computer Science (Vol. 4), Oxford University Press, 1995.
[2] Patrick Cousot and Radhia Cousot: Abstract Interpretation and application to logic programs. The Journal of Logic Programming, Vol. 13, pp. 103–179, 1992.
[3] Flemming Nielson, Hanne Riis Nielson and Chris Hankin: Principles of Program Analysis. Chapter 4, Springer Verlag, 1999/2005.
[4] Patrick Cousot: Abstract Interpretation. MIT Course.

130 / 164
66 Appendix: Further Classical Analysis [Not for Exam]

131 / 164

Classical Instances

        Available          Reaching               Very Busy          Live
        Expressions        Definitions            Expressions        Variables
L       P(AExp⋆)           P(Var⋆ × Lab⋆)         P(AExp⋆)           P(Var⋆)
⊥       AExp⋆              ∅                      AExp⋆              ∅
ι       ∅                  {(x,?) | x ∈ FV(S⋆)}   ∅                  ∅
E       {init(S⋆)}         {init(S⋆)}             final(S⋆)          final(S⋆)
F       flow(S⋆)           flow(S⋆)               flow_R(S⋆)         flow_R(S⋆)
F       {f : L → L | ∃ l_k, l_g : f(l) = (l \ l_k) ∪ l_g}
f_l     f_l(l) = (l \ kill([B]^l)) ∪ gen([B]^l) where [B]^l ∈ blocks(S⋆)

132 / 164
67 Reaching Definitions Analysis

The Reaching Definitions Analysis is analogous to the previous one, except that we are interested in:

For each program point, which assignments may have been made and not overwritten when program execution reaches this point along some path.

A main application of Reaching Definitions Analysis is in the construction of direct links between blocks that produce values and blocks that use them.

133 / 164

Example

A simple example to illustrate the RD analysis would be:

[x:=5]^1; [y:=1]^2; while [x>1]^3 do ([y:=x*y]^4; [x:=x-1]^5)

All of the assignments reach the entry of 4 (the assignments labelled 1 and 2 reach there on the first iteration); only the assignments labelled 1, 4 and 5 reach the entry of 5.

134 / 164
68 RD Analysis

kill_RD : Block → P(Var⋆ × Lab⋆)
gen_RD : Block → P(Var⋆ × Lab⋆)
RD_entry : Lab⋆ → P(Var⋆ × Lab⋆)
RD_exit : Lab⋆ → P(Var⋆ × Lab⋆)

Remark: Strictly speaking we need P(Var⋆ × (Lab⋆ ∪ {?})).

135 / 164

RD Auxiliary Functions

kill_RD([x:=a]^l) = {(x,?)} ∪ {(x,l′) | [B]^{l′} is a definition of x in S⋆}
kill_RD([skip]^l) = ∅
kill_RD([b]^l) = ∅

gen_RD([x:=a]^l) = {(x,l)}
gen_RD([skip]^l) = ∅
gen_RD([b]^l) = ∅

136 / 164
69 RD Equation Schemes

RD_entry(l) = {(x,?) | x ∈ FV(S⋆)}, if l = init(S⋆)
RD_entry(l) = ∪{RD_exit(l′) | (l′,l) ∈ flow(S⋆)}, otherwise

RD_exit(l) = (RD_entry(l) \ kill_RD([B]^l)) ∪ gen_RD([B]^l) where [B]^l ∈ blocks(S⋆)

137 / 164

Smallest Solution

Similar to before, this is a forward analysis, but we are interested in the smallest sets satisfying the equation for RD_entry.

[z:=x+y]^l; while [true]^{l′} do [skip]^{l″}

RD_entry(l) = {(x,?), (y,?), (z,?)}
RD_entry(l′) = RD_exit(l) ∪ RD_exit(l″)
RD_entry(l″) = RD_exit(l′)

RD_exit(l) = (RD_entry(l) \ {(z,?)}) ∪ {(z,l)}
RD_exit(l′) = RD_entry(l′)
RD_exit(l″) = RD_entry(l″)

138 / 164
70 Obtaining Solutions

[flow graph: [z:=x+y]^l, then the test [true]^{l′} with a "yes" branch to [skip]^{l″} looping back to l′, and a "no" branch leaving the loop]

After some simplification, we find that:

RD_entry(l′) = {(x,?), (y,?), (z,l)} ∪ RD_entry(l′)

139 / 164

RD Example

[x:=5]^1; [y:=1]^2; while [x>1]^3 do ([y:=x*y]^4; [x:=x-1]^5)

l   kill_RD(l)               gen_RD(l)
1   {(x,?), (x,1), (x,5)}    {(x,1)}
2   {(y,?), (y,2), (y,4)}    {(y,2)}
3   ∅                        ∅
4   {(y,?), (y,2), (y,4)}    {(y,4)}
5   {(x,?), (x,1), (x,5)}    {(x,5)}

140 / 164
71 RD Example: Equations

[x:=5]^1; [y:=1]^2; while [x>1]^3 do ([y:=x*y]^4; [x:=x-1]^5)

RD_exit(1) = (RD_entry(1) \ {(x,?), (x,1), (x,5)}) ∪ {(x,1)}
RD_exit(2) = (RD_entry(2) \ {(y,?), (y,2), (y,4)}) ∪ {(y,2)}
RD_exit(3) = RD_entry(3)
RD_exit(4) = (RD_entry(4) \ {(y,?), (y,2), (y,4)}) ∪ {(y,4)}
RD_exit(5) = (RD_entry(5) \ {(x,?), (x,1), (x,5)}) ∪ {(x,5)}

141 / 164

RD Example: Equations

[x:=5]^1; [y:=1]^2; while [x>1]^3 do ([y:=x*y]^4; [x:=x-1]^5)

RD_entry(1) = {(x,?), (y,?)}
RD_entry(2) = RD_exit(1)
RD_entry(3) = RD_exit(2) ∪ RD_exit(5)
RD_entry(4) = RD_exit(3)
RD_entry(5) = RD_exit(4)

142 / 164
72 RD Example: Equations

RD_entry(1) = {(x,?), (y,?)}
RD_entry(2) = RD_exit(1)
RD_entry(3) = RD_exit(2) ∪ RD_exit(5)
RD_entry(4) = RD_exit(3)
RD_entry(5) = RD_exit(4)

RD_exit(1) = (RD_entry(1) \ {(x,?), (x,1), (x,5)}) ∪ {(x,1)}
RD_exit(2) = (RD_entry(2) \ {(y,?), (y,2), (y,4)}) ∪ {(y,2)}
RD_exit(3) = RD_entry(3)
RD_exit(4) = (RD_entry(4) \ {(y,?), (y,2), (y,4)}) ∪ {(y,4)}
RD_exit(5) = (RD_entry(5) \ {(x,?), (x,1), (x,5)}) ∪ {(x,5)}

143 / 164

RD Example: Solutions

l   RD_entry(l)                     RD_exit(l)
1   {(x,?), (y,?)}                  {(y,?), (x,1)}
2   {(y,?), (x,1)}                  {(x,1), (y,2)}
3   {(x,1), (y,2), (y,4), (x,5)}    {(x,1), (y,2), (y,4), (x,5)}
4   {(x,1), (y,2), (y,4), (x,5)}    {(x,1), (y,4), (x,5)}
5   {(x,1), (y,4), (x,5)}           {(y,4), (x,5)}

[x:=5]^1; [y:=1]^2; while [x>1]^3 do ([y:=x*y]^4; [x:=x-1]^5)

144 / 164
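The solution table above can be reproduced by a small fixed-point iteration over the equations (a minimal sketch; the encoding of labels, flows and (variable, label) pairs as Python values is our own):

```python
# Control flow and kill/gen sets for
# [x:=5]^1; [y:=1]^2; while [x>1]^3 do ([y:=x*y]^4; [x:=x-1]^5)
flow = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 3)]
kill = {1: {('x', '?'), ('x', 1), ('x', 5)},
        2: {('y', '?'), ('y', 2), ('y', 4)},
        3: set(),
        4: {('y', '?'), ('y', 2), ('y', 4)},
        5: {('x', '?'), ('x', 1), ('x', 5)}}
gen = {1: {('x', 1)}, 2: {('y', 2)}, 3: set(), 4: {('y', 4)}, 5: {('x', 5)}}
init = {('x', '?'), ('y', '?')}

# Chaotic iteration from the bottom element: converges to the least solution.
entry = {l: set() for l in kill}
exit_ = {l: set() for l in kill}
changed = True
while changed:
    changed = False
    for l in kill:
        new_entry = set(init) if l == 1 else set().union(
            *[exit_[lp] for (lp, ln) in flow if ln == l])
        new_exit = (new_entry - kill[l]) | gen[l]
        if new_entry != entry[l] or new_exit != exit_[l]:
            entry[l], exit_[l] = new_entry, new_exit
            changed = True

assert entry[3] == {('x', 1), ('y', 2), ('y', 4), ('x', 5)}
assert exit_[5] == {('y', 4), ('x', 5)}
```

Since the transfer functions are monotone and the property space is finite, the iteration is guaranteed to terminate in the smallest sets, matching the table row by row.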
73 Very Busy Expression Analysis

An expression is very busy at the exit from a label if, no matter what path is taken from the label, the expression must (i.e. is guaranteed to) be used before any of the variables occurring in it are redefined.

The aim of the Very Busy Expressions Analysis is to determine:

For each program point, which expressions must be very busy at the exit from the point.

A possible optimisation based on this information is to evaluate the expression at the block and store its value for later use; this optimisation is sometimes called hoisting the expression.

145 / 164

Example

We illustrate this analysis with the following example:

if [a>b]^1 then ([x:=b-a]^2; [y:=a-b]^3) else ([y:=b-a]^4; [x:=a-b]^5)

The expressions a-b and b-a are both very busy at the start of the program (label 1). They can be hoisted, resulting in a code size reduction.

146 / 164
74 VB Analysis

kill_VB : Block → P(AExp⋆)
gen_VB : Block → P(AExp⋆)
VB_entry : Lab⋆ → P(AExp⋆)
VB_exit : Lab⋆ → P(AExp⋆)

The analysis is a backward analysis and we are interested in the largest sets satisfying the equation for VB_exit.

147 / 164

VB Auxiliary Functions

kill_VB([x:=a]^l) = {a′ ∈ AExp⋆ | x ∈ FV(a′)}
kill_VB([skip]^l) = ∅
kill_VB([b]^l) = ∅

gen_VB([x:=a]^l) = AExp(a)
gen_VB([skip]^l) = ∅
gen_VB([b]^l) = AExp(b)

148 / 164
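For the if-example two slides back, the backward equations can be solved by iterating downwards from the full set of expressions (towards the greatest fixed point, since we want the largest sets). A Python sketch (the labels, successor map and string-encoded expressions are our own encoding of that example):

```python
# if [a>b]^1 then ([x:=b-a]^2; [y:=a-b]^3) else ([y:=b-a]^4; [x:=a-b]^5)
AEXP = {'a-b', 'b-a'}
succ = {1: [2, 4], 2: [3], 3: [], 4: [5], 5: []}
kill = {l: set() for l in succ}   # no block redefines a or b, so nothing is killed
gen = {1: set(), 2: {'b-a'}, 3: {'a-b'}, 4: {'b-a'}, 5: {'a-b'}}

# Iterate downwards from the top element (all expressions very busy).
entry = {l: set(AEXP) for l in succ}
exit_ = {l: set(AEXP) for l in succ}
changed = True
while changed:
    changed = False
    for l in succ:
        # VB_exit(l) = intersection over successors; empty at final labels.
        new_exit = set() if not succ[l] else set.intersection(
            *[entry[s] for s in succ[l]])
        new_entry = (new_exit - kill[l]) | gen[l]
        if (new_exit, new_entry) != (exit_[l], entry[l]):
            exit_[l], entry[l] = new_exit, new_entry
            changed = True

# Both expressions are very busy at the entry of label 1, so both can be hoisted.
assert entry[1] == {'a-b', 'b-a'}
```

Note the two dualities to Reaching Definitions: the information flows backwards (over successors rather than predecessors), and paths are combined with intersection rather than union because the property is a must-property.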
A categorical model for a quantum circuit description language Francisco Rios (joint work with Peter Selinger) Department of Mathematics and Statistics Dalhousie University CT July 16th 22th, 2017 What
More informationFormal Methods in Software Engineering
Formal Methods in Software Engineering An Introduction to Model-Based Analyis and Testing Vesal Vojdani Department of Computer Science University of Tartu Fall 2014 Vesal Vojdani (University of Tartu)
More informationLogical Abstract Domains and Interpretations
Logical Abstract Domains and Interpretations Patrick Cousot 2,3, Radhia Cousot 3,1, and Laurent Mauborgne 3,4 1 Centre National de la Recherche Scientifique, Paris 2 Courant Institute of Mathematical Sciences,
More informationData Flow Analysis. Lecture 6 ECS 240. ECS 240 Data Flow Analysis 1
Data Flow Analysis Lecture 6 ECS 240 ECS 240 Data Flow Analysis 1 The Plan Introduce a few example analyses Generalize to see the underlying theory Discuss some more advanced issues ECS 240 Data Flow Analysis
More informationSpring 2015 Program Analysis and Verification. Lecture 4: Axiomatic Semantics I. Roman Manevich Ben-Gurion University
Spring 2015 Program Analysis and Verification Lecture 4: Axiomatic Semantics I Roman Manevich Ben-Gurion University Agenda Basic concepts of correctness Axiomatic semantics (pages 175-183) Hoare Logic
More informationAxiomatic Semantics: Verification Conditions. Review of Soundness and Completeness of Axiomatic Semantics. Announcements
Axiomatic Semantics: Verification Conditions Meeting 12, CSCI 5535, Spring 2009 Announcements Homework 4 is due tonight Wed forum: papers on automated testing using symbolic execution 2 Questions? Review
More informationHoare Logic and Model Checking
Hoare Logic and Model Checking Kasper Svendsen University of Cambridge CST Part II 2016/17 Acknowledgement: slides heavily based on previous versions by Mike Gordon and Alan Mycroft Introduction In the
More informationAbstract Interpretation II
Abstract Interpretation II Semantics and Application to Program Verification Antoine Miné École normale supérieure, Paris year 2015 2016 Course 11 13 May 2016 Course 11 Abstract Interpretation II Antoine
More informationHowever another possibility is
19. Special Domains Let R be an integral domain. Recall that an element a 0, of R is said to be prime, if the corresponding principal ideal p is prime and a is not a unit. Definition 19.1. Let a and b
More informationLecturecise 22 Weak monadic second-order theory of one successor (WS1S)
Lecturecise 22 Weak monadic second-order theory of one successor (WS1S) 2013 Reachability in the Heap Many programs manipulate linked data structures (lists, trees). To express many important properties
More informationBASIC CONCEPTS OF ABSTRACT INTERPRETATION
BASIC CONCEPTS OF ABSTRACT INTERPRETATION Patrick Cousot École Normale Supérieure 45 rue d Ulm 75230 Paris cedex 05, France Patrick.Cousot@ens.fr Radhia Cousot CNRS & École Polytechnique 91128 Palaiseau
More informationStatic Program Analysis
Static Program Analysis Xiangyu Zhang The slides are compiled from Alex Aiken s Michael D. Ernst s Sorin Lerner s A Scary Outline Type-based analysis Data-flow analysis Abstract interpretation Theorem
More informationModel Checking & Program Analysis
Model Checking & Program Analysis Markus Müller-Olm Dortmund University Overview Introduction Model Checking Flow Analysis Some Links between MC and FA Conclusion Apology for not giving proper credit to
More informationP is the class of problems for which there are algorithms that solve the problem in time O(n k ) for some constant k.
Complexity Theory Problems are divided into complexity classes. Informally: So far in this course, almost all algorithms had polynomial running time, i.e., on inputs of size n, worst-case running time
More informationWhat are the recursion theoretic properties of a set of axioms? Understanding a paper by William Craig Armando B. Matos
What are the recursion theoretic properties of a set of axioms? Understanding a paper by William Craig Armando B. Matos armandobcm@yahoo.com February 5, 2014 Abstract This note is for personal use. It
More informationThe non-logical symbols determine a specific F OL language and consists of the following sets. Σ = {Σ n } n<ω
1 Preliminaries In this chapter we first give a summary of the basic notations, terminology and results which will be used in this thesis. The treatment here is reduced to a list of definitions. For the
More informationRelations. We have seen several types of abstract, mathematical objects, including propositions, predicates, sets, and ordered pairs and tuples.
Relations We have seen several types of abstract, mathematical objects, including propositions, predicates, sets, and ordered pairs and tuples. Relations use ordered tuples to represent relationships among
More information30 Classification of States
30 Classification of States In a Markov chain, each state can be placed in one of the three classifications. 1 Since each state falls into one and only one category, these categories partition the states.
More informationA Tutorial on Program Analysis
A Tutorial on Program Analysis Markus Müller-Olm Dortmund University Thanks! Helmut Seidl (TU München) and Bernhard Steffen (Universität Dortmund) for discussions, inspiration, joint work,... 1 Dream of
More informationSEMANTICS OF PROGRAMMING LANGUAGES Course Notes MC 308
University of Leicester SEMANTICS OF PROGRAMMING LANGUAGES Course Notes for MC 308 Dr. R. L. Crole Department of Mathematics and Computer Science Preface These notes are to accompany the module MC 308.
More informationStatic Program Analysis
Static Program Analysis Thomas Noll Software Modeling and Verification Group RWTH Aachen University https://moves.rwth-aachen.de/teaching/ws-1617/spa/ Software Architektur Praxis-Workshop Bringen Sie Informatik
More informationEDA045F: Program Analysis LECTURE 10: TYPES 1. Christoph Reichenbach
EDA045F: Program Analysis LECTURE 10: TYPES 1 Christoph Reichenbach In the last lecture... Performance Counters Challenges in Dynamic Performance Analysis Taint Analysis Binary Instrumentation 2 / 44 Types
More informationClassical Program Logics: Hoare Logic, Weakest Liberal Preconditions
Chapter 1 Classical Program Logics: Hoare Logic, Weakest Liberal Preconditions 1.1 The IMP Language IMP is a programming language with an extensible syntax that was developed in the late 1960s. We will
More informationTemporal logics and explicit-state model checking. Pierre Wolper Université de Liège
Temporal logics and explicit-state model checking Pierre Wolper Université de Liège 1 Topics to be covered Introducing explicit-state model checking Finite automata on infinite words Temporal Logics and
More informationDR.RUPNATHJI( DR.RUPAK NATH )
Contents 1 Sets 1 2 The Real Numbers 9 3 Sequences 29 4 Series 59 5 Functions 81 6 Power Series 105 7 The elementary functions 111 Chapter 1 Sets It is very convenient to introduce some notation and terminology
More informationSyntax. Notation Throughout, and when not otherwise said, we assume a vocabulary V = C F P.
First-Order Logic Syntax The alphabet of a first-order language is organised into the following categories. Logical connectives:,,,,, and. Auxiliary symbols:.,,, ( and ). Variables: we assume a countable
More informationAxiomatic Semantics. Lecture 9 CS 565 2/12/08
Axiomatic Semantics Lecture 9 CS 565 2/12/08 Axiomatic Semantics Operational semantics describes the meaning of programs in terms of the execution steps taken by an abstract machine Denotational semantics
More informationChapter 4: Computation tree logic
INFOF412 Formal verification of computer systems Chapter 4: Computation tree logic Mickael Randour Formal Methods and Verification group Computer Science Department, ULB March 2017 1 CTL: a specification
More informationGoal. Partially-ordered set. Game plan 2/2/2013. Solving fixpoint equations
Goal Solving fixpoint equations Many problems in programming languages can be formulated as the solution of a set of mutually recursive equations: D: set, f,g:dxd D x = f(x,y) y = g(x,y) Examples Parsing:
More informationMathematics Course 111: Algebra I Part I: Algebraic Structures, Sets and Permutations
Mathematics Course 111: Algebra I Part I: Algebraic Structures, Sets and Permutations D. R. Wilkins Academic Year 1996-7 1 Number Systems and Matrix Algebra Integers The whole numbers 0, ±1, ±2, ±3, ±4,...
More information