SAT based Abstraction-Refinement using ILP and Machine Learning Techniques


Slide 1: SAT based Abstraction-Refinement using ILP and Machine Learning Techniques

Edmund Clarke, James Kukula, Anubhav Gupta, Ofer Strichman

Slide 2: Abstraction in Model Checking

Set of variables V = {x1, ..., xn}.
Set of states S = D_x1 × ... × D_xn.
Set of initial states I ⊆ S.
Set of transitions R ⊆ S × S.
Transition system M = (S, I, R).

Slide 3: Abstract Model

Abstraction function h : S → Ŝ. Abstract model M̂ = (Ŝ, Î, R̂).
Ŝ = {ŝ | ∃s ∈ S. h(s) = ŝ}
(Figure: concrete states mapped by h onto abstract states.)

Slide 4: Abstract Model

Abstraction function h : S → Ŝ. M̂ = (Ŝ, Î, R̂).
Î = {ŝ | ∃s. I(s) ∧ h(s) = ŝ}

Slide 5: Abstract Model

Abstraction function h : S → Ŝ. M̂ = (Ŝ, Î, R̂).
R̂ = {(ŝ1, ŝ2) | ∃s1. ∃s2. R(s1, s2) ∧ h(s1) = ŝ1 ∧ h(s2) = ŝ2}
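
To make the three definitions above concrete, here is a minimal explicit-state sketch in Python; the toy state space, transition relation, and abstraction function are invented for illustration only:

    from itertools import product

    S = set(product((0, 1), repeat=3))            # concrete states over (x1, x2, x3)
    I = {(0, 0, 0)}                               # initial states
    R = {(s, (s[1], s[2], s[0] ^ 1)) for s in S}  # a toy transition relation

    def h(s):
        return (s[0],)                            # abstraction: keep only x1

    S_hat = {h(s) for s in S}                     # S^ = {s^ | ∃s ∈ S. h(s) = s^}
    I_hat = {h(s) for s in I}                     # I^ = {s^ | ∃s. I(s) ∧ h(s) = s^}
    R_hat = {(h(s1), h(s2)) for (s1, s2) in R}    # existential abstraction of R

    print(S_hat, I_hat, R_hat)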

Slide 6: Model Checking AG ϕ

ϕ is a non-temporal propositional formula.
ϕ respects h if for all s ∈ S, h(s) ⊨ ϕ ⟺ s ⊨ ϕ.
(Figure: an abstraction in which ϕ respects h.)

Slide 7: Model Checking AG ϕ

ϕ is a non-temporal propositional formula.
ϕ respects h if for all s ∈ S, h(s) ⊨ ϕ ⟺ s ⊨ ϕ.
(Figure: an abstraction in which ϕ does not respect h.)

Slide 8: Preservation Theorem

Let M̂ be an abstraction of M corresponding to the abstraction function h, and let ϕ be a propositional formula that respects h. Then
M̂ ⊨ AG ϕ ⟹ M ⊨ AG ϕ.

Slide 9: Converse of Preservation Theorem

M̂ ⊭ AG ϕ does not imply M ⊭ AG ϕ.
The counterexample may be spurious: the abstraction is too coarse.

Slides 10-11: Refinement

h′ is a refinement of h if:
1. ∀s1, s2 ∈ S, h′(s1) = h′(s2) implies h(s1) = h(s2).
2. ∃s1, s2 ∈ S such that h(s1) = h(s2) and h′(s1) ≠ h′(s2).

Slide 12: Abstraction-Refinement

1. Generate an initial abstraction function h.
2. Build the abstract machine M̂ based on h. Model check M̂. If M̂ ⊨ ϕ, then M ⊨ ϕ; return TRUE.
3. If M̂ ⊭ ϕ, check the counterexample on the concrete model. If the counterexample is real, M ⊭ ϕ; return FALSE.
4. Refine h, and go to step 2.
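
The loop can be sketched as follows. The helpers build_abstract_model, model_check, counterexample_is_real, and refine are hypothetical placeholders for the machinery described on the following slides, not a real API:

    def cegar(M, phi, h):
        """The abstraction-refinement loop of slide 12, schematically."""
        while True:
            M_hat = build_abstract_model(M, h)      # step 2: build M^ from h
            holds, cex = model_check(M_hat, phi)    # model check the abstraction
            if holds:
                return True                         # M^ |= phi implies M |= phi
            if counterexample_is_real(M, h, cex):   # step 3: replay cex on M
                return False                        # a real bug: M does not satisfy phi
            h = refine(M, h, cex)                   # step 4: refine h, repeat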

Slide 13: Abstraction Function

Partition the variables V into visible (V) and invisible (I) variables, with V = {v1, ..., vk}. The partitioning defines our abstraction function h : S → Ŝ. The set of abstract states is
Ŝ = D_v1 × ... × D_vk
and the abstraction function is
h(s) = (s(v1), ..., s(vk)).
(Figure: the concrete states 0000, 0001, 0010, 0011 over x1..x4 all map to the abstract state 00 over the visible variables x1, x2.)
Refinement: move variables from I to V.
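
Under this partitioning, h is simply a projection onto the visible variables. A small sketch (the variable names and states are invented):

    variables = ["x1", "x2", "x3", "x4"]
    visible   = ["x1", "x2"]                 # V; x3 and x4 are invisible (I)

    def h(s, vis=tuple(visible)):
        """Project a state (dict: variable -> value) onto the visible variables."""
        return tuple(s[v] for v in vis)

    s1 = {"x1": 0, "x2": 0, "x3": 1, "x4": 0}
    s2 = {"x1": 0, "x2": 0, "x3": 0, "x4": 1}
    assert h(s1) == h(s2) == (0, 0)          # both collapse into abstract state 00

    # Refinement = moving a variable from I to V, which splits abstract states:
    refined = tuple(visible) + ("x3",)
    assert h(s1, refined) != h(s2, refined)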

Slide 14: Building the Abstract Model

M̂ can be computed efficiently if R is in functional form, e.g. for sequential circuits:
R(s, s′) = ∃i. ⋀_{j=1..m} (x′_j = f_{x_j}(s, i))
R̂(ŝ, ŝ′) = ∃s_I. ∃i. ⋀_{x_j ∈ V} (x̂′_j = f_{x_j}(ŝ, s_I, i))
(Figure: a circuit with latches x1..x6 and inputs i1..i3, and its abstraction keeping only the latches x1, x2; the invisible latches x3..x6 become free inputs.)
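
A toy explicit-state version of this computation: R̂ relates ŝ to ŝ′ whenever some value of the invisible state s_I and input i makes the visible next-state functions produce ŝ′. The next-state functions below are invented for illustration:

    from itertools import product

    # Invented next-state functions: x1 is visible, x2 invisible, i1 an input.
    def f_x1(x1, x2, i1):
        return x2 ^ i1
    def f_x2(x1, x2, i1):
        return x1 & i1    # defined for completeness; only visible functions enter R^

    R_hat = set()
    for x1 in (0, 1):                              # abstract state s^ = (x1,)
        for x2, i1 in product((0, 1), repeat=2):   # existentially quantify s_I and i
            R_hat.add(((x1,), (f_x1(x1, x2, i1),)))
    print(sorted(R_hat))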

Slide 15: Checking the Counterexample

Counterexample: ŝ1, ŝ2, ..., ŝm.
Set of concrete paths for the counterexample:
ψ_m = { ⟨s1, ..., sm⟩ | I(s1) ∧ ⋀_{i=1..m−1} R(s_i, s_{i+1}) ∧ ⋀_{i=1..m} h(s_i) = ŝ_i }
The right-most conjunct restricts the visible variables to their values in the counterexample.
The counterexample is spurious ⟺ ψ_m is empty.
Solve ψ_m with a SAT solver.
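
An explicit-state stand-in for this check (enumeration replaces the SAT solver, so it only scales to toy models):

    def spurious(I, R, h, cex):
        """True iff no concrete path follows the abstract counterexample cex."""
        layer = {s for s in I if h(s) == cex[0]}   # concrete states matching s^_1
        for s_hat in cex[1:]:
            layer = {t for (s, t) in R if s in layer and h(t) == s_hat}
            if not layer:
                return True                        # some prefix psi_f is unsatisfiable
        return False                               # psi_m nonempty: a real bug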

Slide 16: Checking the Counterexample

Similar to BMC formulas, except that the path is restricted to the counterexample, and we also restrict the values of the (original) inputs that are assigned by the counterexample.
If ψ_m is satisfiable, we found a real bug. If ψ_m is unsatisfiable, refine.

Slide 17: Refinement

Find the largest index f (the failure index), f < m, such that ψ_f is satisfiable.
The set D of all states d_f such that there is a concrete path d1 ... d_f in ψ_f is called the set of deadend states.
There is no concrete transition from D to a concrete state in the next abstract state.
(Figure: abstract trace vs. concrete traces; the concrete traces dead-end inside the abstract state ŝ_f.)

Slide 18: Refinement

Since there is an abstract transition from ŝ_f to ŝ_{f+1}, there is a non-empty set of transitions φ_f from h⁻¹(ŝ_f) to h⁻¹(ŝ_{f+1}):
φ_f = { ⟨s_f, s_{f+1}⟩ | R(s_f, s_{f+1}) ∧ h(s_f) = ŝ_f ∧ h(s_{f+1}) = ŝ_{f+1} }
The set B of all states b_f such that there is a transition ⟨b_f, b_{f+1}⟩ in φ_f is called the set of bad states.
(Figure: the deadend states and the bad states both lie inside the abstract state ŝ_f.)
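
Continuing the explicit-state sketch from slide 15, the failure index, deadend states, and bad states can be computed layer by layer (the paper does this with a SAT solver; enumeration is for illustration only):

    def failure_deadend_bad(I, R, h, cex):
        """Return (f, D, B) for a spurious counterexample, None if cex is real."""
        layer = {s for s in I if h(s) == cex[0]}
        for k in range(1, len(cex)):
            nxt = {t for (s, t) in R if s in layer and h(t) == cex[k]}
            if not nxt:                    # psi_{k+1} is empty, so failure index f = k
                D = layer                  # deadend states: endpoints of paths in psi_f
                B = {s for (s, t) in R     # bad states: sources of phi_f
                     if h(s) == cex[k - 1] and h(t) == cex[k]}
                return k, D, B             # D and B are disjoint by construction
            layer = nxt
        return None                        # the counterexample is real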

Slide 19: Refinement

(Figure: the abstract state ŝ_f, with the deadend states on one side and the bad states on the other.)

Slide 20: Refinement

There is a spurious transition from ŝ_f to ŝ_{f+1}. The transition is spurious because D and B lie in the same abstract state.
Refinement: put D and B in separate abstract states, i.e.
∀d ∈ D. ∀b ∈ B. h′(d) ≠ h′(b).

Slide 21: Refinement as Separation

Let S = {s1, ..., sm} and T = {t1, ..., tn} be two sets of states (binary vectors) of size l, representing assignments to a set of variables W, |W| = l.
(The state separation problem) Find a minimal set of variables U = {u1, ..., uk}, U ⊆ W, such that for each pair of states (s_i, t_j), 1 ≤ i ≤ m, 1 ≤ j ≤ n, there exists a variable u_r ∈ U such that s_i(u_r) ≠ t_j(u_r).
Let H denote the separating set for D and B. The refinement h′ is obtained by adding H to V.
Proof: Since H separates D and B, for all d ∈ D, b ∈ B there exists u ∈ H s.t. d(u) ≠ b(u). Hence, h′(d) ≠ h′(b).
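
A brute-force sketch of the state separation problem: try variable subsets in order of increasing size until one separates every pair. This is exponential in |W| (which is exactly why the paper resorts to ILP and learning), but it is exact on small instances:

    from itertools import combinations

    def min_separating_set(S, T, num_vars):
        """Smallest U (as variable indices) separating every s in S from every t in T."""
        for k in range(1, num_vars + 1):
            for U in combinations(range(num_vars), k):
                if all(any(s[u] != t[u] for u in U) for s in S for t in T):
                    return U                 # first hit is minimum, by size order
        return None                          # inseparable: S and T share a state

    # The example used on slide 24 below; the minimum has size 3.
    S = [(0, 1, 0, 1), (1, 1, 1, 0)]
    T = [(1, 1, 1, 1), (0, 0, 0, 1)]
    print(min_separating_set(S, T, 4))       # (0, 1, 3), i.e. {v1, v2, v4}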

Slide 22: Refinement as Separation and Learning

For systems of realistic size:
It is not possible to generate D and B, either explicitly or symbolically.
It is computationally expensive to separate a large D and B.
Instead, generate samples of D (denoted S_D) and of B (denoted S_B), and try to infer the separating variables from the samples.
State-of-the-art SAT solvers like Chaff can generate many samples in a short amount of time.
Our algorithm is still complete, because a counterexample will eventually be eliminated in subsequent iterations.

Slide 23: Separation using Integer Linear Programming

Separating S_D from S_B as an Integer Linear Programming (ILP) problem:
Min Σ_{i=1..|I|} v_i
subject to: ∀s ∈ S_D. ∀t ∈ S_B.  Σ_{1 ≤ i ≤ |I|, s(v_i) ≠ t(v_i)} v_i ≥ 1
v_i = 1 if and only if v_i is in the separating set.
There is one constraint per pair of states, stating that at least one of the variables separating the two states must be selected.

Slide 24: Example

s1 = (0, 1, 0, 1)    t1 = (1, 1, 1, 1)
s2 = (1, 1, 1, 0)    t2 = (0, 0, 0, 1)

Min Σ_{i=1..4} v_i subject to:
v1 + v3 ≥ 1              /* separating s1 from t1 */
v2 ≥ 1                   /* separating s1 from t2 */
v4 ≥ 1                   /* separating s2 from t1 */
v1 + v2 + v3 + v4 ≥ 1    /* separating s2 from t2 */

The optimal value of the objective function is 3, corresponding to one of the two optimal solutions {v1, v2, v4} and {v3, v2, v4}.
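
The same ILP, written out with the open-source PuLP library (the choice of PuLP is an assumption; any 0-1 ILP solver expresses this identically):

    from itertools import product
    from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

    S_D = [(0, 1, 0, 1), (1, 1, 1, 0)]    # s1, s2 (deadend samples)
    S_B = [(1, 1, 1, 1), (0, 0, 0, 1)]    # t1, t2 (bad samples)

    prob = LpProblem("separating_set", LpMinimize)
    v = [LpVariable(f"v{i + 1}", cat=LpBinary) for i in range(4)]
    prob += lpSum(v)                      # objective: minimize the set size
    for s, t in product(S_D, S_B):        # one covering constraint per pair
        prob += lpSum(v[i] for i in range(4) if s[i] != t[i]) >= 1

    prob.solve()
    print(value(prob.objective))          # 3
    print([x.name for x in v if x.value() == 1])  # e.g. ['v1', 'v2', 'v4']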

Slide 25: Separation using Decision Tree Learning

ILP-based separation: minimal separating set, but computationally expensive.
Decision-tree-learning-based separation: not optimal, but computationally efficient.

Slide 26: Decision Tree Learning

Input: a set of examples with classifications; each example assigns values to a set of attributes.
Output: a decision tree. Each internal node is a test on some attribute; each leaf corresponds to a classification.
(Figure: a decision tree with v1 at the root; its 0-branch tests v2 (0 → −1, 1 → +1) and its 1-branch tests v4 (0 → +1, 1 → −1).)

Slide 27: Separation using Decision Tree Learning

Separating S_D from S_B as a decision tree learning problem:
The attributes correspond to the invisible variables.
The classifications are +1 and −1, corresponding to S_D and S_B, respectively.
The examples are S_D, labeled +1, and S_B, labeled −1.
Separating set: all the variables present at the internal nodes of the decision tree.
Proof: Let d ∈ S_D and b ∈ S_B. The decision tree classifies d as +1 and b as −1. So there exists a node n in the decision tree, labeled with a variable v, such that d(v) ≠ b(v). By construction, v lies in the output set.

Slide 28: Example

s1 = (0, 1, 0, 1)    t1 = (1, 1, 1, 1)
s2 = (1, 1, 1, 0)    t2 = (0, 0, 0, 1)

E = { ((0, 1, 0, 1), +1), ((1, 1, 1, 0), +1), ((1, 1, 1, 1), −1), ((0, 0, 0, 1), −1) }

(Figure: the decision tree from slide 26.)
Separating set: {v1, v2, v4}.

Slide 29: Decision Tree Learning Algorithm

DecTree(Examples, Attributes), the ID3 algorithm:
1. Create a Root node for the tree.
2. If all examples are classified the same, return Root with this classification.
3. Let A = BestAttribute(Examples, Attributes). Label Root with attribute A.
4. Let Examples_0 and Examples_1 be the subsets of Examples having values 0 and 1 for A, respectively.
5. Add a 0-branch to Root pointing to the subtree generated by DecTree(Examples_0, Attributes − {A}).

6. Add a 1-branch to Root pointing to the subtree generated by DecTree(Examples_1, Attributes − {A}).
7. Return Root.
The BestAttribute procedure returns an attribute (a variable, in our case) that causes the maximum reduction in entropy when the set is partitioned according to this variable.
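
A compact, runnable rendering of ID3 specialized to our binary setting, returning the variables used at internal nodes. Ties in information gain may produce a different tree than the figure on slide 26, but any tree it builds yields a valid separating set:

    import math

    def entropy(examples):
        pos = sum(1 for _, label in examples if label == +1)
        if pos in (0, len(examples)):
            return 0.0
        p = pos / len(examples)
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def id3_vars(examples, attrs, used=None):
        """Grow an ID3 tree; return the set of attributes tested at internal nodes."""
        used = set() if used is None else used
        if len({label for _, label in examples}) <= 1 or not attrs:
            return used                     # pure leaf (or no attributes left)
        def remaining(a):                   # expected entropy after splitting on a
            parts = [[e for e in examples if e[0][a] == b] for b in (0, 1)]
            return sum(len(p) / len(examples) * entropy(p) for p in parts)
        best = min(attrs, key=remaining)    # maximum information gain
        used.add(best)
        for b in (0, 1):
            part = [e for e in examples if e[0][best] == b]
            if part:
                id3_vars(part, [a for a in attrs if a != best], used)
        return used

    E = [((0, 1, 0, 1), +1), ((1, 1, 1, 0), +1),
         ((1, 1, 1, 1), -1), ((0, 0, 0, 1), -1)]
    print(sorted(id3_vars(E, [0, 1, 2, 3])))   # [0, 1, 3], i.e. {v1, v2, v4}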

Slide 30: Efficient Sampling

Direct the search towards samples that contain more information.
Iterative algorithm: at each iteration, find new samples that are not separated by the current separating set.
Let SepSet denote the separating set for the current set of samples. New samples that are not separated by SepSet are computed by solving
Φ(SepSet) := ψ_f ∧ φ_f ∧ ⋀_{v_i ∈ SepSet} (v_i = v′_i)
(the two copies of each v_i refer to the deadend-state and bad-state parts of the formula).

Slide 31: Efficient Sampling

SepSet := ∅; i := 0;
repeat forever {
    if Φ(SepSet) is satisfiable, derive d_i and b_i from the solution; else exit;
    SepSet := the separating set for ( ⋃_{j=0..i} {d_j}, ⋃_{j=0..i} {b_j} );
    i := i + 1;
}
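
An explicit-state simulation of this loop: instead of handing Φ(SepSet) to a SAT solver, we scan D × B for a pair that agrees on every variable in the current separating set, and recompute the separating set with the brute-force min_separating_set sketched after slide 21:

    def sample_efficiently(D, B, num_vars):
        """Iterative sampling (slide 31); enumerating D x B plays the role of SAT."""
        sep_set, S_D, S_B = (), [], []
        while True:
            # Phi(SepSet): a deadend/bad pair the current sep_set fails to separate.
            witness = next(((d, b) for d in D for b in B
                            if all(d[v] == b[v] for v in sep_set)), None)
            if witness is None:
                return sep_set              # sep_set now separates all of D and B
            S_D.append(witness[0]); S_B.append(witness[1])
            sep_set = min_separating_set(S_D, S_B, num_vars)

    D = {(0, 1, 0, 1), (1, 1, 1, 0)}        # deadend states (slide 24's example)
    B = {(1, 1, 1, 1), (0, 0, 0, 1)}        # bad states
    print(sample_efficiently(D, B, 4))      # (0, 1, 3) after a few iterations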

Slide 32: Experiments

NuSMV frontend. Cadence SMV. A public-domain ILP solver. Chaff.
Experiments were conducted on a 1.5 GHz Athlon with 3 GB RAM, running Linux.
We used the IU family of circuits, which are various abstractions of an interface control circuit from Synopsys.

Circuit | SMV            | Sampling - ILP        | Sampling - DTL        | Eff. Samp. - DTL
        | Time    BDD(k) | Time   BDD(k)  S   L  | Time   BDD(k)  S   L  | Time   BDD(k)  S   L
IU30    | 0.7     116    | 0.1    1       0   1  | 0.1    1       0   1  | 0.1    1       0   1
IU35    | 0.6     149    | 0.1    2       0   1  | 0.1    2       0   1  | 0.1    2       0   1
IU40    | 1.2     225    | 6.3    21      3   4  | 0.9    18      5   6  | 0.6    11      2   3
IU45    | 37.5    2554   | 6.1    17      3   4  | 1.1    18      5   6  | 0.7    10      2   3
IU50    | 23.3    2094   | 19.7   100     13  14 | 9.8    90      13  14 | 24.0   1274    4   17
IU55    | -       -      | -      -       -   -  | 2072   51703   6   9  | 3.0    64      1   6
IU60    | -       -      | 7.8    183     4   7  | 7.8    183     4   7  | 4.5    109     1   6
IU65    | -       -      | 7.9    192     4   7  | 7.9    192     4   7  | 3.8    47      1   5
IU70    | -       -      | 8.1    192     4   7  | 8.2    192     4   7  | 3.8    47      1   5
IU75    | 102.9   7068   | 32.0   142     9   10 | 24.5   397     13  14 | 24.1   550     2   7
IU80    | 603.7   39989  | 31.7   215     9   10 | 44.0   341     13  14 | 24.1   186     2   7
IU85    | 2832    76232  | 33.1   230     9   10 | 44.6   443     13  14 | 25.2   198     2   7
IU90    | -       -      | 33.0   230     9   10 | 44.6   443     13  14 | 25.4   198     2   7

Circuit | SMV            | Sampling - ILP        | Sampling - DTL        | Eff. Samp. - DTL
        | Time    BDD(k) | Time   BDD(k)  S   L  | Time   BDD(k)  S   L  | Time   BDD(k)  S   L
IU30    | 7.3     324    | 8.0    113     3   20 | 7.5    113     3   20 | 6.5    113     3   20
IU35    | 19.1    679    | 11.8   186     4   21 | 12.7   186     4   21 | 11.0   186     4   21
IU40    | 53.6    1100   | 25.9   260     6   23 | 19.0   207     5   22 | 16.1   207     5   22
IU45    | 226.1   6060   | 28.3   411     5   22 | 25.3   411     5   22 | 22.1   411     5   22
IU50    | 1754    25102  | 160.4  2046    13  32 | 85.1   605     10  27 | 15120  3791    7   31
IU55    | -       -      | -      -       -   -  | -      -       -   -  | -      -       -   -
IU60    | -       -      | -      -       -   -  | -      -       -   -  | -      -       -   -
IU65    | -       -      | -      -       -   -  | -      -       -   -  | -      -       -   -
IU70    | -       -      | -      -       -   -  | -      -       -   -  | -      -       -   -
IU75    | -       -      | 1080   3716    21  38 | 586.7  1178    16  33 | 130.5  1050    5   26
IU80    | -       -      | 1136   3378    21  38 | 552.5  1158    16  33 | 153.4  1009    5   26
IU85    | -       -      | 1162   3493    21  38 | 581.2  1272    16  33 | 167.7  1079    5   26
IU90    | -       -      | 965    3712    20  37 | 583.3  1271    16  33 | 167.1  1079    5   26

Slide 33: Conclusions and Future Work

Our algorithm outperforms standard model checking in both execution time and memory requirements.
Future work: exploit criteria other than the size of the separating set for characterizing a good refinement, and explore other learning techniques.