Analog Circuit Verification by Statistical Model Checking


16th Asia and South Pacific Design Automation Conference. Analog Circuit Verification by Statistical Model Checking. Authors: Ying-Chih Wang (presenter), Anvesh Komuravelli, Paolo Zuliani, Edmund M. Clarke. Date: 2011/1/26. This work is sponsored by the GigaScale Systems Research Center, Semiconductor Research Corporation, National Science Foundation, General Motors, Air Force and the Office of Naval Research.

Overview: Introduction; Background (Assumptions, Bayesian Statistics, Algorithm); Application to Analog Circuits (Bounded Linear Temporal Logic, Experimental Results, Discussion); Conclusion and Future Work.

Introduction (Motivation): Analog circuit behaviors vary dramatically with changes in parameters (MOSFET parameters, temperature, etc.), affecting performance and correctness. How resilient are circuits under uncertainties such as process variation and noise? How do we deal with these uncertainties?

Introduction (Problem): Does the design satisfy the property with probability p ≥ threshold θ? Monte Carlo can estimate the probability p, but how good is the estimate? Properties are expressed in Bounded Linear Temporal Logic (BLTL). Statistical Model Checking (SMC), recently applied to Stateflow/Simulink models [1], provides probabilistic guarantees via Bayesian hypothesis testing and Bayesian estimation. [1] P. Zuliani, A. Platzer and E. M. Clarke, Bayesian Statistical Model Checking with Application to Stateflow/Simulink Verification, in HSCC 2010.

Background (Assumptions): Monte Carlo simulation draws independent and identically distributed (i.i.d.) sample traces and characterizes each simulation trace as Pass/Fail against a property Φ. A stochastic system M together with a property Φ behaves like a biased coin: M satisfies Φ with probability p, giving a Bernoulli random variable Y with P(Y=1) = p and P(Y=0) = 1 - p.

Background (Bayesian Statistics): Experiments are performed sequentially. A prior density of p (a Beta distribution) is updated with the experimental results {Pass, Fail, Pass, Pass, ...} to obtain the posterior density of p, which drives both Bayesian estimation and Bayesian hypothesis testing via the Bayes factor. [The slide plots a Beta prior density of p and the resulting posterior density of p.]

Overview (Analog Circuit Verification): Analog circuit descriptions and process variation descriptions feed a SPICE circuit simulator, which produces execution traces σ. A BLTL model checker checks each trace against the BLTL formula φ, and a Bayes test updates the Bayes factor: if the Bayes factor exceeds T, conclude M ⊨ P≥θ(φ); if it drops below 1/T, conclude M ⊨ P<θ(φ).

Background (Bayesian Statistics): Null hypothesis H0 (p ≥ θ) versus alternative hypothesis H1 (p < θ), where P(H0) and P(H1) sum to 1. Let X denote the experimental results; the Bayes factor is the ratio P(X | H0) / P(X | H1). Jeffreys' test [1960s] uses a fixed sample size, with a Bayes factor > 100 strongly supporting H0. We use a sequential version with a fixed Bayes factor threshold.

Statistical Model Checking Algorithm

Require: property P≥θ(Φ), threshold T ≥ 1, prior density g
  n := 0  {number of traces drawn so far}
  x := 0  {number of traces satisfying Φ so far}
  repeat
    σ := draw a sample trace of the system (i.i.d.)
    n := n + 1
    if σ ⊨ Φ then x := x + 1 endif
    B := BayesFactor(n, x, θ, g)
  until (B > T ∨ B < 1/T)
  if (B > T) then return "H0 accepted" else return "H0 rejected" endif

Theorem (Error bounds). When the Bayesian algorithm using threshold T stops, the following holds:
  Prob(accept H0 | H1) ≤ 1/T
  Prob(reject H0 | H0) ≤ 1/T
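The loop above maps directly onto a few lines of code. Below is a minimal, hypothetical Python sketch, assuming a Beta(α, β) prior so the Bayes factor has the closed form derived later in these slides; simulate_trace and satisfies are illustrative stand-ins for the SPICE simulator and the BLTL checker, not the authors' tool.

```python
# Minimal sketch of the Bayesian SMC hypothesis test (assumption:
# conjugate Beta(a, b) prior, so the posterior is Beta(x+a, n-x+b)).
from scipy.stats import beta

def bayes_factor(n, x, theta, a, b):
    # Prior probabilities of H0 (p >= theta) and H1 (p < theta)
    pi1 = beta.cdf(theta, a, b)
    pi0 = 1.0 - pi1
    # Closed form B = (pi1/pi0) * (1/F(theta) - 1), F the posterior CDF
    return (pi1 / pi0) * (1.0 / beta.cdf(theta, x + a, n - x + b) - 1.0)

def smc_hypothesis_test(simulate_trace, satisfies, theta, T, a=1.0, b=1.0):
    n = x = 0
    while True:
        sigma = simulate_trace()      # draw one i.i.d. sample trace
        n += 1
        if satisfies(sigma):          # BLTL-check the trace: Pass/Fail
            x += 1
        B = bayes_factor(n, x, theta, a, b)
        if B > T:
            return "H0 accepted"      # strong evidence that p >= theta
        if B < 1.0 / T:
            return "H0 rejected"
```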

Bounded Linear Temporal Logic

States: state variables. Atomic propositions (AP): e1 ~ e2, where e1, e2 are arithmetic expressions over state variables and ~ ∈ {<, ≤, >, ≥, =}.

Syntax:
  φ ::= AP | φ1 ∨ φ2 | ¬φ | F^t φ | G^t φ

Let σ = (s0,t0), (s1,t1), ... be a simulation trace of the model, which stays in state s_i for duration t_i. σ^k denotes the suffix trace of σ starting at state k: σ^k = (s_k,t_k), (s_{k+1},t_{k+1}), ...

Bounded Linear Temporal Logic

The semantics of BLTL for a suffix trace σ^k (the trace starting at state k):
  σ^k ⊨ AP iff atomic proposition AP is true in state s_k
  σ^k ⊨ φ1 ∨ φ2 iff σ^k ⊨ φ1 or σ^k ⊨ φ2
  σ^k ⊨ ¬φ iff σ^k ⊨ φ does not hold
  σ^k ⊨ F^t φ iff there exists i ≥ 0 such that Σ_{l=0}^{i-1} t_{k+l} ≤ t and σ^{k+i} ⊨ φ

Example: F[10ns] (Vout > 1): within 10 ns, Vout should eventually be larger than 1 volt.

Bounded Linear Temporal Logic

  σ^k ⊨ G^t φ iff for all i ≥ 0 such that Σ_{l=0}^{i-1} t_{k+l} ≤ t, σ^{k+i} ⊨ φ

Example: G[10ns] (Vout > 1): in the next 10 ns, Vout should always be greater than 1 volt.

Nesting operators: F[100ns] G[10ns] (Vout > 1): within 100 ns, Vout will become greater than 1 volt and stay greater than 1 volt for at least 10 ns.
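To make the semantics concrete, here is a small, hypothetical Python sketch of a BLTL checker over finite timed traces. The trace representation (a list of (state, duration) pairs), the tuple encoding of formulas, and all names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a BLTL checker: a trace is a list of (state_dict, duration)
# pairs, matching the slides' sigma = (s0,t0), (s1,t1), ...
def check(phi, trace, k=0):
    op = phi[0]
    if op == "pred":                       # atomic proposition over state k
        return phi[1](trace[k][0])
    if op == "not":
        return not check(phi[1], trace, k)
    if op == "or":
        return check(phi[1], trace, k) or check(phi[2], trace, k)
    if op in ("F", "G"):                   # time-bounded eventually / always
        bound, sub = phi[1], phi[2]
        elapsed, i = 0.0, 0                # elapsed = sum of t_{k+l}, l < i
        while k + i < len(trace) and elapsed <= bound:
            holds = check(sub, trace, k + i)
            if op == "F" and holds:
                return True
            if op == "G" and not holds:
                return False
            elapsed += trace[k + i][1]     # add duration t_{k+i}
            i += 1
        return op == "G"                   # F fails, G survives the bound

# Example: F[10ns] (Vout > 1) on a trace sampled every 2 ns
trace = [({"Vout": v}, 2e-9) for v in (0.2, 0.6, 0.9, 1.1, 1.2)]
phi = ("F", 10e-9, ("pred", lambda s: s["Vout"] > 1))
print(check(phi, trace))                   # True: Vout exceeds 1 V within 10 ns
```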

An Example: OP amp

An Example: OP amp Specifications

  1. Input Offset Voltage: < 1 mV
  2. Output Swing Range: 0.2 V to 1 V
  3. Slew Rate: > 25 V/μs
  4. Open-Loop Voltage Gain: > 8000 V/V
  5. Loop-Gain Unit-Gain Frequency: > 10 MHz
  6. Phase Margin: > 60°

Example: OP amp Specifications. Specifications are quantities computed from simulation traces; in most cases they can be translated to BLTL directly from their definitions. E.g., Swing Range: Max(Vout) > 1 V and Min(Vout) < 0.2 V becomes

  F[100μs] (Vout < 0.2) ∧ F[100μs] (Vout > 1)

i.e., within the entire trace, Vout will eventually be greater than 1 V and will eventually be smaller than 0.2 V. (100 μs is the end time of the transient simulation.)

Example: OP amp Specifications. What about frequency-domain properties? A transient response is a trace σ = (s0,t0), (s1,t1), (s2,t2), ...; a frequency response is a trace σ = (s0,f0), (s1,f1), (s2,f2), ... Substituting frequency-stamps for time-stamps lets us specify frequency-domain properties in BLTL and check them with the same BLTL checker; only the semantics change:
  G[1GHz] φ: φ holds for all frequencies from the current one up to 1 GHz higher
  F[1GHz] φ: φ holds for some frequency from the current one up to 1 GHz higher
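As a small illustration, reusing the hypothetical check function from the earlier sketch: the only change for frequency-domain checking is that the second slot of each trace sample holds a frequency step instead of a duration. The numbers below are made up.

```python
# Frequency-domain check with the same (hypothetical) `check` function:
# each sample's second slot is now a frequency step, not a duration.
freq_trace = [({"Vmag_out": g}, 1e6) for g in (9500, 9200, 8900, 8600, 8300)]
# G over a 4 MHz band: gain must stay above 8000 for all frequencies
# from the current one up to 4 MHz higher.
gain_spec = ("G", 4e6, ("pred", lambda s: s["Vmag_out"] > 8000))
print(check(gain_spec, freq_trace))   # True: gain stays above 8000
```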

An Example: OP amp Specifications

Specifications (transient) and their BLTL translations:
  1. Input Offset Voltage < 1 mV:
     F[100μs] (Vout = 0.6) ∧ G[100μs] ((Vout = 0.6) → (|Vin+ - Vin-| < 0.001))
  2. Output Swing Range 0.2 V to 1 V:
     F[100μs] (Vout < 0.2) ∧ F[100μs] (Vout > 1)
  3. Slew Rate > 25 V/μs:
     G[100μs] ( ((Vout = 1 ∧ Vin > 0.65) → F[0.008μs] (Vout = 0.8)) ∧ ((Vout = 0.2 ∧ Vin < 0.55) → F[0.008μs] (Vout = 0.4)) )

Specifications (frequency-domain) and their BLTL translations:
  4. Open-Loop Voltage Gain > 8000 V/V: G[1kHz] (Vmag_out > 8000)
  5. Loop-Gain Unit-Gain Frequency > 10 MHz: G[10MHz] (Vmag_out > 1)
  6. Phase Margin > 60°: F[10GHz] (Vmag_out = 1) ∧ G[10GHz] ((Vmag_out = 1) → (Vphase_out > 60°))

Experimental Setup

Platform: a Linux virtual machine running on a 2.26 GHz Intel Core i3-350M machine with 4 GB RAM. We model-check the BLTL formulae on the previous slides, e.g. (Swing Range):
  H0: M ⊨ P≥θ [F[100μs] (Vout < 0.2) ∧ F[100μs] (Vout > 1)]

Experimental Results

Target yield: 0.95. Monte Carlo analysis (1000 samples); only the Input Offset Voltage spec (yield .826) fails the yield threshold, all others satisfy it.

  Specification                     Mean    Stddev  Yield
  1 Input Offset Voltage (V)        .436    .597    .826
  2 Swing Range Min (V)             .104    .006    1.00
    Swing Range Max (V)             1.08    .005    1.00
  3 Negative Slew Rate (V/μs)       -40.2   1.17    1.00
    Positive Slew Rate (V/μs)       56.4    2.54    1.00
  4 Open-Loop Voltage Gain (V/V)    8768.   448.    .975
  5 Loop-Gain UGF (MHz)             19.99   .30     1.00
  6 Phase Margin (°)                64.1    .44     1.00

Experimental Results

Statistical Model Checking setup. Null hypothesis H0: taken from the specs, e.g. H0: M ⊨ P≥θ [F[100μs] (Vout < 0.2) ∧ F[100μs] (Vout > 1)]. Probability threshold θ: set to the target yield. Bayes factor threshold T (test strength): T = 1000, so the probability of error is less than 0.001. Prior distribution of p: uniform, since we have no prior knowledge.

Experimental Results

Monte Carlo (1000 samples) indicates which specs do not satisfy / satisfy the yield threshold 0.95; the SMC testing result (prior: uniform, T = 1000, θ = 0.95) rejects / accepts the corresponding null hypothesis.

  Specification                     Mean    Stddev  Yield   SMC (θ = 0.95) Samples/Runtime
  1 Input Offset Voltage (V)        .436    .597    .826    31/39s
  2 Swing Range Min (V)             .104    .006    1.00    77/98s
    Swing Range Max (V)             1.08    .005    1.00    77/98s
  3 Negative Slew Rate (V/μs)       -40.2   1.17    1.00    77/98s
    Positive Slew Rate (V/μs)       56.4    2.54    1.00    77/98s
  4 Open-Loop Voltage Gain (V/V)    8768.   448.    .975    239/303s
  5 Loop-Gain UGF (MHz)             19.99   .30     1.00    77/98s
  6 Phase Margin (°)                64.1    .44     1.00    77/98s

Experimental Results

Prior: uniform, T = 1000, θ ∈ [0.7, 0.999]. The SMC testing result rejects / accepts the null hypothesis.

Null hypotheses:
  1. M ⊨ P≥θ [F[100μs] (Vout = 0.6) ∧ G[100μs] ((Vout = 0.6) → (|Vin+ - Vin-| < 0.001))]
  2. M ⊨ P≥θ [F[100μs] (Vout < 0.2) ∧ F[100μs] (Vout > 1)]
  3. M ⊨ P≥θ [G[100μs] ( ((Vout = 1 ∧ Vin > 0.65) → F[0.008μs] (Vout < 0.8)) ∧ ((Vout < 0.2 ∧ Vin < 0.55) → F[0.008μs] (Vout < 0.4)) )]

Samples/Runtime by probability threshold θ:
  Spec   θ=0.7     θ=0.8         θ=0.9      θ=0.99     θ=0.999
  1      77/105s   9933/12161s   201/275s   10/13s     7/9s
  2      16/18s    24/27s        44/51s     239/280s   693/813s
  3      16/23s    24/31s        44/57s     239/316s   693/916s

Experimental Results

Prior: uniform, T = 1000, θ ∈ [0.7, 0.999]. The SMC testing result rejects / accepts the null hypothesis.

Null hypotheses:
  4. M ⊨ P≥θ [G[1kHz] (Vmag_out > 8000)]
  5. M ⊨ P≥θ [G[10MHz] (Vmag_out > 1)]
  6. M ⊨ P≥θ [F[10GHz] (Vmag_out = 1) ∧ G[10GHz] ((Vmag_out = 1) → (Vphase_out > 60°))]

Samples/Runtime by probability threshold θ:
  Spec   θ=0.7    θ=0.8    θ=0.9     θ=0.99       θ=0.999
  4      23/26s   43/49s   98/114s   1103/1309s   50/57s
  5      16/18s   24/28s   44/51s    239/279s     693/807s
  6      16/20s   24/30s   44/55s    239/303s     693/882s

Discussion

Statistical Model Checking is faster when the threshold probability θ is far from the unknown probability p; e.g., with p ≈ 1:
  θ        0.7   0.8   0.9   0.99   0.999
  Samples  16    24    44    239    693

SMC is easily integrated into the validation flow, since it relies only on Monte Carlo sampling and SPICE. The add-ons (online BLTL monitoring and Bayes factor calculation) have low computational overhead; runtime is dominated by SPICE simulation.

Conclusion

We introduced SMC to analog circuit verification: unlike plain Monte Carlo simulation, it avoids drawing unnecessary samples once the evidence is sufficient. We also demonstrated the feasibility of BLTL specifications for a simple analog circuit; BLTL can specify complex interactions between signals. Future work: more experiments on larger examples with more complicated specifications, and introducing the technique of importance sampling.

Thank you.

Bayesian Statistical Model Checking

Suppose M satisfies Φ with (unknown) probability p. p is given by a random variable (defined on [0,1]) with density g, where g represents the prior belief that M satisfies Φ. Generate independent and identically distributed (i.i.d.) sample traces σ1, ..., σn, and let X = {x1, ..., xn}, where xi records whether the i-th sample trace satisfies Φ: xi = 1 iff σi ⊨ Φ, and xi = 0 iff σi ⊭ Φ. Then xi is a Bernoulli trial with conditional density (likelihood function)

  f(xi | u) = u^xi (1 - u)^(1 - xi)

Statistical Model Checking

Calculate the Bayes factor from the posterior and prior probabilities. By Bayes' theorem, the posterior density over the conditionally i.i.d. Bernoulli samples is

  f(u | x1, ..., xn) = f(x1 | u) ⋯ f(xn | u) g(u) / ∫_0^1 f(x1 | v) ⋯ f(xn | v) g(v) dv

where f(x1 | u) ⋯ f(xn | u) is the likelihood function.

Statistical Model Checking

H0: p ≥ θ, H1: p < θ. Prior probabilities (g: prior density of p):

  π0 = P(H0) = ∫_θ^1 g(u) du,   π1 = P(H1) = ∫_0^θ g(u) du = 1 - π0

The Bayes factor of sample X = {x1, ..., xn} and hypotheses H0, H1 is

  B = [P(H0 | X) / P(H1 | X)] · [P(H1) / P(H0)] = (π1 / π0) · [∫_θ^1 f(x1 | u) ⋯ f(xn | u) g(u) du] / [∫_0^θ f(x1 | u) ⋯ f(xn | u) g(u) du]

Beta Prior

The prior g is a Beta density of parameters α > 0, β > 0:

  g(u; α, β) = u^(α-1) (1 - u)^(β-1) / B(α, β)

where B(α, β) is the Beta function (the normalizing constant), and F_(α,β)(·) denotes the Beta distribution function (i.e., Prob(X ≤ u)).

Why Beta priors?

They are defined over [0,1], and Beta distributions are conjugate to Binomial distributions: if the prior g is Beta and the likelihood function is Binomial, then the posterior is Beta. Suppose the likelihood is Binomial(n, x) and the prior is Beta(α, β). With x = Σ_i x_i, the posterior f(u | x1, ..., xn) satisfies

  f(x1 | u) ⋯ f(xn | u) g(u) ∝ u^x (1 - u)^(n-x) · u^(α-1) (1 - u)^(β-1) = u^(x+α-1) (1 - u)^(n-x+β-1)

so the posterior is Beta of parameters x + α and n - x + β.
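A tiny numeric illustration of this conjugate update, assuming a uniform Beta(1,1) prior and using SciPy for the Beta distribution:

```python
# Conjugate update: after n = 10 traces with x = 9 passes under a
# uniform Beta(1,1) prior, the posterior is Beta(x+1, n-x+1) = Beta(10, 2).
from scipy.stats import beta

a, b = 1.0, 1.0                  # uniform prior over p
n, x = 10, 9                     # Bernoulli outcomes: 9 passes out of 10
post = beta(x + a, n - x + b)    # posterior Beta(10, 2)
print(post.mean())               # 0.8333... = (x+a)/(n+a+b)
print(post.cdf(0.95))            # posterior probability that p < 0.95
```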

Beta Density Shapes

[Figure: Beta density plots for (α = .5, β = .5), (α = 2, β = .5), and (α = 4, β = 10).]

Computing the Bayes Factor

Proposition. The Bayes factor of H0: M ⊨ P≥θ(Φ) vs H1: M ⊨ P<θ(Φ), for n Bernoulli samples (with x successes) and prior Beta(α, β), is

  B = (π1 / π0) · (1 / F_(x+α, n-x+β)(θ) - 1)

where F_(α,β)(·) is the Beta distribution function.
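As a hedged sanity check, the closed form can be compared numerically against the ratio-of-integrals definition from the earlier slide; the parameter values below are arbitrary illustrations.

```python
# Compare the closed-form Bayes factor with the ratio of integrals
# (illustrative values: uniform prior, theta = 0.95, n = 50, x = 49).
from scipy.integrate import quad
from scipy.stats import beta

a, b, theta, n, x = 1.0, 1.0, 0.95, 50, 49

def joint(u):
    # (unnormalized) Bernoulli likelihood times the Beta prior density
    return u**x * (1.0 - u)**(n - x) * beta.pdf(u, a, b)

num, _ = quad(joint, theta, 1.0)   # integral over H0: p >= theta
den, _ = quad(joint, 0.0, theta)   # integral over H1: p < theta
pi1 = beta.cdf(theta, a, b)
pi0 = 1.0 - pi1
B_integral = (pi1 / pi0) * num / den
B_closed = (pi1 / pi0) * (1.0 / beta.cdf(theta, x + a, n - x + b) - 1.0)
print(B_integral, B_closed)        # the two values agree
```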

Bayesian Interval Estimation - IV

[Figure: the prior is Beta(α = 4, β = 5); after 1000 samples with 900 successes, the posterior density is Beta(α = 904, β = 105) with posterior mean 0.8959. The estimation interval has width 2δ.]

Related Work: Monte Carlo Simulation

Let X be a random variable with unknown distribution f(x), and let p = P(X > t). Monte Carlo is a statistical method to estimate the probability of an event (such as X > t) under an unknown distribution. The goal is to find

  p = E{ I(X > t) },   where I(x > t) = 1 if x > t and 0 otherwise.

The Monte Carlo method generates N independent samples X1, ..., XN of X to form the estimate

  p̂_MC = (1/N) Σ_{i=1}^N I(Xi > t) → p
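A minimal sketch of this estimator, assuming (purely for illustration) that X is standard normal and t = 3, so that X > t is a moderately rare event:

```python
# Plain Monte Carlo estimate of p = P(X > t); true p is about 1.35e-3
# for a standard normal X and t = 3 (an assumed toy setup).
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
samples = rng.standard_normal(N)
p_hat = np.mean(samples > 3.0)     # (1/N) * sum_i I(X_i > t)
print(p_hat)
```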

Related Work: Importance Sampling

Importance sampling draws from a biased distribution under which the rare event happens more often. Let f(x) be the density of X and f*(x) the density of the biased variable X*. Then

  p = E{ I(X > t) } = ∫ I(x > t) f(x) dx = ∫ I(x > t) [f(x) / f*(x)] f*(x) dx = E{ I(X* > t) W(X*) }

where W(x) = f(x) / f*(x). Sampling from the biased random variable X*, the IS estimator is

  p̂_IS = (1/N) Σ_{i=1}^N W(Xi*) I(Xi* > t)
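A matching sketch of the IS estimator for the same toy tail probability, assuming a proposal density f* shifted toward the rare event:

```python
# Importance-sampling estimate of P(X > 3) for standard normal X,
# with an assumed biased proposal X* ~ Normal(3, 1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 100_000
xs = rng.normal(loc=3.0, scale=1.0, size=N)          # sample from f*
w = norm.pdf(xs) / norm.pdf(xs, loc=3.0, scale=1.0)  # W(x) = f(x)/f*(x)
p_hat_is = np.mean(w * (xs > 3.0))                   # weighted indicators
print(p_hat_is)   # close to 1.35e-3, with far lower variance than plain MC
```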

Bayesian Interval Estimation - V

Require: BLTL property Φ, interval width δ, coverage c, prior Beta parameters α, β
  n := 0  {number of traces drawn so far}
  x := 0  {number of traces satisfying Φ so far}
  repeat
    σ := draw a sample trace of the system (i.i.d.)
    n := n + 1
    if σ ⊨ Φ then x := x + 1 endif
    mean := (x + α) / (n + α + β)
    (t0, t1) := (mean - δ, mean + δ)
    I := PosteriorProbability(t0, t1, n, x, α, β)
  until (I > c)
  return (t0, t1), mean
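A hypothetical Python rendering of this estimation loop, again assuming the conjugate Beta posterior, so that PosteriorProbability is the Beta(x+α, n-x+β) probability mass inside (t0, t1); simulate_trace and satisfies are stand-ins as before.

```python
# Sketch of the Bayesian interval-estimation algorithm (assumption:
# conjugate Beta posterior gives PosteriorProbability in closed form).
from scipy.stats import beta

def smc_estimate(simulate_trace, satisfies, delta, c, a=1.0, b=1.0):
    n = x = 0
    while True:
        sigma = simulate_trace()        # draw one i.i.d. sample trace
        n += 1
        if satisfies(sigma):
            x += 1
        mean = (x + a) / (n + a + b)    # posterior mean of p
        t0, t1 = mean - delta, mean + delta
        post = beta(x + a, n - x + b)   # posterior distribution of p
        coverage = post.cdf(t1) - post.cdf(t0)  # PosteriorProbability(...)
        if coverage > c:                # stop once the interval covers >= c
            return (t0, t1), mean
```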

Bayesian Interval Estimation - VI

Recall that the algorithm outputs the interval (t0, t1). Define the null hypothesis H0: t0 < p < t1.

Theorem (Error bound). When the Bayesian estimation algorithm (using coverage ½ < c < 1) stops, we have
  Prob(accept H0 | H1) ≤ (1/c - 1) · π0 / (1 - π0)
  Prob(reject H0 | H0) ≤ (1/c - 1) · π0 / (1 - π0)
where π0 is the prior probability of H0.