Logic Synthesis. Basic Definitions. k-cubes

Logic Synthesis

- Minimization of Boolean logic.
- Technology-independent mapping
  - Objective: minimize the number of implicants, number of literals, etc.
  - Not directly tied to a precise technology (e.g., number of transistors), but correlated with, and consistent with, the objectives for any technology.
- Technology-dependent mapping
  - Linked to a precise technology/library.

Technology-independent mapping

- Two-level minimization: sum of products (SOP) / product of sums (POS)
  - Karnaugh maps: a visual technique.
  - Quine-McCluskey method: algorithmic.
  - Heuristic minimization: fast and pretty good, but not exact.
- Multi-level minimization.

Basic Definitions

Specification of a function f:
- On-set f_on: the set of input combinations for which f evaluates to 1.
- Off-set f_off: the set of input combinations for which f evaluates to 0.
- Don't-care set f_dc: the set of input combinations over which the function is unspecified.

Cubes
- A function of k variables can be represented over a k-dimensional space.
- Example: f(x1, x2, x3) = Σm(0,3,5,6) + d(7)
  f_on = {0, 3, 5, 6};  f_dc = {7};  f_off = {1, 2, 4}
  (The slide draws this graphically as a 3-cube with axes x1, x2, x3 and vertices 000 through 111.)

k-cubes
- A k-cube is a k-dimensional subset of f_on.
- A 0-cube is a vertex in f_on.
- A k-cube is a pair of (k-1)-cubes with a Hamming distance of 1.
- Examples: a 0-cube is a vertex, a 1-cube is an edge, a 2-cube is a face, a 3-cube is a 3-D cube; a 4-cube is harder to visualize but can still be drawn.
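The cube manipulations above are easy to prototype. Below is a minimal Python sketch (my own illustration, not part of the notes) that builds the on-, dc- and off-sets of the example function and merges two cubes at Hamming distance 1 into a larger cube; cubes are strings over {0, 1, -} and all function names are invented for this example.

def cube_of(minterm, nvars):
    """Binary string of a minterm, most significant bit = x1."""
    return format(minterm, '0{}b'.format(nvars))

def merge(c1, c2):
    """Combine two cubes differing in exactly one care position into a larger cube."""
    if any((a == '-') != (b == '-') for a, b in zip(c1, c2)):
        return None                      # dash patterns must match
    diff = [i for i, (a, b) in enumerate(zip(c1, c2)) if a != b]
    if len(diff) != 1:
        return None                      # Hamming distance is not 1
    i = diff[0]
    return c1[:i] + '-' + c1[i + 1:]

NVARS = 3
f_on = {cube_of(m, NVARS) for m in (0, 3, 5, 6)}
f_dc = {cube_of(m, NVARS) for m in (7,)}
f_off = {cube_of(m, NVARS) for m in range(2 ** NVARS)} - f_on - f_dc

print(sorted(f_off))          # ['001', '010', '100']
print(merge('011', '111'))    # '-11'  (a 1-cube covering minterms 3 and 7)
print(merge('000', '011'))    # None   (Hamming distance 2)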

More definitions

Implicant
- A k-cube whose vertices all lie in f_on ∪ f_dc and which contains at least one element of f_on.

Prime implicant
- A k-cube implicant such that no (k+1)-cube containing it is an implicant.

Cover
- A set of implicants whose union contains all elements of f_on and no elements of f_off (it may contain some elements of f_dc).

Minimum cover
- A cover of minimum cost (e.g., cardinality).
- A minimum-cardinality cover composed only of prime implicants exists (if a cover contains non-prime implicants, they can be combined into larger, prime implicants).

Quine-McCluskey Method

Illustration by example: f(x1, x2, x3, x4) = Σm(0, 5, 7, 8, 9, 10, 11, 14, 15)

0-cubes: 0 (0000) x;  5 (0101) x;  7 (0111) x;  8 (1000) x;  9 (1001) x;  10 (1010) x;  11 (1011) x;  14 (1110) x;  15 (1111) x
1-cubes: 0,8 (-000) A;  5,7 (01-1) B;  7,15 (-111) C;  8,9 (100-) x;  8,10 (10-0) x;  9,11 (10-1) x;  10,11 (101-) x;  10,14 (1-10) x;  11,15 (1-11) x;  14,15 (111-) x
2-cubes: 8,9,10,11 (10--) D;  10,11,14,15 (1-1-) E

An 'x' marks a cube that has been combined into a larger cube; the surviving, labeled cubes A through E are the prime implicants.

Prime implicant table (PI and the minterms it covers):
A (-000):  0, 8
B (01-1):  5, 7
C (-111):  7, 15
D (10--):  8, 9, 10, 11
E (1-1-):  10, 11, 14, 15

Essential prime implicant: the only PI that covers some minterm (circled in the slide's table); it must be included in any cover. Here minterm 0 is covered only by A, 5 only by B, 9 only by D, and 14 only by E, so the essential PIs A, B, D, E form a cover!
WARNING: this was luck; in general, the essential PIs will not form a cover.
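As an illustration of the first (cube-merging) phase of Quine-McCluskey, here is a small Python sketch of my own, not anything from the slides, that generates the prime implicants of the example function above.

from itertools import combinations

def merge(c1, c2):
    """Combine two cubes at Hamming distance 1 (in a care position) into one cube."""
    diff = [i for i, (a, b) in enumerate(zip(c1, c2)) if a != b]
    if len(diff) == 1 and c1[diff[0]] != '-' and c2[diff[0]] != '-':
        i = diff[0]
        return c1[:i] + '-' + c1[i + 1:]
    return None

def prime_implicants(minterms, dont_cares, nvars):
    cubes = {format(m, '0{}b'.format(nvars)) for m in minterms | dont_cares}
    primes = set()
    while cubes:
        merged, used = set(), set()
        for c1, c2 in combinations(sorted(cubes), 2):
            m = merge(c1, c2)
            if m is not None:
                merged.add(m)
                used.update({c1, c2})
        primes |= cubes - used        # cubes that could not be enlarged are prime
        cubes = merged
    return primes

# f(x1..x4) = sum m(0,5,7,8,9,10,11,14,15)
print(sorted(prime_implicants({0, 5, 7, 8, 9, 10, 11, 14, 15}, set(), 4)))
# ['-000', '-111', '01-1', '1-1-', '10--']  -- the PIs A..E from the table above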

Reducing the prime implicant table

PI table reduction
- In general, the essential PIs will not form a cover.
- Reduce the table by removing the essential PIs and the minterms they cover.
- Further reduction: remove
  - dominating rows: row m1 dominates row m2 if every PI that covers m2 also covers m1; m1 can then be dropped, since any cover of m2 also covers m1;
  - dominated columns: column K is dominated by column J if J covers every minterm that K covers; K can then be dropped.
- The slide illustrates each case with a small chart (PIs P, Q, R, S over minterms m1-m4 for row dominance, and PIs J, K, L, M over minterms m1-m4 for column dominance).

Branch-and-bound algorithm
- Reduction may still not yield a cover.
- Example: the table from the previous slide after removing the dominating row m1 (and the consequently empty column P):

  PI     covers
  Q      m2, m3
  R      m2, m4
  S      m3, m4

- The remaining possibilities can be enumerated with a binary search tree: at each node, include or exclude a PI.
  - Include Q: only m4 remains, covered by R or S, giving covers {Q, R} and {Q, S}.
  - Exclude Q: covering m2, m3, m4 requires both R and S, giving the cover {R, S}.

Branch-and-bound algorithm (contd.): ESPRESSO-EXACT
- An implementation of the branching algorithm from the previous slide.
- Traversal to a leaf node of the tree yields a cover (though possibly not a minimum-cost cover).
- ESPRESSO-EXACT adds bounding at every node: if Cost_node + LB_subtree > Best_cost_so_far, do not search the subtree.
  - Cost_node = cost (e.g., number of implicants) chosen so far.
  - LB_subtree = a lower bound on the cost of the subtree (can be determined by solving a maximal independent set problem).
  - Best_cost_so_far = cost of the best cover found so far during the traversal of the search tree; initialized to infinity.
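The branch-and-bound covering step can be sketched in a few lines. The following Python fragment is my own simplified illustration, not ESPRESSO-EXACT: it bounds only with the best cost found so far (no independent-set lower bound), applied here to the PI table of the Quine-McCluskey example.

def min_cover(pis, minterms):
    """pis: dict name -> set of minterms it covers; returns a minimum-cost cover."""
    best = {'cover': None, 'cost': float('inf')}       # Best_cost_so_far

    def search(chosen, covered, remaining):
        if len(chosen) >= best['cost']:                # bound: cannot improve
            return
        if not (minterms - covered):                   # all minterms covered
            best['cover'], best['cost'] = set(chosen), len(chosen)
            return
        if not remaining:                              # dead end
            return
        p = remaining[0]
        search(chosen | {p}, covered | pis[p], remaining[1:])   # include p
        search(chosen, covered, remaining[1:])                  # exclude p

    search(set(), set(), sorted(pis))
    return best['cover']

# PI table from the Quine-McCluskey example
pis = {'A': {0, 8}, 'B': {5, 7}, 'C': {7, 15},
       'D': {8, 9, 10, 11}, 'E': {10, 11, 14, 15}}
print(sorted(min_cover(pis, {0, 5, 7, 8, 9, 10, 11, 14, 15})))   # ['A', 'B', 'D', 'E']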

Heuristic Logic Minimization

Apply a sequence of logic transformations to reduce a cost function.

Transformations
- Expand
  - Input expansion: enlarge a cube so that it absorbs smaller cubes; reduces the total number of cubes.
  - Output expansion: use a cube of one output to cover another output.
- Reduce: break a cube into sub-cubes; increases the total number of cubes, in the hope of allowing an overall cost reduction in a later Expand operation.
- Irredundant: remove redundant cubes from a cover.

Example: Expand (input expansion)

Initial cover (on-set):
xyz  f
000  1
01-  1
-11  1

Expand 000 to 0-0 (the cube 01- then becomes redundant):
xyz  f
0-0  1
01-  1
-11  1

After removing the redundant cube:
xyz  f
0-0  1
-11  1

(The slide's cube drawings mark on-set members, off-set members, and a don't-care vertex d.)

Example: Expand (output expansion)

Two output functions of three variables, each with the initial cover shown below:
xyz  F1 F2
0-1  10
1-0  10
00-  01
-00  01
-11  01

After output expansion (the output part of cube 0-1 grows from 10 to 11, so it covers both outputs):
xyz  F1 F2
0-1  11
1-0  10
00-  01
-00  01
-11  01

Examples from G. Hachtel and F. Somenzi, Logic Synthesis and Verification Algorithms, Kluwer Academic Publishers, Boston, MA, 1996.
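The "redundant!" annotation in the input-expansion example can be checked mechanically. Below is a brute-force Python sketch of my own (the function names are invented for this example) that tests whether a cube is covered by the remaining cubes of a cover.

from itertools import product

def minterms(cube):
    """All 0/1 assignments contained in a cube such as '0-0'."""
    choices = [(c,) if c != '-' else ('0', '1') for c in cube]
    return {''.join(bits) for bits in product(*choices)}

def is_redundant(cube, others, dc=()):
    """True if every minterm of 'cube' is covered by the other cubes plus the dc-set."""
    covered = set().union(*(minterms(c) for c in list(others) + list(dc)))
    return minterms(cube) <= covered

print(is_redundant('01-', ['0-0', '-11']))   # True: 010 is in 0-0, 011 is in -11
print(is_redundant('-11', ['0-0', '01-']))   # False: 111 is not covered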

Other operators Reduce Reduce Future Expand operation Irredundant Identified as redundant; removed Irredundant Example of an application of operators (10 cubes) --11 11 1001 01 1101 10-001 10-1-0 01 1010 01 0110 10-010 10-100 10 1000 10 (12 cubes) reduce -011 10 --11 11 1001 01 1101 10-001 10-1-0 01 1010 01 reduce 0110 10 0010 10 1010 10-100 10 1000 10 expand expand expand (9 cubes) --11 11 1001 01 1101 10-0-1 10-1-0 01 1010 01 0-10 10-100 10 10-0 10 Example from S. Devadas, A. Ghosh and K. Keutzer, Logic Synthesis, McGraw-Hill, New York, NY, 1994. Example of a minimization loop F = Expand(F, D) F = Irredundant(F,D) do { Cost = F F = Reduce(F,D) (FD) F = Expand(F,D) F = Irredundant(F,d) } while ( F < Cost) F= Make_sparse(F,D) Make_sparse reduces output parts of a cube (e.g., from 11 to 10) to remove redundant connections) Example: xyz F 1 F 2 11-10 -01 10 1-1 11 0-0 01 xyz F 1 F 2 11-10 -01 10 1-1 01 0-0 01 5

Implementation of the operators

The operators are implemented using the unate recursive paradigm.

Definition: Shannon expansion
F(x1, ..., xi, ..., xn) = xi · F(x1, ..., 1, ..., xn) + xi' · F(x1, ..., 0, ..., xn)
                        = xi · F_xi + xi' · F_xi'   (notationally)

Unate functions
- Positive unate in variable xi: F_xi' ⊆ F_xi, in which case F = xi · F_xi + F_xi'.
- Negative unate in variable xi: F_xi ⊆ F_xi', in which case F = F_xi + xi' · F_xi'.
- Unate function: positive unate or negative unate in each variable.

Unate recursive paradigm
- Recursively perform Shannon expansions about (binate) variables until unate functions are obtained.
- Why unate functions? Various operations (tautology checking, complementation, etc.) are easy for unate functions.

Unateness of covers
- Unate cover: every column (in positional cube notation) contains only 1's and -'s, or only 0's and -'s. (Note on notation: the table represents the on-set.)
- The slide shows a unate cover over w, x, y, z (not a minimum one) and a nonunate cover that is nonunate in y and z (both 1 and 0 appear in those columns).

Unate recursive paradigm: example
- The slide takes a cover over variables a, b, c, d, e and expands it about the binate variable c into F_c and F_c', then expands one branch about e, until every leaf cover is a unate function.
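A small Python sketch (mine, with an assumed positional-cube string representation) of the unateness test on a cover, and of finding the binate columns that the unate recursive paradigm would split on:

def column(cover, i):
    return [c[i] for c in cover]

def is_unate(cover):
    """Unate cover: every column uses only {1,-} or only {0,-}."""
    return all(not ({'0', '1'} <= set(column(cover, i)))
               for i in range(len(cover[0])))

def binate_vars(cover):
    """Columns in which both 0 and 1 appear: candidates to expand about."""
    return [i for i in range(len(cover[0]))
            if {'0', '1'} <= set(column(cover, i))]

cover = ['1-0', '0-1', '--1']
print(is_unate(cover))      # False: columns 0 and 2 contain both 0 and 1
print(binate_vars(cover))   # [0, 2]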

Example: unate complementation

(An example to show that unate operations are "easy".)

Basic result: if F = x · F_x + x' · F_x', then F' = x · (F_x)' + x' · (F_x')'.
Proof sketch: let G = x · (F_x)' + x' · (F_x')'; show that F + G = 1 and F · G = 0.

The slide complements a small unate cover by recursively complementing its cofactors, first about x and then about y; one of the leaf complements turns out to be the empty set.

Cofactors with respect to sets of cubes
- The Shannon expansion can be generalized to cofactors with respect to a cube (or a set of cubes) c.
- Result: c ⊆ F  if and only if  F_c is a tautology.
- Example of finding the cofactor with respect to a cube. For

  p q r s
  1 1 0 0
  1 0 1 1
  - - 1 1

  and c = [1 1 - -]: F_c contains the elements of F_on that agree with c at all non-don't-care positions of c (here, in variables p and q); in each such cube, replace those positions by '-' and copy the rest of the cube. Following this prescription, F_c is

  p q r s
  - - 0 0
  - - 1 1

Checking for tautology
1. F is a tautology  if and only if  F_xj is a tautology and F_xj' is a tautology.
2. Let C be a unate cover of F. Then F is a tautology  if and only if  C has a row of all '-'s.

Example: a cover over p, q, r, s is expanded about the binate variable r; both cofactors contain a row of all '-'s and are therefore tautologies. Since all leaf nodes are tautologies, the function is a tautology.
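The recursive tautology check can be written compactly. The sketch below is my own Python rendering of the recursion just described: cofactor about a binate variable until the cover is unate, then apply the all-dash-row rule.

def cofactor_lit(cover, i, v):
    """Cofactor of a cover with respect to the literal x_i = v ('1' or '0')."""
    out = []
    for c in cover:
        if c[i] in ('-', v):
            out.append(c[:i] + '-' + c[i + 1:])
        # cubes with the opposite literal disappear from the cofactor
    return out

def is_tautology(cover):
    if not cover:
        return False
    n = len(cover[0])
    binate = [i for i in range(n)
              if {'0', '1'} <= {c[i] for c in cover}]
    if not binate:                                    # unate cover
        return any(set(c) <= {'-'} for c in cover)    # row of all dashes?
    i = binate[0]                                     # split about a binate variable
    return (is_tautology(cofactor_lit(cover, i, '1')) and
            is_tautology(cofactor_lit(cover, i, '0')))

print(is_tautology(['1--', '-1-', '00-']))   # True:  covers every minterm
print(is_tautology(['1--', '01-']))          # False: 001 is not covered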

The Expand operator and tautology

- Consider a function f with a cover G and a don't-care set f_dc, specified in the slide's table.
- Objective: expand the cube c_i = 000 to 0-0.
- We need to check that the expansion is valid, i.e., that the enlarged cube does not overlap f_off.
- Let d_i = the difference between the expanded cube and c_i (here d_i = 010).
- We need to know whether d_i ⊆ Q = (G \ c_i) ∪ f_dc: if so, we can expand.
- In other words, check whether Q_di is a tautology. Here Q_di is easily verified to be 1, which is a tautology, so the expansion is legal.

The Irredundant operator and tautology

- Objective: check whether a cube c_i in a cover G of a function F is redundant.
- In other words, check whether c_i ⊆ Q = (G \ c_i) ∪ F_dc.
- In other words, check whether Q_ci is a tautology.

Multilevel logic optimization

Motivation
- Two-level optimization (SOP, POS) is too limiting.
  - It is useful for structures like PLAs, but most circuits are not designed that way.
  - It may require gates with a large number of inputs.
  - It restricts sharing of logic gates between outputs.
- Multilevel optimization permits more than two levels of gates between the inputs and the outputs.
- It is necessarily heuristic.
- Reference for this part: G. De Micheli, Synthesis and Optimization of Digital Circuits, McGraw-Hill, New York, NY, 1994.
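Both containment tests reduce to "cofactor, then check tautology". The following self-contained Python sketch (mine; it uses a brute-force tautology check instead of the recursive one sketched earlier) applies the Irredundant-style test to the earlier input-expansion example.

from itertools import product

def cofactor_cube(cover, cube):
    """Q_c: keep cubes compatible with c and blank out c's care positions."""
    out = []
    for q in cover:
        if all(a == '-' or b == '-' or a == b for a, b in zip(q, cube)):
            out.append(''.join('-' if b != '-' else a for a, b in zip(q, cube)))
    return out

def is_tautology(cover):
    """Brute-force tautology check (fine for these tiny examples)."""
    if not cover:
        return False
    n = len(cover[0])
    return all(any(all(c[i] in ('-', m[i]) for i in range(n)) for c in cover)
               for m in map(''.join, product('01', repeat=n)))

def contains(cover, cube):
    return is_tautology(cofactor_cube(cover, cube))

# Irredundant-style test: is 01- contained in the rest of the cover plus dc-set?
G, dc = ['0-0', '01-', '-11'], []
Q = [c for c in G if c != '01-'] + dc
print(contains(Q, '01-'))    # True -> 01- is redundant and can be dropped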

Basic Transformations

- Elimination:    r = p + a'; s = r + b'              =>  s = p + a' + b'
- Decomposition:  v = a'd + bd + c'd + ae'            =>  j = a' + b + c'; v = jd + ae'
- Extraction:     p = ce + de; t = ac + ad + bc + bd + e  =>  k = c + d; p = ke; t = ka + kb + e
- Simplification: u = q'c + qc' + qc                  =>  u = q + c
- Substitution:   t = ka + kb + e; q = a + b          =>  t = kq + e

(Others exist; these are the most common.)

Transformations
- Apply the transformations heuristically.
- Two methods:
  - Algorithmic: an algorithm for each transformation type.
  - Rule-based: according to a set of rules injected into the system by a human designer.

A typical synthesis script: script.rugged in the SIS synthesis system from Berkeley

sweep; eliminate -1
simplify -m nocomp
eliminate -1

sweep; eliminate 5
simplify -m nocomp
resub -a

fx
resub -a; sweep

eliminate -1; sweep
full_simplify -m nocomp

Explanation
- sweep: eliminates single-input vertices (w = x; y = w + z becomes y = x + z).
- eliminate k: the elimination defined earlier; eliminates vertices so that the area estimate increases by no more than k.
- simplify -m nocomp: the simplification defined earlier; invokes ESPRESSO to minimize without computing the full off-set ("nocomp").
- full_simplify -m nocomp: as above, but uses a larger don't-care set.
- resub -a: algebraic substitution for vertex pairs.
- fx: extracts double-cube and single-cube expressions.
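As a toy illustration of elimination and simplification, the sketch below uses sympy, an external dependency the notes never mention; note that real substitution and extraction rely on the algebraic division covered next, not on syntactic rewriting.

from sympy import symbols
from sympy.logic import simplify_logic

a, b, c, p, q, r = symbols('a b c p q r')

# Elimination: r = p + a'; s = r + b'  ==>  s = p + a' + b'
s = (r | ~b).subs(r, p | ~a)
print(s)                       # p | ~a | ~b  (argument order may vary)

# Simplification: u = q'c + qc' + qc  ==>  u = q + c
u = (~q & c) | (q & ~c) | (q & c)
print(simplify_logic(u))       # c | q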

Algebraic model

Also known as weak division: manipulation according to the rules of polynomial algebra.

Support of a function
- Sup(f) = the set of all variables v that occur as v or v' in a minimal representation of f.
- Sup(ab + c) = {a, b, c};  Sup(ab + a'b) = {b}.
- f is orthogonal to g (f ⊥ g) if Sup(f) ∩ Sup(g) = ∅.

Algebraic division
- g is an algebraic (or weak) divisor of f when f = g·h + r, provided h ≠ ∅ and g ⊥ h.
- g divides f evenly if r = ∅.
- Example: if f = ab + ac + d and g = b + c, then f = ag + d (here h = a, r = d).
- The quotient, loosely referred to as f/g, is the largest set of cubes h such that f = g·h + r.

Computing the quotient f/g
- Given f = {set of cubes c_j} and g = {set of cubes a_i}:
  - h_i = {b_j | a_i·b_j ∈ f} for each cube a_i ∈ g (all multipliers of a_i that produce elements of f);
  - f/g = intersection of the h_i, i = 1, ..., |g|.
- Example: f = abc + abde + abh + bcd, i.e., f = {abc, abde, abh, bcd}; g = c + de + h, i.e., g = {c, de, h}.
  - h_1 = f/c = ab + bd = {ab, bd};  h_2 = f/de = ab = {ab};  h_3 = f/h = ab = {ab}.
  - f/g = h_1 ∩ h_2 ∩ h_3 = {ab}.
  - (Confirmation: f = ab(c + de + h) + bcd = (f/g)·g + r.)
- Complexity of this method: |f| · |g|.

Doing this more efficiently
- Encode each a_i ∈ g with an integer code that has a unique bit position for each literal in Sup(g):
  g = {c, de, h}; Sup(g) = {c, d, e, h}; encoding = {1000, 0110, 0001}.
- Encode each c_j ∈ f with the same encoding:
  f = {abc, abde, abh, bcd}; encoding = {1000, 0110, 0001, 1100}.
- Sort {a_i, c_j} by their encodings:
  1100: bcd
  1000: c, abc      ->  h_1 = ab
  0110: de, abde    ->  h_2 = ab
  0001: h, abh      ->  h_3 = ab
  (These are not the same h_i's as before, but their intersection is the same.)
- Complexity: O(n log n), where n = |f| + |g|.
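Weak division as defined above translates almost literally into code. In the sketch below (my own; literals are single characters, cubes are frozensets of literals, an expression is a set of cubes) the example f and g reproduce the quotient ab and the remainder bcd.

def weak_divide(f, g):
    """Return (quotient, remainder) with f = quotient*g + remainder."""
    quotient = None
    for ai in g:                                          # each cube of the divisor
        hi = {frozenset(c - ai) for c in f if ai <= c}    # multipliers of ai within f
        quotient = hi if quotient is None else quotient & hi
    quotient = quotient or set()
    product = {h | ai for h in quotient for ai in g}
    remainder = {c for c in f if c not in product}
    return quotient, remainder

cube = frozenset                                          # 'abc' -> {'a','b','c'}
f = {cube('abc'), cube('abde'), cube('abh'), cube('bcd')}
g = {cube('c'), cube('de'), cube('h')}
q, r = weak_divide(f, g)
print(q)   # {frozenset({'a', 'b'})}        i.e. f/g = ab
print(r)   # {frozenset({'b', 'c', 'd'})}   i.e. remainder = bcd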

Finding good divisors

Now that we know how to divide, how do we find good divisors?

Primary divisors
- P(f) = {f/c | c is a cube}.
- Example: for f = abc + abde,
  f/a = bc + bde is a primary divisor;
  f/ab = c + de is a primary divisor.

Cube-free expressions
- g is cube-free if the only cube dividing g evenly (i.e., with remainder zero) is 1.
- Example: c + de is cube-free.

Kernels
- K(f) = the set of primary divisors that are cube-free.
- f/ab belongs to the set of kernels; f/a does not.
- Kernels are good candidates for divisors.

Kernels and co-kernels
- For f = abc + abde: f/ab = c + de, so c + de is a kernel and ab is a co-kernel.
- The co-kernel of a kernel is not unique. For f = acd + bcd + ae + be:
  f/a = f/b = cd + e:  kernel cd + e, co-kernels {a, b};
  f/cd = f/e = a + b:  kernel a + b, co-kernels {cd, e}.

Finding all kernels

Kernel(f)
    find the cube c_f such that f/c_f is cube-free and c_f has the largest number of literals
    K = Kernel1(0, f/c_f)
    if (f is cube-free) return (f ∪ K)
    return (K)

Kernel1(j, g)
    R = {g}
    for (i = j+1; i <= n; i++)
        if (the i-th literal l_i appears in 0 or 1 cubes of g) continue
        c_e = the cube that evenly divides g/l_i and has the maximum number of literals
        if (l_k is not in c_e for all k <= i)     /* otherwise this kernel was already identified */
            R = R ∪ Kernel1(i, (g/l_i)/c_e)
    return (R)
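A brute-force alternative to the recursive Kernel1 routine, workable only for tiny examples, is to try every candidate co-kernel (the common cube of two or more cubes of f) and keep the cube-free quotients. The Python sketch below is my own illustration on the slide's example.

from itertools import combinations

def divide_by_cube(f, c):
    return {cube - c for cube in f if c <= cube}

def is_cube_free(expr):
    return len(expr) > 1 and not frozenset.intersection(*expr)

def kernels(f):
    """Brute force: every co-kernel is the common cube of some subset of >= 2 cubes of f."""
    candidates = {frozenset.intersection(*s)
                  for r in range(2, len(f) + 1)
                  for s in combinations(f, r)}
    found = {}
    for c in candidates | {frozenset()}:          # frozenset() = the trivial co-kernel 1
        q = divide_by_cube(f, c)
        if is_cube_free(q):
            found[c] = q
    return found                                  # maps co-kernel -> kernel

cube = frozenset
f = {cube('acd'), cube('bcd'), cube('ae'), cube('be')}    # acd + bcd + ae + be
for co_kernel, kernel in kernels(f).items():              # output order may vary
    print(''.join(sorted(co_kernel)) or '1', '->',
          ' + '.join(''.join(sorted(c)) for c in sorted(kernel, key=sorted)))
# a -> cd + e, b -> cd + e, cd -> a + b, e -> a + b, 1 -> acd + ae + bcd + be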

Example

F = abc(d+e)(k+l) + agh + m

Branching on the literals of F (nodes a, b, c of the search tree):
- F/a = bc(d+e)(k+l) + gh
- F/ab = c(d+e)(k+l)
- F/ac = b(d+e)(k+l)
  [both lead to the kernels (d+e) and (k+l)]
- When the recursion reaches the same kernel through a second co-kernel, the "if" condition detects that the kernel was found earlier and prunes the search tree there.

Example: extraction and resubstitution

F1 = ab(c(d+e) + f + g) + h
F2 = ai(c(d+e) + f + j) + k

1. Generate kernels for F1 and F2.
2. Select K1 ∈ K(F1) and K2 ∈ K(F2) such that K1 ∩ K2 is not a cube.
3. Set a new variable v to K1 ∩ K2.
4. Rewrite F_i = v·(F_i/v) + r_i.

For the example:
- v1 = d + e:     F1 = ab(c·v1 + f + g) + h;   F2 = ai(c·v1 + f + j) + k
- v2 = c·v1 + f:  F1 = ab(v2 + g) + h;         F2 = ai(v2 + j) + k

Generic factorization algorithm

Factor(F)
    if (F has no factor) return (F)
    D = Divisor(F)
    (Q, R) = Divide(F, D)        /* F = Q·D + R */
    return (Factor(Q) · Factor(D) + Factor(R))

- The Divisor function identifies divisors, for example based on a kernel-based algorithm.
- The Divide function may be algebraic (weak) division.
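A runnable toy version of Factor() is sketched below; for simplicity its Divisor() is just the most frequent literal (a single-cube divisor) rather than a kernel, so it illustrates the recursion, not the quality of kernel-based factoring. All names are mine.

from collections import Counter

def factor(f):
    """Factor an SOP given as an iterable of literal sets; returns a string."""
    f = {frozenset(c) for c in f}
    if not f:
        return '0'
    if len(f) == 1:
        c = next(iter(f))
        return '*'.join(sorted(c)) if c else '1'
    counts = Counter(l for c in f for l in c)
    lit, n = counts.most_common(1)[0]              # crude Divisor(): best single literal
    if n <= 1:                                     # nothing shared: leave as SOP leaf
        return ' + '.join('*'.join(sorted(c)) or '1'
                          for c in sorted(f, key=sorted))
    Q = {c - {lit} for c in f if lit in c}         # quotient of cube division
    R = {c for c in f if lit not in c}             # remainder
    out = '{}*({})'.format(lit, factor(Q))
    return out + ' + ' + factor(R) if R else out

f = [{'a', 'b', 'c'}, {'a', 'b', 'd', 'e'}, {'a', 'b', 'h'}, {'b', 'c', 'd'}]  # abc+abde+abh+bcd
print(factor(f))   # e.g. b*(a*(c + d*e + h) + c*d)  (exact form depends on tie-breaks)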

Don't-care based optimization: an outline

Two types of don't cares are considered here:
- Satisfiability don't cares (SDCs)
- Observability don't cares (ODCs)
- Others exist, e.g., SPFDs (sets of pairs of functions to be differentiated).

Satisfiability don't cares (SDCs)
- Example: consider Y1 = a'b', Y2 = c'd', Y3 = Y1·Y2.
- Since Y1 = a'b' is enforced by its defining equation, the minterms of Y1 ⊕ (a'b') can be considered don't cares: Y1'·a'b' + Y1·(a + b) corresponds to a don't care.
- Similarly, Y2 ⊕ (c'd') is also a don't care.

Observability don't cares (ODCs)
- For r = p + q: if p = 1, then q is an observability don't care.
- Similarly, ODCs can be defined for AND operations, etc.

Example of don't-care based optimization

y1 = xw,  y2 = x' + y,  f = y1 + y2        (cost: 1 AND + 2 ORs + 1 NOT)

Minimize the function for y1:
- SDC(y1) = y2 ⊕ (x' + y) = y2·x·y' + y2'·x' + y2'·y
- ODC(y1) = y2
(The slide shows y1's Karnaugh map over w, x, y, y2 with the SDC and ODC entries marked; with these don't cares, y1 simplifies to w.)

Result: y1 = w, y2 = x' + y, f = y1 + y2;
eliminating y1: y2 = x' + y, f = w + y2        (cost: 2 ORs + 1 NOT)

Acknowledgements

Hardly anything in these notes is original; they borrow heavily from sources such as
- G. De Micheli, Synthesis and Optimization of Digital Circuits, McGraw-Hill, New York, NY, 1994.
- S. Devadas, A. Ghosh and K. Keutzer, Logic Synthesis, McGraw-Hill, New York, NY, 1994.
- G. Hachtel and F. Somenzi, Logic Synthesis and Verification Algorithms, Kluwer Academic Publishers, Boston, MA, 1996.
- Notes from Prof. Brayton's synthesis class at UC Berkeley (http://www-cad.eecs.berkeley.edu/~brayton/courses/219b/219b.html)
- Notes from Prof. Devadas's CAD class at MIT (http://glenfiddich.lcs.mit.edu/~devadas/6.373/lectures)
- Possibly other sources that I may have omitted to acknowledge (my apologies).
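The SDC and ODC of the example can be enumerated directly from truth tables; the small Python sketch below (my own check, not from the slides) counts them for y1 = xw, y2 = x' + y, f = y1 + y2.

from itertools import product

sdc_y1, odc_y1 = [], []
for w, x, y, y2 in product([0, 1], repeat=4):
    if y2 != (1 - x) | y:                 # y2 inconsistent with its own definition
        sdc_y1.append((w, x, y, y2))      # satisfiability don't care for y1
for w, x, y in product([0, 1], repeat=3):
    y2 = (1 - x) | y
    if y2 == 1:                           # f = y1 + y2 masks y1 whenever y2 = 1
        odc_y1.append((w, x, y))          # observability don't care for y1

print(len(sdc_y1), len(odc_y1))           # 8 6
# With these don't cares, y1 = x*w can legally be replaced by y1 = w, as on the slide.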