IHS 3: Test of Digital Systems R.Ubar, A. Jutman, H-D. Wuttke Integrierte Hard- und Softwaresysteme
RT-Level Design: data path and control path at RT level; RT-level simulation Building blocks: functional units (F1,...,F4), register, multiplexer / demultiplexer 2
Fault Modeling on High Level DDs High-level DDs (RT-level):
y1 selects M1 (0: M1 = R1; 1: M1 = IN)
y2 selects M2 (0: M2 = R1; 1: M2 = IN)
y3 selects M3 (0: M3 = M1 + R2; 1: M3 = IN; 2: M3 = R1; 3: M3 = M2 * R2)
y4 controls R2 (0: Reset, R2 = 0; 1: Hold, R2 = R2; 2: Load, R2 = M3)
[Figure: data path with multiplexers M1, M2, M3, adder, multiplier, registers R1, R2, and the high-level DD for R2 with terminal nodes R2, IN, R1, R1 + R2, IN + R2, R1 * R2, IN * R2] 3
Fault Modeling on High Level DDs High-level DDs (RT-level):
Terminal nodes represent RTL-statement faults: data storage (e.g. #0), data transfer (e.g. IN), data manipulation faults (e.g. R1 * R2)
Nonterminal nodes represent RTL-statement faults: label, timing condition, logical condition, register decoding, operation decoding, control faults
[Figure: data path and the high-level DD for register R2] 4
Decision Diagrams for Microprocessors High-Level DDs for a microprocessor (example):
Instruction set:
I1: MVI A,D (A = IN)
I2: MOV R,A (R = A)
I3: MOV M,R (OUT = R)
I4: MOV M,A (OUT = A)
I5: MOV R,M (R = IN)
I6: MOV A,M (A = IN)
I7: ADD R (A = A + R)
I8: ORA R (A = A OR R)
I9: ANA R (A = A AND R)
I10: CMA (A = NOT A)
[Figure: DD-model of the microprocessor with variables A, R, OUT branching on the instruction I] 5
Hierarchical Test Generation Defect mapping from the transistor level to the logic level, and from gate-level components to the RT level (R1, IN, M1, M2, +, *, M3, R2)
[Figure: a defect W_d at the component level produces a local error dy (faulty signal y*), which propagates to an error at the system level: hierarchical test via fault propagation] 6
Experiments Use the example (A+B)/2 (average value) Find the best parameters for each test method: Functional test Deterministic test Functional BIST Logical BIST Circular BIST Note them and compare the results 7
High-Level Decision Diagrams Applet Algorithm: average (A+B)/2 a) high speed: 2 operation units (F2, F3), Add and Div in 1 clock cycle, 4 steps Cost: 83 8
High-Level Decision Diagrams Applet Algorithm: average (A+B)/2 b) low cost: only 1 unit (F4), 2 clock cycles per operation, 5 clocks Cost: 73 9
High-Level Decision Diagrams Applet Algorithm: high-speed average (A+B)/2 (4 steps, Cost: 83) Simulation: load from input to Reg1; load from input to Reg2; Add, Div and store in 1 clock cycle; output and end 10
Applets for Learning RT-Level Test Functional Test mode: the test vectors are operands; results can be observed at the output; no extra test parts Deterministic Test mode: gate-level test for each FU separately; generate test vectors; local test panel; cumulative FC calculation BIST mode: Functional, Logical and Circular Built-In Self-Test via extra test parts: test pattern generator and signature analyzer 11
Functional Test The test vectors are operands, results can be observed at the output, no extra test parts needed 12
High-Level Decision Diagrams Applet Algorithm: average (A+B)/2 Funct. Test 13
Deterministic Test Adder (F2) Schematic level Gate level test Boolean derivation SSBDDs Path activation Fault propagation Fault observation 14
Deterministic Test Insert test vectors manually; simulate the fault coverage of the vector (FCvec) and of the sequence (FC) 15
Overview 1. Introduction 2. Theory: Boolean differential algebra 3. Theory: Decision diagrams 4. Fault modelling 5. Test generation 6. Fault simulation 7. Fault diagnosis 8. Testability measuring 9. Design for testability 10. Built in Self-Test 16
Built In Self Test (BIST) Functional: no additional parts Logical: additional TPG Circular: additional TPG/SA 17
Built-In Self-Test Motivations for BIST: Need for cost-efficient testing Doubts about the stuck-at fault model Increasing difficulties with TPG (Test Pattern Generation) Growing volume of test pattern data Cost of ATE (Automatic Test Equipment) Test application time Gap between tester and UUT (Unit Under Test) speeds Drawbacks of BIST: Additional pins and silicon area needed Decreased reliability due to increased silicon area Performance impact due to additional circuitry Additional design time and cost 18
Built-In Self-Test in SoC System-on-Chip testing Test architecture components: test pattern source & sink, Test Access Mechanism, core test wrapper [Figure: SoC with SRAM, CPU, Peripheral Component Interconnect, ROM, MPEG, UDL and DRAM cores; the Core Under Test is reached from the source through a Test Access Mechanism, a wrapper, and a Test Access Mechanism to the sink] Solutions: Off-chip solution: need for external ATE Combined solution: mostly on-chip, ATE needed for control On-chip solution: BIST 19
Built-In Self-Test Components BIST Control Unit Test Pattern Generation (TPG) Circuitry Under Test CUT Test Response Analysis (TRA) BIST components: Test pattern generator (TPG) Test response analyzer (TRA) TPG & TRA are usually implemented as linear feedback shift registers (LFSR) Two widespread schemes: test-per-scan test-per-clock 20
Linear Feedback Shift Register (LFSR) Pseudorandom test generation by LFSR: standard LFSR and modular LFSR (stages x, x^2, x^3, x^4) Feedback polynomial: P(x) = 1 + x^3 + x^4 21
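As a sketch (not the applet's actual implementation), the standard LFSR with P(x) = 1 + x^3 + x^4 can be simulated in a few lines; the function name and the bit ordering are illustrative assumptions:

```python
def lfsr_sequence(taps, seed, length):
    """Standard (Fibonacci) LFSR: XOR the tapped stages and shift the
    feedback bit into stage 1.  taps are the exponents of the feedback
    polynomial (the constant term 1 is implicit)."""
    state = list(seed)
    states = []
    for _ in range(length):
        states.append(tuple(state))
        fb = 0
        for t in taps:
            fb ^= state[t - 1]          # stage t corresponds to x^t
        state = [fb] + state[:-1]       # shift; feedback enters stage 1
    return states

# P(x) = 1 + x^3 + x^4 is primitive, so a 4-stage LFSR visits all
# 2^4 - 1 = 15 nonzero states before repeating.
seq = lfsr_sequence(taps=(3, 4), seed=(1, 0, 0, 0), length=15)
```

Changing the taps to a non-primitive polynomial shortens the cycle, which is why the choice of feedback polynomial matters for test pattern generation.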
LFSR Find configurations of the LFSR (Linear Feedback Shift Register) that generate optimal test vectors (TPG/SA) Parameters: feedback polynomial, initial state 22
LFSR: Signature Analyser [Figure: 4-stage LFSR (stages x, x^2, x^3, x^4) connected to the UUT through flip-flops] The same LFSR structure produces test patterns (when generating tests) as stimuli and compresses the response string into a signature (when analyzing test responses) 23
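To make the compression step concrete, here is a sketch of a single-input signature analyzer in its modular (Galois) form; representing the polynomial as a bit mask is an assumption of this sketch:

```python
def signature(bits, poly, n):
    """Shift a serial response stream through an n-stage modular LFSR.
    poly is a bit mask of the characteristic polynomial's terms below
    degree n (e.g. x^4 + x^3 + 1 -> n = 4, poly = 0b1001 for x^3 + 1).
    The final register contents are the signature: the remainder of the
    response polynomial divided by the characteristic polynomial."""
    reg = 0
    for b in bits:
        msb = (reg >> (n - 1)) & 1       # bit leaving the register
        reg = ((reg << 1) | b) & ((1 << n) - 1)
        if msb:
            reg ^= poly                  # subtract (XOR) the polynomial
    return reg

# A response equal to the characteristic polynomial itself (1 1 0 0 1
# for x^4 + x^3 + 1) divides evenly, so its signature is 0; this is
# exactly the aliasing mechanism discussed on the following slides:
# error streams that are multiples of P(x) leave the signature unchanged.
sig = signature([1, 1, 0, 0, 1], poly=0b1001, n=4)
```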
BIST: Signature Analysis Aliasing: the UUT response of length L is compressed by an N-stage SA (L: test length, N: number of stages in the Signature Analyzer) All possible responses: 2^L; all possible signatures: 2^N; since N &lt;&lt; L, a faulty response may map to the same signature as the correct response (aliasing) 24
BIST: Signature Analysis TPG / SA registers for LFSR Configure: initial state, polynomial Parameters: TPG LFSR, SA LFSR 25
BIST: Signature Analysis Aliasing: the UUT response of length L is compressed by an N-stage SA (L: test length, N: number of stages in the Signature Analyzer); 2^L different responses are possible, N &lt;&lt; L No aliasing is possible for strings with L &#8722; N leading zeros, since they are represented by polynomials of degree at most N &#8722; 1 that are not divisible by the characteristic polynomial of the LFSR; there are 2^N &#8722; 1 such nonzero strings Probability of aliasing: P = (2^(L&#8722;N) &#8722; 1) / (2^L &#8722; 1); for L &gt;&gt; N, P &#8776; 2^(&#8722;N) 26
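The aliasing formula is easy to evaluate exactly; this small helper (a sketch, using Fraction for exact arithmetic) shows how quickly P approaches 2^(-N):

```python
from fractions import Fraction

def aliasing_probability(L, N):
    """P = (2^(L-N) - 1) / (2^L - 1): the fraction of nonzero error
    strings of length L that are multiples of the degree-N
    characteristic polynomial and therefore alias to the correct
    signature (assuming all error strings are equally likely)."""
    return Fraction(2**(L - N) - 1, 2**L - 1)

# For L >> N the result is essentially 2^-N, independent of L;
# for L = N no aliasing is possible at all.
p = aliasing_probability(L=100, N=16)
```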
BIST: Signature Analysis Parallel signature analysis: [Figure: Single Input Signature Analyser and Multiple Input Signature Analyser (MISR), both built from a modular LFSR (x^4, x^2, x^1, x^3) fed by the UUT] 27
Built-In Self-Test Signature calculation for multiple outputs: [Figure: the LFSR test pattern generator drives the combinational circuit; a multiplexer selects one circuit output at a time to feed the LFSR signature analyzer] 28
Test-per-Clock BIST Architectures BILBO (Built-In Logic Block Observer): LFSR test pattern generator, combinational circuit, LFSR signature analyzer CSTP (Circular Self-Test Path): a single LFSR acts as test pattern generator &amp; signature analyser around the combinational circuit 29
Reconfiguration for Self-Test Scan-IN Pseudorandom test generator R Module Module R Test controller Scan-OUT Signature Analyser 30
Test-per-Scan BIST Architectures STUMPS: Self-Testing Unit Using MISR and Parallel Shift Register Sequence Generator Test Pattern Generator LOCST: LSSD On-Chip Self-Test Scan Path BS BS CUT R1 CC1... Rn CCn TPG SA MISR SI Test Controller SO Error 31
Problems with BIST The main motivations for using random patterns are: low generation cost, high initial efficiency Problem: low fault coverage [Figure: LFSR drives a decoder, a counter with Reset, and an AND gate; fault coverage over time saturates] If the Reset signal has probability 0,5 of being 1, then the counter will not work and the 1 needed at the AND gate may never be produced 32
Problems with BIST The main motivations for using random patterns are: low generation cost, high initial efficiency Problem: low fault coverage Possible patterns from the LFSR: the pseudorandom test draws from the range 0 ... 2^n &#8722; 1, but hard-to-test faults keep the fault coverage curve from reaching 100% Dream solution: find an LFSR whose sequence also covers the hard-to-test faults 33
Store-and-Generate test architecture The main motivations for using random patterns are: low generation cost, high initial efficiency Problem: low fault coverage Long PR test: a longer pseudorandom sequence over 0 ... 2^n &#8722; 1 raises coverage only slowly; using many seeds attacks the hard-to-test faults directly 34
Store-and-Generate test architecture ROM TPG UUT ADR RD Counter 2 Counter 1 CL ROM contains test patterns for hard-to-test faults Each pattern Pk in ROM serves as an initial state of the LFSR for test pattern generation (TPG) Counter 1 counts the number of pseudorandom patterns generated starting from Pk After Counter 1 finishes its cycle, Counter 2 is incremented to read the next pattern Pk+1 35
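The two-counter control structure can be sketched as a generator loop; the names and the small `step` function below are illustrative assumptions, not the actual hardware:

```python
def store_and_generate(rom_seeds, per_seed, lfsr_step):
    """Store-and-Generate sketch: Counter 2 walks through the ROM of
    seeds (patterns for hard-to-test faults); each seed loads the LFSR,
    and Counter 1 lets it run for per_seed clocks of pseudorandom
    patterns before the next seed is fetched."""
    for seed in rom_seeds:              # Counter 2: ROM address
        state = seed
        for _ in range(per_seed):       # Counter 1: patterns per seed
            yield state
            state = lfsr_step(state)

# Illustrative 4-bit modular LFSR step for P(x) = 1 + x^3 + x^4.
def step(s):
    msb = (s >> 3) & 1
    s = (s << 1) & 0xF
    return s ^ 0b1001 if msb else s

patterns = list(store_and_generate([0b1000, 0b0110],
                                   per_seed=4, lfsr_step=step))
```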
Hybrid Built-In Self-Test Single test processes: ROM...... BIST Controller PRPG......... CORE UNDER TEST MISR SoC Core Hybrid test set contains a limited number of pseudorandom and deterministic vectors Pseudorandom test is improved by a stored test set which is specially generated to target the random resistant faults Optimization Pseudorandom Test Determ. Test 36
Hybrid BIST for Multiple Cores Embedded tester for testing multiple cores Embedded Tester C2670 C3540 Test Controller BIST Test access mechanism BIST Tester Memory BIST BIST BIST C1908 C880 C1355 SoC 37
Functional Self-Test Test patterns are produced on-line during the working mode of the system [Figure: register block and ALU execute the functional test for K*N cycles (K*N &gt;&gt; K); a fault simulator measures the fault coverage; K data words are compressed by the signature analyser] 38
Functional Hybrid Self-Test Test patterns are stored in the memory [Figure: a MUX selects between working data and a deterministic test set produced by an Automatic Test Pattern Generator for the K random resistant faults; register block and ALU feed the signature analyser] 39
TURBO-TESTER: Low-Level TPG Tools http://www.pld.ttu.ee/tt/ Test generation: levels: gate, macro; methods: deterministic, random, genetic Fault models: stuck-at faults, stuck-opens, delay faults Fault simulation: methods: single fault, parallel, deductive BIST simulation: methods: BILBO, CSTP, Store/Generate Also: test optimization, fault table, fault location, fault diagnosis 40
Overview 1. Introduction 2. Theory: Boolean differential algebra 3. Theory: Decision diagrams 4. Fault modelling 5. Test generation 6. Fault simulation 7. Fault diagnosis 8. Testability measuring 9. Design for testability 10. Built in Self-Test 41
Ad Hoc Design for Testability Techniques Method of Test Points: Block 1 is not observable, Block 2 is not controllable Improving controllability and observability: add an observation point OP after Block 1 and a control point CP before Block 2 (OR gate or AND gate) 1-controllability (OR gate): CP = 0: normal working mode; CP = 1: controlling Block 2 with signal 1 0-controllability (AND gate): CP = 1: normal working mode; CP = 0: controlling Block 2 with signal 0 42
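In Boolean terms the two control-point types reduce to an OR and an AND gate on the line between the blocks; a minimal sketch (function names are illustrative):

```python
def cp_force_1(signal, cp):
    """OR-type control point (1-controllability):
    CP = 0 -> normal working mode, the signal passes unchanged;
    CP = 1 -> the line into Block 2 is forced to 1."""
    return signal | cp

def cp_force_0(signal, cp):
    """AND-type control point (0-controllability):
    CP = 1 -> normal working mode;
    CP = 0 -> the line into Block 2 is forced to 0."""
    return signal & cp
```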
Ad Hoc Design for Testability Techniques Find good additional observation points that improve the fault coverage (FC) Find optimal operands (= test vectors) Parameters: observation points, operands 43
Ad Hoc Design for Testability Techniques Method of Test Points: Block 1 is not observable, Block 2 is not controllable Improving controllability: a) gate implementation (OR gate with CP1, AND gate with CP2): normal working mode: CP1 = 0, CP2 = 1; controlling Block 2 with 1: CP1 = 1, CP2 = 1; controlling Block 2 with 0: CP2 = 0 b) MUX implementation (CP2 selects, CP1 carries the test value): normal working mode: CP2 = 0; controlling Block 2 with 1: CP1 = 1, CP2 = 1; controlling Block 2 with 0: CP1 = 0, CP2 = 1 44
Ad Hoc Design for Testability Techniques Multiplexing monitor points: To reduce the number of output pins for observing monitor points, multiplexer can be used: 2 n observation points are replaced by a single output and n inputs to address a selected observation point Disadvantage: only one observation point can be observed at a time x 0 x 1 x n 0 1 2 n -1 MUX OUT 45
Ad Hoc Design for Testability Techniques Demultiplexer for implementing control points: [Figure: inputs x1 ... xn drive a 1-to-N DMUX (N = 2^n) feeding control points CP1 ... CPN] To reduce the number of input pins for controlling test points, a demultiplexer and a latch register can be used. N clock times are required between test vectors to set up the proper control values 46
Time-sharing of outputs for monitoring To reduce the number of output pins for observing monitor points, time-sharing of working outputs can be introduced: no additional outputs are needed To reduce the number of inputs, again a counter or shift register can be used Disadvantage: only one observation point can be observed at a time [Figure: original circuit with a MUX on its outputs] 47
Time-sharing of inputs for controlling Normal input lines 0 1 N DMUX CP1 CP2 CPN To reduce the number of input pins for controlling test points, time-sharing of working inputs can be introduced. To reduce the number of inputs for driving the address lines of demultiplexer, counter or shift register can be used 48
Ad Hoc Design for Testability Techniques Partitioning of large combinational circuits: The time complexity of test generation and fault simulation grows faster than a linear function of circuit size Partitioning of large circuits reduces these costs I/O sharing of normal and testing modes is used (MUXes and DMUXes around C1 and C2) Three modes can be chosen: normal mode, testing C1, testing C2 (bolded lines) 49
Scan-Path Design Scan-path design allows one to control and observe the internal flip-flops, which means that the task of sequential testing has been transformed into the task of testing a combinational circuit T = 0: normal working mode; T = 1: scan mode Normal mode: flip-flops are connected to the combinational circuit Test mode: flip-flops are disconnected from the combinational circuit and connected to each other to form a shift register (Scan-IN ... Scan-OUT) 50
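In test mode the chain behaves as a plain shift register; a minimal sketch (the list model of the flip-flops is an assumption):

```python
def scan_load(chain_len, scan_in_bits):
    """Scan path in test mode (T = 1): each clock shifts Scan-IN into
    the first flip-flop and moves every stored bit one stage along,
    so after chain_len clocks the chain holds the full serial vector."""
    ff = [0] * chain_len
    for b in scan_in_bits:
        ff = [b] + ff[:-1]
    return ff

# The bit shifted in first ends up in the last flip-flop.
state = scan_load(4, [1, 0, 1, 1])
```

Observing works symmetrically: clocking the chain for chain_len cycles shifts the captured flip-flop contents out at Scan-OUT.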
Scan-Path Design and Testability Two possibilities for improving controllability/observability SCAN OUT MUX OUT IN DMUX SCAN IN 51
Parallel Scan-Path IN Combinational circuit OUT Scan-IN 1 Scan-OUT 1 Scan-IN 2 R1 R2 In parallel scan path flip-flops can be organized in more than one scan chain Scan-OUT 2 52
Partial Scan Path Hierarchical test generation with scan path: only the registers of the data part (e.g. R2) are included in the scan chain between Scan-In and Scan-Out [Figure: control part and data part with multiplexers M1, M2, M3, adder, multiplier, registers R1, R2, and the high-level DD for R2] 53
Logical BIST Test microprogram Parameters: program steps, used registers, tested functions 54
Random Access Scan IN Scan-IN Scan-CL X-Address & DC Combinational circuit q R q Scan-OUT OUT In random access scan each flip-flop in a logic network is selected individually by an address for control and observation of its state Y-Address DC 55
Boundary Scan Standard 56
Conclusions Self-test of modules (components) is important, but it is not a panacea Hierarchical and functional test have the same importance The functional fault model is a universal means for mapping defects, faults and errors from level to level, providing a formal basis for the hierarchical approach Decision diagrams can be used successfully not only at the logical level, but also at the higher RT or behavioral levels 57
References Contact data: Tallinn Technical University Computer Engineering Department Address: Raja tee 15, 12618 Tallinn, Estonia Tel.: +372 620 2252, Fax: +372 620 2253 E-mail: raiub@pld.ttu.ee www.ttu.ee/~raiub/ 58
Overview 1. Introduction 2. Theory: Boolean differential algebra 3. Theory: Decision diagrams 4. Fault modelling 5. Test generation 6. Fault simulation 7. Fault diagnosis 8. Testability measuring 9. Design for testability 10. Built in Self-Test 59
Testability of Design Types General important relationships: T (Sequential logic) < T (Combinational logic) Solutions: Scan-Path design strategy T (Control logic) < T (Data path) Solutions: Data-Flow design, Scan-Path design strategies T (Random logic) < T (Structured logic) Solutions: Bus-oriented design, Core-oriented design T (Asynchronous design) < T (Synchronous design) T: Testability 60
Testability Estimations for Circuit Types Circuits less controllable Decoders Circuits with feedback Counters Clock generators Oscillators Self-timing circuits Self-resetting circuits Circuits less observable Circuits with feedback Embedded RAMs ROMs PLAs Error-checking circuits Circuits with redundant nodes 61
Testability Measures Evaluation of testability: controllability C0(i), C1(j); observability OY(k), OZ(k); testability [Figure: cascade of 20-input AND gates; to produce a 1 on an internal line, all 20 inputs of a gate must be 1, which has probability (1/2)^20; the probability of detecting a defect behind three such gates falls to (1/2)^60] 62
Probabilistic Testability Measures Controllability calculation: Value: minimum number of nodes that must be set in order to produce 0 or 1 For inputs: C0(i) = p(xi = 0), C1(i) = p(xi = 1) = 1 &#8722; p(xi = 0) For other signals, recursive calculation rules: inverter: p_y = 1 &#8722; p_x 2-input AND: p_y = p_x1 p_x2 2-input OR: p_y = 1 &#8722; (1 &#8722; p_x1)(1 &#8722; p_x2) n-input AND: p_y = product over i = 1..n of p_xi n-input OR: p_y = 1 &#8722; product over i = 1..n of (1 &#8722; p_xi) 63
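The recursive rules translate directly into code under the independence assumption; this sketch also reproduces the gate-by-gate numbers used in the later comparison with the Parker-McCluskey algorithm (gate names a, b, c, y are taken from that example):

```python
import math

def p_and(*ps):
    """n-input AND: output is 1 only if every input is 1."""
    return math.prod(ps)

def p_or(*ps):
    """n-input OR: 1 minus the probability that all inputs are 0."""
    return 1 - math.prod(1 - p for p in ps)

def p_not(p):
    """Inverter."""
    return 1 - p

# Gate-by-gate evaluation, all input probabilities 1/2:
# a = NAND(x1, x2), b = NAND(x2, x3), c = NAND(a, b), y = AND(c, x2)
a = p_not(p_and(0.5, 0.5))   # 0.75
b = p_not(p_and(0.5, 0.5))   # 0.75
c = p_not(p_and(a, b))       # 0.4375
y = p_and(c, 0.5)            # 0.21875, i.e. about 0,22
```

Because the rules assume independent inputs, reconverging fanouts (next slide) make these values only approximations.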
Probabilistic Testability Measures Probabilities at reconverging fanouts: for a = x1 AND NOT x2, b = NOT x1 AND x2, y = a OR b (the exclusive OR of x1 and x2), the exact value is p_y = (1 &#8722; p_x1) p_x2 + (1 &#8722; p_x2) p_x1 = 0,25 + 0,25 = 0,5 but treating a and b as independent gives p_y = 1 &#8722; (1 &#8722; p_a)(1 &#8722; p_b) = 1 &#8722; 0,75 * 0,75 = 0,44 Signal correlations: for y = x AND x the rule gives p_y = p_x p_x = p_x^2 ? 64
Calculation of Signal Probabilities Parker-McCluskey algorithm: for a = x1 AND NOT x2, b = NOT x1 AND x2, y = a OR b: p_y = 1 &#8722; (1 &#8722; p_a)(1 &#8722; p_b) = 1 &#8722; (1 &#8722; p_x1(1 &#8722; p_x2))(1 &#8722; p_x2(1 &#8722; p_x1)) = 1 &#8722; (1 &#8722; p_x1 + p_x1 p_x2)(1 &#8722; p_x2 + p_x1 p_x2) Expanding and reducing every power to the first degree (since p_xi^2 = p_xi for signal probabilities): p_y = p_x1 + p_x2 &#8722; 2 p_x1 p_x2 = 0,5 65
Calculation of Signal Probabilities Straightforward methods: circuit: a = NAND(x1, x2), b = NAND(x2, x3), c = NAND(a, b), y = AND(c, x2); for all inputs p_k = 1/2 Calculation gate by gate (independence assumption): p_a = 1 &#8722; p_1 p_2 = 0,75, p_b = 0,75, p_c = 1 &#8722; p_a p_b = 0,4375, p_y = p_c p_2 = 0,22 Parker-McCluskey algorithm (exact): p_y = p_c p_2 = (1 &#8722; p_a p_b) p_2 = (1 &#8722; (1 &#8722; p_1 p_2)(1 &#8722; p_2 p_3)) p_2 = p_1 p_2^2 + p_2^2 p_3 &#8722; p_1 p_2^3 p_3 = p_1 p_2 + p_2 p_3 &#8722; p_1 p_2 p_3 = 0,38 66
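The exponent-reduction step of the Parker-McCluskey algorithm can be automated by representing each signal probability as a polynomial whose terms are sets of input variables, so that p_xi^k collapses to p_xi automatically; a sketch with illustrative names:

```python
import math
from collections import defaultdict

# A probability expression is a dict {frozenset(inputs): coefficient}.
def const(c): return {frozenset(): c}
def var(name): return {frozenset([name]): 1}

def pm_mul(f, g):
    """Multiply two expressions; the set union implements the
    Parker-McCluskey reduction p_x^k -> p_x."""
    out = defaultdict(int)
    for vf, cf in f.items():
        for vg, cg in g.items():
            out[vf | vg] += cf * cg
    return dict(out)

def pm_sub(f, g):
    out = defaultdict(int, f)
    for v, c in g.items():
        out[v] -= c
    return {v: c for v, c in out.items() if c}

def pm_not(f): return pm_sub(const(1), f)

def evaluate(f, probs):
    return sum(c * math.prod(probs[v] for v in vs) for vs, c in f.items())

# The circuit of the slide: a = NAND(x1, x2), b = NAND(x2, x3),
# c = NAND(a, b), y = AND(c, x2), all inputs with probability 1/2.
x1, x2, x3 = var("x1"), var("x2"), var("x3")
a = pm_not(pm_mul(x1, x2))
b = pm_not(pm_mul(x2, x3))
c = pm_not(pm_mul(a, b))
y = pm_mul(c, x2)
p_y = evaluate(y, {"x1": 0.5, "x2": 0.5, "x3": 0.5})   # exact: 0.375
```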