A New Multiple Weight Set Calculation Algorithm


Hong-Sik Kim, Jin-kyue Lee, Sungho Kang
hskim@dopey.yonsei.ac.kr, jklee@cowboys.yonsei.ac.kr, shkang@yonsei.ac.kr
Dept. of Electrical Eng., Yonsei Univ., Shinchon-dong 134, Seodaemoon Gu, Seoul, Korea

Abstract

The number of weighted random patterns needed to reproduce a deterministic test pattern depends on the sampling probability of that pattern. Therefore, if the weight set is extracted from deterministic patterns with high sampling probabilities, the test length can be shortened. In this paper we present a new multiple weight set generation algorithm that produces high performance weight sets by removing the deterministic patterns with low sampling probabilities. In addition, a weight set that keeps the variance of the sampling probabilities of the deterministic test patterns small reduces the number of deterministic patterns with low sampling probability. Hence the new weight set calculation algorithm both uses an optimal candidate list and reduces the variance of the sampling probabilities. Results on the ISCAS 85 and ISCAS 89 benchmark circuits demonstrate the effectiveness of the new weight set calculation algorithm.

1. Introduction

As the integration density of VLSI circuits increases, the complexity and cost of their tests have increased. BIST (built-in self-test) is one of the attractive solutions to these problems. As a BIST pattern generator, a pseudo-random test pattern generator such as an LFSR (linear feedback shift register) is widely adopted for its low hardware overhead. However, since many random pattern resistant faults reduce the efficiency of the BIST test generator, a large number of random patterns are required to achieve high fault coverage. To overcome this problem, several solutions have been proposed. One of them is reseeding of an LFSR [1,2], but the number of seeds needed to reduce the test application time is quite large.
Several DFT techniques, such as test point insertion, can reduce the test data or the test time while keeping high fault coverage. These DFT techniques modify the circuit under test (CUT) to make it more random pattern testable [3-8]. However, such modifications can have a timing impact on the designed circuit by inserting additional logic on critical paths. Weighted random testing has been proposed to solve this problem [9-13]. In weighted random testing, each primary input has a different biased probability of taking logic value 1. A weighted random test pattern generator is composed of an LFSR and some combinational logic that biases the input probabilities: the generator biases the probability that logic value 1 occurs at each primary input through the weight calculation logic. Many experiments have shown that weighted random patterns are very efficient at detecting random pattern resistant faults. Methods for generating weighted random tests can be classified into two kinds according to the source of the weight calculation. One approach is based on structural analysis of the circuit topology [12-13]; the other is based on a deterministic test pattern set. The former can be used without an ATPG (automatic test pattern generation) process and takes less time to generate weight sets than the latter, but it cannot guarantee sufficient fault coverage within an adequate test length. The latter can provide high fault coverage with a reasonable test length. The new algorithm follows the deterministic test pattern set approach and consists of two stages. The first stage builds a candidate list for weight set generation from the deterministic test patterns. In the second stage, the variance of the sampling probabilities of the test patterns in the candidate list is reduced.
The sampling probability of a deterministic test pattern under a weight set is defined as the probability that the pattern occurs during a weighted random test generation cycle, so a high sampling probability means less time to generate the corresponding test pattern. Deterministic test patterns with lower sampling probabilities than those of an LFSR are removed from the deterministic test pattern set, and the remaining patterns form the candidate list from which a weight set is calculated. This comes from the idea that a weighted random pattern generator should achieve better sampling probabilities than an LFSR. From the candidate list, the weight set and the sampling probabilities are calculated. A weight set that gives the deterministic test patterns in the candidate list low sampling probabilities increases the number of test cycles. By reducing the variance of the sampling probabilities of the deterministic test patterns in the candidate list, the low sampling probabilities can be increased [11]. Therefore the weight set calculated from the candidate list is modified to reduce the variance of the sampling probabilities. The simulation results show that the new algorithm needs fewer weight sets and fewer test patterns. The fewer the weight sets, the lower the hardware overhead of multiple weight set BIST; similarly, the fewer the test patterns, the shorter the test application time. Therefore the new methodology can reduce both the hardware overhead and the test time of weighted random BIST.

ITC INTERNATIONAL TEST CONFERENCE 0-7803-7169-0/01 $10.00 © 2001 IEEE

This paper is organized as follows. Section 2 explains the basic notations used in this paper. Section 3 explains the method for selecting a candidate set from the deterministic test patterns. Section 4 describes the methodology of reducing the variance of the sampling probabilities. Section 5 discusses the new multiple weight set calculation algorithm. Section 6 shows the simulation results, and section 7 concludes the paper.

2. Basic Notations

To explain the new weight set calculation algorithm, the following notations are used. Let t_j be a deterministic test pattern and T = {t_1, t_2, ..., t_l} be the test pattern set, where l is the test length. The i-th bit of a test pattern t_j and the weight of bit position i are denoted t_j[i] and w_i, respectively, where t_j[i] ∈ {0, 1, X} and X is a don't-care. The weight w_i is the fraction of 1s among the patterns that specify bit i:

    w_i = |{ t_j ∈ T : t_j[i] = 1 }| / |{ t_j ∈ T : t_j[i] ≠ X }|    (1)

Let W = {w_1, w_2, ..., w_m} be a weight set for an m-input circuit and let P_j be the sampling probability of the test pattern t_j, i.e., the probability that t_j occurs during one test generation cycle:

    P_j = ∏_{i = 1..m, t_j[i] ≠ X} ( w_i · t_j[i] + (1 − w_i) · (1 − t_j[i]) )    (2)

In addition, the sampling probability of a test pattern with an LFSR, P(lfsr), is needed for the new weight calculation algorithm; it is obtained from equation (2) with all weights set to 0.5. To reduce the variance of the sampling probabilities, the average is calculated first:

    A = ( Σ_{j = 1..l} P_j ) / l    (3)

The variance of the sampling probabilities of the deterministic test patterns is

    V = ( Σ_{j = 1..l} (P_j − A)² ) / l    (4)

To reduce the variance, the effect of modifying the weight of the i-th bit is evaluated by E_i:

    E_i = Σ_{j : t_j[i] ≠ X} ( (A − P_j) · t_j[i] + (P_j − A) · (1 − t_j[i]) )    (5)

In the fault simulation with weighted random patterns, if the number of consecutive test patterns that detect no fault exceeds a certain limit, generation of the next weight set must begin, so this limit must be fixed in advance. In this paper the variable NP denotes this number of consecutive undetecting patterns allowed per weight set. After the whole process ends, the patterns applied by the weighted random pattern generator include these undetecting runs, which are excluded when computing the total test length:

    test patterns = total number of weighted random test patterns − (number of weight sets − 1) × NP

3. Selecting Candidates from the Deterministic Test Patterns

The weight calculation is based on a probabilistic mechanism: when a weight set is generated, the corresponding sampling probability of each test pattern is determined. Deterministic patterns whose sampling probability with the calculated weight set is lower than that with an LFSR are deleted from the deterministic test pattern set, and the remainder are used as candidates for the weight generation. This is motivated by the idea that weighted random patterns should have better sampling probabilities than those of an LFSR.
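Before walking through the selection procedure, the notation of section 2 can be made concrete: equation (1) is a ratio over the patterns that specify a bit, and equation (2) a product over the specified bits. A minimal Python sketch (the tuple encoding of patterns and the helper names are ours, not the paper's), checked against the worked example of section 5:

```python
from math import prod

X = None  # don't-care bit in a test pattern

def weight_set(patterns):
    """Equation (1): w_i = (#patterns with bit i = 1) / (#patterns with bit i specified)."""
    m = len(patterns[0])
    weights = []
    for i in range(m):
        specified = [t[i] for t in patterns if t[i] is not X]
        # fall back to 0.5 when no pattern specifies bit i (boundary case, our assumption)
        weights.append(sum(specified) / len(specified) if specified else 0.5)
    return weights

def sampling_prob(pattern, weights):
    """Equation (2): probability that the weighted generator emits this pattern."""
    return prod(w if bit == 1 else 1.0 - w
                for bit, w in zip(pattern, weights) if bit is not X)

# The four patterns of the worked example in section 5:
T = [(1, 1, X, 1, 1), (0, 1, 0, 1, 1), (0, X, X, 0, 0), (1, 0, 1, 1, 0)]
W = weight_set(T)                                  # ≈ [0.5, 0.667, 0.5, 0.75, 0.5]
P = [sampling_prob(t, W) for t in T]               # ≈ [0.125, 0.063, 0.063, 0.031]
P_lfsr = [sampling_prob(t, [0.5] * 5) for t in T]  # P(lfsr): all weights at 0.5
```

Setting every weight to 0.5 turns equation (2) into the LFSR sampling probability, which is why P(lfsr) needs no separate formula.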
Figure 1 shows the process of making a candidate list from the deterministic test patterns for generating a weight set. First of all, deterministic test generation produces a deterministic test pattern set, stored in the structure Pattern_list, which is used for the first weight calculation.
Then the weight set is calculated from the Pattern_list, and the sampling probabilities of all the deterministic test patterns are computed. The flag of a pattern is set if its sampling probability is lower than that of an LFSR, and the candidate set is updated. The change of the average sampling probability, sampling_prob = previous_prob − current_prob, is calculated, and the process is repeated until this difference becomes zero. At that point, the patterns whose flags are zero form the best candidate list for the weight calculation.

    deterministic_test_gen();
    while (sampling_prob != 0) {
        calculate_weight_set(Pattern_list);
        calculate_sampling_prob();
        for (i = 0; i < num_vec; i++)
            if (P(t_i) < P(lfsr))
                Pattern_list[i].flag = 1;
        update_candidate_set(Pattern_list);
        sampling_prob = previous_prob - current_prob;
    }
    calculate_weight_set(Pattern_list);

    Figure 1. Pseudo code for selecting candidates

The process of Figure 1 is illustrated with an example. Table 1 presents the first set of deterministic test patterns. The last row contains the calculated weight set; the P_j column gives the sampling probability of each pattern under this weight set, and the last column gives the sampling probability with an LFSR.

    Table 1. Sampling probabilities of deterministic test patterns

             bit1   bit2   bit3   bit4   bit5    P_j     P(lfsr)
    Test 1    0      0      1      X      X     0.132    0.125
    Test 2    1      0      0      X      X     0.106    0.125
    Test 3    X      1      0      1      X     0.054    0.125
    Test 4    X      1      1      0      X     0.181    0.125
    Test 5    X      1      1      1      X     0.136    0.125
    Test 6    X      0      X      0      0     0.159    0.125
    Test 7    X      X      1      1      1     0.153    0.125
    Test 8    1      0      1      X      X     0.264    0.125
    Test 9    X      1      X      0      0     0.127    0.125
    Test 10   X      0      X      0      1     0.158    0.125
    Weight   0.667  0.444  0.714  0.429  0.500    -        -

Since the sampling probabilities of Test 2 and Test 3 are lower than those of the LFSR, these two patterns are removed and the remainders are included in the candidate list. Table 2 shows the results after removing these patterns.
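A single filtering pass of this kind can be sketched in Python (X encoded as None; helper names are ours). On the ten patterns of Table 1 it removes Test 2 and Test 3 and reproduces the recalculated weight row of Table 2:

```python
from math import prod

X = None  # don't-care bit

def weight_set(patterns):
    # equation (1): fraction of 1s among the patterns that specify each bit
    m = len(patterns[0])
    cols = [[t[i] for t in patterns if t[i] is not X] for i in range(m)]
    return [sum(c) / len(c) if c else 0.5 for c in cols]

def sampling_prob(pattern, weights):
    # equation (2): product over the specified bits only
    return prod(w if b == 1 else 1.0 - w
                for b, w in zip(pattern, weights) if b is not X)

def lfsr_prob(pattern):
    # an unweighted LFSR drives every specified bit with probability 0.5
    return 0.5 ** sum(1 for b in pattern if b is not X)

def filter_candidates(patterns):
    """One pass of Figure 1: drop patterns whose sampling probability
    under the freshly computed weight set is below the LFSR's."""
    w = weight_set(patterns)
    return [t for t in patterns if sampling_prob(t, w) >= lfsr_prob(t)]

# Table 1 patterns (Test 1 .. Test 10)
table1 = [
    (0, 0, 1, X, X), (1, 0, 0, X, X), (X, 1, 0, 1, X), (X, 1, 1, 0, X),
    (X, 1, 1, 1, X), (X, 0, X, 0, 0), (X, X, 1, 1, 1), (1, 0, 1, X, X),
    (X, 1, X, 0, 0), (X, 0, X, 0, 1),
]
candidates = filter_candidates(table1)   # Tests 2 and 3 are removed
new_w = weight_set(candidates)           # ≈ [0.500, 0.429, 1.000, 0.333, 0.500]
```

In the full algorithm this pass repeats until the average sampling probability stops changing, as in the while loop of Figure 1.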
In Table 2 the P_j column gives the current sampling probability of each pattern and the last column gives the previous one. Every probability increased after removing the two patterns, which means the test patterns can be generated in fewer clock cycles; the average sampling probability rises from 0.164 to 0.211.

    Table 2. Sampling probabilities after removing the patterns

             bit1   bit2   bit3   bit4   bit5    P_j     Previous
    Test 1    0      0      1      X      X     0.286    0.132
    Test 4    X      1      1      0      X     0.286    0.181
    Test 5    X      1      1      1      X     0.143    0.136
    Test 6    X      0      X      0      0     0.190    0.159
    Test 7    X      X      1      1      1     0.167    0.153
    Test 8    1      0      1      X      X     0.286    0.264
    Test 9    X      1      X      0      0     0.143    0.127
    Test 10   X      0      X      0      1     0.190    0.158
    Weight   0.500  0.429  1.000  0.333  0.500    -        -

4. Variance of Sampling Probabilities

It is important to generate a weight set that reduces the number of test patterns. The minimum of the sampling probabilities should be increased, since the test application cycle count depends on the minimum sampling probability, and one way to increase the minimum is to decrease the variance of the sampling probabilities. The method for generating a weight set that decreases the variance is shown in Figure 2. After calculating the weight set from the patterns in the candidate list, the average A of the sampling probabilities is computed by equation (3), where l is the number of deterministic patterns, and the variance V is computed by equation (4). To reduce the variance, the effect of modifying the weight of the i-th bit is evaluated by E_i of equation (5). V is decreased by increasing w_i when t_j[i] = 1 and P_j < A, or when t_j[i] = 0 and P_j > A; V is decreased by decreasing w_i when t_j[i] = 0 and P_j < A, or when t_j[i] = 1 and P_j > A. Therefore, if E_i is positive, w_i should be increased, and if E_i is negative, w_i should be decreased.

The rate α, the factor by which a weight is increased or decreased, must also be fixed. α is a user-defined constant larger than 1: when E_i is positive, w_i is multiplied by α, and when E_i is negative, w_i is divided by α.
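One pass of this modification rule can be sketched in Python (helper names are ours; α = 1.05 as in the example below). Applied to the candidate patterns of Table 2 it reproduces the modified weight row of Table 3:

```python
from math import prod

X = None  # don't-care bit

def weight_set(patterns):
    # equation (1)
    m = len(patterns[0])
    cols = [[t[i] for t in patterns if t[i] is not X] for i in range(m)]
    return [sum(c) / len(c) if c else 0.5 for c in cols]

def sampling_prob(pattern, weights):
    # equation (2)
    return prod(w if b == 1 else 1.0 - w
                for b, w in zip(pattern, weights) if b is not X)

def modify_weights(patterns, weights, alpha=1.05):
    """One step of the section 4 rule: compute E_i by equation (5), then
    multiply w_i by alpha when E_i > 0 and divide by alpha when E_i < 0."""
    P = [sampling_prob(t, weights) for t in patterns]
    A = sum(P) / len(P)  # equation (3)
    m = len(weights)
    E = [sum((A - p) if t[i] == 1 else (p - A)
             for t, p in zip(patterns, P) if t[i] is not X)
         for i in range(m)]
    new_w = [w * alpha if e > 0 else w / alpha if e < 0 else w
             for w, e in zip(weights, E)]
    return E, new_w

# Candidate patterns of Table 2 (Tests 1, 4, 5, 6, 7, 8, 9, 10)
cand = [
    (0, 0, 1, X, X), (X, 1, 1, 0, X), (X, 1, 1, 1, X), (X, 0, X, 0, 0),
    (X, X, 1, 1, 1), (1, 0, 1, X, X), (X, 1, X, 0, 0), (X, 0, X, 0, 1),
]
w = weight_set(cand)             # ≈ [0.500, 0.429, 1.000, 0.333, 0.500]
E, w2 = modify_weights(cand, w)  # E ≈ [0, 0.17, -0.11, 0.08, -0.02] (cf. Table 3)
                                 # w2 ≈ [0.500, 0.450, 0.952, 0.350, 0.476]
```

Note that E_1 = 0 leaves w_1 unchanged; the paper does not spell this boundary case out, so keeping the weight fixed is our assumption.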

    Reduce_variance(Candidate_list) {
        calculate_weight_set(Candidate_list);
        calculate_sampling_prob(Candidate_list);
        calculate_average_variance();
        while (V_new < V_prev) {
            calculate_Ei();
            modify_weight_set();
            calculate_weight_set(Candidate_list);
            calculate_sampling_prob(Candidate_list);
            calculate_average_variance();
        }
    }

    Figure 2. Pseudo code for reducing variance

Table 3 shows the value of E_i for each bit, calculated by equation (5), and the weight set modified according to the sign of E_i; in this example α is set to 1.05. The sampling probabilities computed by equation (2) with the modified weight set are presented in Table 4.

    Table 3. E_i and the modified weight set

             bit1   bit2    bit3    bit4    bit5
    E_i       0     0.169  -0.113   0.077  -0.024
    Weight   0.500  0.450   0.952   0.350   0.476

    Table 4. Sampling probabilities after reducing variance

             bit1   bit2   bit3   bit4   bit5    P_j     Previous
    Test 1    0      0      1      X      X     0.262    0.286
    Test 4    X      1      1      0      X     0.278    0.286
    Test 5    X      1      1      1      X     0.150    0.143
    Test 6    X      0      X      0      0     0.187    0.190
    Test 7    X      X      1      1      1     0.159    0.167
    Test 8    1      0      1      X      X     0.262    0.286
    Test 9    X      1      X      0      0     0.153    0.143
    Test 10   X      0      X      0      1     0.187    0.190
    Weight   0.500  0.450  0.952  0.350  0.476    -        -

The deterministic test pattern least likely to occur during the weighted random test application cycle is the one with the minimum sampling probability. Before reducing the variance the minimum sampling probability is 0.143; after reducing the variance it is 0.150. Therefore reducing the variance leads to a reduction of the test application length.

5. Total Weight Set Generation Algorithm

Figure 3 shows the overall flow of the new weight set generation algorithm. First, during the pseudo-random test generation and fault simulation cycles, the easy-to-test faults, which are detected by random patterns, are removed from the fault list. Then an ATPG generates the deterministic test pattern set for the remaining faults. After the deterministic test generation, a weight set and the sampling probabilities are calculated, and the inefficient deterministic test patterns are removed from the pattern list; in this step the methodology described in section 3 is adopted. The weighted random test patterns are then generated and fault simulation is performed. Finally, the calculated weight set is rounded for a BIST implementation: each weight is rounded to one of the values 1/16, 1/8, 1/4, 3/8, 1/2, 5/8, 3/4, 15/16, and the values 0 and 1 are replaced by 1/16 and 15/16, respectively.

    Figure 3. New weight set generation algorithm

The rounding is performed as follows. Let x be the weight to be rounded, f() the rounding function, and y = f(x) the rounded weight:

    if x < 1/8,            then f(x) = 1/16
    if n/8 < x < (n+1)/8,  then f(x) = n/8,  where n = 1, 2, ..., 6
    if 7/8 < x,            then f(x) = 15/16
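The rounding function can be written directly; the behaviour at exact multiples of 1/8 is not specified in the text, and this sketch leaves such values unchanged:

```python
def round_weight(x):
    """Round a calculated weight to a BIST-implementable value
    (1/16, 1/8, 1/4, 3/8, 1/2, 5/8, 3/4, 15/16)."""
    if x < 1/8:
        return 1/16        # also covers a weight of 0
    if x > 7/8:
        return 15/16       # also covers a weight of 1
    return int(8 * x) / 8  # n/8 < x < (n+1)/8  ->  n/8, n = 1..6

# Rounding the modified weight set of Table 3:
rounded = [round_weight(w) for w in (0.500, 0.450, 0.952, 0.350, 0.476)]
# -> [0.5, 0.375, 0.9375, 0.25, 0.375]
```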

The deleted deterministic patterns are used for calculating the next weight set: the weight set over the deleted test patterns and the corresponding sampling probabilities are calculated, the patterns with lower probabilities than those of an LFSR are again removed, and the remainders form the new candidate set from which the next weight set is calculated. This process is repeated until all testable faults (excluding redundant faults) are removed from the fault list.

Example: Assume that the following test patterns are produced during deterministic test pattern generation:

    t1 = (1, 1, X, 1, 1)
    t2 = (0, 1, 0, 1, 1)
    t3 = (0, X, X, 0, 0)
    t4 = (1, 0, 1, 1, 0)

First, a weight set is calculated from the test set T = {t1, t2, t3, t4} and the sampling probabilities of the patterns are computed. The weight set is W = (0.500, 0.667, 0.500, 0.750, 0.500) and the sampling probabilities are P = (p1 = 0.125, p2 = 0.063, p3 = 0.063, p4 = 0.031). If the sampling probability of a test pattern is lower than that of an LFSR, the pattern is removed and the remainders constitute the candidate list. Since the LFSR sampling probabilities are P_lfsr = (0.063, 0.031, 0.125, 0.031), the pattern t2 is removed and the remainders constitute the candidate list C = (c1 = t1, c2 = t3, c3 = t4). From the candidate list, the weight set and the sampling probabilities are recalculated: W = (0.667, 0.500, 1.000, 0.667, 0.333) and P = (0.074, 0.074, 0.148). The average of the previous sampling probabilities is 0.073, while the average of the current sampling probabilities is 0.097, so the average increases. The weight set is then modified according to the sign of each E_i in order to reduce the variance of the sampling probabilities. Since E = (E1 = -0.075, E2 = 0.074, E3 = -0.075, E4 = -0.075, E5 = 0.075), the weight set is modified to W = (0.635, 0.525, 0.952, 0.635, 0.350). After the modification of the weight set, the rounding of the weight set and the fault simulation are performed. A new weight set is then calculated, and the above procedure is repeated until the target fault coverage is achieved.

6. Simulation Results

We implemented the new weight set generation algorithm in C on an Ultra SPARC 60 running the Solaris operating system; the results are derived on the ISCAS 85 and ISCAS 89 benchmark circuits. The efficiency of a weight calculation algorithm can be evaluated by the number of weight sets and the number of test patterns: the number of weight sets directly affects the BIST hardware overhead, and the number of test patterns affects the test time and hence the test cost. To prove its efficiency, the new algorithm is compared with Minimum Variance [11], which has shown the best results on the ISCAS 85 benchmark circuits. The comparison on the ISCAS 85 benchmark circuits with NP = 512 is shown in Table 5; the pattern counts in the tables are the total numbers of weighted random test patterns excluding the NP runs that detect no new fault. In all cases the new algorithm is more efficient than Minimum Variance. The number of test patterns of the new algorithm is fewer than or almost equal to that of [11], and the number of weight sets is fewer than or equal to that of [11] on all circuits. For some circuits the number of test patterns is slightly larger than that of [11], but the number of weight sets is smaller in those cases.

    Table 5. Comparison with Minimum Variance (NP = 512)

               New algorithm        Minimum Variance [11]
    Circuit    sets    patterns     sets    patterns
    c432        2        288         2        352
    c499        1        800         1        768
    c880        2        480         2        512
    c1355       1       1312         3        960
    c1908       3       2944         3       3296
    c2670       2       3680         3       5504
    c3540       3       2368         3       2400
    c5315       2       1248         2       2080
    c7552       4       7008         6       7392

Table 6 describes the results when NP = 1024.
The number of weight sets of the new algorithm on some larger circuits decreased from the NP = 512 case. The number of weight sets of the new algorithm is fewer than or equal to that of [11], and the number of test patterns is almost equal to or fewer than that of [11]. Thus the new algorithm also outperforms Minimum Variance when NP is 1024.

    Table 6. Comparison with Minimum Variance (NP = 1024)

               New algorithm        Minimum Variance [11]
    Circuit    sets    patterns     sets    patterns
    c432        2        288         2        352
    c499        1        800         1        768
    c880        2        480         2        544
    c1355       1       1312         2       1152
    c1908       3       2528         3       3648
    c2670       2       4224         3       5407
    c3540       1       3648         4       2272
    c5315       1       1632         2       1216
    c7552       3      10112         6       9376

Table 7 describes the comparison with Minimum Variance when NP = 256. The number of test patterns increased slightly compared with the NP = 512 case. All the weight set counts of the new algorithm are equal to or fewer than those of Minimum Variance, and the number of test patterns is almost equal to that of [11]. The new algorithm therefore also outperforms Minimum Variance when NP is 256, and the results show that, regardless of NP, the performance of the new algorithm is much higher than that of Minimum Variance on most of the benchmark circuits.

    Table 7. Comparison with Minimum Variance (NP = 256)

               New algorithm        Minimum Variance [11]
    Circuit    sets    patterns     sets    patterns
    c432        2        228         2        352
    c499        1        800         1        768
    c880        2        480         4        544
    c1355       1       1312         2        960
    c1908       2       2624         4       2880
    c2670       3       2720         5       3552
    c3540       3       2560         3       2400
    c5315       2       1472         2       2080
    c7552      12       5152        14       5120

The comparison with X-test [14] on the ISCAS 85 and ISCAS 89 benchmark circuits is shown in Table 8. Since X-test shows the best results on the ISCAS 89 benchmark circuits, it is used for the performance comparison with the new algorithm; Table 8 gives the results for NP = 512, since X-test provides results only for that value. For the ISCAS 89 benchmark circuits, the results for the large circuits are summarized. The weight set counts of the new algorithm are fewer than or equal to those of X-test except for cs1196, cs1238 and cs38584, and on those three circuits the new algorithm needs far fewer test patterns. On the small benchmark circuits the pattern counts are nearly the same, and on the large circuits the new algorithm produces fewer test patterns than X-test. In almost all benchmark circuits, therefore, the new algorithm requires weight sets and test patterns fewer than or equal to those of X-test.

    Table 8. Result of fault simulation (NP = 512)

               New algorithm        X-test [14]
    Circuit    sets    patterns     sets    patterns
    c432         2       288          2        843
    c499         1       800          1        704
    c880         2       480          1        796
    c1355        1      1312          2       2897
    c1908        3      2944          4       4872
    c2670        2      3680          6       7552
    c3540        3      2368          3       3171
    c5315        2      1248          2       1416
    c7552        4      7008         21      18849
    cs953        2      2240          2       1997
    cs1196       5      5760          4      10358
    cs1238       9      6528          7       9738
    cs1423       2      1088          2       1467
    cs9234       6     11808          9      23083
    cs13207      4      4384          3       7532
    cs15850      4      3104          5      14495
    cs38417      4     11488         26      39956
    cs38584      5      8608          3      14536

According to the results on the ISCAS 85 and ISCAS 89 benchmark circuits, the new algorithm generates fewer multiple weight sets and fewer test patterns than previous works.

7. Conclusion

Weighted random test pattern generation is widely known as a good solution for testing circuits with random pattern resistant faults; the technique can be implemented in built-in self-test or built-off test schemes. The performance of a weight calculation algorithm can be evaluated by the number of weight sets and the number of test patterns. The new algorithm for calculating weight sets is composed of two stages: one selects the candidate list for the weight calculation, and the other reduces the variance of the sampling probabilities. The first stage is based on the principle that weighted random test patterns should have better sampling probabilities than those of an LFSR. The second stage is based on the observation that the minimum sampling probability is one of the major factors determining the number of test patterns, and that reducing the variance of the sampling probabilities can increase the minimum sampling probability. The new algorithm thus builds high performance candidate lists by removing the deterministic test patterns with low sampling probabilities, and then modifies the weight set to reduce the variance of the sampling probabilities. The new algorithm generates more efficient multiple weight sets than previously known methods: in most benchmark circuits, the numbers of weight sets and test patterns are considerably lower than those of previous works, and in the cases where the number of weight sets is larger, the number of test patterns is much smaller. The simulation results on the ISCAS 85 and ISCAS 89 circuits prove that the new algorithm generates high performance weight sets.

References

[1] S. Venkataraman, J. Rajski, S. Hellebrand and S. Tarnick, An Efficient BIST Scheme Based on Reseeding of Multiple Polynomial Linear Feedback Shift Registers, Proc. of International Conference on Computer Aided Design, 1993, pp. 572-577.
[2] S. Hellebrand, S. Tarnick, J. Rajski and B. Courtois, Generation of Vector Patterns through Reseeding of Multiple-polynomial Linear Feedback Shift Registers, Proc. of International Test Conference, 1992, pp. 120-129.
[3] C. H. Chen, T. Karnik, and D. G. Saab, Structural and behavioral synthesis for testability techniques, IEEE Trans. on Computer-Aided Design, vol. 13, 1994, pp. 777-785.
[4] V. S. Iyengar and D. Brand, Synthesis of pseudorandom pattern testable designs, Proc. of International Test Conference, 1989, pp. 501-508.
[5] B. Seiss, P. Trouborst, and M. Schulz, Test point insertion for scan based BIST, Proc. of European Test Conference, 1991, pp. 253-262.
[6] Y. Savaria, M. Youssef, B. Kaminska, and M. Koudil, Automatic test point insertion for pseudo-random testing, Proc. of International Symposium on Circuits and Systems, 1991, pp. 1960-1963.
[7] K. T. Cheng and C. J. Lin, Timing-driven test point insertion for full-scan and partial-scan BIST, Proc. of International Test Conference, 1995, pp. 506-514.
[8] N. A. Touba and E. J. McCluskey, Test point insertion based on path tracing, Proc. of VLSI Test Symposium, 1996, pp. 2-8.
[9] F. Brglez, C. Gloster, and G. Kedem, Hardware-based weighted random pattern generation for boundary scan, Proc. of Design Automation Conference, 1989, pp. 264-274.
[10] H. J. Wunderlich, Multiple distributions for biased random test patterns, IEEE Trans. on Computer-Aided Design, vol. 9, 1990, pp. 584-593.
[11] H. K. Lee and S. Kang, A new weight set generation algorithm for weighted random pattern generation, Proc. of International Conference on Computer Design, 1999, pp. 160-165.
[12] M. Bershteyn, Calculation of Multiple Sets of Weights for Weighted Random Testing, Proc. of International Test Conference, 1993, pp. 1023-1030.
[13] R. Lisanke, F. Brglez, A. J. de Geus and D. Gregory, Testability-Driven Random Test-Pattern Generation, IEEE Trans. on Computer-Aided Design, 1987, pp. 1082-1087.
[14] B. Reeb and H.-J. Wunderlich, Deterministic pattern generation for weighted random pattern testing, Proc. of European Design and Test Conference, 1996, pp. 30-36.