Signal Encoding Schemes for Low-Power Interface Design
Ki-Wook Kim, Kwang-Hyun Baek, and Sung-Mo Kang

Abstract — Coupling effects between on-chip interconnects must be addressed in ultra-deep-submicron VLSI and system-on-a-chip (SoC) designs. We obtain lower and upper bounds on coupling effects for randomly distributed, independent data streams based on information theory. A novel low-power bus encoding scheme is proposed to minimize coupled switching, which dominates on-chip bus power consumption. The coupling-driven bus-invert method employs an encoder and decoder geared to minimize the coupled transition activity. Experimental results indicate that our encoding method reduces effective switching by as much as 30% on an 8-bit bus with one-cycle redundancy.

1 Introduction

The increased coupling effect between interconnects in ultra-deep-submicron technology not only aggravates the power-delay metrics, but also deteriorates signal integrity due to capacitive and inductive crosstalk noise. Conventional approaches to interconnect synthesis aim at optimal interconnect structures in terms of interconnect topology, wire width and spacing, and buffer location and sizes [1]. In this paper, we study a signal encoding scheme to minimize coupling effects between interconnects. Signal encoding schemes have been proposed to minimize transition activities on buses while ignoring cross-coupled capacitances. When statistical properties are unknown a priori, the bus-invert method [2] and the on-line adaptive scheme [3] can be applied to encode randomly distributed signals. On the other hand, highly correlated access patterns exhibit a spatio-temporal locality which can be exploited for energy reduction [4] in Gray code [5], [6], the T0 method [7], and the working-zone encoding [8]. Lower bounds for the minimum achievable transition activity have been derived for noiseless buses in [9] and for noisy buses in [10].
In [11], a segmentation method was introduced to reduce power consumption. Specification transformation approaches were used to reduce the number of memory accesses at the behavioral level [12]. The effectiveness of various encoding schemes was compared at the system level in [13]. Most of the previous bus-encoding schemes were designed to minimize transition activities on each signal line as if each line were isolated from its neighbors, hence ignoring coupling effects. Such an assumption may be valid for off-chip buses, where the impedances of the transmission lines are appropriately adjusted. However, this is not the case for long on-chip buses, which are particularly prevalent

(Ki-Wook Kim is with Pluris, Inc., CA. Kwang-Hyun Baek is with the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL. Sung-Mo Kang is with the University of California at Santa Cruz, CA.)
in a system-on-a-chip. For example, the wire aspect ratio (namely, wire thickness/width) is expected to be over 2.4 for intermediate wiring (namely, the third and fourth metal layers) in a 0.18 µm seven-layer metal process [14]. Accordingly, coupling has become an important issue with scaled supply voltages when we consider signal integrity and the power dissipated by coupling capacitances, referred to as coupling power. Shielding can be a way to avoid crosstalk problems, at the cost of area overhead.

In this paper, we propose a new encoding scheme for static on-chip bus structures to minimize coupling power. The key idea is that coupling effects can be alleviated by transforming the signal sequences traveling on closely placed on-chip buses. Small blocks of encoding and decoding logic are employed at the transmitter and receiver of on-chip buses as shown in Fig. 1. The encoder and decoder (codec) should have a low-complexity architecture so that the power and delay overhead of the codec circuitry can be compensated by significant savings in switching activity on tightly coupled buses. The authors in [15] proposed a generic bus encoding scheme considering the coupling effect. However, state explosion or complex codec circuitry may limit the practical application of their approach.

Fig. 1. Tightly cross-coupled on-chip buses in a system-level chip design: a CPU core, SRAM, DRAM, ASIC, and serial port communicate through bus interfaces and encoder/decoder blocks over on-chip buses with large coupling capacitances.

2 Interconnect Power Characteristics

The time-averaged charge Q_av provided by the power supply to all interconnect capacitances is given by

    Q_av = p C_tot V_dd ≡ Ĉ_tot V_dd = (Ĉ_s + Ĉ_x) V_dd    (1)

where p denotes the switching probability and C_tot denotes the total lumped capacitance. The effective capacitance Ĉ_tot is defined by both the physical capacitances and the switching activities.
The effective capacitance accounts for the time-averaged charge stored in the physical capacitances and provided by the power supply. One can model the physical capacitance of a wire with width w, length l, overlapping segment length y, and edge-to-edge distance d such that

    C_tot = (C_a w + 2 C_f) l + C_x (d_min / d) y    (2)
Fig. 2. A distributed RC model for the interconnects (driver resistance R_d, sources S1 and S2 on the primary and adjacent nets, self capacitances C_s, and coupling capacitance C_x).

where C_a is the unit-area capacitance (F/cm²) to the substrate, C_f is the unit-length fringe capacitance (F/cm), and C_x is the unit-length coupling capacitance (F/cm) at the minimum spacing d_min. To be precise, C_f and C_x are not constant due to complex fringing effects [16], but for our purpose the model is adequate. The area and fringe capacitances are essentially independent of the distance d; their sum is referred to as the self-capacitance C_s. Thus we have the wire capacitance model shown in Fig. 2: the wire capacitance is composed of the self capacitance C_s and the coupling capacitance C_x. For each capacitive component, we define the effective capacitance as

    Ĉ_s = Y C_s    (3)
    Ĉ_x = Z C_x    (4)

where Y and Z are the average numbers of effective transitions per cycle for C_s and C_x, respectively, computed as follows. First, we resolve the self transition activity Y for the self capacitance C_s. Let p^i_{x,y} denote the transitional probability that the value of signal i changes from x to y, which can be represented by the signal probability p(i_n = x) at the end of the n-th cycle and the conditional probability p(i_{n+1} = y | i_n = x), such that p^i_{x,y} = p(i_n = x) p(i_{n+1} = y | i_n = x), where x, y ∈ {0, 1}. If we assume that there is no glitch in the signals, the self transition activity Y is given by

    Y = p^i_{0,1}    (5)

since the capacitance C_s will be charged up only when a low-to-high transition takes place. Next, the coupled transition activity Z is computed according to the correlated switching between physically adjacent interconnects. There are four types of possible transitions when we consider dynamic charge distribution over coupling capacitances, as illustrated in Fig. 3. We suppose that there are two parallel wires placed with minimum spacing.
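As a quick sanity check on Eq. (5), the self transition activity can be estimated from a sampled bit stream. The sketch below is illustrative only (not the paper's implementation); it counts only 0→1 events, matching the charging-only convention:

```python
import random

def self_transition_activity(bits):
    """Estimate Y = p_{0,1}: the fraction of cycles with a 0->1
    (charging) transition on a single line (Eq. 5)."""
    rises = sum(1 for a, b in zip(bits, bits[1:]) if (a, b) == (0, 1))
    return rises / (len(bits) - 1)

random.seed(0)
stream = [random.randint(0, 1) for _ in range(100_000)]
# For independent equiprobable bits, p_{0,1} = p(0) * p(1|0) = 0.25
print(round(self_transition_activity(stream), 2))
```

For random data this converges to 0.25, one quarter of the cycles, since only one of the four state pairs (00, 01, 10, 11) draws charge from the supply.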
A type I transition occurs when one of the signals switches while the other stays unchanged; the coupling capacitance is then charged up to k_1 C_x V_dd, where the coefficient k_1 is introduced as a reference for the other types of transition. In a type II transition, one bus line switches from low to high while the other switches from high to low. The effective capacitance is then larger than k_1 by a factor of (k_2/k_1), which is usually two. In a type III transition, both signals switch simultaneously and C_x will not be charged. However, because of possible misalignment of the two transitions, the amount of power consumption varies according
to the dynamic characteristics by a factor of k_3. In a type IV transition, there is no dynamic charge distribution over the coupling capacitance; thus we set k_4 to zero. Throughout this paper, we assume k_2/k_1 = 2 and k_3 = k_4 = 0.

Fig. 3. Transition types: (a) single line switching (type I, k_1 Q); (b) both lines switching in opposite directions (type II, k_2 Q); (c) both lines switching in the same direction (type III, k_3 Q); (d) no switching (type IV, k_4 Q).

Let p^{ij}_{xy,qr} denote the joint transitional probability defined by p^{ij}_{xy,qr} = p(i_n = x, j_n = y) p(i_{n+1} = q, j_{n+1} = r | i_n = x, j_n = y), where x, y, q, r ∈ {0, 1}. Each type of transition contributes to the effective coupling capacitance between a wire i and a wire j as follows:

    Z = k_1 (p^{ij}_{00,01} + p^{ij}_{00,10} + p^{ij}_{11,01} + p^{ij}_{11,10}) + k_2 (p^{ij}_{01,10} + p^{ij}_{10,01}) + k_3 (p^{ij}_{00,11} + p^{ij}_{11,00})    (6)

We consider only the charging of the coupling capacitance in computing dynamic power dissipation. Suppose that two adjacent lines have a signal transition from 11 to 10. In the initial state of 11, both signals are high, so there is no potential across the coupling capacitance (ΔV = 0) and it holds no charge when the signals start to switch. At the end of the transition, the signals have switched to 10: one signal is high while the other is low. The resulting potential difference across the coupling capacitance (ΔV = V_dd) charges it up. Thus, the signal switching from 11 to 10 is counted toward dynamic power consumption. On the other hand, a discharging transition of the coupling capacitance is not taken into account in dynamic power. One example is the transition from 10 to 11: the initial potential across the coupling capacitance is V_dd, which decreases to 0 at the end of the transition. In other words, the coupling capacitance is discharged when such input vectors are applied.
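The weighting in Eq. (6) can be made concrete with a small helper that scores one cycle of a wire pair. This is an illustrative sketch (k_1 = 1, k_2 = 2, k_3 = k_4 = 0, charging events only), not the paper's code:

```python
import random

def coupling_weight(i_old, j_old, i_new, j_new):
    """Charge drawn from the supply by one coupling capacitance, in
    units of k1*Cx*Vdd; only charging events count (Eq. 6)."""
    if i_old == j_old and i_new != j_new:
        return 1    # type I charging: equal potentials become unequal
    if i_old != j_old and i_new != j_new and (i_old, j_old) != (i_new, j_new):
        return 2    # type II: 01 -> 10 or 10 -> 01 reverses the polarity
    return 0        # type III/IV and discharging transitions: no supply charge

def coupled_transition_activity(a, b):
    """Average coupling weight per cycle between two adjacent lines."""
    total = sum(coupling_weight(a[n], b[n], a[n + 1], b[n + 1])
                for n in range(len(a) - 1))
    return total / (len(a) - 1)

random.seed(1)
a = [random.randint(0, 1) for _ in range(200_000)]
b = [random.randint(0, 1) for _ in range(200_000)]
z = coupled_transition_activity(a, b)
print(round(z, 2))   # ≈ 0.5 for two independent random lines
```

For independent equiprobable lines the four k_1 terms of Eq. (6) contribute 4/16 and the two k_2 terms contribute 2·2/16, giving Z = 0.5.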
Because the power supply is used as the reference for power consumption, we do not consider discharging events in the dynamic power computation. The total lumped capacitance for a bus can be computed according to Equations (5) and (6). Accordingly, the dynamic power consumed by the interconnects and drivers is given by

    P_dyn = (Y (C_s + C_L) + Z C_x) V_dd² f_c    (7)

where C_L denotes the load capacitance and f_c the bus clock frequency.
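Eq. (7) can then be evaluated directly. The numbers below are purely illustrative placeholders (not values from the paper), chosen so that C_x/(C_s + C_L) = 4, the capacitance ratio assumed later in this work:

```python
def interconnect_dynamic_power(Y, Z, Cs, CL, Cx, Vdd, fc):
    """P_dyn = (Y*(Cs + CL) + Z*Cx) * Vdd^2 * fc   (Eq. 7)."""
    return (Y * (Cs + CL) + Z * Cx) * Vdd ** 2 * fc

# Hypothetical per-line values: Y = 0.25 self and Z = 0.5 coupled
# transitions per cycle, Cs + CL = 60 fF, Cx = 240 fF (so eta = 4),
# Vdd = 1.8 V, 100 MHz bus clock.
p = interconnect_dynamic_power(Y=0.25, Z=0.5, Cs=50e-15, CL=10e-15,
                               Cx=240e-15, Vdd=1.8, fc=100e6)
print(f"{p * 1e6:.2f} uW")   # 43.74 uW for this line
```

Note that with η = 4 the coupled term Z C_x dominates the self term even though Z is only twice Y, which is precisely why the encoding schemes below target coupled transitions.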
We define the capacitance ratio η = C_x / (C_s + C_L) for a terminated bus. We assume that the capacitance ratio η is four in the 0.18 µm technology used in this work. The capacitance ratio increases as the aspect ratio of the interconnect increases.

3 Information Theory Preliminaries

We define an information source, called an ensemble X, as a collection of discrete symbols x, each representing an indication, an event, or an indivisible and definable quantity or operation that the source transmits. Each symbol in an ensemble is associated with the probability mass function p(x) = Pr{X = x}, x ∈ X [17]. Thus the information source can transmit any one of the different symbols with a given probability. The information content of each symbol is quantified such that the length of a binary codeword is proportional to the inverse of the probability of occurrence, because the more unexpected an event is, the more information it conveys (namely, the more surprising the event) [18]. The information content of a symbol is represented by its self-information,

    I(x) = log_2 (1 / p(x)) bits.    (8)

The self-information can be readily derived in the special case of equiprobable events. Suppose that there are a total of q symbols in the alphabet; then the number of bits N needed to represent all q symbols satisfies q = 2^N. With equiprobable symbols we have p(x_i) = 1/q, so the number of bits to represent a symbol (the quantity of information content) is N = I(x) = log_2 (1 / p(x)) bits. The definition of self-information can be extended to quantify the information of the source by considering all the symbols in the alphabet. The entropy, namely the average information, of an ensemble X is defined by

    H(X) = − Σ_{x ∈ X} p(x) log_2 p(x).    (9)

The entropy is expressed in bits, with the convention that for p(x) = 0, 0 · log(1/0) = 0. The entropy measures the information content, or uncertainty, of the information source X. Equation (9) originates from the entropy of statistical thermodynamics, hence the term entropy.
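Equations (8) and (9) translate directly into code; a minimal illustrative sketch, using base-2 logarithms and the 0 · log(1/0) = 0 convention:

```python
import math

def self_information(p):
    """I(x) = log2(1/p(x)) bits (Eq. 8)."""
    return math.log2(1.0 / p)

def entropy(probs):
    """H(X) = -sum p(x) log2 p(x), with 0*log2(0) taken as 0 (Eq. 9)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(self_information(1 / 256))           # 8.0 bits for an equiprobable byte
print(entropy([0.25] * 4))                 # 2.0 bits for 4 equiprobable symbols
print(entropy([0.5, 0.5, 0.0]))            # 1.0; zero-probability term ignored
```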
As an extension to a single information source, the joint entropy is defined for multiple source alphabets that operate jointly or concurrently. The joint entropy H(X, Y) of a pair of discrete random variables (X, Y) with a joint distribution p(x, y) is defined as

    H(X, Y) = − Σ_{x ∈ X} Σ_{y ∈ Y} p(x, y) log p(x, y).    (10)

The joint entropy H(X, Y) represents the average bits of information per joint pair of symbols (x_i, y_i). The conditional entropy H(Y|X) of a pair of discrete random variables (X, Y) with a conditional distribution p(y|x) is defined as

    H(Y|X) = − Σ_{x ∈ X} Σ_{y ∈ Y} p(x, y) log p(y|x).    (11)
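A small numeric example helps check Eqs. (10) and (11). The joint distribution below is hypothetical, chosen only for illustration; the assertion verifies the standard chain rule H(X, Y) = H(X) + H(Y|X):

```python
import math

def entropy(dist):
    """H = -sum p log2 p over the given probabilities (Eq. 9)."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical joint distribution p(x, y) over X = {0, 1}, Y = {0, 1}
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

H_XY = entropy(pxy.values())                         # joint entropy, Eq. (10)
px = {x: pxy[(x, 0)] + pxy[(x, 1)] for x in (0, 1)}  # marginal p(x)
H_X = entropy(px.values())
H_Y_given_X = -sum(p * math.log2(p / px[x])          # conditional, Eq. (11)
                   for (x, y), p in pxy.items())

assert abs(H_XY - (H_X + H_Y_given_X)) < 1e-12       # chain rule
print(round(H_XY, 4), round(H_X, 4))                 # 1.8464 1.0
```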
Fig. 4. The entropy function H(p).

A chain rule holds for the entropy: H(X, Y) = H(X) + H(Y|X). As a natural extension of the entropy to a Markov source, the entropy rate of a stochastic process {X_i} is defined by

    H(X) = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n)    (12)

when the limit exists. The function H(x) is defined on the real interval [0, 1] as H(x) = −x log_2 x − (1 − x) log_2 (1 − x) bits, as plotted in Fig. 4. The function H(x) maps the probability of a binary-valued, independent variable to its entropy. The inverse function H^{-1}(y) of H is defined on the real interval [0, 1] as H^{-1}(y) = x if y = H(x) and x ∈ [0, 0.5]. The function H^{-1}(y) maps the entropy of a binary-valued, independent variable to a probability value that lies between 0 and 0.5.

4 Achievable Bounds on Coupling

We assume that the input signals X to the interconnects come from a first-order Markov information source which is ergodic and stationary. The source is also assumed to be memoryless, such that the probability distributions of subsequent signals are independent of the previous signals. The second extension of the input symbols, X², is employed, in which a block of symbols (two symbols in this case) is considered at a time rather than individual symbols, in order to reflect the coupling effects involving multiple interconnects. The following theorem bounds the transitional probability p of a signal [9].
Theorem 4.1: For a random process {X_i}, let the symbols be encoded in a uniquely decodable manner into {B_i} with an expected number of R bits/symbol. The transitional probability p_i of B_i is bounded by

    H^{-1}(H/R) ≤ p_i ≤ 1 − H^{-1}(H/R)    (13)

where H is the entropy rate of {X_i}. The bounds in Equation (13) are asymptotically achievable if {X_i} is a stationary and ergodic process.

Definition: The asymptotic coupling charge variation δ(m) is the cycle-averaged charge variation in the coupling capacitances caused by transitions on m bit lines out of N. The asymptotic coupling charge variation is expressed in multiples of k_1.

In order to derive bounds on the coupled transition activity, we employ Lemmas 4.1 and 4.2 below. Lemma 4.1 provides the asymptotic coupling charge variation when m bits out of N bits switch during a cycle. Lemma 4.2 gives the coupled transition activity when each bit switches, independently of the other bits, with the same transitional probability p. Theorem 4.2 then bounds the coupled transition activity for an independent data source, based on Theorem 4.1 and Lemma 4.2.

Lemma 4.1: Let {B_i}, 0 ≤ i ≤ N − 1, be N-bit data items. If m bits out of the N bits switch, the asymptotic coupling charge variation δ(m) is given by

    δ(m) = 2m (N − 1) / N    (14)

Fig. 5. Asymptotic coupling charge variation δ(1) when one bit out of N bits switches: (a) switching on an in-between line shifts charge in two coupling capacitances; (b) switching on a border line shifts charge in one coupling capacitance; averaged over the N lines, δ(1) = 2(N − 1)/N.

Proof: As a basis, Fig. 5 shows the asymptotic coupling charge variation when one bit out of N bit lines switches. There are two types of transitions in view of the coupling charge variation. On one hand,
when a bit line l_i (1 ≤ i ≤ N − 2) that is located between two lines switches, as shown in Fig. 5(a), charging or discharging takes place in two coupling capacitances, namely C_x^{i−1,i} and C_x^{i,i+1}. Given a uniform distribution of switching-line selections, the probability of selecting such a line among the N bits is (N − 2)/N. Because two coupling capacitances are associated with this type of transition, 2(N − 2)/N coupling charge variation takes place on average. On the other hand, if the switching bit line (l_0 or l_{N−1}) is neighbored by only one bus line, then there is only one coupling capacitance whose charge is affected by the switching, as shown in Fig. 5(b). Since there are two border bus lines, the probability of such events is 2/N, and the corresponding coupling charge variation is 2/N. Summing up, the asymptotic coupling charge variation for one-bit switching is δ(1) = 2(N − 1)/N.

Now we generalize to m switching bits, 0 ≤ m ≤ N. The number of combinations for choosing m bits out of N bits is the binomial coefficient C(N, m). When m bits switch, 2m coupling capacitances are involved in charge redistribution, except that a transition on a border bus line involves only one coupling capacitance. Accounting for the selections in which the border bus lines switch, we have the asymptotic coupling charge variation

    δ(m) = [2m C(N, m) − 2 C(N−2, m−1) − 2 C(N−2, m−2)] / C(N, m)
         = 2 (N − 1) C(N−1, m−1) / C(N, m)
         = 2m (N − 1) / N    (15)

In this work, dynamic power consumption is our concern; thus charging of the coupling capacitances is counted toward power consumption, while discharging events are not. Hence only half of the asymptotic coupling charge variation, namely the charging half, is considered in computing the coupled transition activity.
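Lemma 4.1 is easy to check by brute force: enumerate every choice of m switching lines, count how many coupling capacitances each switching line touches (two for inner lines, one for border lines), and average. A small verification sketch:

```python
from itertools import combinations
from math import comb

def delta(N, m):
    """Average coupling charge variation, in units of k1, when m of N
    bus lines switch (Lemma 4.1), by exhaustive enumeration."""
    total = 0
    for lines in combinations(range(N), m):
        # an inner line touches two coupling caps, a border line one
        total += sum(2 if 0 < i < N - 1 else 1 for i in lines)
    return total / comb(N, m)

N = 8
for m in range(1, N + 1):
    assert abs(delta(N, m) - 2 * m * (N - 1) / N) < 1e-9   # Eq. (14)
print(delta(N, 1))   # 2*(N-1)/N = 1.75 for N = 8
```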
Therefore, the coupled transition activity Z is represented by

    Z = Σ_{m=1}^{N} [δ(m)/2] q(m)

where q(m) denotes the probability of an m-bit transition out of N bits.

Lemma 4.2: Let {B_i} be a 0-1 valued random process in which each bit B_i (0 ≤ i ≤ N − 1) switches independently of the other bits with transitional probability p_i = p. Then the coupled transition activity Z per bus cycle is

    Z = (N − 1) p    (16)

if the transitions on {B_i} follow the binomial distribution.

Proof: If the transitions on {B_i} follow the binomial distribution, the probability of an m-bit transition is
given by C(N, m) p^m (1 − p)^{N−m}. Hence, the coupled transition activity Z is

    Z = Σ_{m=1}^{N} [δ(m)/2] C(N, m) p^m (1 − p)^{N−m} = (N − 1) p    (17)

Theorem 4.2: Let {X_i} be a random process with entropy rate H, whose symbols are encoded in a uniquely decodable manner into {B_i} with an expected number of R bits/symbol. Suppose each B_i switches independently with transitional probability p_i = p (0 ≤ i ≤ N − 1). Then the coupled transition activity per bus cycle is bounded by

    (N − 1) H^{-1}(H/R) ≤ Z ≤ (N − 1) [1 − H^{-1}(H/R)]    (18)

The bounds in Equation (18) are asymptotically achievable if {X_i} is a stationary and ergodic process.

Proof: Theorem 4.1 and Lemma 4.2 lead to Equation (18).

Example 4.1: Transition activities and the corresponding coupling effects can be mitigated by introducing redundancy. Two types of redundancy, namely spatial redundancy and temporal redundancy, can be used. Spatial redundancy reduces the density of transitions by introducing additional bus lines. For instance, the bus-invert algorithm can be implemented with spatial redundancy, wherein one extra bus line notifies the receiver of the phase of the transferred data. Alternatively, extra cycles can be used to make the transition density sparse, which is called temporal redundancy. In a bus-invert scheme based on temporal redundancy, the inversion of the following data bits is determined by an additional control bit transferred ahead of the data bits. If we use one additional cycle as temporal redundancy on an 8-bit bus (N = 8), then the entropy rate is H = 8 and R = H + 1 = 9 for randomly distributed independent data. According to Theorem 4.2, the lower limit on couplings per bus cycle is 7 H^{-1}(8/9) ≈ 2.14 couplings/symbol, or about 0.27 coupling/bit per bus cycle for a single bus line.

5 Low-Power Encoding Schemes

5.1 General Codec Architecture

Fig. 6 illustrates a generic codec architecture for two-bit signals. The encoder consists of two components: a predictor and an encoding function block E.
The prediction x̂(n) is a function of past input values, x̂(n) = f(x(n−1), x(n−2), ..., x(n−K)). We consider K = 1 for a low-complexity codec architecture. The combinational function E reduces the average number of self transitions and coupled switchings between y_i(n) and y_j(n). The encoding function E differs
Fig. 6. Low-power encoder-decoder framework.

from the architectures proposed in [3], [19] in that it takes as input the data on adjacent buses to account for the coupling effect. In general, an original input signal x_i(n) on the i-th bus line is encoded to y_i(n) at cycle n based on the encoding function y_i(n) = E(x_i(n), x̂_i(n), x_{i−1}(n), x̂_{i−1}(n), x_{i+1}(n), x̂_{i+1}(n)), where x_{i−1}(n) and x_{i+1}(n) are the signals on the bus lines adjacent to x_i(n). The input data x_i(n) and the prediction x̂_i(n) account for the reduction of self transition activity in y_i(n). Signal integrity and coupling power depend on both the current values of the neighboring signals (x_{i−1}(n) and x_{i+1}(n)) and their transition histories (x̂_{i−1}(n) and x̂_{i+1}(n)). As a mirror of the encoder, the decoder consists of two components: a decoding function block D and a register to keep the prediction x̂(n). The decoding function is given by x_i(n) = D(y_i(n), x̂_i(n), y_{i−1}(n), x̂_{i−1}(n), y_{i+1}(n), x̂_{i+1}(n)). The decoder D realizes the inverse function of the encoder E.

In choosing a codec scheme, we need to take two major issues into account. The first criterion is the tradeoff between architecture complexity and encoding efficiency. Rent's rule [20] states that there is a simple power-law relationship between the number of I/O terminals of a logic block and the number of gates contained in that block for a given degree of parallelism. It implies that considering adjacent input pins is likely to add logic gates to a codec functional block. An increased number of logic gates can in turn induce power overhead and longer propagation delay. Therefore, it should be ensured that the benefits of data encoding are large enough to compensate for the overhead of the codec architecture.
Secondly, the codec system should guarantee unique decodability, even in the presence of physical noise. Signal integrity can be ensured by adding spatial redundancy (extra control lines) or temporal redundancy (extra clock cycles), or by appropriately selecting the supply voltage, the sizes of the transmitter and receiver, and the clock frequency.
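The mirror structure of the codec (decoder D undoing encoder E through a one-cycle prediction register) can be illustrated with a deliberately simplified E. Here E is plain transition signaling, an XOR with the previous bus value; it ignores adjacent lines and is chosen only to make the unique-decodability property D(E(x)) = x concrete:

```python
# Minimal illustration of the encoder/decoder mirror of Fig. 6 with
# K = 1 prediction (the previous bus value); not the paper's E, which
# also examines adjacent lines.
def encode(x, x_prev):
    return [xi ^ pi for xi, pi in zip(x, x_prev)]   # E: transition signaling

def decode(y, x_prev):
    return [yi ^ pi for yi, pi in zip(y, x_prev)]   # D inverts E

x_prev = [0, 1, 1, 0]
x = [0, 1, 0, 1]
y = encode(x, x_prev)
assert decode(y, x_prev) == x    # unique decodability: D(E(x)) = x
print(y)                         # [0, 0, 1, 1]
```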
Fig. 7. 8-bit bus encoder for the coupling-driven bus-invert scheme: a coupling encoder (CE) examines each adjacent line pair (x_i(n), x_i(n−1), x_{i+1}(n), x_{i+1}(n−1)) and produces a two-bit codeword (ql_i, qh_i); an 8-out-of-15 majority voter over the codeword bits and inv_in generates the inversion signal inv, which conditionally inverts y_0(n), ..., y_7(n).

5.2 The Coupling-Driven Bus-Invert Scheme

The bus-invert method [2] is limited to reducing transition activity under the assumption that the coupling power contribution can be ignored. However, coupling power becomes a dominant component of dynamic power as wires become thinner and taller. To reflect this technology trend, we propose a coupling-driven bus-invert method that tackles the coupling power reduction problem. The bus-invert method flips the data signal when the number of switching bits is more than half the number of signal bits. In the same spirit, we invert the input vector when the coupling effect of the inverted signals is less than that of the original signals. The problems are then how to accurately account for the coupling effect, and how to implement the scheme effectively with low hardware overhead.

Before addressing these issues, we state our assumptions. First, synchronous latches are located at the transmitter side, so all transitions take place at the same time on the bus. The simultaneous transitions exclude type III transitions, consistent with setting k_3 = 0; this means the results we achieve are on the lower end of the power savings. Second, statistics on the information source are not given in advance.
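The inversion decision can be sketched as follows: score the coupling cost of transmitting the raw vector versus its complement, and invert when the complement is strictly cheaper. This is an illustrative model only (coupling on the data lines, k_1 = 1, k_2 = 2, k_3 = k_4 = 0; the self-capacitance and l_inv costs are ignored here, and the hardware of Fig. 7 reaches the decision with coupling encoders and a majority voter instead):

```python
def coupling_cost(prev, cur):
    """Total charging cost, in units of k1*Cx*Vdd, over all adjacent
    line pairs for a simultaneous bus transition prev -> cur."""
    cost = 0
    for i in range(len(cur) - 1):
        a0, b0, a1, b1 = prev[i], prev[i + 1], cur[i], cur[i + 1]
        if a0 == b0 and a1 != b1:
            cost += 1        # type I charging
        elif a0 != b0 and a1 != b1 and (a0, b0) != (a1, b1):
            cost += 2        # type II polarity reversal
    return cost

def cbi_encode(prev, data):
    """Coupling-driven bus invert: transmit the inverted vector (inv = 1)
    when it causes strictly less coupling than the raw vector."""
    inverted = [1 - b for b in data]
    if coupling_cost(prev, inverted) < coupling_cost(prev, data):
        return inverted, 1
    return data, 0

# Example 5.1 (data lines only): the bus goes from 0001 to 0010
sent, inv = cbi_encode([0, 0, 0, 1], [0, 0, 1, 0])
print(sent, inv)   # [1, 1, 0, 1] 1 -- inversion chosen, as in Fig. 8(b)
```

The raw transition 0001 → 0010 scores 3 (one type I plus one type II coupling), while the inverted transition 0001 → 1101 scores 1, so the sketch inverts, matching Example 5.1 below.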
Hence this scheme is suitable for data bus encoding, where it is difficult to extract accurate probabilistic information off-line.

An enumeration method is employed to represent the coupling effect. If a bus line l_i is located between two other lines, a signal transition on l_i can trigger charge shifts on both coupling capacitances connected to l_{i−1} and l_{i+1}. In other words, at most two couplings can be initiated by a single signal transition. Thus, 2(N − 1) bits are sufficient to represent the whole set of couplings in an
N-bit bus per bus cycle, because there are (N − 1) coupling capacitances and each capacitance can experience zero, one, or two units of coupling if the coupling intensity is assumed to be digitized. The encoder architecture is shown in Fig. 7. According to the type of correlated transition between neighboring bus lines, the coupling encoder generates a codeword as follows: 00 for a type III or IV transition, 01 for a type I transition, and 11 for a type II transition. The reason we assign 11 to a type II transition is that switching in different directions requires changing the polarity of the charge stored in the coupling capacitance, hence consuming about twice the charge required for a type I transition. The codeword 11, instead of 10, helps the majority voter decide on data inversion, because the majority voter outputs high when at least eight of its fifteen input lines are high. The majority voter is implemented using full-adder circuitry [2].

The control signal inv can be transmitted to the receiver using extra bus lines or extra transfer cycles. One problem with additional control lines is the area overhead, which may not be allowed due to physical constraints. In some cases, widening the space between signal bus lines can reduce the coupling effects more effectively than introducing extra control lines, because the coupling capacitance is inversely proportional to the net spacing. Temporal redundancy is an alternative that uses extra clock cycles to transfer the control signals. We assume that the input stream is transmitted in burst mode, which enables us to accommodate temporal redundancy [21].

Fig. 8. The CBI scheme versus the BI scheme: (a) transmission without inversion; (b) transmission with inversion.

Example 5.1: Fig. 8 illustrates the difference between the CBI scheme and the BI scheme.
Suppose we have a 4-bit bus and one additional control line l_inv to signal the inversion. Given an input vector transition from 0001 to 0010, the self capacitance C_s of l_2 is charged up, which corresponds to (C_s V_dd), when the data is transmitted without inversion as shown in Fig. 8(a). The coupling capacitance between l_1 and l_2 is also charged up, contributing (C_x V_dd), since this is a type I transition. The type II transition between l_2 and l_3 accounts for (2 C_x V_dd). Meanwhile, data inversion leads to a vector transition from 0001 to 1101, along with a control-line switch, as shown in Fig. 8(b). There are three self capacitances to be charged up, and one coupling
capacitance experiences a type I transition. In the conventional BI scheme, only the self capacitance is considered in determining inversion. The amount of charge shift for the raw data transfer is (C_s V_dd), as shown in Fig. 8(a), while data inversion leads to a greater charge shift on the self capacitances, namely (3 C_s V_dd). Thus the BI scheme sends the data without inversion, as in Fig. 8(a), because it is restricted to the charge shifts in the self capacitances. The CBI scheme, however, makes a different decision because the charge shift due to the coupling capacitances is also taken into account. Raw data transfer as in Fig. 8(a) consumes (C_s + 3 C_x) V_dd, while inverted data transfer as in Fig. 8(b) consumes (3 C_s + C_x) V_dd. Considering that the ratio C_x/C_s is significant (namely, four) in deep-submicron technology, data inversion is favorable to raw data transfer in terms of power saving. Thus the CBI scheme inverts the data, as shown in Fig. 8(b).

The following theorem gives the coupled transition activity for an independent data source in which each bit switches independently with transitional probability p, when the coupling-driven bus-invert algorithm is applied.

Theorem 5.1: Let {B_i} be an N-tuple of 0-1 valued random variables, B_i = <b_0, b_1, ..., b_{N−1}>, in which each bit b_k (0 ≤ k ≤ N − 1) switches independently of the other bits with transitional probability p_k = p = 1/2. Suppose {B_i} are encoded with the coupling-driven bus-invert algorithm with one-cycle redundancy. Then the coupled transition activity Z per bus cycle is given by

    Z = (N − 1)/2 − [(2N − 3)/2^N] C(N−2, N/2−1)    (19)

if the transitions on {B_i} follow the binomial distribution.

Proof: The coupling-driven bus-invert method flips the input signal when the number of couplings of the input vector is greater than that of the inverted input. The number of couplings for each number of switching bits is shown in Fig. 9(b).
If the number of switching bits m is greater than N/2, then data inversion is favorable, resulting in (N − m)-bit switching. Since the occurrence probability of m-bit switching equals that of (N − m)-bit switching, as shown in Fig. 9(a), flipping the input stream doubles the occurrence of (N − m)-bit switching, as illustrated in the CBI-encoded-data column of Fig. 9(b). Meanwhile, if m equals N/2, we need to consider the transitions case by case. There is no gain from inverting the signals when at least one of the border lines (line 0 or line N − 1) is switching, because transitions on a border bus line asymptotically have only half the coupling effect of transitions on inner bus lines. However, if all the switching takes place on inner bus lines, signal inversion
reduces the coupled transition activity by one coupling, which occurs with probability C(N−2, N/2)/2^N. Therefore,

    Z = (1/2^N) [ 2 Σ_{m=1}^{N/2−1} ((N−1)/N) m C(N, m) + ((N−1)/N) (N/2) C(N, N/2) − C(N−2, N/2) ]
      = (N − 1)/2 − [(2N − 3)/2^N] C(N−2, N/2−1)    (20)

Fig. 9. (a) Occurrence probability and (b) number of couplings versus the number of switching bits, for raw data and CBI-encoded data.

Example 5.2: The average number of couplings per bus cycle for an 8-bit bus is evaluated to be 2.48, which is close to the lower limit Z_min ≈ 2.14 with one-cycle redundancy as computed in Example 4.1. According to Equation (16), the average number of couplings per bus cycle for randomly distributed, independent data is 3.5, since p = 0.5 and N = 8. Hence, when we use the coupling-driven bus-invert method, the percentage reduction in couplings over raw data transmission is (3.5 − 2.48)/3.5 × 100 ≈ 29%.

The coupling-driven bus-invert scheme can employ multiple cycles to notify the receiver whether the transmitted data are inverted or not. Multiple extra cycles for control signals imply that the granularity of the unit codeword to be processed becomes finer, hence fewer couplings are likely to occur on the bus. However, this benefit comes at the cost of extra clock cycles.

6 Experimental Results

The proposed encoding scheme was implemented and power consumption was measured using HSPICE. All parasitics were extracted from the physical layout in a 0.18 µm technology.
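Before turning to the measurements, Theorem 5.1 and the 29% figure can be cross-checked by exhaustively enumerating all equally likely transition patterns of an 8-bit bus under the asymptotic incidence model of Lemma 4.1 (a verification sketch, not the paper's tooling):

```python
from itertools import combinations
from math import comb

def avg_couplings_cbi(N):
    """Average coupled transition activity under CBI with p = 1/2:
    for each of the 2^N transition patterns, invert whenever the
    complementary pattern has fewer coupling incidences."""
    def charging_half(lines):
        # half of the coupling-cap incidences (charging events only)
        return sum(2 if 0 < i < N - 1 else 1 for i in lines) / 2
    total = 0.0
    for m in range(N + 1):
        for lines in combinations(range(N), m):
            flipped = [i for i in range(N) if i not in lines]
            total += min(charging_half(lines), charging_half(flipped))
    return total / 2 ** N

N = 8
z_cbi = avg_couplings_cbi(N)
z_formula = (N - 1) / 2 - (2 * N - 3) / 2 ** N * comb(N - 2, N // 2 - 1)
print(round(z_cbi, 3), round(z_formula, 3))   # both 2.484, matching Eq. (19)
print(round(100 * (1 - z_cbi / 3.5)))         # 29 (% reduction vs. 3.5)
```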
TABLE I
Percentage reduction in transition activity compared to raw data transmission using the bus-invert method (BI) and the coupling-driven bus-invert method (CBI) with one-cycle redundancy for an 8-bit bus. Rows: data streams V1-V3, A1-A3, P1-P3, and Avg; columns: XP, SP, and TP for BI [2] and for CBI.

Applied data streams consist of MPEG video files (V1, V2, and V3 in Table I), MP3 audio files (A1, A2, and A3), and PDF format files (P1, P2, and P3). Table I compares the proposed coupling-driven bus-invert (CBI) method with the conventional bus-invert (BI) method. All values in Table I are percentage reductions relative to raw data transmission without any encoder/decoder. To compare the CBI scheme with the BI scheme fairly, we use one extra cycle to transfer control signals per eight data cycles, so transmission of eight bytes takes nine cycles. We implemented the majority voter using full-adder circuitry as in [2]. We compare the CBI scheme with the BI method because the BI method likewise does not require probabilistic information in advance. The results under the column SP present the percentage reduction in self transition activity, computed as ((SP(X) - SP(X'))/SP(X)) × 100, where SP(X) denotes the number of self transitions for the raw input streams and SP(X') denotes the number of self transitions for the encoded data. The results under the columns XP and TP present the percentage reduction in coupled transition activity and total transition activity, respectively. The total transition activity accounts for both self transition activity and coupled transition activity with a capacitance ratio η = 4. The CBI scheme yields better results than the BI method in terms of coupled transition activity, while the BI scheme shows better reduction in self transition activity. Because of the significant capacitance ratio η, the total savings in transition activity for the CBI scheme are greater than for the BI scheme.
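The reductions reported in Table I can be reproduced from a bit-stream trace with a few lines of code. The sketch below is our own illustration (the function names are not from the paper): self transitions are counted per bus line, each coupling capacitance experiences zero, one, or two units of coupling per cycle, and the total activity weights coupled transitions by the capacitance ratio η = 4.

```python
def self_transitions(trace):
    # trace: list of bus words, each a tuple of N bits for one cycle
    return sum(a != b for prev, cur in zip(trace, trace[1:])
               for a, b in zip(prev, cur))

def coupled_transitions(trace):
    total = 0
    for prev, cur in zip(trace, trace[1:]):
        d = [c - p for p, c in zip(prev, cur)]  # per-line transition: -1, 0, +1
        # each coupling capacitance sees zero, one, or two units of coupling
        total += sum(abs(d[i] - d[i + 1]) for i in range(len(d) - 1))
    return total

def total_activity(trace, eta=4):
    # total transition activity = self + eta * coupled (capacitance ratio eta)
    return self_transitions(trace) + eta * coupled_transitions(trace)

def pct_reduction(raw, enc):
    # e.g. the SP column: ((SP(X) - SP(X')) / SP(X)) * 100
    return (raw - enc) / raw * 100

trace = [(0, 0, 0, 0), (1, 0, 1, 0), (1, 1, 0, 0)]
print(self_transitions(trace), coupled_transitions(trace), total_activity(trace))
# prints: 4 7 32
```

For Table I, pct_reduction would be applied to the activity counts of the raw trace X and the encoded trace X'.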
The CBI results show a 31.8% average coupling reduction, which is in accordance with our theoretical bound of 29% in Example 5.2. The deviation comes from the locality and correlations in the input data streams, which are assumed to be negligible in the theoretical analysis.

TABLE II
Power consumed by interconnects P(I) in µW and codec circuitry P(E) in µW under various capacitance values in pF. Columns: C_x, C_s, P(I) for raw transmission, and P(I), P(E), P(T), and % Red. for CBI.

Table II presents the power consumed by both the codec circuitry and the bus lines for the CBI scheme. Using 0.18µm technology, power is measured for random data streams using HSPICE with a 1.6V power supply and the capacitance ratio η = 4, which is a realistic value [14]. The results under column P(I) correspond to the power in µW consumed by the bus lines, and the results under column P(E) correspond to the power overhead in µW due to the encoder/decoder circuits. In raw data transmission, the total power consumption corresponds to P(I). In our encoding scheme, the total power consumption under column P(T) is the sum of the interconnect power P(I) and the encoder/decoder power P(E). It should be pointed out that, within a realistic range of coupling capacitance, the power overhead P(E) due to the codec is relatively small. The percentage power savings obtained by using the coupling-driven bus-invert encoding scheme are shown in the column % Red.

7 Conclusion

Tightly coupled on-chip buses in a system-on-a-chip impose new requirements for interconnect power reduction and signal integrity. We obtained the lower and upper bounds on coupling between adjacent interconnects for randomly distributed, independent data streams using information theory. A novel bus encoding scheme was proposed for reducing the power consumed by on-chip buses by decreasing coupled switching. The coupling-driven bus-invert scheme reduces power consumption by about 30% with one-cycle redundancy. Simulation results using HSPICE indicate that the portion of power consumed by the encoder/decoder logic block for the scheme is fairly small in state-of-the-art technology.
Therefore, the overhead due to the encoder/decoder circuitry is compensated by significant interconnect power savings.

References

[1] J. Cong, An interconnect-centric design flow for nanometer technologies, in Int. Symp. VLSI Technology, Systems, and Applications, June 1999.
[2] M. R. Stan and W. P. Burleson, Bus-invert coding for low-power I/O, IEEE Transactions on VLSI Systems, vol. 3, no. 1, Mar. 1995.
[3] L. Benini, A. Macii, E. Macii, M. Poncino, and R. Scarsi, Synthesis of low-overhead interfaces for power-efficient communication over wide buses, in Proc. ACM/IEEE Design Automation Conf., 1999.
[4] P. R. Panda and N. D. Dutt, Reducing address bus transitions for low power memory mapping, in Proc. European Design and Test Conf., Mar. 1996.
[5] H. Mehta, R. M. Owens, and M. J. Irwin, Some issues in Gray code addressing, in Proc. Great Lakes Symp. VLSI, Mar. 1996.
[6] C. L. Su, C. Y. Tsui, and A. M. Despain, Saving power in the control path of embedded processors, IEEE Design and Test of Computers, vol. 11, no. 4, 1994.
[7] L. Benini, G. De Micheli, E. Macii, D. Sciuto, and C. Silvano, Asymptotic zero-transition activity encoding for address busses in low-power microprocessor-based systems, in Proc. Great Lakes Symp. VLSI, 1997.
[8] E. Musoll, T. Lang, and J. Cortadella, Working-zone encoding for reducing the energy in microprocessor address buses, IEEE Transactions on VLSI Systems, vol. 6, no. 4, Dec. 1998.
[9] S. Ramprasad, N. R. Shanbhag, and I. N. Hajj, Information-theoretic bounds on average signal transition activity, IEEE Transactions on VLSI Systems, vol. 7, no. 3, Sept. 1999.
[10] R. Hegde and N. R. Shanbhag, Energy-efficiency in presence of deep submicron noise, in Proc. IEEE/ACM Int. Conf. Computer-Aided Design, 1998.
[11] Y. Zhang, W. Ye, and M. J. Irwin, An alternative architecture for on-chip global interconnect: Segmented bus power modeling, in Asilomar Conf. on Signals, Systems, and Computers, 1998.
[12] F. Catthoor, F. Franssen, S. Wuytack, L. Nachtergaele, and H. De Man, Global communication and memory optimizing transformations for low power signal processing systems, in VLSI Signal Processing VII, 1994.
[13] W. Fornaciari, D. Sciuto, and C. Silvano, Power estimation for architectural exploration of HW/SW communication on system-level buses, in Int. Workshop on Hardware/Software Codesign, 1999.
[14] Semiconductor Industry Association, International Technology Roadmap for Semiconductors.
[15] P.-P. Sotiriadis and A. Chandrakasan, Bus energy minimization by transition pattern coding (TPC) in deep submicron technologies, in Proc. IEEE/ACM Int. Conf. Computer-Aided Design, 2000.
[16] J. Cong and L. He, Theory and algorithm of local-refinement-based optimization with application to device and interconnect sizing, IEEE Transactions on Computer-Aided Design, vol. 18, no. 4, Apr. 1999.
[17] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York, NY: John Wiley & Sons, 1991.
[18] R. Togneri, Information Theory and Coding, University of Western Australia.
[19] S. Ramprasad, N. R. Shanbhag, and I. N. Hajj, A coding framework for low-power address and data buses, IEEE Transactions on VLSI Systems, vol. 7, no. 2, June 1999.
[20] B. S. Landman and R. L. Russo, On a pin versus block relationship for partitions of logic graphs, IEEE Transactions on Computers, vol. C-20, 1971.
[21] M. R. Stan and W. P. Burleson, Two-dimensional codes for low power, in Proc. Int. Symp. Low Power Electronics and Design, Aug. 1996.
Response to Reviewers' Comments

We appreciate the comments and efforts of the associate editor and the reviewers on this paper. We agree with most of the review comments and have answered and modified the contents accordingly. The notations for the different fonts are: Bold face: the original comments from the reviewers; Plain: our answers to the review comments; Italic face: the modified/added contents in the paper.

Answers to Reviewer 1

C 1.1: The authors should be commended on preparing a well-written and edited manuscript. That being said, there are still several minor typographical/grammatical errors that could be cleaned up with a thorough proof-reading of the manuscript.

A 1.1: We fully agree with this comment and have modified the manuscript appropriately.

C 1.2: What is meant by <n> cycle redundancy? This term is used very extensively throughout the paper but is never adequately defined. This is particularly troubling as a naive interpretation can lead to questions regarding the ideas being presented here. This needs to be clarified in the introductory part of the paper.

A 1.2: We agree with your comments and have added sentences to the paper. (Page 9) Two types of redundancies, namely spatial redundancy and temporal redundancy, can be used to mitigate transition activities. Spatial redundancy reduces the density of transitions by introducing additional bus lines. For instance, the bus-invert algorithm can be implemented with spatial redundancy, wherein one extra bus line is used to notify the receiver of the phase of the transferred data. On the other hand, when extra cycles are used to make the transition density sparse, this is referred to as temporal redundancy. In a bus-invert scheme based on temporal redundancy, the inversion of the following data bits is determined by an additional control bit transferred ahead of the data bits.

C 1.3: The information theoretical discussion may be out of the comfort zone for the average TVLSI reader.
The ideas and terms are defined only using equations and definitions that will give the reader little, if any, intuition into the ideas trying to be expressed. A better introduction would be nice.
A 1.3: According to your comments, we clarified the definition and added comments. (Page 5) We define an information source, called an ensemble X, as a collection of discrete symbols x, each representing an indication, an event, or an indivisible and definable quantity or operation that the source transmits. Each symbol in an ensemble is associated with a probability mass function p(x) = Pr{X = x}, x ∈ X [17]. Thus the information source can transmit any one of its symbols with a given probability. The information content of each symbol is quantified in such a way that the length of a binary codeword is proportional to the inverse of the probability of occurrence, because the more unexpected an event is, the more information it conveys (namely, the more surprising the event) [18]. The information content of a symbol is represented by its self-information,

I(x) = \log_2 \frac{1}{p(x)} bits. (21)

The self-information can be readily derived in the special case of equiprobable events. Suppose that there are a total of q symbols in the alphabet; then the number of bits N needed to represent all q symbols satisfies q = 2^N. With equiprobable symbols we have p(x_i) = 1/q, so the number of bits to represent a symbol (its quantity of information content) is N = I(x) = -\log_2 p(x) = \log_2 q bits. The definition of self-information can be extended to quantify the information of the source by considering all the symbols in the alphabet. The entropy, namely the average information, of an ensemble X is defined by

H(X) = -\sum_{x \in X} p(x) \log_2 p(x). (22)

The entropy is expressed in bits, with the convention that for p(x) = 0, 0 · log(1/0) = 0. The entropy measures the information content, or uncertainty, of the information source X. Equation (9) originates from the entropy of statistical thermodynamics, hence the term entropy. As an extension to a single information source, the joint entropy is derived for multiple source alphabets that operate jointly or concurrently.
The joint entropy H(X, Y) of a pair of discrete random variables (X, Y) with a joint distribution p(x, y) is defined as

H(X, Y) = -\sum_{x \in X}\sum_{y \in Y} p(x, y) \log p(x, y). (23)

The joint entropy H(X, Y) represents the average number of bits of information per joint pair of symbols (x_i, y_i). The conditional entropy H(Y|X) of a pair of discrete random variables (X, Y) with a conditional distribution p(y|x) is defined as

H(Y|X) = -\sum_{x \in X}\sum_{y \in Y} p(x, y) \log p(y|x). (24)
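The entropies defined in (21)-(24) can be checked numerically. The short Python sketch below is our own illustration, using an arbitrary example distribution; it computes H(X), H(Y|X), and H(X, Y) and confirms the chain rule H(X, Y) = H(X) + H(Y|X).

```python
from math import log2

def H(dist):
    # Shannon entropy of a probability distribution, per Equation (22)
    return -sum(p * log2(p) for p in dist if p > 0)

# hypothetical joint distribution p(x, y) over X = {0, 1}, Y = {0, 1}
pxy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}
px = {0: 0.75, 1: 0.25}  # marginal p(x)

Hxy = H(pxy.values())    # joint entropy, Equation (23)
Hx = H(px.values())
# conditional entropy H(Y|X) = -sum p(x, y) log2 p(y|x), Equation (24)
Hy_given_x = -sum(p * log2(p / px[x]) for (x, y), p in pxy.items())
# chain rule: H(X, Y) = H(X) + H(Y|X)
print(Hxy, Hx + Hy_given_x)  # both are 1.75 bits (up to rounding)
```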
A chain rule holds for the entropy: H(X, Y) = H(X) + H(Y|X). As a natural extension of the entropy to a Markov source, the entropy rate of a stochastic process {X_i} is defined by

H(\mathcal{X}) = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \ldots, X_n) (25)

when the limit exists.

C 1.4: The abstract implies that several encoding schemes will be described but there appears to be only one (CBI). This should be clarified.

A 1.4: We agree with these comments and have clarified the abstract. (Page 1) Coupling effects between on-chip interconnects must be addressed in ultra deep submicron VLSI and system-on-a-chip (SoC) designs. We obtain the lower and upper bounds on coupling effects for randomly distributed, independent data streams based on information theory. A novel low-power bus encoding scheme is proposed to minimize the coupled switchings which dominate the on-chip bus power consumption. The coupling-driven bus-invert method employs an encoder and a decoder that are geared to minimize the coupled transition activities. Experimental results indicate that our encoding method saves as much as 30% of the effective switchings in an 8-bit bus with one-cycle redundancy.

C 1.5: Define the wire aspect ratio (its meaning should be obvious but nonetheless it should be defined in terms of length and width).

A 1.5: We added the definition of the aspect ratio to avoid possible confusion. (Page 2) For example, the wire aspect ratio (namely, wire thickness/width) is expected to be over 2.4.

C 1.6: The model derived in section 2 is a bit confusing. You seem to have ignored the effect of spacing on self-capacitance as the fringing contribution will be a function of the spacing due to its distribution between self- and coupling-capacitance components.

A 1.6: Thanks to this comment, we have made the notation clearer as follows. (Page 2) One can model the physical capacitance of a wire with width w, length l, overlapping segment length y, and edge-to-edge distance d such that

C_{tot} = (C_a w + 2 C_f) l + C_x y \frac{d_{min}}{d} (26)

where C_a is the unit area capacitance (F/cm^2) to the substrate, C_f is the unit length fringe capacitance (F/cm), and C_x is the unit length coupling capacitance (F/cm) at the minimum spacing d_min. To be precise, C_f and C_x are not constant due to complex fringing effects [16], but for our purpose the model is adequate. The area and fringe capacitances are nearly independent of the distance d, and their sum is referred to as the self-capacitance C_s. Thus, we have the wire capacitance model shown in Fig. 2.

C 1.7: The initial description of Type II transitions should mention that they are larger than k1 by a factor of (k2/k1), not k2.

A 1.7: We rectified the sentence as the comment pointed out. (Page 3) The effective capacitance will be larger than k_1 by a factor of (k_2/k_1), the value of which is usually two.

C 1.8: The initial definition of the transition encoding needs to be improved: the symbols of 0 and 1 are doing the encoding of the transitions and not the other way around.

A 1.8: Originally, transition encoding was introduced to explain the generic architecture of the encoder/decoder framework, but it is not specifically adopted in our implementation. So, to improve the readability of the paper, we omitted the transition encoding.

C 1.9: The definition of delta-prime(m) needs to be improved, a figure would definitely help with the definition (and probably shorten the text).

A 1.9: As advised in this comment, we simplified the definitions and made the paper consistent. (Page 7) As a basis, Fig. 5 shows the asymptotic coupling charge variation when one bit out of N bit lines switches. There are two types of transitions in view of the coupling charge variation. On one hand, when a bit line l_i (1 ≤ i ≤ N-2) that is located between two lines switches, as shown in Fig. 5(a), either charging or discharging takes place in the two coupling capacitances C_{x(i-1,i)} and C_{x(i,i+1)}, respectively.
Given a uniform distribution of switching line selections, the probability that the switching line is one of the N-2 inner lines is (N-2)/N. Because two coupling capacitances are associated with this type of transition, 2(N-2)/N coupling charge variation takes place on average. On the other hand, if the switching bit line (l_0 or l_{N-1}) is neighbored by only one bus line, then there is only one coupling capacitance whose charge is affected by the switching, as shown in Fig. 5(b). Since there are two border bus lines, the probability of such events is 2/N, and the corresponding asymptotic coupling charge variation is 2/N. To sum up, the asymptotic coupling charge variation for one-bit switching is δ(1) = 2(N-1)/N.

C 1.10: Is there an error in equation 16? When I try to re-derive it using your methods I get a different answer (Z = (N-1)p + (N-1)p^N).

A 1.10: We believe there is no error in the original derivation. As an example, let us suppose that the probability is p = 1/2 and N = 2. Then Z = \sum_{m=1}^{N} δ(m) p^m (1-p)^{N-m} becomes 1/2 (corresponding to our derivation (N-1)p), while the reviewer's expression Z = (N-1)p + (N-1)p^N gives 3/4.

C 1.11: What do the in-line equations given in Section 5.1 for yi(n) and xi(n) mean? Where does the nomenclature that is being used in their expression come from? Is it a typo?

A 1.11: We corrected the notation as follows. (Page 10) In general, an original input signal x_i(n) on the i-th bus line is encoded to y_i(n) at cycle n based on the encoding function y_i(n) = E(x_i(n), x_i(n-1), x_{i-1}(n), x_{i-1}(n-1), x_{i+1}(n), x_{i+1}(n-1)), where x_{i-1}(n) and x_{i+1}(n) are the signals on the bus lines adjacent to x_i(n).

C 1.12: Why are 2(N-1) bits sufficient to represent the whole set of couplings in a N-bit bus per bus cycle?

A 1.12: It is because there are (N-1) coupling capacitances and each capacitance can experience zero, one, or two units of coupling if the coupling intensity is assumed to be digitized. (Page 11) Thus, 2(N-1) bits are sufficient to represent the whole set of couplings in an N-bit bus per bus cycle, because there are (N-1) coupling capacitances and each capacitance can experience zero, one, or two units of coupling if the coupling intensity is assumed to be digitized.

C 1.13: How does the notion of using a resistor string and voltage comparator fit into a low-power framework?

A 1.13: Actually, we implemented the majority voter using full-adder circuitry, the results of which are shown in Table II.
As you pointed out, it may not be appropriate to use a resistor string and voltage comparator for low-power applications; thus, we omitted that sentence.
More informationEE241 - Spring 2000 Advanced Digital Integrated Circuits. Announcements
EE241 - Spring 2 Advanced Digital Integrated Circuits Lecture 11 Low Power-Low Energy Circuit Design Announcements Homework #2 due Friday, 3/3 by 5pm Midterm project reports due in two weeks - 3/7 by 5pm
More informationCHAPTER 7. Exercises 17/ / /2 2 0
CHAPTER 7 Exercises E7. (a) For the whole part, we have: Quotient Remainders 23/2 /2 5 5/2 2 2/2 0 /2 0 Reading the remainders in reverse order, we obtain: 23 0 = 0 2 For the fractional part we have 2
More informationLecture 6 I. CHANNEL CODING. X n (m) P Y X
6- Introduction to Information Theory Lecture 6 Lecturer: Haim Permuter Scribe: Yoav Eisenberg and Yakov Miron I. CHANNEL CODING We consider the following channel coding problem: m = {,2,..,2 nr} Encoder
More informationUniversity of Toronto. Final Exam
University of Toronto Final Exam Date - Apr 18, 011 Duration:.5 hrs ECE334 Digital Electronics Lecturer - D. Johns ANSWER QUESTIONS ON THESE SHEETS USING BACKS IF NECESSARY 1. Equation sheet is on last
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Sciences
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Sciences Analysis and Design of Digital Integrated Circuits (6.374) - Fall 2003 Quiz #2 Prof. Anantha Chandrakasan
More informationSwitching Activity Calculation of VLSI Adders
Switching Activity Calculation of VLSI Adders Dursun Baran, Mustafa Aktan, Hossein Karimiyan and Vojin G. Oklobdzija School of Electrical and Computer Engineering The University of Texas at Dallas, Richardson,
More informationCMPEN 411 VLSI Digital Circuits Spring Lecture 19: Adder Design
CMPEN 411 VLSI Digital Circuits Spring 2011 Lecture 19: Adder Design [Adapted from Rabaey s Digital Integrated Circuits, Second Edition, 2003 J. Rabaey, A. Chandrakasan, B. Nikolic] Sp11 CMPEN 411 L19
More information9/18/2008 GMU, ECE 680 Physical VLSI Design
ECE680: Physical VLSI Design Chapter III CMOS Device, Inverter, Combinational circuit Logic and Layout Part 3 Combinational Logic Gates (textbook chapter 6) 9/18/2008 GMU, ECE 680 Physical VLSI Design
More informationWhere Does Power Go in CMOS?
Power Dissipation Where Does Power Go in CMOS? Dynamic Power Consumption Charging and Discharging Capacitors Short Circuit Currents Short Circuit Path between Supply Rails during Switching Leakage Leaking
More informationEE 4TM4: Digital Communications II. Channel Capacity
EE 4TM4: Digital Communications II 1 Channel Capacity I. CHANNEL CODING THEOREM Definition 1: A rater is said to be achievable if there exists a sequence of(2 nr,n) codes such thatlim n P (n) e (C) = 0.
More informationALU A functional unit
ALU A functional unit that performs arithmetic operations such as ADD, SUB, MPY logical operations such as AND, OR, XOR, NOT on given data types: 8-,16-,32-, or 64-bit values A n-1 A n-2... A 1 A 0 B n-1
More informationLecture 25. Semiconductor Memories. Issues in Memory
Lecture 25 Semiconductor Memories Issues in Memory Memory Classification Memory Architectures TheMemoryCore Periphery 1 Semiconductor Memory Classification RWM NVRWM ROM Random Access Non-Random Access
More informationDigital Integrated Circuits A Design Perspective. Arithmetic Circuits. Jan M. Rabaey Anantha Chandrakasan Borivoje Nikolic.
Digital Integrated Circuits A Design Perspective Jan M. Rabaey Anantha Chandrakasan Borivoje Nikolic Arithmetic Circuits January, 2003 1 A Generic Digital Processor MEM ORY INPUT-OUTPUT CONTROL DATAPATH
More informationDepartment of Electrical & Electronics EE-333 DIGITAL SYSTEMS
Department of Electrical & Electronics EE-333 DIGITAL SYSTEMS 1) Given the two binary numbers X = 1010100 and Y = 1000011, perform the subtraction (a) X -Y and (b) Y - X using 2's complements. a) X = 1010100
More informationECE321 Electronics I
ECE321 Electronics I Lecture 1: Introduction to Digital Electronics Payman Zarkesh-Ha Office: ECE Bldg. 230B Office hours: Tuesday 2:00-3:00PM or by appointment E-mail: payman@ece.unm.edu Slide: 1 Textbook
More information5.0 CMOS Inverter. W.Kucewicz VLSICirciuit Design 1
5.0 CMOS Inverter W.Kucewicz VLSICirciuit Design 1 Properties Switching Threshold Dynamic Behaviour Capacitance Propagation Delay nmos/pmos Ratio Power Consumption Contents W.Kucewicz VLSICirciuit Design
More informationXarxes de distribució del senyal de. interferència electromagnètica, consum, soroll de conmutació.
Xarxes de distribució del senyal de rellotge. Clock skew, jitter, interferència electromagnètica, consum, soroll de conmutació. (transparències generades a partir de la presentació de Jan M. Rabaey, Anantha
More informationDigital Integrated Circuits A Design Perspective. Arithmetic Circuits. Jan M. Rabaey Anantha Chandrakasan Borivoje Nikolic.
Digital Integrated Circuits A Design Perspective Jan M. Rabaey Anantha Chandrakasan Borivoje Nikolic Arithmetic Circuits January, 2003 1 A Generic Digital Processor MEMORY INPUT-OUTPUT CONTROL DATAPATH
More informationEntropy as a measure of surprise
Entropy as a measure of surprise Lecture 5: Sam Roweis September 26, 25 What does information do? It removes uncertainty. Information Conveyed = Uncertainty Removed = Surprise Yielded. How should we quantify
More informationTAU' Low-power CMOS clock drivers. Mircea R. Stan, Wayne P. Burleson.
TAU'95 149 Low-power CMOS clock drivers Mircea R. Stan, Wayne P. Burleson mstan@risky.ecs.umass.edu, burleson@galois.ecs.umass.edu http://www.ecs.umass.edu/ece/vspgroup/index.html Abstract The clock tree
More informationCMOS Digital Integrated Circuits Lec 13 Semiconductor Memories
Lec 13 Semiconductor Memories 1 Semiconductor Memory Types Semiconductor Memories Read/Write (R/W) Memory or Random Access Memory (RAM) Read-Only Memory (ROM) Dynamic RAM (DRAM) Static RAM (SRAM) 1. Mask
More informationSemiconductor Memory Classification
Semiconductor Memory Classification Read-Write Memory Non-Volatile Read-Write Memory Read-Only Memory Random Access Non-Random Access EPROM E 2 PROM Mask-Programmed Programmable (PROM) SRAM FIFO FLASH
More informationInformation Theory and Coding Techniques
Information Theory and Coding Techniques Lecture 1.2: Introduction and Course Outlines Information Theory 1 Information Theory and Coding Techniques Prof. Ja-Ling Wu Department of Computer Science and
More informationEE241 - Spring 2001 Advanced Digital Integrated Circuits
EE241 - Spring 21 Advanced Digital Integrated Circuits Lecture 12 Low Power Design Self-Resetting Logic Signals are pulses, not levels 1 Self-Resetting Logic Sense-Amplifying Logic Matsui, JSSC 12/94 2
More informationCMPEN 411 VLSI Digital Circuits Spring Lecture 14: Designing for Low Power
CMPEN 411 VLSI Digital Circuits Spring 2012 Lecture 14: Designing for Low Power [Adapted from Rabaey s Digital Integrated Circuits, Second Edition, 2003 J. Rabaey, A. Chandrakasan, B. Nikolic] Sp12 CMPEN
More informationMOSIS REPORT. Spring MOSIS Report 1. MOSIS Report 2. MOSIS Report 3
MOSIS REPORT Spring 2010 MOSIS Report 1 MOSIS Report 2 MOSIS Report 3 MOSIS Report 1 Design of 4-bit counter using J-K flip flop I. Objective The purpose of this project is to design one 4-bit counter
More informationSample Test Paper - I
Scheme G Sample Test Paper - I Course Name : Computer Engineering Group Marks : 25 Hours: 1 Hrs. Q.1) Attempt any THREE: 09 Marks a) Define i) Propagation delay ii) Fan-in iii) Fan-out b) Convert the following:
More informationChapter Overview. Memory Classification. Memory Architectures. The Memory Core. Periphery. Reliability. Memory
SRAM Design Chapter Overview Classification Architectures The Core Periphery Reliability Semiconductor Classification RWM NVRWM ROM Random Access Non-Random Access EPROM E 2 PROM Mask-Programmed Programmable
More informationof Digital Electronics
26 Digital Electronics 729 Digital Electronics 26.1 Analog and Digital Signals 26.3 Binary Number System 26.5 Decimal to Binary Conversion 26.7 Octal Number System 26.9 Binary-Coded Decimal Code (BCD Code)
More informationDelay and Energy Consumption Analysis of Conventional SRAM
World Academy of Science, Engineering and Technology 13 8 Delay and Energy Consumption Analysis of Conventional SAM Arash Azizi-Mazreah, Mohammad T. Manzuri Shalmani, Hamid Barati, and Ali Barati Abstract
More informationImplementation of Clock Network Based on Clock Mesh
International Conference on Information Technology and Management Innovation (ICITMI 2015) Implementation of Clock Network Based on Clock Mesh He Xin 1, a *, Huang Xu 2,b and Li Yujing 3,c 1 Sichuan Institute
More informationSignal integrity in deep-sub-micron integrated circuits
Signal integrity in deep-sub-micron integrated circuits Alessandro Bogliolo abogliolo@ing.unife.it Outline Introduction General signaling scheme Noise sources and effects in DSM ICs Supply noise Synchronization
More informationCan the sample being transmitted be used to refine its own PDF estimate?
Can the sample being transmitted be used to refine its own PDF estimate? Dinei A. Florêncio and Patrice Simard Microsoft Research One Microsoft Way, Redmond, WA 98052 {dinei, patrice}@microsoft.com Abstract
More informationAnnouncements. EE141- Fall 2002 Lecture 25. Interconnect Effects I/O, Power Distribution
- Fall 2002 Lecture 25 Interconnect Effects I/O, Power Distribution Announcements Homework 9 due next Tuesday Hardware lab this week Project phase 2 due in two weeks 1 Today s Lecture Impact of interconnects»
More informationCombinational Logic Design
PEN 35 - igital System esign ombinational Logic esign hapter 3 Logic and omputer esign Fundamentals, 4 rd Ed., Mano 2008 Pearson Prentice Hall esign oncepts and utomation top-down design proceeds from
More information+ (50% contribution by each member)
Image Coding using EZW and QM coder ECE 533 Project Report Ahuja, Alok + Singh, Aarti + + (50% contribution by each member) Abstract This project involves Matlab implementation of the Embedded Zerotree
More informationNovel Bit Adder Using Arithmetic Logic Unit of QCA Technology
Novel Bit Adder Using Arithmetic Logic Unit of QCA Technology Uppoju Shiva Jyothi M.Tech (ES & VLSI Design), Malla Reddy Engineering College For Women, Secunderabad. Abstract: Quantum cellular automata
More informationCMSC 313 Lecture 25 Registers Memory Organization DRAM
CMSC 33 Lecture 25 Registers Memory Organization DRAM UMBC, CMSC33, Richard Chang A-75 Four-Bit Register Appendix A: Digital Logic Makes use of tri-state buffers so that multiple registers
More informationMAHALAKSHMI ENGINEERING COLLEGE-TRICHY QUESTION BANK UNIT V PART-A. 1. What is binary symmetric channel (AUC DEC 2006)
MAHALAKSHMI ENGINEERING COLLEGE-TRICHY QUESTION BANK SATELLITE COMMUNICATION DEPT./SEM.:ECE/VIII UNIT V PART-A 1. What is binary symmetric channel (AUC DEC 2006) 2. Define information rate? (AUC DEC 2007)
More informationMODULE III PHYSICAL DESIGN ISSUES
VLSI Digital Design MODULE III PHYSICAL DESIGN ISSUES 3.2 Power-supply and clock distribution EE - VDD -P2006 3:1 3.1.1 Power dissipation in CMOS gates Power dissipation importance Package Cost. Power
More informationTopics to be Covered. capacitance inductance transmission lines
Topics to be Covered Circuit Elements Switching Characteristics Power Dissipation Conductor Sizes Charge Sharing Design Margins Yield resistance capacitance inductance transmission lines Resistance of
More informationELEC546 Review of Information Theory
ELEC546 Review of Information Theory Vincent Lau 1/1/004 1 Review of Information Theory Entropy: Measure of uncertainty of a random variable X. The entropy of X, H(X), is given by: If X is a discrete random
More informationClock-Gating and Its Application to Low Power Design of Sequential Circuits
415 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL. 47, NO. 103, MARCH 2000 [13] E. Ott, C. Grebogi, and J. A. Yorke, Controlling chaos, Phys. Rev. Lett., vol. 64,
More informationL4: Sequential Building Blocks (Flip-flops, Latches and Registers)
L4: Sequential Building Blocks (Flip-flops, Latches and Registers) Acknowledgements: Lecture material adapted from R. Katz, G. Borriello, Contemporary Logic esign (second edition), Prentice-Hall/Pearson
More informationLogic BIST. Sungho Kang Yonsei University
Logic BIST Sungho Kang Yonsei University Outline Introduction Basics Issues Weighted Random Pattern Generation BIST Architectures Deterministic BIST Conclusion 2 Built In Self Test Test/ Normal Input Pattern
More informationL4: Sequential Building Blocks (Flip-flops, Latches and Registers)
L4: Sequential Building Blocks (Flip-flops, Latches and Registers) Acknowledgements:., Materials in this lecture are courtesy of the following people and used with permission. - Randy H. Katz (University
More informationEE141Microelettronica. CMOS Logic
Microelettronica CMOS Logic CMOS logic Power consumption in CMOS logic gates Where Does Power Go in CMOS? Dynamic Power Consumption Charging and Discharging Capacitors Short Circuit Currents Short Circuit
More informationESE 570: Digital Integrated Circuits and VLSI Fundamentals
ESE 570: Digital Integrated Circuits and VLSI Fundamentals Lec 18: March 27, 2018 Dynamic Logic, Charge Injection Lecture Outline! Sequential MOS Logic " D-Latch " Timing Constraints! Dynamic Logic " Domino
More information