Trace-Driven Steady-State Probability Estimation in FSMs with Application to Power Estimation*

Diana Marculescu, Radu Marculescu, Massoud Pedram
Department of Electrical Engineering - Systems, University of Southern California, Los Angeles, CA

*This research was supported by DARPA under contract no. F C1627 and NSF under award no. MIP.

Abstract - This paper illustrates, analytically and quantitatively, the effect of high-order temporal correlations on steady-state and transition probabilities in finite state machines (FSMs). As the main theoretical contribution, we extend the previous work done on steady-state probability calculation in FSMs to account for the complex spatiotemporal correlations which are present at the primary inputs when the target machine models real hardware and receives data from real applications. More precisely: 1) using the concept of constrained reachability analysis, the correct set of Chapman-Kolmogorov equations is constructed; and 2) based on stochastic complementation and iterative aggregation/disaggregation techniques, exact and approximate methods for finding the state occupancy probabilities in the target machine are presented. From a practical point of view, we show that assuming temporal independence, or even using first-order temporal models, is not sufficient because of the inaccuracies induced in steady-state and transition probability calculations. Experimental results show that, if the order of the source is underestimated, not only is the set of reachable states incorrectly determined, but the steady-state probability values can also be more than 100% off from the correct ones. This strongly impacts the accuracy of the total power estimates that can be obtained via probabilistic approaches.

1. Introduction

In the last decade, probabilistic approaches have received a lot of attention as a viable alternative to deterministic techniques for analyzing complex digital systems. Logic synthesis [1], verification [2], testing [3] and, more recently, low-power design [4] have benefited from using probabilistic techniques. In particular, the behavior of FSMs has been investigated using concepts from Markov chain (MC) theory. Studying the behavior of the MC provides us with different variables of interest for the original FSM. In this direction, [5][6] are excellent references where steady-state and transition probabilities (as variables of interest) are estimated for large FSMs. Both techniques are analytical in nature but, in order to manage complexity, make some simplifying assumptions, temporal independence of the primary inputs being the most notable one. As we will show in this paper, temporal correlations longer than one time step can significantly affect the overall behavior of the FSM and therefore result in very different values for the actual transition probabilities compared to those predicted in [5][6]. More interestingly, it will be shown that if one ignores the effect of finite-order statistics at the primary inputs of the FSM, it is possible to wrongly predict non-zero steady-state probabilities for some transient states (which normally occur at the beginning of operation, but disappear once the machine reaches its steady-state regime). Knowledge of the correct state occupancy probabilities is important in timing verification, state assignment, re-encoding for low power, and power estimation via probabilistic or statistical approaches.
Addressing these issues, the present paper extends the previous work reported in [5] to explicitly incorporate complex spatiotemporal correlations in the calculation of steady-state and transition probabilities for FSMs. The analysis itself relies on time-homogeneous discrete-parameter MCs which are used in two different ways: 1) an input-modeling MC is used to model the binary input stream that typifies the application data (also called a trace) feeding the target FSM (1); 2) a composite MC is used to model the serial connection of the input-modeling MC and the MC of the FSM itself. Studying the composite MC requires reachability analysis on the target machine. At this point, our work differs substantially from what other researchers have considered in the past, in the sense that our reachability analysis is constrained by the actual input sequence and accounts for the very specific way in which the input source excites the target FSM. Last but not least, the analysis of MCs involves sophisticated numerical techniques. To date, the Gauss-Jacobi and power methods have been extensively used in steady-state probability calculation [5]. We present instead two different algorithms, based on stochastic complementation and iterative aggregation/disaggregation, which provide deep insight into the theoretical aspects of MC analysis and an efficient solution for a large class of MCs, i.e. nearly completely decomposable (NCD) systems. The present paper thus improves the state of the art in two ways: 1) using the concept of constrained reachability analysis, it constructs the correct set of Chapman-Kolmogorov equations; 2) based on stochastic complementation and iterative aggregation/disaggregation techniques, it presents exact and approximate techniques for finding the state occupancy probabilities in the target machine.

The paper is organized as follows. Section 2 presents the basic definitions and notation on FSMs and MCs that will be used throughout the paper. In Section 3 we formulate the problem we want to solve and present the basic Markov model. Section 4 focuses on the constrained reachability analysis issue. In Section 5 we present exact and approximate methods for steady-state probability calculation, and we point out some issues regarding the complexity and convergence of the algorithms. Finally, we present some experimental results for common sequential benchmarks and conclude by summarizing our main contribution.

2. Preliminaries on finite-order MCs

In this section we present the basic definitions and notation. For complete documentation, we refer the reader to [7][8]. We consider only time-homogeneous discrete-parameter MCs with finite state space and assume that all states are recurrent (i.e. the probability of returning to a state after n ≥ 1 steps is greater than zero), since all transient states vanish after a finite number of steps.

(1) We point out that although the Markov model is derived for a particular input trace, it is completely general and represents in a compact form the whole class of input sequences having the same characteristics.

Definition 1. A discrete stochastic process {x_n}_{n≥0} is said to be a lag-k MC if, at any time step n ≥ k,

p(x_n = α_n | x_{n-1} = α_{n-1}, x_{n-2} = α_{n-2}, ..., x_0 = α_0) = p(x_n = α_n | x_{n-1} = α_{n-1}, x_{n-2} = α_{n-2}, ..., x_{n-k} = α_{n-k}).   (1)

If k = 1, the conditional probabilities p(x_n = α_n | x_{n-1} = α_{n-1}) are called single-step transition probabilities and represent the conditional probabilities of making a transition from state x_{n-1} to state x_n at time step n. In homogeneous MCs these probabilities are independent of n and are consequently written as p_ij = p(x_n = j | x_{n-1} = i) for all n = 1, 2, ... The matrix Q = {p_ij} is called the transition probability matrix. We note that Q is a stochastic matrix because its elements satisfy the following two properties: 0 ≤ p_ij ≤ 1 and Σ_j p_ij = 1. An equivalent description of the MC can be given in terms of its state transition graph (STG). Each node in the STG represents a state in the MC, and each edge from node i to node j is labelled with the one-step transition probability p_ij from state i to state j. It should be noted that any lag-k MC can be reduced to a lag-one MC based on the following result.

Proposition 1. [8] If {u_n}_{n≥1} is a lag-k MC, then {v_n}_{n≥1}, where v_n = (u_n, u_{n+1}, ..., u_{n+k-1}), is a multivariate lag-one MC.

For clarity, we will thus refer subsequently only to lag-one MCs; using Proposition 1, all results translate to lag-k MCs.

Definition 2. A MC is said to be nearly completely decomposable if its transition matrix Q can be written in the block form

Q = | Q_11  Q_12  ...  Q_1p |
    | Q_21  Q_22  ...  Q_2p |
    | ...   ...   ...  ...  |
    | Q_p1  Q_p2  ...  Q_pp |

where all non-zero elements in the off-diagonal blocks are small (the precise meaning of "small" will be defined later in Section 5) compared to those in the diagonal blocks. If the off-diagonal blocks are exactly zero, then the MC is said to be completely decomposable.

We now turn our attention to distributions defined on the states of a MC. We shall denote by π_i(n) the probability that the MC is in state i at step n, that is, π_i(n) = p(x_n = i). In vector notation, π(n) = (π_1(n), π_2(n), ..., π_i(n), ...), where π is a row vector and π(n) = π(0) Q^n, where π(0) denotes the initial state distribution of the chain. For nondecomposable and aperiodic MCs, it is shown that the limiting distribution π = lim_{n→∞} π(n) always exists and is independent of the initial probability distribution [7]. In addition, the following important result holds.

Proposition 2. [7] For a nondecomposable MC, the equation π Q = π with Σ_i π_i = 1 has a unique solution that represents the stationary distribution of the MC.

The unique solution of the equation in Proposition 2 (called the set of Chapman-Kolmogorov equations) can actually be determined by solving the system of equations π_j = Σ_i π_i p_ij with Σ_i π_i = 1.

3. FSM steady-state analysis: problem formulation

In this section, we formally introduce the problem we want to solve and present the two Markov models that we use to solve it: one associated with the state lines of the FSM and another one for the input sequence that feeds the target FSM. The probabilistic behavior of an FSM can be studied by regarding its STG as a MC. More precisely, by attaching to each out-going edge of each state in the target FSM a transition probability that corresponds to that particular transition, one can actually obtain a MC as defined in the previous section. Furthermore, studying the behavior of the underlying MC gives us different variables of interest for the original FSM.
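For illustration only, the Chapman-Kolmogorov equations of Proposition 2 can be solved directly with a few lines of NumPy; the three-state chain below is made up and is independent of the benchmarks used later.

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi Q = pi with sum(pi) = 1 for an irreducible, aperiodic
    lag-one MC given by its row-stochastic transition matrix Q."""
    n = Q.shape[0]
    # Chapman-Kolmogorov equations pi (Q - I) = 0, plus the normalization row.
    A = np.vstack([Q.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Made-up 3-state chain used only to exercise the solver.
Q = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
pi = stationary_distribution(Q)
print(pi, pi @ Q)   # pi and pi Q coincide at the fixed point
```

For a lag-k source, Proposition 1 applies: k-tuples of symbols are re-encoded as composite states and the same system is solved for the enlarged chain.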
To this end, the set of equations to solve is the one from Proposition 2 (where Q = Q_S is the transition matrix associated with the FSM). To set up the matrix Q_S, the authors in [5] consider that all external input combinations are equiprobable during the normal operation of the machine and, therefore, the one-step transition probability matrix can be obtained from the transition relation in a straightforward manner. However, in practice, the situation can be quite different: various input sequences may exercise the machine in different ways and thus produce substantially different STG structures. Because of the feedback lines, the behavior of the state lines themselves depends strongly on the characteristics of the input sequences present at the primary inputs; therefore, setting up a Q_S matrix which actually accounts for the influence of correlations at the primary inputs on the state lines of the FSM is a key (and nontrivial!) task. To construct the correct Q_S matrix which accounts for the effect of the actual input trace, we have to associate a finite-order MC to the primary input stream.

4. Sequence-driven reachability analysis

In this section we first present a theoretical framework for sequence-driven reachability analysis, followed by a practical solution to this problem. We point out that sequence-driven reachability analysis differs from classical reachability analysis in that it accounts for constraints on the inputs, that is, the possible set of input vectors applicable to the circuit and their sequencing. In [9] it has been shown that to any first-order MC one can associate a Stochastic Sequential Machine (SSM) that generates symbols according to the conditional probabilities of the initial MC. Specifically, a synthesis procedure for the SSM modeling the input sequence has been proposed by the authors. Based on this, the target FSM and its input can be viewed as in Fig.1a.

Fig.1: FSM analysis using input sequence modeling. (a) The input SSM, driven by auxiliary inputs, feeds the target FSM. (b) The target FSM logic computes the next state s_{n+1} = next(x_n, s_n) and the output z_n = out(x_n, s_n); the input is characterized by p(x_n, s_n).

The primary inputs of the SSM in Fig.1a (called auxiliary inputs) are generated according to a prescribed probability distribution such that the states of the SSM have exactly the desired probability distribution. Thus, the analysis can be done on the product machine (input SSM, target FSM), which has temporally uncorrelated inputs with a prescribed probability distribution. What we are interested in is the probability distribution on the state lines of the target FSM.

Referring to the general FSM in Fig.1b, we have so far proposed two interacting Markov models: one for the primary inputs {x_n}_{n≥0} and another one for the states {s_n}_{n≥0} (which characterizes the behavior of the machine itself). In fact, these two models can be conceptually merged via the joint transition probabilities p(x_n, s_n) and p(x_n, s_n | x_{n-1}, s_{n-1}). These probabilities completely characterize the input that feeds the next-state and output logic of the target circuit. For convenience, we introduce a new formulation based on matrices. As we can see in Fig.1b, the first-order MC at the primary inputs x_n can be characterized by the matrix of conditional probabilities Q_X = (q_ij), 1 ≤ i, j ≤ l, where q_ij = p(x_n = v_j | x_{n-1} = v_i) and {v_1, v_2, ..., v_l} represents the set of possible input patterns. On the other hand, the MC defined jointly for the primary inputs and state lines {x_n s_n}_{n≥0} can be characterized by the matrix Q_XS = (r_{ip,jq}), where r_{ip,jq} = p(x_n s_n = v_j u_q | x_{n-1} s_{n-1} = v_i u_p) and {u_1, u_2, ..., u_m} is the set of reachable states (Q_XS is the stochastic matrix of the product machine mentioned above). From this joint characterization, we can easily derive the state probabilities as p(s_n) = Σ_{all x_n} p(x_n, s_n), which are actually our variables of interest. Based on Theorem 2 in [17], the following result provides the starting point in finding the correct matrix Q_XS.

Proposition 3. The matrix Q_XS can be written in the form

Q_XS = | q_11 B_1  q_12 B_1  ...  q_1l B_1 |
       | q_21 B_2  q_22 B_2  ...  q_2l B_2 |
       | ...       ...       ...  ...      |
       | q_l1 B_l  q_l2 B_l  ...  q_ll B_l |

where {B_i}, 1 ≤ i ≤ l, is a set of m x m degenerate (i.e., containing only 0s and 1s) stochastic matrices defining the next-state function for input v_i. Specifically, if B_i = (b^i_pq), 1 ≤ p, q ≤ m, then Q_XS = (r_{ip,jq}), 1 ≤ i, j ≤ l, 1 ≤ p, q ≤ m, with b^i_pq = 1 if next(v_i, u_p) = u_q, and b^i_pq = 0 otherwise.

Corollary 1. The product machine (input SSM, target FSM) (as in Fig.1a) is also a SSM whose auxiliary inputs are excited using the same probability distribution as the one used for the input SSM.

From this point on, to compute the steady-state probability for the state lines of the target machine, we can apply any existing approach that computes the probabilities for the states of the product machine (input SSM, target FSM), which has the virtue of having temporally uncorrelated inputs. However, this approach can be very inefficient: the task of synthesizing the exact input SSM may require huge memory and computation time. Instead, we propose to model the input as a Dynamic Markov Tree of order 1 (DMT_1) [10]. The DMT_1 model contains information not only about the possible binary vectors that can appear at the inputs of the FSM, but also about the sequencing of these vectors. Additionally, the word-wise conditional probabilities for the primary inputs are easily extracted from such a model. The benefits of using DMT_1 for input modeling are threefold: the DMT_1 structure is constructed on demand, therefore it offers a very compact representation; the model provides a set of parameters that completely capture spatiotemporal correlations; and its structure is compatible with that of Binary Decision Diagrams (BDDs) [11], which have been successfully used in reachability analysis for FSMs [12][13]. To see how these advantages can be exploited, consider the following example.
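Before working through the example, the construction of Proposition 3 can be sketched, for illustration only, on a made-up one-input, two-state machine: the lag-one word-wise conditional probabilities are first estimated from a short artificial trace (a simplification of what the DMT_1 model provides), and the blocks q_ij B_i are then assembled into Q_XS. All names and values below are hypothetical.

```python
import numpy as np

def estimate_QX(trace, patterns):
    """Lag-one word-wise conditional probabilities q_ij = p(x_n = v_j | x_{n-1} = v_i),
    estimated from a trace of input patterns. Assumes every pattern occurs at least
    once as a predecessor in the trace."""
    idx = {v: i for i, v in enumerate(patterns)}
    counts = np.zeros((len(patterns), len(patterns)))
    for prev, cur in zip(trace, trace[1:]):
        counts[idx[prev], idx[cur]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def build_QXS(QX, patterns, states, next_state):
    """Assemble Q_XS per Proposition 3: block (i, j) equals q_ij * B_i, where
    B_i is the 0/1 matrix of the next-state function under input v_i."""
    l, m = len(patterns), len(states)
    s_idx = {u: p for p, u in enumerate(states)}
    QXS = np.zeros((l * m, l * m))
    for i, v in enumerate(patterns):
        B = np.zeros((m, m))
        for p, u in enumerate(states):
            B[p, s_idx[next_state(v, u)]] = 1.0          # b^i_pq
        for j in range(l):
            QXS[i * m:(i + 1) * m, j * m:(j + 1) * m] = QX[i, j] * B
    return QXS

def next_state(x, s):
    """Made-up next-state function: toggle the state when the input bit is 1."""
    return s if x == "0" else ("B" if s == "A" else "A")

patterns = ["0", "1"]
states = ["A", "B"]
trace = list("0110100110110101")                         # made-up input trace
QX = estimate_QX(trace, patterns)
QXS = build_QXS(QX, patterns, states, next_state)
print(QXS.sum(axis=1))                                   # each row sums to 1
```

The state-line probabilities p(s_n) then follow by summing the stationary vector of Q_XS over the input index, as noted above.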
Example 1: For the sequence S_1, the DMT_1 is given in Fig.2a. The corresponding BDD for this DMT_1 is depicted in Fig.2b. Every possible combination with a non-zero probability of occurrence in the DMT_1 is part of the ON-set of the corresponding BDD; everything else represents the OFF-set.

Fig.2: The tree DMT_1 for the sequence S_1 and the corresponding BDD.

The BDD corresponding to a given DMT_1 actually represents the transition relation δ of the machine that models the input to the target FSM (the input SSM in Fig.1a). Having this representation for the input, our task is then to find all the reachable (input, state) combinations. Let B = {0, 1}, let N be the number of primary input variables, and let δ: B^N x B^N → B be defined as δ(x^-, x^+) = 1 if vector x^+ can follow x^- on the input modeled as a lag-one MC (that is, if the corresponding conditional probability is non-zero), and zero otherwise. Also, let M be the number of state variables and next: B^M x B^N → B^M be the next-state function of the FSM. Then we may employ the following standard procedure [13] to compute the set C of reachable (input, state) combinations for the given FSM and input characterization:

C_0 = {(x_0, s_0) | x_0 is any possible initial input, s_0 is any possible initial state},
C_{i+1} = C_i ∪ {(x, s) | there exists (x', s') in C_i such that δ(x', x) = 1 and next(s', x') = s},
C = C_i if C_i = C_{i+1}.

The above steps can be performed completely symbolically with the aid of BDDs [12][13]. Once we have the complete set C of possible (input, state) combinations, we can easily build the matrix Q_XS based on Proposition 3. We should point out that, in general, the matrix Q_XS 1) may have transient states or 2) may be decomposable. However, these cases can be dealt with in a way similar to the approach presented in [5], where 1) transient states are eliminated and 2) the problem is reduced to finding the steady-state distribution for each strongly connected component of the underlying MC. In what follows, we will thus refer only to irreducible MCs or matrices, that is, those in which each state is reachable from any other state in a finite number of steps. There is no theoretical limitation preventing the extension of this assumption to the case of reducible MCs or MCs with multiple components.

5. Steady-state probability computation

We have by now constructed the correct matrix Q_XS (which completely characterizes the FSM behavior) and are therefore ready to solve the basic equation π Q_XS = π (with Σ_i π_i = 1).

5.1 Classical methods

Finding the stationary distribution of MCs with relatively few states is not a difficult problem, and standard techniques based on solving systems of linear equations do exist [15]. To find the stationary distribution of a MC, one can always employ direct or iterative methods to solve the Chapman-Kolmogorov equations. Both types of methods work with the stochastic matrix as a whole. However, when the matrix size is large, we must resort to decompositional methods that try to solve smaller problems and then aggregate their solutions to find the needed stationary distribution.

5.2 Stochastic complementation

For large-scale problems, it is natural to attempt to somehow uncouple (or decompose) the original MC into smaller chains (which are therefore easier to analyze) and finally, having these partial solutions, to produce the global stationary distribution that corresponds to the original chain.

Definition 3. [16] Let Q be an n x n irreducible stochastic matrix partitioned as

Q = | Q_11  Q_12  ...  Q_1p |
    | Q_21  Q_22  ...  Q_2p |   (2)
    | ...   ...   ...  ...  |
    | Q_p1  Q_p2  ...  Q_pp |

where all diagonal blocks are square. For a given index i, let Q_i denote the principal block submatrix of Q obtained by deleting the i-th row and i-th column of blocks from Q, and let Q_i* and Q_*i designate

Q_i* = [ Q_i1  Q_i2  ...  Q_{i,i-1}  Q_{i,i+1}  ...  Q_ip ] and Q_*i = [ Q_1i  Q_2i  ...  Q_{i-1,i}  Q_{i+1,i}  ...  Q_pi ]^T.   (3)

That is, Q_i* is the i-th row of blocks with Q_ii removed, and Q_*i is the i-th column of blocks with Q_ii removed. The stochastic complement of Q_ii is defined to be the matrix

S_ii = Q_ii + Q_i* (I - Q_i)^{-1} Q_*i,   (4)

where I is the identity matrix. It can be shown that every stochastic complement in Q is also an irreducible matrix. In addition, the following theorem has been proven [16].

Theorem 1. Let Q be an n x n irreducible stochastic matrix as in (2) whose stationary probability vector π can be written as π = (ξ_1 Φ_1, ξ_2 Φ_2, ..., ξ_p Φ_p) with Φ_i e = 1 for i = 1, 2, ..., p, where e is the column vector e = (1, 1, ..., 1)^T. Then Φ_i is the unique stationary probability vector of the stochastic complement S_ii, and ξ = (ξ_1, ξ_2, ..., ξ_p) is the unique stationary probability vector of the p x p irreducible stochastic matrix A (called the coupling matrix) whose entries are defined by a_ij = Φ_i Q_ij e.

The coupling matrix A corresponds to an MC in which states belonging to the same block of the partition are collapsed into a single state. Thus, ξ describes the steady-state probability of being in such a set of states. Using this important theorem, the following exact procedure can be used to compute the stationary probability vector (the input to this procedure is the matrix Q given in Definition 3):

1. Partition Q into a p x p block matrix with square diagonal blocks.
2. Form the stochastic complement of each diagonal block: S_ii = Q_ii + Q_i* (I - Q_i)^{-1} Q_*i, i = 1, 2, ..., p.
3. Compute the stationary probability vector of each stochastic complement: Φ_i S_ii = Φ_i; Φ_i e = 1, i = 1, 2, ..., p.
4. Form the coupling matrix A whose elements are given by a_ij = Φ_i Q_ij e.
5. Compute the stationary probability vector of A: ξ A = ξ; ξ e = 1.
6. Construct the stationary probability vector of Q as π = (ξ_1 Φ_1, ξ_2 Φ_2, ..., ξ_p Φ_p).

Fig.3: The stochastic complementation algorithm

We point out that the analysis based on stochastic complementation does not depend in any way on the matrix Q being NCD and, when implementing the stochastic complement approach, we may choose a partitioning that is convenient for us. For instance, based on the functionality of the FSM, we may partition the matrix Q_XS such that the combinations (x_i, s_i), (x_j, s_j) are put in the same block if and only if s_i = s_j. In this case, the matrix Q_XS is partitioned as in (2), where each submatrix Q_pq (1 ≤ p, q ≤ m) has the form (using the notation in Proposition 3; this partitioning is state-oriented, while the one in Proposition 3 is input-oriented)

Q_pq = ( q_ij b^i_pq ), 1 ≤ i, j ≤ l.   (5)

The following remarkable result holds.

Theorem 2. If the stochastic complementation algorithm is applied to the matrix Q_XS and the partitioning is done as in (5), then A is the matrix Q_S associated with the MC for the state lines, and the corresponding probability distribution is given by ξ.

This important result basically justifies the applicability of stochastic complementation to FSM analysis. We also note that this method has the important feature of being exact but, unfortunately, it is computationally inefficient on monoprocessor machines (it involves the inversion of a large matrix (I - Q_i)). Its contribution lies primarily in the insight it provides into the theoretical aspects of NCD systems.

5.3 Iterative aggregation/disaggregation

In this section, we present an iterative algorithm based on approximate decomposition that rapidly converges to the exact solution when the MC is NCD. The pioneering work on NCD systems comes from Simon and Ando [18], who studied the dynamic behavior of linear systems. The idea behind the dynamic behavior of NCD systems is the existence of two operational regimes:
- a short-run dynamics, when strong interactions within each subsystem are dominant and quickly force each subsystem to a local equilibrium, almost independently of what is happening in the other subsystems;
- a long-run dynamics, when the weak interactions among groups begin to become important and the whole system moves toward a global equilibrium; in this global equilibrium the relative values attained by the states at the end of the short-run dynamics period are maintained.

As a consequence, the state space of the global MC can be partitioned into disjoint sets, with strong interactions among the states of a subset but weak interactions among the subsets themselves. This way, each subproblem can be solved separately and the global solution is then constructed from the partial solutions. Iterative aggregation/disaggregation (IAD) methods are particularly useful when the global MC is NCD. More precisely, IAD methods work on the partitioned state space as an aggregation (or coupling) step followed by a disaggregation one. The coupling step involves generating a stochastic matrix of block transition probabilities (the coupling or aggregation matrix) and then determining its stationary probability vector. The disaggregation step computes an approximation to the probability of each state aggregated within the same block. The basic iterative algorithm is called KMS (after its authors Khoury-McAllister-Stewart) [14] and is described in Fig.4. (Once again, the input of the algorithm is the matrix Q as in Definition 3.)

1. Let π^(0) = (π_1^(0), π_2^(0), ..., π_p^(0)) be a given initial approximation to the solution π, and set m = 1.
2. Compute Φ^(m-1): for i = 1, 2, ..., p, Φ_i^(m-1) = π_i^(m-1) / ||π_i^(m-1)||_1 (the norm is defined as ||x||_1 = Σ_j |x_j|).
3. Construct the aggregation matrix A^(m-1) whose elements are given by (A^(m-1))_ij = Φ_i^(m-1) Q_ij e.
4. Solve the eigenvector problem ξ^(m-1) A^(m-1) = ξ^(m-1); ξ^(m-1) e = 1.
5. (a) Compute the row vector z^(m) = (ξ_1^(m-1) Φ_1^(m-1), ξ_2^(m-1) Φ_2^(m-1), ..., ξ_p^(m-1) Φ_p^(m-1)).
   (b) Find π^(m): for k = 1, 2, ..., p, solve the system of equations π_k^(m) = π_k^(m) Q_kk + Σ_{j<k} π_j^(m) Q_jk + Σ_{j>k} z_j^(m) Q_jk.
6. Test for convergence: if the accuracy is sufficient, then stop and take π^(m) as the solution vector; otherwise, set m = m + 1 and go to step 2.

Fig.4: The KMS algorithm

In this case, the partitioning criterion is purely numerical. States are aggregated within the same block if they interact strongly enough to favor short-run dynamics. This way, the off-diagonal elements in the global matrix are smaller than those in the blocks on the main diagonal and, therefore, the interactions among subsets are minimized. In practice, finding the partitioning of an NCD stochastic matrix is not a trivial task. One way to do it is to ignore the entries in the matrix that are less than some threshold ε and then find the strongly connected components of the underlying MC [15]. Next, the threshold may be increased and the same analysis done on the components already found. We should note that the lower the threshold, the higher the rate of convergence, at the expense of larger blocks. On the other hand, if the partitioning is refined until the blocks become manageable, the convergence rate slows down due to the larger threshold used. More formally, under fairly general conditions, the following result holds for NCD matrices.

Theorem 3. [14] If the matrix Q is partitioned such that ||Q_ii||_1 = O(1) and ||Q_ij||_1 = O(ε) for i ≠ j, then the error in the approximate solution obtained with the iterative aggregation/disaggregation algorithm is reduced by a factor of ε at each iteration.

In practice, the KMS algorithm offers the attractive feature of working on individual blocks instead of the whole matrix Q_XS.
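For illustration, the following NumPy fragment gives one possible reading of the KMS iteration of Fig.4 on a small dense matrix. The 4-state NCD chain and its two-block partition are made up, and this sketch does not reproduce our actual sparse, hierarchical implementation.

```python
import numpy as np

def stationary(P):
    """Stationary vector of a small irreducible stochastic matrix P (pi P = pi, sum pi = 1)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def kms(Q, blocks, tol=1e-5, max_iter=100):
    """One possible reading of the KMS iteration of Fig.4.
    Q is the full stochastic matrix; blocks is a list of index arrays
    describing the (NCD) partition of the state space."""
    n, p = Q.shape[0], len(blocks)
    pi = np.ones(n) / n                                    # step 1: initial approximation
    for _ in range(max_iter):
        phis = [pi[b] / pi[b].sum() for b in blocks]       # step 2: block-conditional vectors
        A = np.array([[phis[i] @ Q[np.ix_(blocks[i], blocks[j])].sum(axis=1)
                       for j in range(p)] for i in range(p)])   # step 3: coupling matrix
        xi = stationary(A)                                 # step 4
        z = np.zeros(n)                                    # step 5(a)
        for i, b in enumerate(blocks):
            z[b] = xi[i] * phis[i]
        new_pi = np.zeros(n)                               # step 5(b): block Gauss-Seidel sweep
        for k, bk in enumerate(blocks):
            rhs = np.zeros(len(bk))
            for j, bj in enumerate(blocks):
                if j < k:
                    rhs += new_pi[bj] @ Q[np.ix_(bj, bk)]
                elif j > k:
                    rhs += z[bj] @ Q[np.ix_(bj, bk)]
            # solve pi_k (I - Q_kk) = rhs for the k-th block
            new_pi[bk] = np.linalg.solve(np.eye(len(bk)) - Q[np.ix_(bk, bk)].T, rhs)
        new_pi /= new_pi.sum()
        if np.abs(new_pi - pi).sum() < tol:                # step 6: convergence test
            return new_pi
        pi = new_pi
    return pi

# Made-up 4-state NCD chain: two tightly coupled pairs with weak cross terms.
Q = np.array([[0.59, 0.40, 0.01, 0.00],
              [0.30, 0.69, 0.00, 0.01],
              [0.01, 0.00, 0.49, 0.50],
              [0.00, 0.01, 0.60, 0.39]])
blocks = [np.array([0, 1]), np.array([2, 3])]
print(kms(Q, blocks))        # close to the exact stationary vector of Q
```

For NCD inputs such as this toy example, the loop typically needs only a few sweeps, consistent with the convergence behavior of Theorem 3.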
Because the KMS algorithm works block by block, its complexity per iteration is O(n_max^3), where n_max is the maximum order over all diagonal blocks in the partitioned matrix Q_XS. Compared to the classical power method (which is also iterative in nature) [5], the KMS algorithm has the significant advantage of being applicable to any irreducible MC (aperiodic or periodic) and of having a higher rate of convergence for NCD systems. In these cases, a few iterations suffice for all practical purposes. We should point out that we can always trade space versus time complexity: if the partitioning is such that n_max is still too large, we can use a higher value of the threshold so that the size of the largest subset (i.e. n_max) becomes manageable. In this case, the convergence rate will be lower and thus the time needed for convergence will increase. Also, the size of the coupling matrix will be larger, and hence step 4 of the KMS algorithm becomes critical. However, there is a solution to this problem: when solving the Chapman-Kolmogorov equations for the coupling matrix, we can apply the KMS algorithm in a recursive manner. This approach is called the hierarchical KMS algorithm [15].

6. Experimental results

In this section we provide our experimental results for some common benchmark circuits. In particular, we compare the probability distribution of the states when the order of the input sequence is assumed by default to be one against the case where the actual order of the source is taken into consideration. We provide in Table 1 the results obtained with stochastic complementation when Fibonacci-type sequences (that is, second-order sequences) are applied at the primary inputs of some mcnc'91 and ISCAS'89 benchmarks. We considered only small benchmarks for this because the exact method based on stochastic complementation is, in general, computationally inefficient. For each example, an appropriate dynamic Markov tree has been built and, based on it, a sequence-driven reachability analysis has been performed. Using the obtained set of reachable (input, state) combinations (denoted by #(x,s)) and sparse matrix techniques, the matrix Q_XS has been built and further used to determine the steady-state probabilities. The partitioning was determined by the same functional criterion as in Theorem 2. We also report for each benchmark the number of reached states (#s) and their corresponding probabilities.

Table 1: Steady-state distributions obtained with stochastic complementation for second-order input sequences

Circuit    PIs/   #(x,s)   #s   State probability distribution
bbara      4/     -        -    [ ]
bbtas      2/     -        -    [ ]
dk17       2/3    6        2    [ ]
donfile    2/5    9        5    [ ]
s400       3/     -        -    [ ]
s526       3/     -        -    [ ]

In Table 2, we report the results obtained when applying the KMS algorithm to a larger set of benchmarks. Once again, the input sequences were generated using Fibonacci series. For comparison, we also provide the results when the input is considered by default as having order one. For each case, the sequence-driven reachability analysis is carried out as above and then, using the Q_XS matrix, the KMS algorithm is applied with a numerical partitioning criterion and a fixed threshold. In all cases, we report the number of (input, state) combinations reached and the number of reached states. The number of iterations needed to converge to an error of less than 10^-5 was 3 for the second-order model and between 7 and 15 for the first-order model. The average run-time per iteration was 2 sec. for each block matrix on a Sparc 20 workstation. For the larger matrices, the hierarchical KMS algorithm was used. For the first-order models we also provide the maximum and mean percentage errors obtained when comparing the results to the actual second-order model (MAX% and MEAN%). As we can see, considering the input of the target FSM as having only one-step temporal correlations may significantly impair the ability to predict the correct number of reached states. In addition, for the subset of states correctly found as being reached, there is a significant error in the values of the steady-state probabilities and of the total power consumption. For example, when excited with a second-order type of input, benchmark planet has 34 reached states, whereas if the order is (incorrectly) assumed to be one, the number of reached states becomes 48. Moreover, the error in predicting the steady-state probability can be as high as 513% for the first-order model. Generally speaking, a lower-order model covers all the states of the original one, but it may also introduce a significant number of extra states and, furthermore, the quality of the results in estimating the steady-state probabilities is seriously impaired.

Table 2: First-order vs. second-order model using the KMS algorithm

                       Second-order                      First-order
Circuit    PIs/    #(x,s)  #s  Total power    #(x,s)  #s  MAX%  MEAN%  Total power
bbara      4/
bbtas      2/
dk17       2/
donfile    2/
ex1        9/
planet     7/
sand       11/
s...
s...
s1494      8/
s526       3/
s820       18/

Since the values of these probabilities strongly affect the total power values, we also show the impact of these results on probabilistic power estimation techniques. Knowledge of the steady-state probability distribution for inputs and states, together with the conditional probabilities from the matrix Q_XS, allows us to reduce the problem of power estimation for sequential circuits to the one for combinational circuits; therefore techniques like [19] can be successfully applied. We provide in the "Total power" columns a comparison between the values of power estimated when the order of the input sequences was arbitrarily considered to be one vs. those determined when the actual order has been taken into account (all values are in µW at 20 MHz). As we can see, underestimating the actual order of the input sequence strongly impacts the accuracy of the total power consumption values; the error introduced can be as much as 25% for circuit planet.

7. Conclusion

In this paper we investigated, from a probabilistic point of view, the effect of the finite-order statistics of the input sequence on FSM behavior. In particular, the effect of temporal correlations longer than one clock cycle was analyzed for steady-state and transition probability calculations. The results presented in this paper can be used in low-power design, synthesis and verification, and represent an important step towards understanding FSM behavior from a probabilistic point of view.
Acknowledgments

The authors would like to thank Professor Carl Meyer from the Department of Mathematics of North Carolina State University for reading an early draft of the paper and providing valuable comments.

References

[1] K. Parker and E.J. McCluskey, "Probabilistic Treatment of General Combinational Networks," IEEE Trans. on Computers, vol. C-24, June.
[2] G. Hachtel, E. Macii, A. Pardo, and F. Somenzi, "Probabilistic Analysis of Large Finite State Machines," Proc. ACM/IEEE Design Automation Conference, June.
[3] J. Savir, G.S. Ditlow, and P.H. Bardell, "Random Pattern Testability," IEEE Trans. on Computers, vol. C-33, Jan.
[4] G. Hachtel, M. Hermida, A. Pardo, M. Poncino, and F. Somenzi, "Re-encoding Sequential Circuits to Reduce Power Dissipation," Proc. ACM/IEEE Intl. Conf. on Computer-Aided Design, Nov.
[5] G.D. Hachtel, E. Macii, A. Pardo, and F. Somenzi, "Markovian Analysis of Large Finite State Machines," IEEE Trans. on CAD, vol. 15, no. 12, Dec.
[6] C.-Y. Tsui, J. Monteiro, M. Pedram, S. Devadas, A.M. Despain, and B. Lin, "Power Estimation Methods for Sequential Logic Circuits," IEEE Trans. on VLSI Systems, vol. 3, no. 3, Sept.
[7] J.G. Kemeny, J.L. Snell, and A.W. Knapp, Denumerable Markov Chains, D. Van Nostrand Company, Inc.
[8] A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill Co.
[9] D. Marculescu, R. Marculescu, and M. Pedram, "Stochastic Sequential Machine Synthesis Targeting Constrained Sequence Generation," Proc. ACM/IEEE Design Automation Conference, June.
[10] R. Marculescu, D. Marculescu, and M. Pedram, "Adaptive Models for Input Data Compaction for Power Simulators," Proc. IEEE Asian-South Pacific Design Automation Conference, Japan, Jan.
[11] R.E. Bryant, "Graph-Based Algorithms for Boolean Function Manipulation," IEEE Trans. on Computers, vol. C-35, no. 8, Aug.
[12] O. Coudert, C. Berthet, and J.C. Madre, "Verification of Sequential Machines Based on Symbolic Execution," Proc. of the Workshop on Automatic Verification Methods for Finite State Systems, Grenoble, France.
[13] H.J. Touati, H. Savoj, B. Lin, R.K. Brayton, and A. Sangiovanni-Vincentelli, "Implicit State Enumeration of Finite State Machines Using BDDs," Proc. ACM/IEEE Intl. Conf. on Computer-Aided Design, Nov.
[14] R. Koury, D.F. McAllister, and W.J. Stewart, "Methods for Computing Stationary Distributions of Nearly Completely Decomposable Markov Chains," SIAM Journal of Algebraic and Discrete Mathematics, vol. 5, no. 2.
[15] W.J. Stewart, An Introduction to the Numerical Solution of Markov Chains, Princeton University Press, Princeton.
[16] C.D. Meyer, "Stochastic Complementation, Uncoupling Markov Chains and the Theory of Nearly Reducible Systems," SIAM Review, vol. 31, no. 2.
[17] D. Marculescu, R. Marculescu, and M. Pedram, "Sequence Compaction for Probabilistic Analysis of Finite State Machines," Proc. ACM/IEEE Design Automation Conference, June.
[18] H.A. Simon and A. Ando, "Aggregation of Variables in Dynamic Systems," Econometrica, vol. 29.
[19] R. Marculescu, D. Marculescu, and M. Pedram, "Efficient Power Estimation for Highly Correlated Input Streams," Proc. ACM/IEEE Design Automation Conference, June 1995.


More information

Optimism in the Face of Uncertainty Should be Refutable

Optimism in the Face of Uncertainty Should be Refutable Optimism in the Face of Uncertainty Should be Refutable Ronald ORTNER Montanuniversität Leoben Department Mathematik und Informationstechnolgie Franz-Josef-Strasse 18, 8700 Leoben, Austria, Phone number:

More information

Inadmissible Class of Boolean Functions under Stuck-at Faults

Inadmissible Class of Boolean Functions under Stuck-at Faults Inadmissible Class of Boolean Functions under Stuck-at Faults Debesh K. Das 1, Debabani Chowdhury 1, Bhargab B. Bhattacharya 2, Tsutomu Sasao 3 1 Computer Sc. & Engg. Dept., Jadavpur University, Kolkata

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project

More information

Transistor Reordering Rules for Power Reduction in CMOS Gates

Transistor Reordering Rules for Power Reduction in CMOS Gates Transistor Reordering Rules for Power Reduction in CMOS Gates Wen-Zen Shen Jiing-Yuan Lin Fong-Wen Wang Dept. of Electronics Engineering Institute of Electronics Telecommunication Lab. & Institute of Electronics

More information

INTRODUCTION TO MARKOV CHAIN MONTE CARLO

INTRODUCTION TO MARKOV CHAIN MONTE CARLO INTRODUCTION TO MARKOV CHAIN MONTE CARLO 1. Introduction: MCMC In its simplest incarnation, the Monte Carlo method is nothing more than a computerbased exploitation of the Law of Large Numbers to estimate

More information

Efficient Alphabet Partitioning Algorithms for Low-complexity Entropy Coding

Efficient Alphabet Partitioning Algorithms for Low-complexity Entropy Coding Efficient Alphabet Partitioning Algorithms for Low-complexity Entropy Coding Amir Said (said@ieee.org) Hewlett Packard Labs, Palo Alto, CA, USA Abstract We analyze the technique for reducing the complexity

More information

Modeling and Optimization for Soft-Error Reliability of Sequential Circuits

Modeling and Optimization for Soft-Error Reliability of Sequential Circuits 4065 1 Modeling and Optimization for Soft-Error Reliability of Sequential Circuits Natasa Miskov-Zivanov, Student Member, IEEE, Diana Marculescu, Member, IEEE Abstract Due to reduction in device feature

More information

The Optimal Stopping of Markov Chain and Recursive Solution of Poisson and Bellman Equations

The Optimal Stopping of Markov Chain and Recursive Solution of Poisson and Bellman Equations The Optimal Stopping of Markov Chain and Recursive Solution of Poisson and Bellman Equations Isaac Sonin Dept. of Mathematics, Univ. of North Carolina at Charlotte, Charlotte, NC, 2822, USA imsonin@email.uncc.edu

More information

Approximation Approach for Timing Jitter Characterization in Circuit Simulators

Approximation Approach for Timing Jitter Characterization in Circuit Simulators Approximation Approach for iming Jitter Characterization in Circuit Simulators MM.Gourary,S.G.Rusakov,S.L.Ulyanov,M.M.Zharov IPPM, Russian Academy of Sciences, Moscow, Russia K.K. Gullapalli, B. J. Mulvaney

More information

Title. Citation Information Processing Letters, 112(16): Issue Date Doc URLhttp://hdl.handle.net/2115/ Type.

Title. Citation Information Processing Letters, 112(16): Issue Date Doc URLhttp://hdl.handle.net/2115/ Type. Title Counterexamples to the long-standing conjectur Author(s) Yoshinaka, Ryo; Kawahara, Jun; Denzumi, Shuhei Citation Information Processing Letters, 112(16): 636-6 Issue Date 2012-08-31 Doc URLhttp://hdl.handle.net/2115/50105

More information

Probabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford

Probabilistic Model Checking Michaelmas Term Dr. Dave Parker. Department of Computer Science University of Oxford Probabilistic Model Checking Michaelmas Term 20 Dr. Dave Parker Department of Computer Science University of Oxford Next few lectures Today: Discrete-time Markov chains (continued) Mon 2pm: Probabilistic

More information

Logic BIST. Sungho Kang Yonsei University

Logic BIST. Sungho Kang Yonsei University Logic BIST Sungho Kang Yonsei University Outline Introduction Basics Issues Weighted Random Pattern Generation BIST Architectures Deterministic BIST Conclusion 2 Built In Self Test Test/ Normal Input Pattern

More information

Fault Collapsing in Digital Circuits Using Fast Fault Dominance and Equivalence Analysis with SSBDDs

Fault Collapsing in Digital Circuits Using Fast Fault Dominance and Equivalence Analysis with SSBDDs Fault Collapsing in Digital Circuits Using Fast Fault Dominance and Equivalence Analysis with SSBDDs Raimund Ubar, Lembit Jürimägi (&), Elmet Orasson, and Jaan Raik Department of Computer Engineering,

More information

arxiv: v2 [quant-ph] 5 Dec 2013

arxiv: v2 [quant-ph] 5 Dec 2013 Decomposition of quantum gates Chi Kwong Li and Diane Christine Pelejo Department of Mathematics, College of William and Mary, Williamsburg, VA 23187, USA E-mail: ckli@math.wm.edu, dcpelejo@gmail.com Abstract

More information

Module 5 EMBEDDED WAVELET CODING. Version 2 ECE IIT, Kharagpur

Module 5 EMBEDDED WAVELET CODING. Version 2 ECE IIT, Kharagpur Module 5 EMBEDDED WAVELET CODING Lesson 13 Zerotree Approach. Instructional Objectives At the end of this lesson, the students should be able to: 1. Explain the principle of embedded coding. 2. Show the

More information

Cylindrical Algebraic Decomposition in Coq

Cylindrical Algebraic Decomposition in Coq Cylindrical Algebraic Decomposition in Coq MAP 2010 - Logroño 13-16 November 2010 Assia Mahboubi INRIA Microsoft Research Joint Centre (France) INRIA Saclay Île-de-France École Polytechnique, Palaiseau

More information

USING SAT FOR COMBINATIONAL IMPLEMENTATION CHECKING. Liudmila Cheremisinova, Dmitry Novikov

USING SAT FOR COMBINATIONAL IMPLEMENTATION CHECKING. Liudmila Cheremisinova, Dmitry Novikov International Book Series "Information Science and Computing" 203 USING SAT FOR COMBINATIONAL IMPLEMENTATION CHECKING Liudmila Cheremisinova, Dmitry Novikov Abstract. The problem of checking whether a

More information

On Polynomial Cases of the Unichain Classification Problem for Markov Decision Processes

On Polynomial Cases of the Unichain Classification Problem for Markov Decision Processes On Polynomial Cases of the Unichain Classification Problem for Markov Decision Processes Eugene A. Feinberg Department of Applied Mathematics and Statistics State University of New York at Stony Brook

More information

LUTMIN: FPGA Logic Synthesis with MUX-Based and Cascade Realizations

LUTMIN: FPGA Logic Synthesis with MUX-Based and Cascade Realizations LUTMIN: FPGA Logic Synthesis with MUX-Based and Cascade Realizations Tsutomu Sasao and Alan Mishchenko Dept. of Computer Science and Electronics, Kyushu Institute of Technology, Iizuka 80-80, Japan Dept.

More information

EE 550: Notes on Markov chains, Travel Times, and Opportunistic Routing

EE 550: Notes on Markov chains, Travel Times, and Opportunistic Routing EE 550: Notes on Markov chains, Travel Times, and Opportunistic Routing Michael J. Neely University of Southern California http://www-bcf.usc.edu/ mjneely 1 Abstract This collection of notes provides a

More information

Analytical Models for RTL Power Estimation of Combinational and Sequential Circuits

Analytical Models for RTL Power Estimation of Combinational and Sequential Circuits Analytical Models for RTL Power Estimation of Combinational and Sequential Circuits Subodh Gupta and Farid N. Najm ECE Dept. and Coordinated Science Lab. ECE Department University of Illinois at Urbana-Champaign

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

A NEW BASIS SELECTION PARADIGM FOR WAVELET PACKET IMAGE CODING

A NEW BASIS SELECTION PARADIGM FOR WAVELET PACKET IMAGE CODING A NEW BASIS SELECTION PARADIGM FOR WAVELET PACKET IMAGE CODING Nasir M. Rajpoot, Roland G. Wilson, François G. Meyer, Ronald R. Coifman Corresponding Author: nasir@dcs.warwick.ac.uk ABSTRACT In this paper,

More information

Motivation for introducing probabilities

Motivation for introducing probabilities for introducing probabilities Reaching the goals is often not sufficient: it is important that the expected costs do not outweigh the benefit of reaching the goals. 1 Objective: maximize benefits - costs.

More information

Symmetrical, Dual and Linear Functions and Their Autocorrelation Coefficients

Symmetrical, Dual and Linear Functions and Their Autocorrelation Coefficients Symmetrical, Dual and Linear Functions and Their Autocorrelation Coefficients in the Proceedings of IWLS005 J. E. Rice Department of Math & Computer Science University of Lethbridge Lethbridge, Alberta,

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

The Distribution of Mixing Times in Markov Chains

The Distribution of Mixing Times in Markov Chains The Distribution of Mixing Times in Markov Chains Jeffrey J. Hunter School of Computing & Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand December 2010 Abstract The distribution

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction Symbolical artificial intelligence is a field of computer science that is highly related to quantum computation. At first glance, this statement appears to be a contradiction. However,

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

ECEN 248: INTRODUCTION TO DIGITAL SYSTEMS DESIGN. Week 9 Dr. Srinivas Shakkottai Dept. of Electrical and Computer Engineering

ECEN 248: INTRODUCTION TO DIGITAL SYSTEMS DESIGN. Week 9 Dr. Srinivas Shakkottai Dept. of Electrical and Computer Engineering ECEN 248: INTRODUCTION TO DIGITAL SYSTEMS DESIGN Week 9 Dr. Srinivas Shakkottai Dept. of Electrical and Computer Engineering TIMING ANALYSIS Overview Circuits do not respond instantaneously to input changes

More information

Irredundant Sum-of-Products Expressions - J. T. Butler

Irredundant Sum-of-Products Expressions - J. T. Butler On the minimization of SOP s for Bi-Decomposable Functions T. Sasao* & J. T. Butler *Kyushu Institute of Technology, Iizuka, JAPAN Naval Postgraduate School Monterey, CA -5 U.S.A. Outline Introduction

More information

Encoding Multi-Valued Functions for Symmetry

Encoding Multi-Valued Functions for Symmetry Encoding Multi-Valued Functions for Symmetry Ko-Lung Yuan 1, Chien-Yen Kuo 1, Jie-Hong R. Jiang 1,2, and Meng-Yen Li 3 1 Graduate Institute of Electronics Engineering, National Taiwan University, Taipei

More information

1 Introduction to information theory

1 Introduction to information theory 1 Introduction to information theory 1.1 Introduction In this chapter we present some of the basic concepts of information theory. The situations we have in mind involve the exchange of information through

More information

Chapter 2 Direct Current Circuits

Chapter 2 Direct Current Circuits Chapter 2 Direct Current Circuits 2.1 Introduction Nowadays, our lives are increasingly dependent upon the availability of devices that make extensive use of electric circuits. The knowledge of the electrical

More information

On The Inverse Mean First Passage Matrix Problem And The Inverse M Matrix Problem

On The Inverse Mean First Passage Matrix Problem And The Inverse M Matrix Problem On The Inverse Mean First Passage Matrix Problem And The Inverse M Matrix Problem Michael Neumann Nung Sing Sze Abstract The inverse mean first passage time problem is given a positive matrix M R n,n,

More information

Let us first give some intuitive idea about a state of a system and state transitions before describing finite automata.

Let us first give some intuitive idea about a state of a system and state transitions before describing finite automata. Finite Automata Automata (singular: automation) are a particularly simple, but useful, model of computation. They were initially proposed as a simple model for the behavior of neurons. The concept of a

More information