A Min-Max Optimization Framework for Designing Σ Learners: Theory and Hardware
Amit Gore and Shantanu Chakrabartty, Member, IEEE


A Min-Max Optimization Framework for Designing Σ Learners: Theory and Hardware
Amit Gore and Shantanu Chakrabartty, Member, IEEE

Abstract: In this paper, we present a framework for constructing Σ learning algorithms and hardware that can identify and track low-dimensional manifolds embedded in a high-dimensional analog signal space. At the core of the proposed approach is a min-max stochastic optimization of a regularized cost function that combines machine learning with Σ modulation. As a result, the algorithm not only produces a quantized sequence of the transformed analog signals but also a quantized representation of the transform itself. The framework is generic and can be extended to higher-order Σ modulators and to different signal transformations. In this paper, Σ learning is demonstrated for identifying linear compression manifolds, which can eliminate redundant analog-to-digital conversion (ADC) paths. This improves the energy efficiency of the proposed architecture compared to a conventional multi-channel data acquisition system. Measured results from a four-channel prototype fabricated in a 0.5µm CMOS process have been used to verify the energy efficiency of the Σ learner and to demonstrate its real-time adaptation capabilities, which are consistent with the theoretical and simulated results. One of the salient features of Σ learning is its self-calibration property, whereby the performance remains unchanged even in the presence of computational artifacts (mismatch and non-linearities). This property makes the proposed architecture ideal for implementing practical high-dimensional analog-to-digital converters.

Index Terms: Manifold learning, high-dimensional signal processing, Σ conversion, signal de-correlation, analog-to-digital conversion, multi-channel ADC.

I. INTRODUCTION

Advances in miniaturization are enabling integration of an ever increasing number of recording elements within a single device.
Examples of such high-density sensors range from microelectrode arrays used in biomedical applications [1], [2], [3] to microphone arrays used in acoustic sensing [4], [5]. Typically, the multi-channel analog signals acquired by these recording arrays lie in a high-dimensional space, and the key challenge lies in designing adaptive analog-to-digital conversion (ADC) algorithms that exploit the topological properties of high-dimensional space. This is particularly relevant since the conventional wisdom of three-dimensional Euclidean geometry, which is at the heart of many existing ADCs, cannot be directly applied to higher dimensions. To illustrate this, consider a simple geometric example comprising two concentric hyper-spheres in D dimensions with radii r > 0 and r − δ > 0 respectively. An equivalent arrangement is shown in Fig. 1 for D = 3 dimensions.

Copyright (c) 2009 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending an email to pubs-permissions@ieee.org. This work was performed when Amit Gore was a Ph.D. student in the Department of Electrical and Computer Engineering, Michigan State University. Shantanu Chakrabartty is with the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI. All correspondence regarding this paper should be directed to shantanu@egr.msu.edu.

Fig. 1. Non-overlapping volume of two concentric spheres with radii r and (r − δ).

According to [6], the ratio of the volumes of the hyper-spheres in D dimensions is given by

    V(D) = (r − δ)^D / r^D    (1)
         = (1 − δ/r)^D.       (2)

From equation (2), it can be seen that for 0 < δ ≤ r, V(D) → 0 as the dimensionality of the space satisfies D → ∞. Thus, this simple example illustrates that in higher dimensions, the volume, and hence the probability of the data, is typically concentrated on the surface of a hyper-sphere. For the example shown in Fig. 1 with parameters δ/r = 0.1
and D = 32, the volume (or probability of occurrence) of data on the outer shell is 97%. Other geometric structures (hyper-cubes, hyper-ellipsoids, hyper-toroids, etc.) also exhibit interesting data concentration properties in higher dimensions. For instance, the volume of a high-dimensional hyper-cube can be shown to be concentrated at its corners. These examples show that information (data distribution) in high-dimensional space is typically concentrated on low-dimensional manifolds, for example, hyper-surfaces or hyper-planes. The distribution becomes even more concentrated when the input signals exhibit a significant degree of correlation (redundancy), which is true for analog signals recorded by high-density sensors. Therefore, an efficient high-dimensional ADC should identify the salient low-dimensional manifolds during the process of data conversion, which is contrary to many existing multi-channel data acquisition algorithms that operate independently on each of the input dimensions. In the machine learning literature, an equivalent problem is known as manifold learning [7], where the objective is to determine the parameters of a low-dimensional manifold that can faithfully capture the geometric and statistical properties of the high-dimensional input data.

Authorized licensed use limited to: Michigan State University. Downloaded on August 4, 2009 at 3:53 from IEEE Xplore. Restrictions apply.

Fig. 2. Illustration of Σ learning using two-dimensional data: (a) initial hyper-plane; (b) limit-cycle about the optimal hyper-plane.

Some examples of manifold learning include principal component analysis (PCA), Kohonen maps [8], locally linear embedding (LLE) [9] and eigen-maps [10]. In this paper, we unify manifold learning and analog-to-digital conversion such that the proposed algorithm processes the input analog signals to produce not only a digitized representation of the transformed signal, but also a digitized representation of the transformation manifold. At the core of the proposed architecture, and the focus of this paper, is a stochastic min-max optimization procedure that yields a Σ modulation [11] integrated with a manifold learning step. Conceptually, the mechanism of the proposed modulation is illustrated in Fig. 2 for a simple manifold learning task. The example consists of input vectors (represented as circles and squares) which are embedded in a three-dimensional space. An optimal manifold that captures the distribution of the input data is a two-dimensional hyper-plane, shown by the shaded area in Fig. 2. Starting from an initial estimate of the hyper-plane (shown in Fig. 2(a)), the Σ learning algorithm proposed in this paper generates a sequence of approximate (quantized) hyper-planes that exhibits limit-cycle behavior about the optimal hyper-plane (see Fig. 2(b)). The statistics of the limit-cycles then encode the parameters of the manifold at a desired resolution. The limit-cycle based learning endows the Σ learners with additional properties: The stability of learning depends only on the magnitude of the input signals and does not depend on the choice of hyper-parameters, for instance the learning rate factors used in neural network architectures [7], [12].
The algorithm naturally inherits the robustness properties of Σ modulation [11]. In addition, learning endows the proposed algorithm with a self-calibrating property whereby the mismatch between the sensor channels and the inherent non-linearity of analog computation are compensated. This is similar in spirit to many learning-on-silicon architectures reported in the literature that can automatically compensate for hardware artifacts [13]. When the objective of the algorithm is to determine only the parameters of a slowly varying low-dimensional manifold and not the transformed data, the Σ learner can operate at sampling rates lower than the Nyquist rate of the input signals. This setting is similar to analog-to-information converters [14], [15], except that the basis functions are determined adaptively instead of being fixed. The functional architecture of a Σ learner is shown in Fig. 3(a), where the input is a time-varying analog signal x[n] ∈ R^D, with n = 1, 2, . . . representing the discrete time index. The Σ learner consists of a matrix-vector multiplier (MVM) which transforms the input signal x[n] according to A[n]x[n], where A[n] ∈ R^{D×D} denotes a linear transformation matrix. This transformed signal is then processed by an array of Σ modulators to produce a binary data stream d[n] ∈ {−1, +1}^D. An adaptation unit uses the binary output d[n] to update the matrix A[n] and in the process learn the parameters of the target manifold A. To illustrate the benefits of the proposed Σ learner over a conventional manifold learning architecture, we will compare their equivalent energy efficiency for a signal compression application. The functional architecture of a conventional manifold learning approach is shown in Fig. 3(b), where the transformation is performed after the analog-to-digital conversion step (using Σ modulators).
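One time-step of the data path just described can be sketched as follows (an illustrative skeleton under our own naming and parameter choices; the precise update rules are derived in Section III):

```python
def sd_learner_step(x, A, w, lam=1.0, step=2.0 ** -8):
    """One update of a D-channel Sigma learner (illustrative sketch).
    x: input vector; A: lower-triangular transform (updated in place);
    w: modulator state vector (updated in place). Returns the bit vector d."""
    D = len(x)
    # matrix-vector multiplier (MVM): y = A x, A lower triangular
    y = [sum(A[i][j] * x[j] for j in range(i + 1)) for i in range(D)]
    # per-channel single-bit quantizer: d[n] = sgn(w[n-1])
    d = [1.0 if wi >= 0.0 else -1.0 for wi in w]
    # per-channel modulator state update
    for i in range(D):
        w[i] += y[i] - lam * d[i]
    # adaptation unit: update only the off-diagonal lower-triangular entries
    for i in range(D):
        for j in range(i):
            A[i][j] -= step * d[i] * (1.0 if x[j] >= 0.0 else -1.0)
    return d
```

The diagonal and upper-triangular entries of A stay fixed, mirroring the constraint space used throughout the paper.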
For the comparison we will also make the following assumptions, which are reasonable for a high-density sensor array: The rank of the target compression manifold A, denoted by M, satisfies M ≪ D. This is a valid assumption since signals acquired by high-density sensors (for example microelectrode arrays or microphone arrays) exhibit a high degree of correlation. This implies that once the parameters of A have been learned, D − M redundant Σ modulators in Fig. 3(a) can be selectively shut down to conserve energy. The desired manifold A is quasi-stationary and its parameters vary much more slowly than the input signal x. This is a reasonable assumption since A depends only on the physical properties of the sensors, sources and the channel. For instance, the distance between the sensor array and the sources, or a variation in channel properties (dispersion or attenuation), directly affects A. Thus, identifying and tracking A consumes only a fraction α of the total operational time of the Σ learner. The operation of a Σ learner would consist of two sequential phases (shown in Fig. 3(c)) which repeat for the duration for which the Σ learner is active. In the first phase, the Σ learner learns/tracks the compression manifold A within a certain number of adaptation cycles (consuming a fraction α of the total operational cycles). In the second phase, the adaptation unit selectively shuts down the D − M redundant modulators for the rest of the operational cycles (shown in Fig. 3(a)). If P_adt denotes the power dissipation of a single mixed-signal multiply-addition operation and P_Σ denotes the power dissipation of a single Σ modulator, then under the condition

    P_adt < P_Σ / (D(α + M/D))    (3)

the energy efficiency of the Σ learner can be shown to be superior to that of the conventional approach (shown in Fig. 3(b)). The proof of the inequality (3) is presented in

Appendix I, and is based on a worst-case analysis in which the power dissipation due to digital signal processing in the conventional architecture (see Fig. 3(b)) has been ignored. The inequality (3) shows that the power savings could be significant if the fraction α and the ratio M/D are simultaneously small. In the later sections we will verify the inequality by presenting measured results obtained from a Σ learner prototyped in a 0.5µm CMOS process. Also, using Monte-Carlo analysis we will show that the parameter α is inversely proportional to the input dimension D, which implies that the efficiency improvement of the Σ learner remains invariant with an increase in the input dimensionality D.

Fig. 3. Functional architectures corresponding to: (a) the proposed Σ learner and (b) the conventional manifold learning approach. (c) Illustration that when the Σ learner is applied to a signal compression task, learning can be performed for only a fraction α of the total operational time.

The paper is organized as follows: Section II briefly presents a summary of the mathematical notations and definitions used in this paper; Section III introduces the min-max optimization framework for constructing Σ learners; Section IV presents performance results obtained using Monte-Carlo simulations that quantify the stability and variance of the Σ learner; Section V describes a CMOS implementation of a four-dimensional Σ learner; Section VI presents measured results obtained using the fabricated prototype; Section VII concludes the paper with some final remarks.

II. MATHEMATICAL NOTATIONS

We will use the following notations and definitions throughout the paper:
1) A scalar variable will be denoted by a lower-case symbol, for example x.
2) A column vector will be denoted by a bold symbol as x, and its elements will be denoted by x_i, i = 1, 2, . . .
3) The L1 norm of a vector will be denoted by ‖x‖_1 and is given by ‖x‖_1 = ∑_i |x_i|. The L2 norm of a vector will be denoted by ‖x‖_2 and is given by ‖x‖_2 = (∑_i x_i²)^(1/2).
The L∞ norm of a vector will be denoted by ‖x‖_∞ and is given by ‖x‖_∞ = max_i |x_i|. A vector constant will be represented by a bold numeral; for example, 0 denotes a vector with all elements equal to zero.
4) A matrix will be denoted by an upper-case bold symbol as A, and its elements will be denoted by a_ij, i = 1, 2, . . . ; j = 1, 2, . . .
5) The L∞ norm of a matrix will be denoted by ‖A‖_∞ and is given by ‖A‖_∞ = max_{x : ‖x‖_∞ ≤ 1} ‖Ax‖_∞.
6) w^T and A^T denote the transpose of a vector and a matrix respectively.
7) Discrete time sequences will be denoted by an index n, as in x[n], n = 1, 2, . . .
8) E_x{·} will denote an expectation operation given by E_x{·} = ∫_x (·) p_x(x) dx, where p_x(x) denotes the probability density function of x.
9) Empirical expectation will be denoted by E_n{·} and is defined as a temporal average of a discrete time sequence. For example, E_n{d[n]} = lim_{N→∞} (1/N) ∑_{n=1}^{N} d[n]. A double expectation operator will then be represented by E_n{E_n{d[n]}} = lim_{N→∞} (1/N²) ∑_{i=1}^{N} ∑_{j=1}^{i} d[j].
10) Differentiation of a function f with respect to a vector or a matrix represents an element-wise operation, such that ∂f/∂v = [∂f/∂v_1, ∂f/∂v_2, . . .]^T.

III. OPTIMIZATION FRAMEWORK FOR Σ LEARNING

In this section, we describe a generalized form of the min-max optimization framework that will integrate learning with analog-to-digital conversion. A special case of the proposed framework was introduced in [16], where it was used for neural signal compression. Given a random input vector x ∈ R^D and an internal state vector w ∈ R^D, a Σ learner estimates the parameters of a linear transformation matrix A ∈ R^{D×D} according to the following optimization criterion:

    max_{A ∈ C} ( min_w f(w, A) )    (4)

where

    f(w, A) = λ‖w‖_1 − w^T E_x{Ax}.    (5)

Here λ > 0 denotes a hyper-parameter and C denotes a constraint space of the transformation matrix A. The term ‖w‖_1 bears similarity to the L1 regularization which is extensively used in machine learning algorithms [17], [18]. However, the L1 norm in Eqn. (5) is also an important link connecting the cost function with single-bit quantization.
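To make the link concrete, the cost function (5) and its element-wise (sub)gradient can be written down directly (a minimal sketch; the function names and the use of a fixed vector y in place of E_x{Ax} are our own choices):

```python
def cost(w, y, lam):
    """f(w, A) = lam * ||w||_1 - w^T y, with y standing in for E_x{Ax} (Eqn. (5))."""
    return lam * sum(abs(wi) for wi in w) - sum(wi * yi for wi, yi in zip(w, y))

def subgrad(w, y, lam):
    """Element-wise (sub)gradient: lam * sgn(w_i) - y_i.
    The sgn() term is exactly a single-bit quantization of the state w."""
    return [lam * (1.0 if wi >= 0.0 else -1.0) - yi for wi, yi in zip(w, y)]
```

Descending this subgradient is what produces the recursions of Section III-A; the only non-smooth term, sgn(w), plays the role of the one-bit quantizer.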
This is illustrated in Fig. 4, which shows an example of a one-dimensional regularization function |w|. The piece-wise linear behavior of |w| leads to the discontinuous gradient sgn(w) (shown in Fig. 4), where sgn(·) denotes a signum operation equivalent to a single-bit quantization. Even though the regularization framework can be extended to multi-bit quantization, in this paper we will only discuss Σ learners with single-bit quantizers. The minimization step in Eqn. (4) ensures that the state vector w is correlated with the transformed input signal Ax (signal tracking step), and the maximization step in Eqn. (4) adjusts the parameters of A such that it minimizes the correlation (de-correlation step)

Fig. 4. Plot of the one-dimensional L1 norm |w| and its derivative.

Fig. 5. Optimization contour explaining the limit-cycle behavior about the minimum.

between w and Ax. The formulation bears similarities with game-theoretic approaches [19], [20], where signal tracking and de-correlation have been formulated as conflicting objectives. However, the uniqueness of the proposed approach compared to other optimization techniques for solving Eqn. (4) is the use of bounded gradients to generate Σ limit-cycles. To show this we will first prove a key result:

Lemma 1: For the constraint set C = {A : ‖A‖_∞ ≤ λ}, ‖x‖_∞ ≤ 1, and for f(·,·) as defined in Eqn. (5), min_w f(w, A) = 0.

Proof: First, we will show that the cost function f(w, A) is bounded below by 0, and then show that this lower bound lies within the feasible set defined by the constraints. We will use a topological property of norms [21] which states that for p, q ≥ 1 satisfying 1/p + 1/q = 1, the following relationship is valid for vectors w and y:

    |w^T y| ≤ ‖w‖_p ‖y‖_q.    (6)

Setting y = E_x{Ax} and applying Eqn. (6) with p = 1, q = ∞, the following inequality is obtained:

    w^T y ≤ |w^T y| ≤ ‖w‖_1 ‖y‖_∞.    (7)

Using the definition of the matrix norm and the given constraints, it can be easily seen that ‖Ax‖_∞ ≤ ‖A‖_∞, and thus ‖y‖_∞ ≤ λ. Therefore, the inequality (7) leads to

    λ‖w‖_1 − w^T E_x{Ax} ≥ 0    (8)

which proves that the cost function f(w, A) is bounded from below by 0. It can be seen that f(0, A) = 0; therefore, the lower bound lies within the constraint set and is the minimum of f(w, A).

Lemma 1 shows that for the constraint set C, the minimum of f(w, A) is already known and therefore the result of the minimization does not convey any additional information. However, in the proposed approach, the path to the final solution w = 0 and the limit-cycles about the solution will be of importance. This is illustrated in Fig. 5 using a two-dimensional contour plot where, starting from an initial condition, the minimization produces a trajectory towards the minimum and ultimately produces a limit-cycle behavior about the minimum.
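The chain of bounds in the proof can be checked numerically. The sketch below (the dimension, λ and random ranges are our own choices) draws random w and x with ‖x‖_∞ ≤ 1, scales a matrix so that ‖A‖_∞ ≤ λ, and verifies Hölder's inequality (6) together with the lower bound of Lemma 1:

```python
import random

def norm1(v):
    return sum(abs(t) for t in v)

def norm_inf(v):
    return max(abs(t) for t in v)

def mat_norm_inf(A):
    # ||A||_inf = max_{||x||_inf <= 1} ||Ax||_inf = maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

random.seed(1)
lam, D = 1.0, 4
A = [[random.uniform(-1.0, 1.0) for _ in range(D)] for _ in range(D)]
s = lam / mat_norm_inf(A)                  # rescale so that ||A||_inf <= lam
A = [[a * s for a in row] for row in A]

for _ in range(1000):
    w = [random.uniform(-2.0, 2.0) for _ in range(D)]
    x = [random.uniform(-1.0, 1.0) for _ in range(D)]    # ||x||_inf <= 1
    y = [sum(A[i][j] * x[j] for j in range(D)) for i in range(D)]
    wy = sum(wi * yi for wi, yi in zip(w, y))
    assert abs(wy) <= norm1(w) * norm_inf(y) + 1e-12     # Hoelder, Eqn. (6)
    assert norm_inf(y) <= lam + 1e-12                    # ||Ax||_inf <= ||A||_inf
    assert lam * norm1(w) - wy >= -1e-12                 # f(w, A) >= 0; f(0, A) = 0
```

Every random draw lands on or above the lower bound, and the bound is attained exactly at w = 0, as Lemma 1 states.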
The path and the limit-cycles will encode the topology of the optimization manifold defined by f, and hence will encode the transformation A.

A. First-order Σ Modulation

The link between the optimization (4) and Σ modulation is through a stochastic gradient minimization [22] of the cost function (5). Under the condition of stationarity on the random vector x, and under the assumption that the probability density function of x is well behaved (the gradient of the expectation operator is equal to the expectation of the gradient operator), the stochastic gradient step with respect to w yields

    w[n] = w[n−1] − ∂f(w, A)/∂w |_(n−1)    (9)
    w[n] = w[n−1] + A[n−1]x[n−1] − λ d[n]    (10)

where n signifies the discrete time index and d[n] = sgn(w[n−1]) denotes the quantized representation according to the step function shown in Fig. 4. Note that the formulation (10) does not include any learning rate parameters typically used in other neural network approaches. As the recursion (10) progresses, bounded limit cycles are produced about the solution w = 0 (see Fig. 5), whose property is characterized by the following lemma.

Lemma 2: For A[n] ∈ C, ‖x‖_∞ ≤ 1, and if ‖w[0]‖_∞ ≤ 2λ, then ‖w[n]‖_∞ ≤ 2λ for n = 1, 2, . . .

Proof: Similar to the proof for the stability of a first-order Σ modulator [11], we will use mathematical induction to prove the lemma. Let ‖w[n−1]‖_∞ ≤ 2λ. Since d[n] = sgn(w[n−1]), it follows that ‖w[n−1] − λd[n]‖_∞ ≤ λ (the proof of this claim is given in Appendix II). Using Eqn. (10), the following relationship holds:

    ‖w[n]‖_∞ = ‖w[n−1] − λd[n] + A[n−1]x[n−1]‖_∞
             ≤ ‖w[n−1] − λd[n]‖_∞ + ‖A[n−1]x[n−1]‖_∞
             ≤ 2λ    (11)

which proves the lemma by mathematical induction.

Assuming the initial condition w[0] = 0, the discrete time recursion (10) leads to

    (λ/N) ∑_{n=1}^{N} d[n] = (1/N) ∑_{n=1}^{N} A[n−1]x[n−1] − w[N]/N.    (12)

Due to the bounded property of ‖w[N]‖_∞ (according to Lemma 2), equation (12) leads to the following asymptotic property:

    E_n{d[n]} = (1/λ) E_n{A[n]x[n]}    (13)

where E_n{·} denotes an empirical expectation with respect to the time index n.
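For a scalar constant input, the recursion (10) can be simulated in a few lines (a sketch; the input value, λ and the run length are our own arbitrary choices), reproducing both the bound of Lemma 2 and the asymptotic property (13):

```python
N = 20000
lam, x = 1.0, 0.3            # constant transformed input, |x| < lam
w, acc = 0.0, 0.0
for _ in range(N):
    d = 1.0 if w >= 0.0 else -1.0     # d[n] = sgn(w[n-1]): the 1-bit quantizer
    w = w + x - lam * d               # Eqn. (10) with A[n]x[n] = x
    assert abs(w) <= 2.0 * lam        # Lemma 2: the limit cycle stays bounded
    acc += d
est = lam * acc / N                   # lam * E_n{d[n]} -> x   (Eqn. (13))
```

Because |w[N]| ≤ 2λ, the estimation error decays as 2λ/N, i.e. the mean of the one-bit stream encodes x at arbitrary resolution.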
Thus, the recursion (10) produces a quantized sequence whose mean asymptotically encodes the transformed

input at infinite resolution. For stationary input sources (sources whose statistical properties are fixed), A[n] converges to an asymptotic value A∗, which can then be used for reconstruction according to

    E_n{x̂[n]} ≈ λ A∗⁻¹ E_n{d[n]}    (14)

where A∗⁻¹ is the inverse of the transform A∗. Since for high-density sensing the rank of the input space is less than its dimension, it is important to choose a constraint space C in which the non-redundant M-dimensional space can be easily identified. One such constraint space is represented by upper and lower triangular matrices with their diagonal elements fixed. Also, for such a class of transforms, the inverse A∗⁻¹ can be easily computed using back-substitution techniques.

Fig. 6. Functional verification of a Σ learner using synthetic multi-channel data: (a) analog signals presented as input to the learner; (b) output signals produced by the learner; (c) reconstructed signals using the output of the learner and the learned transformation A; (d) convergence of A.

B. Σ Learning

The maximization step (de-correlation) in Eqn. (4) yields updates for the matrix A according to a gradient ascent procedure as

    A[n] = A[n−1] + ξ ∂f(w, A)/∂A |_n    (15)

which can be written as

    A[n] = A[n−1] − ξ w[n−1] x[n−1]^T ;  A[n] ∈ C.    (16)

The parameter ξ controls the learning rate of the update (16). Since we are interested in obtaining a digitized representation of the matrix A, the parameter ξ in update (16) can be replaced by its binary form ξ = 2^−P, and the variables w[n−1], x[n−1] can be replaced by their signed forms as

    A[n] = A[n−1] − 2^−P d[n] sgn(x[n−1])^T ;  A[n] ∈ C    (17)

where we have used the relationship d[n] = sgn(w[n−1]). The updates in Eqn. (17) can be implemented using an up-down counter, which also acts as storage for the binary representation of the matrix A. The parameter P will be referred to as the transform resolution, as it determines the precision by which the target manifold A can be determined.
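A two-channel toy run (a sketch; all constants here are our own illustrative choices) shows the coupled recursions (10) and (17) at work. Channel 2 is a scaled copy of channel 1, so the decorrelating lower-triangular transform should settle near a21 = −0.6:

```python
import math

N, P, lam = 200000, 10, 2.0      # lam chosen so that ||A||_inf <= lam holds
step = 2.0 ** (-P)               # the transform resolution sets the update step
a21 = 0.0                        # learned off-diagonal element (unit diagonal)
w = [0.0, 0.0]
tail, count = 0.0, 0
for n in range(N):
    x1 = 0.8 * math.sin(2.0 * math.pi * 0.01 * n)
    x = [x1, 0.6 * x1]                          # channel 2 = 0.6 * channel 1
    y = [x[0], a21 * x[0] + x[1]]               # y = A x, A lower triangular
    d = [1.0 if wi >= 0.0 else -1.0 for wi in w]        # d[n] = sgn(w[n-1])
    w = [wi + yi - lam * di for wi, yi, di in zip(w, y, d)]    # Eqn. (10)
    a21 -= step * d[1] * (1.0 if x[0] >= 0.0 else -1.0)        # Eqn. (17)
    if n >= 3 * N // 4:              # average the limit cycle of the estimate
        tail += a21
        count += 1
a21_avg = tail / count               # settles near -0.6
```

The update drives E_n{d2[n] sgn(x1[n])} toward zero; at that point the second modulator only sees the decorrelated residual and can be shut down.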
It is important to point out that, unlike conventional neural network algorithms where the choice of the learning rate parameter is important to guarantee stability of the algorithm, the parameter P only affects the performance and not the stability of the Σ learner. This has been verified using the Monte-Carlo simulations presented in Section IV. To ensure that the matrix A always lies within the constraint space C (the space of lower triangular matrices), the updates in equation (17) are only applied to the off-diagonal lower triangular matrix elements. The values of the other elements are fixed according to a_ij = 0, i < j, and a_ii = 1. Thus, if ‖A‖_∞ ≤ λ is satisfied, then the recursion (17) will asymptotically lead to

    E_n{d[n] sgn(x[n])^T} → 0    (18)

for A ∈ C. Thus, the proposed Σ learning algorithm produces quantized sequences d[n] that are uncorrelated with the signum function of the input signal. We will first illustrate, using a synthetic example, that the condition (18) is sufficient to identify redundant Σ modulation paths. For this example, two sinusoidal signals with different frequencies were first chosen. These signals were then mixed together in linear proportions to generate eight synthetic signals, as shown in Fig. 6(a). These signals were then presented as input to the Σ learner. Thus,

even though the dimensionality of the input signal space for this example is eight, its rank is two. Fig. 6(b) shows the output produced by the Σ learning algorithm, where the binary stream d[n] has been low-pass filtered and decimated [11]. The results demonstrate that the Σ learner correctly reduces the dimensionality of the output signals: only two of the eight channels contain significant energy, whereas the energy in the rest of the channels (called the residual energy) diminishes to zero. The convergence of the learning algorithm can be visualized for this example using Fig. 6(d), which shows that A stabilizes after a certain number of iterations. We will quantify the convergence behavior of the Σ learner using adaptation cycles, defined as the number of learning iterations required before A converges to within ±2% of its stabilized value. The adaptation cycles also determine the parameter α, which is the fraction of the total operation period when learning is enabled. The converged value of A is used to reconstruct the input signals from the output according to Eqn. (14), as shown in Fig. 6(c). In the numerical experiments presented later in Section IV, the quality of reconstruction will be quantified using the reconstruction error, which is the mean-square error between the reconstructed output and the input signal. This simple experiment demonstrates the functionality of Σ learning in identifying redundant (null-space) signal processing paths. This response can also be seen in the frequency domain, as shown in Fig. 7, where the FFT of one of the redundant channels (channel 8) is illustrated for a first-order Σ modulator with and without learning. Since the input signal for this example consisted of two fundamental sinusoids, the frequency domain response consists of two distinct impulses. Also, it can be seen from Fig.
7, that in addition to the familiar noise-shaping characteristics of a first-order modulator [11], the Σ learner suppresses the sinusoidal signals (redundant signals) after learning. Thus, the learner acts as a band-reject filter with respect to the signals present in channels 1 and 2. Moreover, any common-mode signals, for instance dc offsets, which are associated with all channels, are eliminated.

Fig. 7. FFT response of the first-order Σ learner for the synthetic example in Fig. 6, showing that the 8th channel rejects frequencies present in the lower-dimensional channels (responses shown with and without learning).

C. Higher-order Σ Modulation

The formulation of Σ learning can also be extended to higher-order modulation. This can be achieved by incorporating momentum terms in Eqn. (10) to obtain

    w[n] = w[n−1] − ∂f(w, A)/∂w |_(n−1) + (w[n−1] − w[n−2])    (19)
    w[n] = w[n−1] + (A[n−1]x[n−1] − λ d[n]) + (w[n−1] − w[n−2]).    (20)

Momentum terms have been used extensively in optimization theory and neural networks for improving the performance of learning algorithms [22], [23], and have been used in Σ learning to improve its convergence speed. Even though Hessian-based formulations have also been proposed for improving the convergence speed of neural network algorithms, they are not suitable for optimizing the piece-wise linear cost function (4). Recursion (20) generates quantized vector sequences d[n] whose first-order expectation as well as second-order expectation converge asymptotically according to

    E_n{d[n]} = (1/λ) E_n{A[n]x[n]}    (21)
    E_n{E_n{d[n]}} = (1/λ) E_n{A[n]x[n]}.    (22)

The proof of convergence for the expressions (21)-(22) is given in Appendix III. The update in Eqn. (20) is equivalent to a second-order Σ modulator [11], thus linking the momentum based gradient descent rule to a second-order Σ modulation. Similar to Eqn. (14), the reconstruction formula based on Eqn. (22) can be expressed in terms of the asymptotic value of the linear transform A∗ as

    E_n{x̂[n]} ≈ λ A∗⁻¹ E_n{E_n{d[n]}}.    (23)

Eqn.
(19) can also be generalized to incorporate L-th order momentum terms according to

    w[n] = w[n−1] − ∂f(w, A)/∂w |_(n−1) + Δ^L(w[n−1])    (24)

where Δ^L(·) denotes an L-th order difference operator. In the case of L = 1, Eqn. (24) is equivalent to the second-order modulation of Eqn. (19). Higher-order momentum terms have also been used in neural networks [24] to accelerate the dynamics of gradient descent iterations, especially where the optimization contours are flat. However, without any loss of generality, in this paper we will only investigate the properties of first and second order modulators.

The optimization approach using momentum terms can also be used to visualize and understand the dynamics of Σ modulators. Fig. 8(a)-(d) illustrates this using a one-dimensional cost function f(w) = |w| − wx with |x| < 1. Fig. 8(a) corresponds to x = 0; therefore the stochastic gradient iterations corresponding to the first-order Σ

modulation exhibit limit cycles symmetric about the minimum w = 0. Fig. 8(b) shows the equivalent dynamics corresponding to x > 0; the resulting limit cycles therefore spend more time in the region w > 0. Fig. 8(c) shows the dynamics of a higher-order modulator, where the momentum factor accelerates the marker towards the minimum. The overshoot beyond the minimum is proportional to the net velocity at the minimum. For higher-order modulators, the acceleration, and hence the velocity of approach, could be large enough that the magnitude of the limit cycles becomes unbounded. Therefore, the stability of Σ modulation can be improved either by reducing the velocity of approach towards the minimum (by modifying the shape of the optimization manifold) or by constraining the magnitude of the input x. These approaches are similar to the stabilization techniques already used for designing higher-order Σ modulators [11].

Fig. 8. Illustration of Σ dynamics for: (a) a first-order modulator with x = 0; (b) a modulator with x > 0; (c) higher-order modulators (bold lines indicate higher velocity); (d) the effect of reducing the magnitude of x.

Fig. 9. The effect of the OSR of the Σ converter array on the performance of the Σ learner.

Fig. 10. The effect of the resolution of the signal transformation matrix A on the performance of the Σ learner.

IV. MONTE-CARLO ANALYSIS OF Σ LEARNERS

In this section we analyze the performance of the Σ learning algorithm using Monte-Carlo simulations. The system parameters included in this study are: (a) the oversampling ratio (OSR), which is defined as the ratio of the frequency of Σ learning updates to the bandwidth of the input signal; (b) sparsity, which is the ratio of the number of input channels D to the rank of the input signal space M; and (c) the transform resolution (P), which quantifies the precision of the updates in Eqn. (17). To understand the effect of these system parameters on the performance of the learning algorithm, a controlled simulation experiment was found to be better suited than using real-life data. For all experiments presented in this section, we used a setting similar to that described in Fig. 6, which consists of two sinusoidal signals (with different normalized frequencies) mixed in random proportions. The rank of the eight-dimensional input space was fixed at two. The performance of the first-order and the second-order Σ learning algorithms was then quantified using the following metrics:
1) Adaptation cycles: the minimum number of learning iterations before A converged to within ±2% of its stabilized value.
2) Residual power: quantifies the ability of the algorithm to achieve signal compression. This can be calculated from the signal at the output of the i-th channel according to R_i = E_n{(E_n{d_i[n]})²}.
3) Reconstruction error: quantifies the accuracy of the learning algorithm in identifying the compression transform. For the numerical experiments presented in this section, the reconstruction error is calculated by computing the mean-square error between the input signal and the signal reconstructed according to Eqn. (23), as R_e = E_n{‖x[n] − x̂[n]‖²}.
All numerical results presented in this section were obtained after averaging the results over independent runs, where for each run two sinusoidal signals (see Fig. 6) were mixed in random proportions to produce eight input signals. Fig. 9 shows that the number of adaptation cycles required for Σ learning decreased with an increase in the OSR (for both the first and the second order modulation). This can be attributed to the increased precision in tracking the stationary manifold with an increase in the OSR.
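The residual power and reconstruction error metrics can be computed directly from their definitions; in the sketch below (function names and the window length are our own choices) the inner empirical expectation E_n{·} is approximated by a finite moving average, much as a decimation filter would:

```python
def residual_power(d, win=64):
    # R_i = E_n{(E_n{d_i[n]})^2}; the inner E_n{.} is approximated per window
    means = [sum(d[k:k + win]) / win for k in range(0, len(d) - win + 1, win)]
    return sum(m * m for m in means) / len(means)

def reconstruction_error(x, xhat):
    # R_e = E_n{||x[n] - xhat[n]||^2} for two equal-length scalar sequences
    return sum((a - b) ** 2 for a, b in zip(x, xhat)) / len(x)
```

A redundant channel whose bitstream alternates evenly has residual power near zero, while a channel still carrying signal does not.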
As expected, higher OSR also improved the reconstruction and the suppression of residual error which is shown in Fig. 9 and (c). Fig. summarizes the performance of the Σ learner when the resolution parameter P was varied. It can be seen that the number of adaptation cycles increases with the increase in parameter P. This is because the learning algorithm has to span the signal space at finer increments. Also, from Fig., it can be seen Authorized licensed use limited to: Michigan State University. Downloaded on August 4, 29 at 3:53 from IEEE Xplore. Restrictions apply.
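The two performance metrics above can be sketched numerically. The following Python sketch is illustrative only: the windowed empirical expectation E_n{·} is approximated by a simple moving average, and the 1-bit streams are toy stand-ins rather than outputs of an actual Σ learner.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_mean(d, win=64):
    """Windowed empirical expectation E_n{.}, approximated by a moving average."""
    return np.convolve(d, np.ones(win) / win, mode="same")

def residual_power_db(d):
    """Toy version of R_i = E_n{(E_n{d_i[n]})^2}: the power of the demodulated
    (averaged) bit-stream, reported in dB."""
    m = empirical_mean(d)
    return 10 * np.log10(np.mean(m ** 2) + 1e-12)

def reconstruction_error_db(x, x_hat):
    """Toy version of R_e = E_n{||x[n] - x_hat[n]||^2}, reported in dB."""
    return 10 * np.log10(np.mean(np.sum((x - x_hat) ** 2, axis=1)) + 1e-12)

# A channel still carrying signal has high residual power; a de-correlated
# channel carries only dithered quantization noise and averages toward zero.
n = np.arange(8192)
sig = np.sign(np.sin(2 * np.pi * n / 256) + rng.uniform(-1, 1, n.size))
noise = np.sign(rng.uniform(-1, 1, n.size))
assert residual_power_db(sig) > residual_power_db(noise) + 10

# Perfect reconstruction drives R_e to the numerical floor.
X = rng.uniform(-1, 1, (256, 8))
assert reconstruction_error_db(X, X) < -100
```

The moving-average window plays the role of the decimation filter; a longer window (higher OSR) lowers the noise-channel residual further, mirroring the trend in Fig. 9.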

Fig. 11. The effect of mismatch on the performance of the Σ learner: (a) adaptation cycles, (b) residual power (dB), and (c) reconstruction error (dB) versus the mismatch factor F.

Fig. 12. The effect of computational non-linearity on the performance of the Σ learner: (a) adaptation cycles, (b) residual power (dB), and (c) reconstruction error (dB) versus the non-linearity factor β.

that the reconstruction error and the residual error decreased with increased resolution. This can be attributed to a more precise determination of the transformation parameters with an increase in P. The next set of experiments evaluated the robustness of the proposed Σ learner to different forms of computational artifacts. In the literature, the effect of computational artifacts on the performance of Σ modulation has been studied extensively [25], [26], and several compensation methods have been proposed [27], [28], [29]. Three kinds of artifacts, consistent with other numerical studies reported in the literature, are considered in this paper: (a) mismatch and random offset errors introduced while computing the transform; (b) leakage in the Σ recursion; and (c) the non-linearity of analog computation. Mismatch and random offset errors were introduced in Σ learning by adding a random error matrix ǫ, with ‖ǫ‖ ≤ F, to the transformation matrix A. The parameter F denotes the mismatch factor and was used to quantify the system performance as shown in Fig. 11. It can be seen from Fig. 11 that the number of adaptation cycles decreases with an increase in the mismatch factor. This is consistent with several results reported in the machine learning literature, where randomization aids the convergence of a learning algorithm [7], [34]. The residual power and the reconstruction error, however, remain unchanged, demonstrating the robustness of Σ learning to mismatch artifacts.
The residual power and the reconstruction error of the second-order system showed variations of up to 2% and 1%, compared to 0.5% and 0.7% for the first-order system. The non-linearity of the matrix-vector multiplier was modeled using a compressive response in the Σ update according to

w[n] = w[n−1] + g(A[n−1], x[n−1]) − λd[n], (25)

where

g(A[n], x[n]) = 1/(1 + exp(−βA[n]x[n])) (26)

with β being a hyper-parameter that controls the shape of the non-linearity (shown in Fig. 13). Fig. 12 shows that the number of adaptation cycles decreased with an increase in the parameter β. This can be attributed to the higher gradient at the origin, which led to faster convergence and hence a smaller number of adaptation cycles. For this experiment, the maximum variation in the residual power and the reconstruction error was found to be 2.9% and 3.2% respectively, demonstrating the low sensitivity of the Σ learner to a non-linear response.

Fig. 13. Illustration of self-calibration in a Σ learner: (a) the non-linearity of the transform, modeled by a sigmoidal function (plotted for β = 1, ..., 5); (b) the learned value of the transformation parameter a_21, which tracks the inverse of the non-linear function.

To understand the robustness of the Σ learner to computational non-linearities, consider the adapted value of the matrix element a_21 shown in Fig. 13(b). The values of a_21 were obtained after convergence of the Σ learning for different values of the inputs. It can be easily verified from Fig. 13(b) that a_21 ∝ g^{-1}(·), thus compensating the non-linear effect of Eqn. (26). The final experiment evaluated the effect of signal sparsity on the Σ learning performance. For this setup, the number of channels (the dimension of the input vectors) was increased while keeping the number of independent sinusoidal signals fixed. Thus, the rank of the input signal space was always fixed to two, similar to the setting described in Fig. 6. Fig.
14 shows that as the sparsity of the input space increases (while the rank is fixed), the number of adaptation cycles reduces. The numerical results illustrate that when the input channels show a large degree of correlation (as in high-density sensing), increasing the number of input channels improves the convergence rate of learning. This shows that the learning algorithm can exploit more spatial information to successfully eliminate redundancy in the Σ modulator output with unchanged residual power. For this experiment, the worst-case variation in the reconstruction error increased to 3.5%. Therefore, when the sparsity, or the ratio D/M, increased, the adaptation cycles, and hence the fraction α in Eqn. (3), decreased as O(1/D). Thus the inequality (3) can be expressed as P_adt/P_Σ < O(1), implying that the energy-efficiency improvement of the Σ learner does not degrade with an increase in signal dimensionality. In the next section we quantify the parameters P_adt and P_Σ through measurement results obtained from a prototype Σ learner fabricated in a 0.5µm CMOS process.

Fig. 14. The effect of input sparsity on the performance of the Σ learner: (a) adaptation cycles, (b) residual power (dB), and (c) reconstruction error (dB) versus the sparsity D/M.

Fig. 16. Schematic of the binary 10-bit current-mode DAC.

V. CMOS IMPLEMENTATION OF Σ LEARNER

For the prototype implementation, the input dimension was chosen to be D = 4 due to constraints on the silicon area. The system-level architecture of the Σ learner is shown in Fig. 15. It consists of an array of analog processing units (APUs) which implements the matrix-vector operation

[y_1[n]  y_2[n]  y_3[n]  y_4[n]]^T = A[n] · [x_1[n]  x_2[n]  x_3[n]  x_4[n]]^T,

where A[n] is lower-triangular, with fixed diagonal entries and adaptive sub-diagonal entries a_21[n], a_31[n], a_32[n], a_41[n], a_42[n], a_43[n]. Each APU implements a single multiply-accumulate operation between the stored digitized representation of a_ij and the input signal x_j. The APUs also adapt the stored parameter a_ij according to Eqn. (7). Note that the APU array is organized in a lower-triangular form, which ensures that the constraint C is satisfied. Also, the APUs on the diagonal of the array do not adapt and are hence denoted by a different symbol. Each APU consists of a transconductor T_ij (see Fig. 15 inset) whose bias current is proportional to the matrix element a_ij. The bias current modulates the transconductance, and hence the output current I_OUT is proportional to the product of a_ij and the input signal x_j.
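The matrix-vector operation performed by the APU array can be sketched in a few lines (a toy model; unit diagonal entries are assumed here for illustration, since the diagonal APUs are fixed, and the matrix values are arbitrary):

```python
import numpy as np

def apu_array(A, x):
    """y = A @ x, with A constrained lower-triangular (constraint C) and a
    fixed (here: unit) diagonal, mimicking the 4x4 APU array."""
    assert np.allclose(A, np.tril(A)), "upper-triangular entries must be zero"
    assert np.allclose(np.diag(A), 1.0), "diagonal APUs do not adapt"
    return A @ x

A = np.array([[1.0,   0.0,  0.0,  0.0],
              [0.3,   1.0,  0.0,  0.0],
              [-0.2,  0.1,  1.0,  0.0],
              [0.05, -0.4,  0.25, 1.0]])
x = np.array([0.1, -0.05, 0.2, 0.0])
y = apu_array(A, x)

# A unit lower-triangular matrix is always invertible, so the inputs can be
# recovered exactly from the transformed outputs (cf. Eqn. (23)).
assert np.allclose(np.linalg.solve(A, y), x)
```

The invertibility guaranteed by the triangular structure is what allows the reconstruction experiments reported later (x_2 from d_1; x_3 from d_1 and d_2) to proceed channel by channel.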
Additions in the matrix-vector multiplication were implemented using Kirchhoff's current-summation principle at the common node y_i[n], where all outputs of the APUs are connected. The nodes y_i[n] are maintained at a virtual ground (by the Σ modulators), which ensures that the current summation is accurate. Updates of the parameter a_ij (according to Eqn. (6)) are implemented using a digitally programmable current DAC (see Fig. 15 inset). An 11-bit up-down counter stores a digital representation of a_ij, which is processed by the DAC to produce the bias current of the transconductor. Because the updates in Eqn. (7) are binary, the multiplication operator in (7) is implemented using an XNOR gate (see Fig. 15 inset). The output of the XNOR gate drives the up-down control signal of the counter. The design of the up-down counter is based on a network of D flip-flops and has been optimized in this work for area and power dissipation. The counter also incorporates a shift capability whereby its contents can be initialized and accessed through a serial-chain interface. Note that the diagonal APUs, whose parameters a_ii are non-adaptive, contain neither a counter nor a current DAC. The 10 least significant bits of the up-down counter, b_0, ..., b_9, drive a 10-bit current DAC implemented using a standard MOS resistive network [30]. The output current I_DAC then modulates the bias current of a transconductor whose schematic is shown in Fig. 17. The most significant bit b_10 of the up-down counter controls the sign (direction) of the output current, and is thus used to implement a four-quadrant multiplier. The transconductor consists of a p-MOS input stage which drives a cascoded output stage. The transconductor uses a bump circuit [31] (transistors M_B1–M_B4) for source degeneration and hence for increasing the input voltage range (or reducing the transconductance).

Fig. 17. Schematic of the source-degenerated transconductor.
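The digital update path just described can be modeled behaviorally (a sketch only; the counter width, saturation behavior, and the sign convention of the update are assumptions for illustration, not taken from the paper's schematics):

```python
class APUCounter:
    """One adaptive APU weight: an up/down counter (assumed 11 bits: a sign
    bit plus 10 DAC magnitude bits) whose count-direction input is the XNOR
    of the two 1-bit update terms of Eqn. (7)."""

    def __init__(self):
        self.count = 0                                  # signed counter state

    def update(self, d, x_sign):
        up = (d == x_sign)                              # XNOR of 1-bit inputs
        self.count += 1 if up else -1                   # sign convention assumed
        self.count = max(-1023, min(1023, self.count))  # saturate at DAC range

    def weight(self):
        return self.count / 1023.0                      # four-quadrant DAC output

apu = APUCounter()
for d, xs in [(1, 1), (1, 1), (0, 1), (1, 1)]:
    apu.update(d, xs)
assert apu.count == 2                   # three agreements, one disagreement
assert -1.0 <= apu.weight() <= 1.0
```

Because the weight changes by at most one LSB per clock, the adaptation rate and the limit-cycle amplitude are both set by the counter resolution, consistent with the role of P in Section IV.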
The bump circuit operates by steering the output current such that the transistor pair implements an equivalent resistor. The direction of the output current is controlled by the bit b_10, which

Fig. 15. Architecture of the Σ learner with D = 4.

Fig. 18. Architecture of the third-order single-loop single-bit Σ modulator.

switches the currents using transistors MS1–MS4. Compared to a voltage-mode design for the APUs, the proposed current-mode transconductor network significantly reduces the required silicon area. We estimate that the current-mode transconductor reduces the area by a factor of 5 compared to its voltage-mode counterparts designed with a 2 fF capacitive network. For the implementation of the Σ learner, a third-order modulator was chosen. Compared to a first-order modulator, the third-order modulator can achieve a higher conversion rate and hence a higher energy efficiency (for a fixed OSR) [11]. Since the outputs of the APU array are currents, the first stage of the modulator uses a current-mode continuous-time integrator. The subsequent modulator stages were implemented using voltage-mode circuits (switched-capacitor integrators) to avoid intermediate voltage-to-current conversion stages. Such hybrid Σ modulators [32] have been shown to relax design constraints on amplifier unity-gain frequency and the power budgets of continuous-time modulators, as well as the scalability of discrete-time modulators. Fig. 18(d) shows the gain parameters of the third-order hybrid (mixed-mode) Σ modulator, which were chosen to maintain the stability of the transconductor network. The first stage of the modulator is a continuous-time current-mode integrator, as shown in Fig. 18, where the reference currents through transistors M9–M12 were biased to avoid the Σ overload condition. As shown in Fig. 18, the multiplication between the digital bit d and the reference current is implemented by switching (on/off) the cascoded current source (sink). Switching at the source, as opposed to switching at the drain, has several advantages [33], as it reduces channel charge injection [33] and clock feed-through at the integration node.
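The charge balance of the first (current-mode continuous-time) stage can be illustrated with a first-order behavioral model (a sketch only; the fabricated modulator is third order, and all component values below are assumed):

```python
def sd1_current_mode(i_in, i_ref, t_clk, c_int, n_cycles=4096):
    """Behavioral model of a current-mode integrator closed in a 1-bit loop:
    v[n] = v[n-1] + (I_in - d*I_ref)*T_CLK/C_INT.  This first-order model only
    illustrates the charge balance; the prototype uses a third-order hybrid loop."""
    v, d, bits = 0.0, 1.0, []
    for _ in range(n_cycles):
        v += (i_in - d * i_ref) * t_clk / c_int   # current integrated on C_INT
        d = 1.0 if v >= 0 else -1.0               # 1-bit quantizer feedback
        bits.append(d)
    return bits

# Assumed values: 30 nA input, 100 nA reference, 4 us clock, 1 pF capacitor.
bits = sd1_current_mode(i_in=30e-9, i_ref=100e-9, t_clk=4e-6, c_int=1e-12)
duty = sum(bits) / len(bits)
assert abs(duty - 30e-9 / 100e-9) < 0.01          # mean of d tracks I_in/I_ref
```

The per-cycle swing (I_in − d·I_ref)·T_CLK/C_INT in this model is exactly the quantity that the capacitor-sizing rule of the next paragraph keeps within the second stage's input range.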
The size of the integrating capacitor (C_INT) in the first-stage modulator was chosen to avoid integrator saturation as well as to limit the integrator swing to within the accepted input range of the second-stage modulator. If the reference current of the first stage of the modulator is denoted by I_ref, the clock period by T_CLK, and the input

Fig. 19. Microphotograph of the fabricated Σ learner.

TABLE I
Measured specifications of the 4-dimensional Σ learner

Parameter | Value
Technology | 0.5 µm 2P3M CMOS
Die size | 3 mm × 3 mm (4 channels)
Supply | 3.3 V
Channels | 4
Input range | 3 mV
Input frequency bandwidth | 4 kHz
Sampling frequency | 250 kHz
Over-sampling ratio (OSR) | 32
SNR | 53.8 dB
SNDR | dB
SFDR | 6.72 dB
Total power dissipation | 0.5 mW at 250 kHz (4 channels)
Active area of the 3rd-order Σ modulator | 85 µm × 28 µm
Active area of the 4-dimensional system | 2483 µm × 876 µm

voltage range of the second stage by V_lim, then the size of the integrating capacitor is chosen according to

C_INT = I_ref · T_CLK / V_lim. (27)

The second- and third-stage modulators are switched-capacitor modulators whose single-ended version is shown in Fig. 18(c). The loop gains of the third-order hybrid Σ modulator, together with the capacitor sizes, are shown in Fig. 18(e). For all integrators, a standard folded-cascode op-amp is used, which provides an open-loop DC gain of 60 dB. This satisfies the minimum required gain of greater than twice the oversampling ratio of the Σ converter [11], needed to reduce the effects of signal leakage due to a non-ideal integrator.

VI. MEASUREMENT RESULTS

A. Circuit characterization

Fig. 19 shows the micrograph of a prototype Σ learner which was fabricated in a 0.5µm CMOS process. Table I summarizes the measured specifications of the Σ learner prototype. The SNR, SNDR and SFDR metrics are reported for a single Σ modulator whose input is driven by a single APU, with the other APUs disabled. For all the experiments reported in this section, the input voltage swing was limited to 3mV, as determined by the linear operating range of the source-degenerated transconductor. The power dissipation of each of the components of the Σ learner is summarized in Table II. The power dissipation of a single Σ modulator was measured to be P_Σ = 23µW, whereas the power dissipation of a single APU was measured to be P_adt = 5.68µW.
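Eqn. (27) is a simple charge-balance constraint and can be evaluated directly (the component values below are illustrative assumptions, not the paper's design values):

```python
def integrating_cap(i_ref, t_clk, v_lim):
    """C_INT = I_ref * T_CLK / V_lim (Eqn. 27): C_INT must absorb one clock
    period's worth of reference current without the resulting swing exceeding
    the second stage's accepted input range V_lim."""
    return i_ref * t_clk / v_lim

# Assumed values: 100 nA reference current, 4 us clock period, 0.5 V swing.
c_int = integrating_cap(i_ref=100e-9, t_clk=4e-6, v_lim=0.5)
assert abs(c_int - 0.8e-12) < 1e-18    # -> 0.8 pF
```

The formula makes the design trade-off explicit: a faster clock or a larger allowed swing shrinks C_INT, while a larger reference current (needed to avoid overload) grows it.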
Based on these measured quantities and the discussion presented in Section I, it can be seen that the energy efficiency of the Σ learner supersedes that of a conventional multi-channel data converter. In fact, we estimate that power savings of more than 6% can be achieved using the proposed Σ learning architecture. The power dissipation metrics of the independent channels are summarized in Table II, where the metric for channel j is calculated as

P_Mod + P_DC + (j − 1)(P_NDC + P_CNT) [µW], (28)

with P_Mod representing the power dissipation of the modulator, P_DC the static power dissipation, P_NDC the power dissipation of a single transconductor, and P_CNT the power dissipation of the counter and current DACs.

TABLE II
Measured power dissipation of the Σ learner

System Component | Power
3rd-order mixed-mode modulator (P_Mod) | 23 µW
Diagonal cell (P_DC) | 4.44 µW
Non-diagonal cell (P_NDC) | 6.5 µW
11-bit counter/shifter (P_CNT) | 55 µW

System Channel | Power
Channel 1 | 27.44 µW
Channel 2 | 88.94 µW
Channel 3 | 150.44 µW
Channel 4 | 211.94 µW

Fig. 20. Measured response of the source-degenerated transconductors T_11, T_22, T_33 and T_44. The mismatch between the transconductors was determined to be less than 4%.

Fig. 20 shows the measured response of the transconductors used in the APUs. The currents produced by the transconductors were measured after decimating the output of the 3rd-order modulator using a fourth-order digital sinc filter. The measured response shows a linear operating range of 3mV

Fig. 21. (a) Measured non-linearity of the current DAC (DAC output versus DAC input); (b) the value of a_21 after learning, versus the input signal x_2 (V).

Fig. 22. Measured convergence of the parameter a_21 for different magnitudes of x_2 and a fixed magnitude of x_1.

Fig. 23. A phasor diagram showing the two sinusoidal signals, represented by vectors x_1 and x_2, differing by a phase θ.

Fig. 24. The adapted value of the parameter a_21 for different magnitudes of x_2 and phase differences between the sinusoidal signals.

with a worst-case mismatch of 4%. A similar mismatch was observed in the transconductors of the non-diagonal cells. In addition to mismatch, the non-diagonal cells also exhibit a non-linear response due to the current DACs, as shown in Fig. 21(a). However, we have already shown using Monte-Carlo simulations that mismatch and a non-linear response can be compensated by the Σ learning algorithm. The self-calibrating and compensating ability of the Σ learner was verified using the following experimental set-up. A DC signal of magnitude 2mV was applied to the first channel x_1, and the DC signal applied to the second channel x_2 was varied in steps of 5mV from −2mV to +2mV. For each of these values, the counters were first initialized to the maximum value of 2047 (+1023), after which the Σ learner was run for 2 × 1024 clock cycles. Fig. 22 shows the value of the parameter a_21 obtained after each clock cycle, exhibiting a convergence response similar to that observed in the numerical simulations. Fig. 22 also shows that the system is stable under the overload condition of input signals (x_1, x_2 > ±5mV) exceeding the input range of the system. The stabilized values of the transform parameter a_21 after adaptation, for all DC inputs x_2, are plotted in Fig. 21(b).
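The DC self-calibration experiment can be mimicked with a behavioral sign-sign adaptation loop (a sketch; the step size, the signal scaling, and the tanh stand-in for the DAC non-linearity are assumptions, not the measured circuit behavior):

```python
import numpy as np

def converge_a21(x1, x2, g=lambda u: u, mu=1e-3, steps=8000, a_max=1.0):
    """Sign-sign adaptation of a21 for DC inputs.  The second channel sees the
    residual x2 + g(a21*x1); adaptation drives it toward zero.  g models the
    (possibly non-linear) DAC/multiplier response; identity = ideal multiplier."""
    a21 = a_max                              # counter initialized to full scale
    for _ in range(steps):
        r = x2 + g(a21 * x1)                 # residual seen by modulator 2
        a21 = np.clip(a21 - mu * np.sign(x1) * np.sign(r), -a_max, a_max)
    return a21

x1 = 0.2
# Linear multiplier: the converged weight tracks -x2/x1, as in Fig. 22.
for x2 in (-0.15, -0.05, 0.05, 0.15):
    assert abs(converge_a21(x1, x2) + x2 / x1) < 0.02

# Overload (|x2/x1| > 1) merely saturates the counter; the loop stays stable.
assert converge_a21(x1, 0.5) == -1.0

# Compressive non-linearity g: the weight settles where g(a21*x1) cancels x2,
# i.e. at the inverse of the non-linearity -- the self-calibration property.
g = lambda u: np.tanh(3.0 * u)
a = converge_a21(x1, 0.1, g=g)
assert abs(g(a * x1) + 0.1) < 0.02           # residual still driven to zero
```

The limit-cycle amplitude after convergence is one adaptation step (mu), which is the behavioral analogue of the one-LSB bound set by the counter resolution.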
The response is the inverse function of the DAC non-linearity, showing that the Σ learner is able to compensate for the non-linearity of the DAC, similar to the numerical experiments reported in Fig. 12. The next set of experiments was used to verify the ability of the Σ learner to identify redundant Σ modulator paths. For this experiment, two sinusoidal inputs of the same

Fig. 25. The residual power on the second channel as the phase difference between the two signals is varied from 0° to 180° (for x_2 = 0.8·x_1 and x_2 = 0.9·x_1).

Fig. 26. Signal power in each channel before and after adaptation: (a)–(c) power spectral density of channels 1–3 with learning disabled; (d)–(f) with learning enabled.

Fig. 27. The reconstructed signal power, versus the phase difference θ, for (a) the 2nd channel and (b) the 3rd channel.

frequency but different phase were presented as inputs to the Σ learner. For the sake of convenience, the two sinusoidal signals are shown using the phasor diagram of Fig. 23. The magnitude of each sinusoid is represented by the length of its vector, and the phase difference between the sinusoids is represented by θ. The phase and the magnitude of the second sinusoidal signal x_2 were varied with respect to the first, x_1. Fig. 24 shows the adapted value of a_21 as the phase difference was varied from 0° to 180°. It can be clearly seen that the parameter a_21 tracks the phase difference of the two signals and is minimum (zero) when the phase difference is 90°. The non-linear response of a_21 is due to the non-linearity of the DAC. The residual power at the output of the second modulator is shown in Fig. 25. It can be seen that the output is attenuated by more than 5dB when the phase difference is 0° and 180°. This demonstrates that the Σ learner can identify the non-redundant subspace (whose rank is unity when the phasors are aligned with respect to each other) at θ = 0° and θ = 180°. Fig. 26 shows the frequency-domain response of the Σ learner when adaptation is disabled and when it is enabled. The inputs of the system were driven with a 1 kHz sinusoidal signal and the FFT response of the Σ modulator outputs was observed. Plots (a), (b) and (c) of Fig. 26 show the power spectral density of the three channels when learning is disabled; Fig. 26(d), (e) and (f) show the FFT response of the Σ learner when learning is enabled.
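The noise-shaping visible in Fig. 26 can be reproduced qualitatively with a behavioral model (a first-order, not third-order, modulator is used here for brevity, and all parameters are assumed):

```python
import numpy as np

def sd_bits(x):
    """First-order Σ bit-stream (behavioral; the prototype uses a 3rd-order
    hybrid loop, which shapes the noise even more aggressively)."""
    v, d = 0.0, 1.0
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        v += xi - d                      # integrate the quantization error
        d = 1.0 if v >= 0 else -1.0      # 1-bit quantizer
        out[i] = d
    return out

n = 1 << 14
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t * 32 / n)          # in-band tone at FFT bin 32
psd = np.abs(np.fft.rfft(sd_bits(x) * np.hanning(n))) ** 2

# Noise shaping pushes quantization noise to high frequencies: the in-band
# noise floor (away from the tone) sits far below the out-of-band floor.
inband = np.median(psd[2:30])
outband = np.median(psd[n // 4 : n // 2])
assert outband > 10 * inband
```

A decimation (sinc) filter, as used in the measurements, simply discards the shaped out-of-band portion of this spectrum.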
In addition to exhibiting the expected noise-shaping characteristics, the plots show that the Σ learner indeed suppresses the cross-channel redundancy. It must be noted that the signal transformation is a linear operation, which can be seen from the absence of harmonics in the de-correlated signal plots of Fig. 26(d), (e) and (f). To validate that the output of the Σ learner can be used for reconstructing the input, the input signals were reconstructed using the modulator outputs and the digitized representation of the transform matrix A. The reconstructions of the 2nd and the 3rd channel, as a function of the phase difference between signals x_1 and x_2, are shown in Fig. 27(a) and (b) respectively. The signal x_2 was reconstructed with the help of the transform parameter a_21 and the modulator output d_1 of the first channel, whereas the signal x_3 was reconstructed with the help of the parameters a_31, a_32 and the modulator outputs d_1 and d_2. It can be seen that a reconstruction error of less than 0.5 LSB can be achieved. This result demonstrates that the learned compression manifold A faithfully captures the support of the input data, which is consistent with the Monte-Carlo simulations.

VII. DISCUSSIONS AND CONCLUSIONS

In this paper, we have presented an optimization framework that integrates manifold learning with Σ modulation. The framework produces not only a quantized sequence of transformed analog signals but also a quantized representation of the transform itself. The approach was shown to be applicable to higher-order modulation and to different forms of analog signal compression. It was shown, through extensive Monte-Carlo simulations and results obtained from a fabricated prototype, that the proposed algorithm is robust to the computational artifacts introduced by analog computation, including mismatch and non-linearity, demonstrating that the approach could be effectively used for designing high-dimensional analog-to-digital converters.
Future work in this area will entail optimizing the analog transforms in terms of area and power. One possible method is to incorporate sub-microwatt matrix-vector multipliers, as reported in [35], or to use charge-pump circuits to eliminate the area-consuming counters. Reducing the area of the analog transforms is important because it allows more input channels (D) to be accommodated. We have already shown, using equation (3), that increasing D improves the energy efficiency of the Σ learner relative to a conventional architecture. Future work will also include extending Σ learning to multi-bit continuous-time modulators [36] and to neurally inspired modulators, including time-encoding machines [37].

APPENDIX I

Let P_Σ denote the power dissipation of a single Σ modulator and P_adt the power dissipation required for mixed-signal multiplication and adaptation. During the first phase of the Σ learner operation (a fraction α of the total time), all D modulators and D^2 adaptation elements are active. Once the subspace of dimension M has been identified, during the rest of the operation (a fraction 1 − α of the operational time) only M Σ modulators and MD matrix elements are active. The total power dissipation P_STL for operating the Σ learner is given by

P_STL = α(D·P_Σ + D^2·P_adt) + (1 − α)(M·P_Σ + MD·P_adt). (29)

The estimated power dissipation for a conventional D-channel data acquisition system (shown in Fig. 3(b)) is given by

P_conv = D·P_Σ + MD·P_DSP, (30)

where P_DSP refers to the power dissipation of a single DSP operation used for estimating the transform A. For the sake of simplicity we will ignore the power dissipation due to the DSP. To achieve superior power efficiency compared to the conventional system, the relationship P_STL < P_conv needs to be satisfied, which leads to

P_adt / P_Σ < (1 − α)(1 − M/D) / (D[α + (1 − α)M/D]). (31)

For M ≪ D and α ≪ 1, the relative power dissipation of the mixed-signal adaptation needs to satisfy

P_adt / P_Σ < 1 / (D(α + M/D)),

which proves the inequality (3).

APPENDIX II

To prove the claim in Lemma 2, that S = |w[n−1] − λd[n]| < λ given that |w[n−1]| < 2λ, we use the relationship d[n] = sgn(w[n−1]). The following relationships prove the claim:

S = |w[n−1] − λ sgn(w[n−1])| (32)
  = |sgn(w[n−1])| · ||w[n−1]| − λ| (33)
  ≤ |sgn(w[n−1])| λ (34)
  = λ, (35)

where (34) follows from |w[n−1]| < 2λ and where we have used the equality |sgn(w[n−1])| = 1.
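The power trade-off of Appendix I can be spot-checked numerically (a sketch; the values of D, M, and α below are arbitrary):

```python
def p_stl(D, M, alpha, p_sd, p_adt):
    """Eqn. (29): all D paths active during the adaptation fraction alpha,
    then only the M-dimensional subspace for the remaining 1 - alpha."""
    return (alpha * (D * p_sd + D**2 * p_adt)
            + (1 - alpha) * (M * p_sd + M * D * p_adt))

def p_conv(D, p_sd):
    """Eqn. (30) with the DSP term ignored: D always-on converters."""
    return D * p_sd

def adt_budget(D, M, alpha):
    """Right-hand side of Eqn. (31): the largest P_adt/P_sd ratio for which
    P_STL < P_conv still holds."""
    return (1 - alpha) * (1 - M / D) / (D * (alpha + (1 - alpha) * M / D))

D, M, alpha = 8, 2, 0.05
bound = adt_budget(D, M, alpha)
# Just inside the budget the learner wins; just outside, it loses.
assert p_stl(D, M, alpha, 1.0, 0.99 * bound) < p_conv(D, 1.0)
assert p_stl(D, M, alpha, 1.0, 1.01 * bound) > p_conv(D, 1.0)
```

Increasing D (at fixed M and with α shrinking as O(1/D)) raises the budget's denominator only to O(1), which is the dimensionality-scaling argument made at the end of Section IV.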
APPENDIX III

To prove the expression (22), we start with the recursion for a second-order Σ learner:

w[n] = w[n−1] + (A[n−1]x[n−1] − λd[n]) + w[n−1] − w[n−2]. (36)

Let v[n] = w[n] − w[n−1]; the recursion can then be written as

v[n] = v[n−1] + (A[n−1]x[n−1] − λd[n]). (37)

Assuming the initial condition v[0] = 0 and solving the discrete-time recursion leads to

(λ/N) Σ_{n=1}^{N} d[n] = (1/N) Σ_{n=1}^{N} A[n]x[n] − v[N]/N. (38)

Since v[n] is finite and bounded, asymptotically we get

(λ/N) Σ_{n=1}^{N} d[n] = (1/N) Σ_{n=1}^{N} A[n]x[n]. (39)

Taking the empirical expectation E_n{·} of both sides gives

E_n{d[n]} = (1/λ) E_n{A[n]x[n]}. (40)

Substituting v[n] = w[n] − w[n−1] we get

E_n{E_n{d[n]}} = (1/λ) E_n{A[n]x[n]} + w[N]/N. (41)

The finite and bounded nature of w[n] gives

E_n{E_n{d[n]}} = (1/λ) E_n{A[n]x[n]}. (42)

REFERENCES

[1] K.D. Wise, D. J. Anderson, J. F. Hetke, D. R. Kipke, and K. Najafi, "Wireless implantable microsystems: High-density electronic interfaces to the nervous system," Proceedings of the IEEE, vol. 92, no. 1, pp. 38-45, Jan. 2004.
[2] C. T. Nordhausen, E. M. Maynard, and R. A. Normann, "Single unit recording capabilities of a 100-microelectrode array," Brain Research, vol. 726, pp. 294, 1996.
[3] R. R. Harrison, P. T. Watkins, R. J. Kier, R. O. Lovejoy, D. J. Black, B. Greger, and F. Solzbacher, "A Low-Power Integrated Circuit for a Wireless 100-Electrode Neural Recording System," IEEE Journal of Solid-State Circuits, vol. 42, no. 1, pp. 123-133, Jan. 2007.
[4] J. E. Greenberg and P. M. Zurek, "Microphone array hearing aids," in Microphone Arrays: Signal Processing Techniques and Applications. Berlin: Springer, 2001.
[5] R.N. Miles and R.R. Hoy, "The development of a biologically-inspired directional microphone for hearing aids," Audiology and Neuro-Otology, vol. 11, no. 2, pp. 86-94, 2006.
[6] D. W. Henderson, Experiencing Geometry: In Euclidean, Spherical, and Hyperbolic Spaces. Prentice-Hall, 2001.
[7] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edition. Prentice Hall, 1998.
[8] T. Kohonen, Self-Organizing Maps. Springer, 2001.
[9] S.T.
Roweis and L.K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, Dec. 2000.
[10] D. Donoho and C. Grimes, "Hessian eigenmaps: new tools for nonlinear dimensionality reduction," in Proceedings of the National Academy of Sciences, 2003.
[11] J. C. Candy and G. C. Temes, "Oversampled methods for A/D and D/A conversion," in Oversampling Delta-Sigma Data Converters. Piscataway, NJ: IEEE Press, 1992.
[12] C. Bishop, Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[13] G. Cauwenberghs and M. Bayoumi, Learning on Silicon. Boston: Kluwer Academic Publishers, 1999.
[14] D.L. Donoho, "Compressed Sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.
[15] J.N. Laska, S. Kirolos, M.F. Duarte, T.S. Ragheb, R.G. Baraniuk, and Y. Massoud, "Theory and implementation of an analog-to-information converter using random demodulation," in IEEE International Symposium on Circuits and Systems (ISCAS), May 2007.
[16] A. Gore and S. Chakrabartty, "Large margin analog-to-digital converters with applications in neural prosthetics," in Advances in Neural Information Processing Systems (NIPS), 2006.
[17] V. Vapnik, The Nature of Statistical Learning Theory. New York: Springer-Verlag, 1995.

[18] F. Girosi, M. Jones, and T. Poggio, "Regularization Theory and Neural Networks Architectures," Neural Computation, vol. 7, 1996.
[19] D. Fudenberg and D. Levine, The Theory of Learning in Games. MIT Press, 1998.
[20] T. Basar and P. Bernhard, H∞ Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Springer, 1995.
[21] W. Rudin, Functional Analysis. New York: McGraw-Hill, 1991.
[22] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[23] D.E. Rumelhart, G.E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in D.E. Rumelhart and J.L. McClelland (Eds.), Parallel Distributed Processing, vol. 1. Cambridge, MA: MIT Press, 1986.
[24] B. Pearlmutter, "Gradient descent: second order momentum and saturating error," in Advances in Neural Information Processing Systems (NIPS), 1992.
[25] P.M. Aziz, H.V. Sorensen, and J. van der Spiegel, "An overview of sigma-delta converters," IEEE Signal Processing Magazine, vol. 13, no. 1, pp. 61-84, Jan. 1996.
[26] I. Galton and H. T. Jensen, "Delta-Sigma Modulator Based A/D Conversion without Oversampling," IEEE Transactions on Circuits and Systems II, vol. 42, no. 12, Dec. 1995.
[27] I. Galton and H.T. Jensen, "Oversampling parallel delta-sigma modulator A/D conversion," IEEE Transactions on Circuits and Systems II, vol. 43, no. 12, pp. 801-810, Dec. 1996.
[28] R. D. Batten, A. Eshraghi, and T. S. Fiez, "Calibration of parallel Σ ADCs," IEEE Transactions on Circuits and Systems II, vol. 49, no. 6, June 2002.
[29] V. Ferragina, A. Fornasari, U. Gatti, P. Malcovati, and F. Maloberti, "Gain and offset mismatch calibration in time-interleaved multipath A/D sigma-delta modulators," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 12, Dec. 2004.
[30] B. Linares-Barranco and T.
Serrano-Gotarredona, "On the design and characterization of femtoampere current-mode circuits," IEEE Journal of Solid-State Circuits, vol. 38, no. 8, Aug. 2003.
[31] P.M. Furth and H.A. Ommani, "Low-voltage highly-linear transconductor design in subthreshold CMOS," in Proceedings of the 40th Midwest Symposium on Circuits and Systems, Aug. 1997, vol. 1, pp. 156-159.
[32] S.D. Kulchycki, R. Trofin, K. Vleugels, and B.A. Wooley, "A 77-dB Dynamic Range, 7.5-MHz Hybrid Continuous-Time/Discrete-Time Cascaded Sigma-Delta Modulator," IEEE Journal of Solid-State Circuits, vol. 43, no. 4, Apr. 2008.
[33] G. Cauwenberghs and V. Pedroni, "A Charge-Based CMOS Parallel Analog Vector Quantizer," in Advances in Neural Information Processing Systems (NIPS*94), vol. 7. Cambridge, MA: MIT Press, 1995.
[34] I. H. Witten and E. Frank, Data Mining. Academic Press, 2005.
[35] S. Chakrabartty and G. Cauwenberghs, "A Sub-microwatt Analog VLSI Trainable Pattern Classifier," IEEE Journal of Solid-State Circuits, vol. 42, no. 5, May 2007.
[36] J. De Maeyer, P. Rombouts, and L. Weyten, "Efficient multibit quantization in continuous-time Σ modulators," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 54, no. 4, Apr. 2007.
[37] A.A. Lazar, E. K. Simonyi, and L.T. Toth, "An Overcomplete Stitching Algorithm for Time Decoding Machines," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 55, no. 9, Oct. 2008.

Shantanu Chakrabartty (M'96) received the B.Tech degree from the Indian Institute of Technology, Delhi, in 1996, and the M.S. and Ph.D. degrees in Electrical Engineering from Johns Hopkins University, Baltimore, MD, in 2000 and 2004, respectively. He is currently an assistant professor in the Department of Electrical and Computer Engineering at Michigan State University. He was previously with Qualcomm Incorporated, San Diego, and during 2002 he was a visiting researcher at the University of Tokyo.
His current research interests include low-power analog and digital VLSI systems and the hardware implementation of machine learning algorithms, with applications to biosensors and biomedical instrumentation. Dr. Chakrabartty was a recipient of the Catalyst Foundation fellowship and received the best undergraduate thesis award in 1996. He is currently a member of the IEEE BioCAS technical committee and the IEEE Circuits and Systems Sensors technical committee, and serves as an associate editor for Advances in Artificial Neural Systems (Hindawi).

Amit Gore received the Bachelor's (B.E.) degree in instrumentation engineering from the University of Pune, Pune, India, in 1998, and the M.S. and Ph.D. degrees in electrical and computer engineering from Michigan State University, East Lansing, in 2002 and 2008, respectively. He is currently with General Electric Global Research, Niskayuna, New York. His research interests are low-power sigma-delta converters, analog signal processing and low-power mixed-signal VLSI design.


More information

Need for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels

Need for Deep Networks Perceptron. Can only model linear functions. Kernel Machines. Non-linearity provided by kernels Need for Deep Networks Perceptron Can only model linear functions Kernel Machines Non-linearity provided by kernels Need to design appropriate kernels (possibly selecting from a set, i.e. kernel learning)

More information