FUNCTIONALS OF BROWNIAN BRIDGES ARISING IN THE CURRENT MISMATCH IN D/A CONVERTERS


Probability in the Engineering and Informational Sciences, 23, 2009. Printed in the U.S.A.

MARKUS HEYDENREICH AND REMCO VAN DER HOFSTAD
Department of Mathematics and Computer Science, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands

GEORGI RADULOV
Department of Electrical Engineering, Eindhoven University of Technology, EH 5.15, 5600 MB Eindhoven, The Netherlands

Digital-to-analog converters (DACs) transform signals from the abstract digital domain to the real analog world. In many applications, DACs play a crucial role. Due to variability in production, various errors arise that influence the performance of the DAC. We focus on the current errors, which describe the fluctuations in the currents of the various unit current elements in the DAC. A key performance measure of the DAC is the integrated nonlinearity (INL), which we study in this article. There are several DAC architectures; the most widely used are the thermometer, the binary, and the segmented architectures. We study the two extreme architectures, namely the thermometer and the binary architectures. We assume that the current errors are independent and identically normally distributed and reformulate the INL as a functional of a Brownian bridge. We then proceed by investigating these functionals. For the thermometer case, the functional is the maximal absolute value of the Brownian bridge, which has been investigated in the literature. For the binary case, we investigate properties of the functional, such as its mean, variance, and density.

© 2009 Cambridge University Press

150 M. Heydenreich, R. van der Hofstad, and G. Radulov

1. CURRENT MISMATCH IN DIGITAL-TO-ANALOG CONVERTERS

Digital-to-analog converters (DACs) transform signals from the abstract digital domain to the real analog world. For many applications, this conversion enables the usage of the computational power of robust digital electronics; for example, digital audio and video, digital control, and telecommunications are fields that require digital-to-analog conversion. The advantageous intelligence of the applications in these fields is implemented with digital logic (e.g., microprocessors) and used via digital-to-analog (D/A) conversion in the real analog world. However, the DAC errors, at the end of the application chain, might decrease the performance of the whole system. Therefore, predicting and controlling these errors is crucial. This requirement is further emphasized in highly integrated mixed-signal systems-on-a-chip (SoC), as we now explain. Currently, SoC solutions often integrate DAC functionality together with the digital logic for cost-effectiveness. This requires optimal usage of the DAC resources while keeping the errors of the DAC within specified margins, primarily for the two following reasons. First, the price of the mixed-signal SoC includes the price of the DAC resources even when customers are not interested in the DAC functionality. Second, if the DAC does not comply with its specifications, then the entire SoC chip must be discarded. With a careful design, the DAC errors mainly arise from the uncertainty in the manufacturing process and, hence, they are random. Therefore, statistical rules are used to predict the overall performance for high-volume chip production. Knowledge is required that accurately links the DAC resources with the DAC error margins. An important example of such a relationship is the dependence of the D/A conversion accuracy on the DAC area (i.e., the silicon area of the chip that is used for the DAC).
Higher conversion accuracy is achieved for larger chip areas [2,14]. However, too large areas introduce additional problems that might degrade the conversion accuracy. Thus, precise knowledge and understanding of the relationship between accuracy and chip area is crucial, particularly for high-volume chip systems including DACs. For a general introduction to DACs, we refer to [2,9,18], whereas Razavi [17] also focuses on technical aspects.

The D/A conversion is carried out by switching certain analog quantities, such as voltages, currents, or charges, ON or OFF. For the sake of simplicity, and without loss of generality, this article will assume current as the basic analog quantity. The switching process is controlled by the digital input signal $w \in \{0,1\}^N$, where $N$ is the length of the binary input signal. If all combinations of 0's and 1's between the digital bits produce valid input signals [i.e., $N = \log_2(n+1)$], then the coding is called binary and $N$ is called the resolution of the DAC. A DAC that uses binary coding to control its current quantities is called a binary DAC. The switched-ON current quantities $I_{u_j}$ are summed to construct the analog output signal (current) $I_{\mathrm{out}}$. We will assume that the current quantities are random and that $\{I_{u_j}\}_{j=1}^{n}$ are independent and identically distributed (i.i.d.) random variables. The sum of all current quantities

$$I_{\mathrm{out}_{\max}} = \sum_{j=1}^{n} I_{u_j} \qquad (1.1)$$

FUNCTIONALS OF BROWNIAN BRIDGES 151

is associated to the maximal digital input word $w_n = (1,1,\ldots,1)$ and is called the full-scale current of the DAC. The smallest meaningful difference in the analog output, the output least significant bit (LSB), is defined as the full-scale current divided by the number of digital input codes; that is,

$$I_{\mathrm{lsb}} := \frac{I_{\mathrm{out}_{\max}}}{n}. \qquad (1.2)$$

For general DACs, errors can be classified as static or dynamic. We will focus on the static errors, which we now introduce. For every digital input word $w_k \in \{0,1\}^N$, there is an analog output value $I_{\mathrm{out}_k}$. The code words $w_k \in \{0,1\}^N$ will be assumed to be ordered. The difference between two adjacent output values would ideally be $\bar I_u = \mathbb{E}[I_{u_j}]$, but, in practice, it deviates due to mismatch errors coming from the uncertainty of the manufacturing process; that is,

$$I_{\mathrm{out}_k} - I_{\mathrm{out}_{k-1}} = \bar I_u + \bigl(I_{\mathrm{lsb}} - \bar I_u\bigr) + \mathrm{DNL}_k\, I_{\mathrm{lsb}}, \qquad k = 1,\ldots,n. \qquad (1.3)$$

Here, $I_{\mathrm{lsb}} - \bar I_u$ represents the linear error, which is independent of $k$, and

$$\mathrm{DNL}_k = \frac{I_{\mathrm{out}_k} - I_{\mathrm{out}_{k-1}} - I_{\mathrm{lsb}}}{I_{\mathrm{lsb}}} \qquad (1.4)$$

is the differential nonlinearity in $I_{\mathrm{lsb}}$ scale. As DACs are required to be linear devices, the nonlinear errors are the main concern. The integrated nonlinearity (INL) measures the nonlinearity of the whole DAC transfer characteristic (i.e., the cumulated individual nonlinear errors). This article concentrates on the INL, defined for a code $w_k$ by

$$\mathrm{INL}_k := \sum_{i=1}^{k} \mathrm{DNL}_i = \frac{I_{\mathrm{out}_k} - k I_{\mathrm{lsb}}}{I_{\mathrm{lsb}}}, \qquad k = 1,\ldots,n. \qquad (1.5)$$

This definition excludes the linear errors by forcing $\mathrm{INL}_0 = \mathrm{INL}_n = 0$ and normalizes $\mathrm{INL}_k$ to LSB scale. The maximal absolute INL over all codes is given by

$$\mathrm{INL}_{\max} := \max_{k=0,\ldots,n} |\mathrm{INL}_k|. \qquad (1.6)$$

The statistic $\mathrm{INL}_{\max}$ is an important specification, because it indicates how linear the DAC is. Another important figure is $\mathrm{Yield}_{\mathrm{INL}}$, which indicates how many of the manufactured chips have $\mathrm{INL}_{\max}$ under a certain specified threshold.
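The static error measures just introduced are straightforward to evaluate numerically. The following Python sketch (all parameters, such as resolution, unit current, and matching, are illustrative assumptions, not values from this article) draws i.i.d. normal unit currents, forms the output values as cumulative sums (anticipating the thermometer architecture introduced below), and computes $I_{\mathrm{lsb}}$, $\mathrm{DNL}_k$, $\mathrm{INL}_k$, and $\mathrm{INL}_{\max}$ as in (1.2)-(1.6):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the article).
N = 10                      # resolution in bits
n = 2**N - 1                # number of digital input codes
I_u_bar, sigma_u = 1.0, 0.01

I_u = rng.normal(I_u_bar, sigma_u, size=n)            # unit currents I_{u_j}
I_out = np.cumsum(I_u)                                # output values I_{out_k}
I_lsb = I_out[-1] / n                                 # Eq. (1.2): full scale / n

DNL = (np.diff(I_out, prepend=0.0) - I_lsb) / I_lsb   # Eq. (1.4)
INL = np.cumsum(DNL)                                  # Eq. (1.5)
INL_max = np.max(np.abs(INL))                         # Eq. (1.6)

print(INL_max)              # maximal absolute INL in LSB units
```

By construction $\mathrm{INL}_n = 0$, which the code reproduces up to floating-point error.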
In other words, if a DAC manufacturer guarantees a certain linearity, then $\mathrm{Yield}_{\mathrm{INL}}$ describes what proportion of the produced chips falls within these specifications. Integrated nonlinearity is the most popular DAC specification; see, for example, [9,18]. Its practical importance is very high in all DAC application fields. This general

definition of INL is valid for all DAC architectures and all DAC resolutions. The most commonly used architectures are the binary, segmented, and thermometer architectures. The segmented architecture interpolates between the binary and the thermometer architectures. In this article, we consider the two extreme cases of thermometer and binary DAC architectures. We now describe these DAC architectures. For a thermometer DAC (see Fig. 1), $I_{\mathrm{out}_k} = I^{T}_{\mathrm{out}_k}$ is given by

$$I^{T}_{\mathrm{out}_k} = \sum_{j=1}^{k} I_{u_j}, \qquad k = 0,\ldots,n. \qquad (1.7)$$

For a binary DAC (see Fig. 2), on the other hand, $I_{\mathrm{out}_k} = I^{B}_{\mathrm{out}_k}$ is given by

$$I^{B}_{\mathrm{out}_k} = \sum_{m=1}^{N} B_{k,m} \sum_{j=2^{m-1}}^{2^m-1} I_{u_j}, \qquad k = 0,\ldots,n, \qquad (1.8)$$

FIGURE 1. Thermometer DAC. The codeword $w_k$ corresponds to switching $T_{k,i}$ ON for $i \le k$ and switching $T_{k,i}$ OFF for $i > k$.

FIGURE 2. Binary DAC. The codeword $w_k$ corresponds to switching those $i$ for which $B_{k,i} = 1$ ON, whereas the $i$ for which $B_{k,i} = 0$ are switched OFF. The matrix $B$ is given in (1.9).

where the switching matrix $B$ is an $n \times N$ matrix given by

$$B = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ 1 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & 1 & 1 & \cdots & 1 \end{pmatrix}. \qquad (1.9)$$

The matrix $B$ is constructed by writing the first $n$ integers in reversed-order binary coding into the rows of $B$. Note that the columns of the matrix represent the DAC bits (i.e., the switches of the grouped current sources). The 0's represent switched-OFF currents, and the 1's represent switched-ON currents. The leftmost column gives the switches for the LSB current $I^{B}_{\mathrm{out}_1} = I_{u_1}$, whereas the rightmost column gives the switches for the most significant bit (MSB) current $I^{B}_{\mathrm{out}_{2^{N-1}}} = \sum_{j=2^{N-1}}^{2^N-1} I_{u_j}$. Note that

$$I^{B}_{\mathrm{out}_n} = \sum_{m=1}^{N} B_{n,m} \sum_{j=2^{m-1}}^{2^m-1} I_{u_j} = \sum_{j=1}^{n} I_{u_j} = n I_{\mathrm{lsb}}, \qquad (1.10)$$

so that $\mathrm{INL}_n = 0$.

In practice, the most popular DAC architecture is the segmented one. The segmented DAC architectures implement one part of the input digital bits in a binary way and the other part in a thermometer way. That is how the advantages of both the binary and the thermometer architectures can be combined. Figure 3 shows a segmented DAC architecture. The LSB part is implemented in a binary way and the MSB part is implemented in a thermometer way.

FIGURE 3. Segmented DAC.

The output
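The switching matrix (1.9) and the full-scale identity (1.10) can be checked with a small numerical sketch (Python; the 4-bit size and the mismatch parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 4                      # illustrative resolution
n = 2**N - 1

# Eq. (1.9): row k of B is the binary expansion of k, least significant bit first.
B = np.array([[(k >> m) & 1 for m in range(N)] for k in range(1, n + 1)])

I_u = rng.normal(1.0, 0.01, size=n)

# Block m (1-based) groups the unit currents j = 2^{m-1}, ..., 2^m - 1.
blocks = np.array([I_u[2**m - 1 : 2**(m + 1) - 1].sum() for m in range(N)])

I_out_B = B @ blocks       # binary outputs, Eq. (1.8)
I_out_T = np.cumsum(I_u)   # thermometer outputs, Eq. (1.7)

# Eq. (1.10): both architectures agree at full scale.
print(I_out_B[-1], I_out_T[-1])
```

The same unit currents are used in both architectures; only the grouping differs, which is exactly the point made below.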

current for a given level of segmentation $S$ is expressed by

$$I^{S}_{\mathrm{out}_k} = \sum_{m=1}^{N-S} B_{k,m} \sum_{j=2^{m-1}}^{2^m-1} I_{u_j} \;+\; \sum_{m=1}^{\lfloor k/2^{N-S}\rfloor}\ \sum_{j=m\,2^{N-S}}^{(m+1)2^{N-S}-1} I_{u_j}. \qquad (1.11)$$

The parameter $S \in \{0,\ldots,N\}$ determines the interpolation between the binary and the thermometer parts of the DAC architecture. Indeed, for $S = 0$, the segmented DAC architecture is transformed to a fully binary architecture. On the other hand, for $S = N$, the segmented DAC architecture is transformed to a fully thermometer architecture. Note that in both extreme cases, the same unit current sources $I_{u_j}$ are used. Thus, the performance difference is only due to the way the $I_{u_j}$ are combined to construct the output current $I_{\mathrm{out}_k}$. In the binary DAC, the unit currents are first grouped and then switched ON or OFF, whereas in the thermometer DAC, the unit currents are individually switched ON or OFF. More detailed discussions on DAC architectures can be found in the literature [9,18,19].

As discussed, due to manufacturing-related mismatch, the currents $I_{u_j}$ always deviate from their designed values; that is, $I_{u_j} = \bar I_u + \varepsilon_j$. The mean value $\bar I_u$ is chosen by the DAC designer, whereas $\varepsilon_j$ is the random error due to the manufacturing process, which we model as an i.i.d. sequence of normal random variables with zero mean and variance $\sigma_u^2$. Nevertheless, our results remain valid when the $\{\varepsilon_j\}$ are i.i.d. with sufficiently many moments. The ratio $\sigma_u/\bar I_u$ is known as the relative current matching (i.e., the unit current matching). The relative matching determines the required transistor area for the particular manufacturing process. The smaller $\sigma_u/\bar I_u$, the more accurate $I_{u_j}$, but the larger the required area. For more details on the dependence between relative matching and transistor area, we refer to [14]. Once $\sigma_u/\bar I_u$ is specified, the required transistor area can be calculated.
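A sketch of the segmented output (1.11) in Python (the choice $N = 6$, $S = 2$ and the mismatch level are hypothetical): the lower $N-S$ bits are binary-coded, while each additional thermometer segment switches a block of $2^{N-S}$ unit currents.

```python
import numpy as np

rng = np.random.default_rng(5)

N, S = 6, 2                # illustrative resolution and segmentation level
n = 2**N - 1
nb = N - S                 # number of binary-coded bits
I_u = rng.normal(1.0, 0.01, size=n)

def I_out_segmented(k):
    """Output current for code k, following Eq. (1.11)."""
    out = 0.0
    for m in range(1, nb + 1):             # binary part: units 1 .. 2^{N-S} - 1
        if (k >> (m - 1)) & 1:
            out += I_u[2**(m - 1) - 1 : 2**m - 1].sum()
    for m in range(1, k // 2**nb + 1):     # thermometer part: blocks of 2^{N-S}
        out += I_u[m * 2**nb - 1 : (m + 1) * 2**nb - 1].sum()
    return out

# The full-scale current is the sum of all unit currents for every S.
print(I_out_segmented(n), I_u.sum())
```

Setting S = 0 or S = N reduces the function to the binary or the thermometer output, respectively, as stated in the text.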
However, the relationship between $\sigma_u/\bar I_u$ and the DAC nonlinearity, which is crucial to determine the proportion of chips complying with specifications, has never been determined analytically. So far, DAC engineers have used either Monte Carlo simulations or approximations. The Monte Carlo simulations of a DAC model produce empirical results that suggest design specifications for $\sigma_u/\bar I_u$. Although not accurate, this approach is very practical; see [5]. A problem arises for the design of a high-resolution DAC, for which $N$ is large. The complexity of the DAC model increases by a factor of 2 for every additional bit in resolution; for example, for $N = 14$, $n = 2^{14} - 1 = 16{,}383$ unit elements have to be simulated. Therefore, Monte Carlo simulations are not practical for higher resolutions because they become complex and slow.

On the other hand, a number of analytical approximations can be found in the literature. The analytical attempts to describe the INL, and in particular $\mathrm{INL}_{\max}$, go back to the approach in [11], which disregards the correlation between the DAC outputs for different input codes. Bastos [2] proposed a much simpler formula, which considers only the deviation of the transfer characteristic at the midscale DAC output; this can be a rough, though simple, estimation of $\mathrm{INL}_{\max}$. Another approximation was given

by van den Bosch, Steyaert, and Sansen [4] by assuming that if the DAC static transfer characteristic at any code has an INL error equal to the target value (e.g., $\mathrm{INL}_k = 0.5\,I_{\mathrm{lsb}}$), then there should be a 50% chance that ultimately $\mathrm{INL}_{\max}$ is smaller than the target value (i.e., $|\mathrm{INL}_k| \le 0.5\,I_{\mathrm{lsb}}$ for all $k$). The major approximation inaccuracy lies in the probability that both the positive and the negative INL limits are reached for the same DAC sample; for example, $\mathrm{INL}_k < -0.5\,I_{\mathrm{lsb}}$ is disregarded. Although this approach derives a convenient normal distribution for $\mathrm{INL}_{\max}$, it is inaccurate for higher resolutions, as we show in more detail in this article. In general, approximations lead to transistor overdesign (i.e., a transistor area that is too large). In this article, on the other hand, we will present an exact analytical formula, for which no approximation is necessary.

Due to the lack of an exact analytical formulation of the INL and the high complexity of DAC model simulations, the statistics used in industry for high-volume chip production are hard to predict. Here, we think of the statistics $\mathrm{Yield}_{\mathrm{INL}}$, the $\mathrm{INL}_{\max}$ distribution, and the $\mathrm{INL}_{\max}$ mean and deviation. Furthermore, the advantages of some redundancy-based approaches relying on the statistical INL properties of the DAC cannot be theoretically estimated; see, for example, [16]. Finally, up to now, the main DAC architectures (i.e., binary, thermometer, and segmented) could not be distinguished with respect to their static linearity properties, so they are wrongly considered identical [2,4]. One conclusion from our results is that the INL for binary and thermometer architectures are different. Implications of the results derived in this article for the field of DACs can be found in a second article [15]. A comparison with the results in [4,5,11] is summarized in [15, Table 1].
2. THERMOMETER CODING: MAXIMUM OF A BROWNIAN BRIDGE

For the thermometer coding, we can describe the INL as a functional of a Brownian bridge as follows.

THEOREM 2.1 (INL_max for the Thermometer Coding): As $n \to \infty$,

$$\frac{I_{\mathrm{lsb}}}{\sigma_u\sqrt n}\,\mathrm{INL}_{\max} \longrightarrow X, \qquad (2.1)$$

in distribution, in $L^1$ and $L^2$, where the limit $X$ is characterized by

$$X = \max_{t\in[0,1]} |B_t| \qquad (2.2)$$

for a Brownian bridge $\{B_s\}_{s\in[0,1]}$, and

$$\mathbb E[X] = \sqrt{\frac{\pi}{2}}\,\ln 2 \approx 0.8687, \qquad \mathrm{Var}(X) = \frac{\pi^2}{12} - \frac{\pi}{2}(\ln 2)^2 \approx 0.0678, \qquad (2.3)$$

and

$$\mathbb P(X \ge x) = 2\sum_{k=1}^{\infty} (-1)^{k-1}\,\mathrm e^{-2k^2x^2}, \qquad x > 0. \qquad (2.4)$$

Note that in (2.1), we multiply by $I_{\mathrm{lsb}}$ rather than by $\bar I_u$. For the convergence in distribution, this makes no difference whatsoever, since $I_{\mathrm{lsb}}$ converges to $\bar I_u$ a.s. by the strong law of large numbers. However, the convergence in $L^1$ and $L^2$ fails if we multiply by $\bar I_u$ since, in this case, the expected value of $\mathrm{INL}_k = (I_{\mathrm{out}_k} - kI_{\mathrm{lsb}})/I_{\mathrm{lsb}}$ is not defined, whereas $|\mathrm{INL}_k|$ has infinite mean.

We recall that a Brownian bridge $\{B_s\}_{s\in[0,1]}$ is a Markov process on $[0,1]$ that is obtained from a Wiener process (or Brownian motion) in either of the following two ways:

(B1) $B_s = W_s - sW_1$, where $\{W_s\}_{s\in[0,1]}$ is a Wiener process.
(B2) $B_s = W_s$, where $\{W_s\}_{s\in[0,1]}$ is a Wiener process conditioned on $W_1 = 0$.

For an introduction to Wiener processes and Brownian bridges, we refer to [7]. For the equivalence of (B1) and (B2), cf. [3].

PROPOSITION 2.2: For the thermometer coding,

$$\max_{k=0,\ldots,n} \bigl|I^{T}_{\mathrm{out}_k} - k I_{\mathrm{lsb}}\bigr| \stackrel{D}{=} \sigma_u\sqrt n\, \max_{k=1,\ldots,n} |B_{k/n}|, \qquad (2.5)$$

where $\{B_t\}_{t\in[0,1]}$ is a Brownian bridge process and $\stackrel{D}{=}$ denotes equality in distribution.

PROOF: Since $\{I_{u_j}\}_{j=1,\ldots,n}$ is a family of i.i.d. normally distributed random variables with mean $\bar I_u$ and variance $\sigma_u^2$, we have that

$$I^{T}_{\mathrm{out}_k} - k\bar I_u = \sum_{j=1}^{k} I_{u_j} - k\bar I_u \qquad (2.6)$$

is normally distributed with mean 0 and variance $k\sigma_u^2$. Hence, for a Brownian motion $\{W_t\}_{t\ge 0}$,

$$I^{T}_{\mathrm{out}_k} - k\bar I_u \stackrel{D}{=} \sigma_u W_k \stackrel{D}{=} \sigma_u\sqrt n\, W_{k/n}, \qquad (2.7)$$

where we used Brownian scaling in the last distributional equality. By substituting $k = n$ and using (1.2), we obtain $nI_{\mathrm{lsb}} - n\bar I_u \stackrel{D}{=} \sigma_u\sqrt n\, W_1$. Combined with (2.7),

this yields

$$I^{T}_{\mathrm{out}_k} - k I_{\mathrm{lsb}} \stackrel{D}{=} \sigma_u\sqrt n\left(W_{k/n} - \frac kn W_1\right) \stackrel{D}{=} \sigma_u\sqrt n\, B_{k/n} \qquad (2.8)$$

for a Brownian bridge process $\{B_s\}_{s\in[0,1]}$, where we used (B1). After taking absolute values on both sides of (2.8) and the maximum over $k = 1,\ldots,n$, we obtain (2.5).

The maximum over the discrete time points $k/n$, $k = 1,\ldots,n$, in (2.5) can be replaced by the supremum over the whole interval $[0,1]$ by using the following lemma:

LEMMA 2.3: For $C > 4$,

$$\mathbb P\Bigl(\Bigl|\max_{k=1,\ldots,n}|B_{k/n}| - \max_{t\in[0,1]}|B_t|\Bigr| \ge C\sqrt{\frac{\log n}{n}}\Bigr) \le 4\,n^{1-C^2/8} + 2\,n^{-C^2 n/8}. \qquad (2.9)$$

In particular, $\max_{k=1,\ldots,n}|B_{k/n}|$ converges to $\max_{t\in[0,1]}|B_t|$ in distribution as $n \to \infty$. Moreover, $L^1$- and $L^2$-convergence holds.

The proof of Lemma 2.3 is deferred to the Appendix. We now use Lemma 2.3 to complete the proof of Theorem 2.1.

PROOF OF THEOREM 2.1: The convergence in (2.1) follows from Proposition 2.2 and Lemma 2.3. For the probability of the upper tail of $X$ [(2.4)], we refer to [3, 11.39]; the computation of the mean and variance of $X$ is straightforward integration. For example, we have that

$$\mathbb E[X] = \int_0^{\infty} \mathbb P(X > x)\,\mathrm dx = 2\sum_{k=1}^{\infty}(-1)^{k-1}\int_0^{\infty} \mathrm e^{-2k^2x^2}\,\mathrm dx = 2\sum_{k=1}^{\infty}(-1)^{k-1}\frac{\sqrt{2\pi}}{4k} = \sqrt{\frac{\pi}{2}}\,\ln 2.$$

The interchange of summation and integral can be justified by looking at $X_\varepsilon = X \vee \varepsilon$.

3. BINARY CODING: BLOCK INCREMENTS OF A BROWNIAN BRIDGE

3.1. Results and Overview of the Proof

In this section, we prove the following theorem characterizing the binary coding statistic.

THEOREM 3.1 (INL_max for the Binary Coding): As $n \to \infty$,

$$\frac{I_{\mathrm{lsb}}}{\sigma_u\sqrt n}\,\mathrm{INL}_{\max} \longrightarrow M, \qquad (3.1)$$

in distribution, in $L^1$ and $L^2$, where the limit $M$ is characterized by

$$M = \frac12\sum_{l=1}^{\infty}\Biggl|\sum_{j=1}^{l-1} 2^{-(j+1)/2}\,2^{-(l-j)}\,Z_j - 2^{-(l+1)/2}\,Z_l\Biggr| \qquad (3.2)$$

for a family $\{Z_l\}_{l\ge 1}$ of i.i.d. standard normal random variables. The expectation of $M$ is given by

$$\mathbb E[M] = \frac{1}{\sqrt{2\pi}}\sum_{l=1}^{\infty}\bigl(2^{-l} - 2^{-2l}\bigr)^{1/2} \approx 0.8399, \qquad (3.3)$$

and the variance $\mathrm{Var}(M)$ is computed explicitly in (3.23) and can be approximated as

$$\mathrm{Var}(M) \approx 0.080. \qquad (3.4)$$

Note that in (3.1), we multiply by $I_{\mathrm{lsb}}$ rather than by $\bar I_u$. As explained for Theorem 2.1, this makes no difference for the convergence in distribution, although the convergence in $L^1$ and $L^2$ fails.

The proof of Theorem 3.1 is organized as follows. In Lemma 3.3 we prove the convergence in (3.1), where the limit is characterized in terms of increments of a Brownian bridge. After this, Proposition 3.4 shows that the weak limit $M$ can be expressed as a weighted sum of standard normal variables, which proves (3.2). Lemma 3.7 states the expressions for the mean and variance of $M$. Finally, we give an approximation to the density of $M$ in Section 3.5.

3.2. A Brownian Bridge Representation of the Binary INL_max

Our aim is to derive an expression for $\mathrm{INL}_{\max}$ for the binary coding. First, we express the nonlinearity of the current-steering DAC in terms of a functional of a Brownian bridge.

LEMMA 3.2:

$$\max_{k=1,\ldots,n}\bigl|I^{B}_{\mathrm{out}_k} - k I_{\mathrm{lsb}}\bigr| \stackrel{D}{=} \frac{\sigma_u\sqrt n}{2}\sum_{m=1}^{N}\bigl|B_{(2^m-1)/n} - B_{(2^{m-1}-1)/n}\bigr|, \qquad (3.5)$$

where $\{B_s\}_{s\in[0,1]}$ is a Brownian bridge.

PROOF: Let $\{W_t\}_{t\ge 0}$ be a Wiener process; then

$$I_{u_j} - \bar I_u \stackrel{D}{=} \sigma_u\sqrt n\,\bigl(W_{j/n} - W_{(j-1)/n}\bigr), \qquad j = 1,\ldots,n, \qquad (3.6)$$

where $\stackrel{D}{=}$ represents equality in distribution. We will further write $=$ rather than $\stackrel{D}{=}$, because we are interested in the distribution only. Furthermore,

$$I_{\mathrm{lsb}} - \bar I_u = \frac1n\sum_{j=1}^{n}\bigl(I_{u_j} - \bar I_u\bigr) = \frac1n\,\sigma_u\sqrt n\,W_1 = \frac{\sigma_u}{\sqrt n}\,W_1, \qquad (3.7)$$

where we recall that the $I_{u_j}$ are i.i.d. $N(\bar I_u, \sigma_u^2)$-distributed.

We want to calculate $I^{B}_{\mathrm{out}_k} - kI_{\mathrm{lsb}}$ for $k$ being a power of 2 first. The advantage is that, for $k = 2^{m-1}$, only the $m$th block $\{I_{u_j}: j = 2^{m-1},\ldots,2^m-1\}$ contributes:

$$\begin{aligned}
I^{B}_{\mathrm{out}_{2^{m-1}}} - 2^{m-1} I_{\mathrm{lsb}} &= \bigl(I^{B}_{\mathrm{out}_{2^{m-1}}} - 2^{m-1}\bar I_u\bigr) + 2^{m-1}\bigl(\bar I_u - I_{\mathrm{lsb}}\bigr)\\
&= \sum_{j=2^{m-1}}^{2^m-1}\bigl(I_{u_j} - \bar I_u\bigr) + 2^{m-1}\bigl(\bar I_u - I_{\mathrm{lsb}}\bigr)\\
&= \sigma_u\sqrt n\left[\Bigl(W_{(2^m-1)/n} - \tfrac{2^m-1}{n}W_1\Bigr) - \Bigl(W_{(2^{m-1}-1)/n} - \tfrac{2^{m-1}-1}{n}W_1\Bigr)\right]\\
&= \sigma_u\sqrt n\,\bigl[B_{(2^m-1)/n} - B_{(2^{m-1}-1)/n}\bigr],
\end{aligned}$$

where, in the last line, we used the representation (B1) of Brownian bridges. For calculating $\max_{k=1,\ldots,n}|I^{B}_{\mathrm{out}_k} - kI_{\mathrm{lsb}}|$, we need to take the maximum over every configuration of contributing blocks; hence,

$$\max_{k=1,\ldots,n}\bigl|I^{B}_{\mathrm{out}_k} - k I_{\mathrm{lsb}}\bigr| = \sigma_u\sqrt n\,\underbrace{\max_{I\subseteq\{1,\ldots,N\}}\Bigl|\sum_{m\in I}\hat B_{n,m}\Bigr|}_{=:\,M_N}. \qquad (3.8)$$

We denote by $I$ the subset of $\{1,\ldots,N\}$ for which the maximum in (3.8) is achieved, and we use the abbreviation

$$\hat B_{n,m} := B_{(2^m-1)/n} - B_{(2^{m-1}-1)/n}, \qquad m = 1,\ldots,N, \qquad (3.9)$$

for the increment of the Brownian bridge on the interval $\bigl[(2^{m-1}-1)/n,\,(2^m-1)/n\bigr]$. Without loss of generality, we may assume that $\sum_{m\in I}\hat B_{n,m}$ is positive; otherwise, the

same argument for $-B$ holds. Clearly,

$$0 = B_1 - B_0 = \sum_{m=1}^{N}\hat B_{n,m}. \qquad (3.10)$$

Furthermore,

$$\sum_{m=1}^{N}\hat B_{n,m} = \sum_{m=1}^{N}\hat B_{n,m}\,\mathbf 1\{\hat B_{n,m}\ge 0\} + \sum_{m=1}^{N}\hat B_{n,m}\,\mathbf 1\{\hat B_{n,m}<0\}. \qquad (3.11)$$

Since the maximum in (3.8) is achieved by the subset $\{m:\hat B_{n,m}\ge 0\}$, using (3.10) in the first line and (3.11) in the second, we obtain

$$M_N = \sum_{m=1}^{N}\hat B_{n,m}\,\mathbf 1\{\hat B_{n,m}\ge 0\} = \frac12\Biggl[\sum_{m=1}^{N}\hat B_{n,m}\,\mathbf 1\{\hat B_{n,m}\ge 0\} - \sum_{m=1}^{N}\hat B_{n,m}\,\mathbf 1\{\hat B_{n,m}<0\}\Biggr] = \frac12\sum_{m=1}^{N}\bigl|\hat B_{n,m}\bigr|, \qquad (3.12)$$

which is (3.5).

We write, as in (3.8),

$$M_N = \frac{1}{\sigma_u\sqrt n}\,\max_{k=1,\ldots,n}\bigl|I^{B}_{\mathrm{out}_k} - kI_{\mathrm{lsb}}\bigr|$$

and define

$$M := \frac12\sum_{l=1}^{\infty}\bigl|B_{2^{-(l-1)}} - B_{2^{-l}}\bigr|, \qquad (3.13)$$

where $B$ denotes a Brownian bridge.

LEMMA 3.3: There exists a constant $C > 0$ such that

$$\mathbb P\bigl(|M_N - M| > \varepsilon\bigr) \le C\,N\,\varepsilon^{-1}\,2^{-N/2} \qquad (3.14)$$

for every $\varepsilon > 0$. In particular, $M_N$ converges to $M$ in distribution as $N \to \infty$. Moreover, it converges in $L^1$ and in $L^2$.

We show in Proposition 3.4 that $M$ has the same distribution as the random variable in (3.2). The proof of Lemma 3.3 is deferred to the Appendix. The combination of Lemmas 3.2 and 3.3 yields the convergence

$$\frac{1}{\sigma_u\sqrt n}\,\max_{k=1,\ldots,n}\bigl|I^{B}_{\mathrm{out}_k} - kI_{\mathrm{lsb}}\bigr| \longrightarrow \frac12\sum_{l=1}^{\infty}\bigl|B_{2^{-(l-1)}} - B_{2^{-l}}\bigr|$$

in distribution, in $L^1$ and $L^2$, as $N \to \infty$ and, thus, also $n = 2^N - 1 \to \infty$. This proves (3.1).
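Both limit variables can be simulated directly from their Brownian bridge descriptions. The following Monte Carlo sketch (the grid size 2^14 and the 10 x 1000 samples are arbitrary simulation choices, not from the article) approximates $X = \max_t|B_t|$ and $M$ from (3.13); the sample means fall slightly below the limits $\mathbb E[X]\approx 0.869$ and $\mathbb E[M]\approx 0.840$ because of discretization and truncation bias.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 14                     # dyadic grid with n = 2^N points
n = 2**N
t = np.arange(1, n + 1) / n
idx = n // 2 ** np.arange(0, N + 1) - 1    # grid indices of 2^0, 2^-1, ..., 2^-N

X_parts, M_parts = [], []
for _ in range(10):                        # 10 batches of 1000 bridges
    W = np.cumsum(rng.standard_normal((1000, n)) / np.sqrt(n), axis=1)
    B = W - t * W[:, -1][:, None]          # bridge via (B1): B_t = W_t - t W_1
    X_parts.append(np.abs(B).max(axis=1))  # thermometer functional
    vals = B[:, idx]                       # bridge at the dyadic points
    # binary functional (3.13), closing the sum with the last increment to 0
    M_parts.append(0.5 * (np.abs(np.diff(vals, axis=1)).sum(axis=1)
                          + np.abs(vals[:, -1])))

X, M = np.concatenate(X_parts), np.concatenate(M_parts)
print(X.mean(), M.mean())
```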

3.3. A Representation of M in Terms of i.i.d. Standard Normals

In this section, we prove the following representation formula, which expresses $M$ in terms of independent standard normal random variables.

PROPOSITION 3.4 (Rewrite of M in Terms of Standard Normals): Let $Z_1, Z_2, \ldots$ be a sequence of i.i.d. standard normal random variables. Then $M$ can be expressed as

$$M \stackrel{D}{=} \frac12\sum_{l=1}^{\infty}\Biggl|\sum_{j=1}^{l-1} 2^{-(j+1)/2}\,2^{-(l-j)}\,Z_j - 2^{-(l+1)/2}\,Z_l\Biggr|. \qquad (3.16)$$

In other words, $M$ in (3.13) has the same distribution as $M$ in (3.2).

In order to obtain the limit law of $M_N$, we have made use of the representation (B1) of the Brownian bridge. Now, we will primarily use (B2). We will essentially use the following well-known property of Brownian motion.

LEMMA 3.5 (The Conditional Law of the Middle Point): Let $\{W_s\}_{s\ge 0}$ be a standard Brownian motion. Then the distribution of $W_{t/2}$ conditional on $W_t = z$ is a normal distribution with mean $z/2$ and variance $t/4$.

LEMMA 3.6 (Distribution of $\{B_{2^{-l}}\}_{l\ge 1}$): The distribution of $\{B_{2^{-l}}\}_{l\ge 1}$ is given by

$$B_{2^{-l}} = \sum_{j=1}^{l} 2^{-(j+1)/2}\,2^{-(l-j)}\,Z_j, \qquad (3.17)$$

where $\{Z_j\}_{j\ge 1}$ are i.i.d. standard normal random variables.

PROOF: The proof is by induction on $l$. For $l = 1$, we use Lemma 3.5 together with the fact that $\{B_s\}_{s\in[0,1]}$ is a Brownian motion conditioned on $B_1 = 0$. This implies that the distribution of $B_{1/2}$ is that of a normal random variable with mean 0 and variance $1/4$. Therefore,

$$B_{1/2} = \tfrac12 Z_1, \qquad (3.18)$$

where $Z_1$ has a standard normal distribution. This initializes the induction hypothesis. To advance it, assume that

$$B_{2^{-(l-1)}} = \sum_{j=1}^{l-1} 2^{-(j+1)/2}\,2^{-(l-1-j)}\,Z_j. \qquad (3.19)$$

Then, again using Lemma 3.5, the distribution of $B_{2^{-l}}$ conditionally on $B_{2^{-(l-1)}}$ is a normal distribution with mean $\tfrac12 B_{2^{-(l-1)}}$ and variance $2^{-(l+1)}$. Therefore, if we

denote $Z_l = 2^{(l+1)/2}\bigl(B_{2^{-l}} - \tfrac12 B_{2^{-(l-1)}}\bigr)$, then $Z_l$ is a standard normal random variable independent of $B_{2^{-(l-1)}}$. As a consequence, we have that

$$B_{2^{-l}} = 2^{-(l+1)/2}\,Z_l + \tfrac12 B_{2^{-(l-1)}} = \sum_{j=1}^{l} 2^{-(j+1)/2}\,2^{-(l-j)}\,Z_j, \qquad (3.20)$$

where, in the last step, we have used the induction hypothesis.

PROOF OF PROPOSITION 3.4: As a consequence of Lemma 3.6, we obtain the identity

$$B_{2^{-(l-1)}} - B_{2^{-l}} = \sum_{j=1}^{l-1} 2^{-(j+1)/2}\,2^{-(l-j)}\,Z_j - 2^{-(l+1)/2}\,Z_l. \qquad (3.21)$$

Thus, by (3.13), we obtain (3.16).

3.4. The Moments of M

In this section, we identify the first two moments of $M$.

LEMMA 3.7 (Moments of M): The expectation of $M$ is given by (3.3); that is,

$$\mathbb E[M] = \frac{1}{\sqrt{2\pi}}\sum_{l=1}^{\infty}\bigl(2^{-l}-2^{-2l}\bigr)^{1/2} \approx 0.8399, \qquad (3.22)$$

and the variance is given by

$$\mathrm{Var}(M) = \frac{1}{\pi}\sum_{1\le l<k}\bigl(2^{-l}-2^{-2l}\bigr)^{1/2}\bigl(2^{-k}-2^{-2k}\bigr)^{1/2}\Bigl(\sqrt{1-\rho_{lk}^2} + \rho_{lk}\arcsin(\rho_{lk}) - 1\Bigr) + \frac16 - \frac{1}{3\pi}, \qquad (3.23)$$

with

$$\rho_{lk} = \frac{-2^{-(l+k)}}{\sqrt{\bigl(2^{-l}-2^{-2l}\bigr)\bigl(2^{-k}-2^{-2k}\bigr)}}. \qquad (3.24)$$

The variance of $M$ can be approximated by $\mathrm{Var}(M) \approx 0.080$.

In the proof of Lemma 3.7, we will make use of the following property of the bivariate normal distribution.

LEMMA 3.8 (Expected Product of Absolute Values of Normals): Let $Y_1$ and $Y_2$ be two standard normal random variables having a bivariate normal joint distribution with correlation coefficient $\rho$. Then

$$\mathbb E\bigl[|Y_1 Y_2|\bigr] = \frac{2}{\pi}\left(\sqrt{1-\rho^2} + \rho\arctan\frac{\rho}{\sqrt{1-\rho^2}}\right) = \frac{2}{\pi}\Bigl(\sqrt{1-\rho^2} + \rho\arcsin\rho\Bigr). \qquad (3.25)$$
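Lemma 3.8 is easy to check by simulation; in this sketch $\rho = -0.6$ is an arbitrary test value (note that the sign of $\rho$ does not matter, since the right-hand side of (3.25) is even in $\rho$):

```python
import numpy as np

rng = np.random.default_rng(4)

rho = -0.6                 # arbitrary test correlation
samples = 1_000_000
Y1 = rng.standard_normal(samples)
Y2 = rho * Y1 + np.sqrt(1 - rho**2) * rng.standard_normal(samples)

mc = np.mean(np.abs(Y1 * Y2))                                        # Monte Carlo
closed = (2 / np.pi) * (np.sqrt(1 - rho**2) + rho * np.arcsin(rho))  # Eq. (3.25)
print(mc, closed)          # both values are approximately 0.755
```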

For a proof of Lemma 3.8, see, for example, [10, Exercise 15.6].

PROOF OF LEMMA 3.7: The expression for the mean is easily derived. We note that

$$N_l := B_{2^{-(l-1)}} - B_{2^{-l}} = \sum_{j=1}^{l-1} 2^{-(j+1)/2}\,2^{-(l-j)}\,Z_j - 2^{-(l+1)/2}\,Z_l, \qquad l = 1,2,\ldots, \qquad (3.26)$$

is normally distributed with mean 0 and variance

$$v_l = \sum_{j=1}^{l-1} 2^{-(j+1)}\,2^{-2(l-j)} + 2^{-(l+1)} = 2^{-l} - 2^{-2l}, \qquad (3.27)$$

and we rewrite $M$ in (3.16) as $M = \frac12\sum_{l=1}^{\infty}|N_l|$. Since, for a normal random variable $Z$ with mean 0 and variance $\sigma^2$, we have that

$$\mathbb E[|Z|] = \sigma\sqrt{\frac{2}{\pi}}, \qquad (3.28)$$

the representation formula (3.16) allows us to identify the mean of the random variable $M$ as

$$\mathbb E[M] = \frac12\sum_{l=1}^{\infty}\mathbb E\bigl[|N_l|\bigr] = \frac12\sum_{l=1}^{\infty}\sqrt{\frac{2v_l}{\pi}} = \frac{1}{\sqrt{2\pi}}\sum_{l=1}^{\infty}\bigl(2^{-l}-2^{-2l}\bigr)^{1/2}. \qquad (3.29)$$

For the variance of $M$, we expand

$$\mathrm{Var}(M) = \frac14\Biggl[2\sum_{l<k}\mathrm{Cov}\bigl(|N_l|,|N_k|\bigr) + \sum_{l=1}^{\infty}\mathrm{Var}\bigl(|N_l|\bigr)\Biggr]. \qquad (3.30)$$

The variance term is not too hard, as

$$\mathrm{Var}\bigl(|N_l|\bigr) = \mathbb E[N_l^2] - \mathbb E\bigl[|N_l|\bigr]^2 = \Bigl(1 - \frac{2}{\pi}\Bigr)v_l \qquad (3.31)$$

and, therefore,

$$\frac14\sum_{l=1}^{\infty}\mathrm{Var}\bigl(|N_l|\bigr) = \frac14\Bigl(1-\frac{2}{\pi}\Bigr)\sum_{l=1}^{\infty}\bigl(2^{-l}-2^{-2l}\bigr) = \frac16 - \frac{1}{3\pi}. \qquad (3.32)$$

Now, fix $1 \le l < k$. Then

$$\mathrm{Cov}(N_l, N_k) = \sum_{j=1}^{l-1} 2^{-(j+1)}\,2^{-(l-j)}\,2^{-(k-j)} - 2^{-(l+1)}\,2^{-(k-l)} = 2^{-(l+k)}\bigl(2^{l-1}-1\bigr) - 2^{-(k+1)} = -2^{-(l+k)}. \qquad (3.33)$$

Therefore, $(N_l, N_k)$ has a bivariate normal distribution with mean $(0,0)$, variances $(v_l, v_k)$, and correlation coefficient

$$\rho_{lk} = \frac{\mathrm{Cov}(N_l, N_k)}{\sqrt{\mathrm{Var}(N_l)\,\mathrm{Var}(N_k)}} = \frac{-2^{-(l+k)}}{\sqrt{\bigl(2^{-l}-2^{-2l}\bigr)\bigl(2^{-k}-2^{-2k}\bigr)}}. \qquad (3.34)$$

Denoting by $(Y_l, Y_k)$ a bivariate normal vector with means 0, variances 1, and correlation coefficient $\rho_{lk}$, we have

$$\mathrm{Cov}\bigl(|N_l|,|N_k|\bigr) = \sqrt{v_l v_k}\,\mathrm{Cov}\bigl(|Y_l|,|Y_k|\bigr) = \sqrt{v_l v_k}\Bigl(\mathbb E\bigl[|Y_l Y_k|\bigr] - \mathbb E\bigl[|Y_l|\bigr]\mathbb E\bigl[|Y_k|\bigr]\Bigr) = \sqrt{v_l v_k}\Bigl(\mathbb E\bigl[|Y_l Y_k|\bigr] - \frac{2}{\pi}\Bigr). \qquad (3.35)$$

By Lemma 3.8, as well as (3.30)-(3.32), we obtain

$$\mathrm{Var}(M) = \frac{1}{\pi}\sum_{1\le l<k}\sqrt{v_l v_k}\Bigl(\sqrt{1-\rho_{lk}^2} + \rho_{lk}\arcsin(\rho_{lk}) - 1\Bigr) + \frac16 - \frac{1}{3\pi}. \qquad (3.36)$$

Using (3.27) and (3.34), we can approximate $\mathrm{Var}(M)$ numerically, which yields (3.4). Having completed the proofs of (3.22) and (3.23), the proof of Theorem 3.1 is complete.

3.5. Approximating the Density of M

We now derive a formula for the density of $M$. Without loss of generality, we may assume that $\bar I_u = 0$ and $\sigma_u = 1$; that is, the $I_{u_j}$ are standard normally distributed. By denoting $N = (N_1, N_2, \ldots)$ and $Z = (Z_1, Z_2, \ldots)$, we rewrite (3.26) with the help of the infinite matrix $L$ as $N = LZ$, where

$$L_{lj} = \begin{cases} 2^{-(j+1)/2}\,2^{-(l-j)} & \text{if } j < l,\\ -2^{-(l+1)/2} & \text{if } j = l,\\ 0 & \text{if } j > l. \end{cases} \qquad (3.38)$$

Note that $L$ is a lower triangular matrix. We will approximate the density of the infinite sum $M = \frac12\sum_{l=1}^{\infty}|N_l|$ by the finite sum

$$M_m = \frac12\sum_{l=1}^{m}|N_l|, \qquad m\in\mathbb N. \qquad (3.39)$$

Writing $N^{(m)} = (N_1,\ldots,N_m)$, $Z^{(m)} = (Z_1,\ldots,Z_m)$, and $L^{(m)}$ for the upper left $m\times m$ corner of the infinite matrix $L$, we have that $N^{(m)} = L^{(m)}Z^{(m)}$. In particular, $N^{(m)}$ is

normally distributed with mean $(0,\ldots,0)$ and covariance matrix $\Sigma^{(m)} = L^{(m)}\bigl(L^{(m)}\bigr)^{T}$, where $\bigl(L^{(m)}\bigr)^{T}$ denotes the transpose of the matrix $L^{(m)}$. Note that

$$\Sigma^{(m)}_{jl} = \begin{cases} -2^{-(j+l)} & \text{if } j \ne l,\\ 2^{-l} - 2^{-2l} & \text{if } j = l. \end{cases}$$

Given the mean and covariance matrix of a multivariate normal distribution, its density is known to be

$$f_{N^{(m)}}(n) = \frac{1}{(2\pi)^{m/2}\bigl(\det\Sigma^{(m)}\bigr)^{1/2}}\exp\Bigl\{-\tfrac12\,n\,\bigl(\Sigma^{(m)}\bigr)^{-1} n^{T}\Bigr\}, \qquad (3.40)$$

where $n = (n_1,\ldots,n_m)\in\mathbb R^m$. We write $|N^{(m)}|$ for the pointwise absolute value $(|N_1|,\ldots,|N_m|)$ of the $m$-dimensional vector $N^{(m)}$. Its density is given by

$$f_{|N^{(m)}|}(n) = \frac{1}{(2\pi)^{m/2}\bigl(\det\Sigma^{(m)}\bigr)^{1/2}}\sum_{\sigma\in\{-1,1\}^m}\exp\Bigl\{-\tfrac12\,(\sigma\circ n)\bigl(\Sigma^{(m)}\bigr)^{-1}(\sigma\circ n)^{T}\Bigr\}, \qquad n\in[0,\infty)^m, \qquad (3.41)$$

where $\sigma\circ n = (\sigma_1 n_1,\ldots,\sigma_m n_m)$. See, for example, [1]. The determinant $\det\Sigma^{(m)}$ and the inverse $\bigl(\Sigma^{(m)}\bigr)^{-1}$ of the covariance matrix are easy to compute since $L^{(m)}$ is a triangular matrix. The results are stated in the following two lemmas.

LEMMA 3.9: For all $m\in\mathbb N$, the determinant of $\Sigma^{(m)}$ is

$$\det\Sigma^{(m)} = 2^{-m(m+3)/2}. \qquad (3.42)$$

PROOF: First, we note that

$$\det\Sigma^{(m)} = \bigl(\det L^{(m)}\bigr)^2. \qquad (3.43)$$

Since $L^{(m)}$ is a triangular matrix, its determinant is obtained by multiplying the entries on the diagonal and, hence,

$$\bigl(\det L^{(m)}\bigr)^2 = \prod_{l=1}^{m}\bigl(L^{(m)}_{ll}\bigr)^2 = \prod_{l=1}^{m} 2^{-(l+1)} = 2^{-m(m+3)/2}. \qquad (3.44)$$

LEMMA 3.10: For all $m\in\mathbb N$, the inverse of $\Sigma^{(m)}$ is given by

$$\bigl(\Sigma^{(m)}\bigr)^{-1}_{jl} = \begin{cases} 2^{m} & \text{for } j \ne l,\\ 2^{j} + 2^{m} & \text{for } j = l. \end{cases} \qquad (3.45)$$

PROOF: Since $L^{(m)}$ is a triangular matrix, it is easy to see that its inverse $\bigl(L^{(m)}\bigr)^{-1}$ is given by

$$\bigl(L^{(m)}\bigr)^{-1}_{jl} = \begin{cases} -2^{(j-1)/2} & \text{if } j > l,\\ -2^{(j+1)/2} & \text{if } j = l,\\ 0 & \text{if } j < l. \end{cases} \qquad (3.46)$$

Since $\Sigma^{(m)} = L^{(m)}\bigl(L^{(m)}\bigr)^{T}$, we have $\bigl(\Sigma^{(m)}\bigr)^{-1} = \bigl(\bigl(L^{(m)}\bigr)^{-1}\bigr)^{T}\bigl(L^{(m)}\bigr)^{-1}$. For $j < l$, we therefore obtain

$$\bigl(\Sigma^{(m)}\bigr)^{-1}_{jl} = \sum_{k=1}^{m}\bigl(L^{(m)}\bigr)^{-1}_{kj}\bigl(L^{(m)}\bigr)^{-1}_{kl} = 2^{(l-1)/2}\,2^{(l+1)/2} + \sum_{k=l+1}^{m} 2^{k-1} = 2^{l} + \bigl(2^{m} - 2^{l}\bigr) = 2^{m}, \qquad (3.47)$$

whereas, for $j = l$,

$$\bigl(\Sigma^{(m)}\bigr)^{-1}_{jj} = \sum_{k=j}^{m}\Bigl(\bigl(L^{(m)}\bigr)^{-1}_{kj}\Bigr)^2 = 2^{j+1} + \sum_{k=j+1}^{m} 2^{k-1} = 2^{j} + 2^{m}. \qquad (3.48)$$

In order to calculate the density of $M_m = \frac12\bigl(|N_1| + \cdots + |N_m|\bigr)$ at $y \ge 0$, we have to integrate (3.41) over the $(m-1)$-dimensional surface $\{(n_1,\ldots,n_m): 2y = n_1 + \cdots + n_m\}$. This leads to the formula

$$f_{M_m}(y) = 2\int_0^{2y}\int_0^{2y-n_1}\cdots\int_0^{2y-n_1-\cdots-n_{m-2}} f_{|N^{(m)}|}\bigl(n_1,\ldots,n_{m-1},\,2y-n_1-\cdots-n_{m-1}\bigr)\,\mathrm dn_{m-1}\cdots\mathrm dn_2\,\mathrm dn_1, \qquad y\in[0,\infty). \qquad (3.49)$$

Note that there are $m-1$ integrals. Thus, for $m = 1$, there are no integrals and

$$f_{M_1}(y) = 2 f_{|N_1|}(2y) = 4\sqrt{\frac{2}{\pi}}\,\exp\{-8y^2\}, \qquad y\in[0,\infty). \qquad (3.50)$$

Finally, (3.49) gives us a formula for the density of $M_m$. The only numerical problem is the integrals. For larger values of $m$, an intelligent way of numerical integration seems necessary. However, for sufficiently small $m$, a standard mathematical package, such as Mathematica, gives good approximations (see Fig. 4).
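Lemmas 3.9 and 3.10 can be verified numerically by building $\Sigma^{(m)}$ from its entries and comparing against numpy's determinant and inverse (the size $m = 6$ is an arbitrary test choice):

```python
import numpy as np

m = 6                      # arbitrary test size
j = np.arange(1, m + 1)

# Sigma^(m): entries -2^{-(j+l)} off the diagonal and 2^{-l} - 2^{-2l} on it.
J, L = np.meshgrid(j, j, indexing="ij")
Sigma = -2.0 ** (-(J + L))
np.fill_diagonal(Sigma, 2.0 ** (-j) - 2.0 ** (-2 * j))

det_claim = 2.0 ** (-m * (m + 3) / 2)              # Lemma 3.9
inv_claim = np.full((m, m), 2.0 ** m)              # Lemma 3.10: 2^m off-diagonal...
np.fill_diagonal(inv_claim, 2.0 ** j + 2.0 ** m)   # ...and 2^j + 2^m on it

print(np.isclose(np.linalg.det(Sigma), det_claim))    # True
print(np.allclose(np.linalg.inv(Sigma), inv_claim))   # True
```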

FIGURE 4. The density of $M_m$ for $m = 1,\ldots,8$, in contrast to the density of $X$.

3.6. A Disintegration Approach to the Density of M

In this section, we present a different approach to the density of $M$. We define the quantity

$$\tilde M = \frac12\sum_{l=1}^{\infty}\bigl|W_{2^{-(l-1)}} - W_{2^{-l}}\bigr| \qquad (3.51)$$

for a Wiener process $\{W_s\}_{s\in[0,1]}$. It is obvious that $M \stackrel{D}{=} \bigl(\tilde M \mid W_1 = 0\bigr)$ [cf. (3.13) and (B2)]. Let $\hat f$ be the Fourier transform of the joint distribution of $\tilde M$ and $W_1$; that is,

$$\hat f(k_1, k_2) = \mathbb E\bigl[\exp\{\mathrm i k_1\tilde M + \mathrm i k_2 W_1\}\bigr]. \qquad (3.52)$$

Using the independence and stationarity of the increments of the Wiener process in (3.51), we obtain

$$\hat f(k_1, k_2) = \prod_{l=1}^{\infty}\underbrace{\mathbb E\Bigl[\exp\Bigl\{\mathrm i\tfrac{k_1}{2}\bigl|W_{2^{-(l-1)}} - W_{2^{-l}}\bigr| + \mathrm i k_2\bigl(W_{2^{-(l-1)}} - W_{2^{-l}}\bigr)\Bigr\}\Bigr]}_{=\hat h_l(k_1,k_2)}, \qquad (3.53)$$

where

$$\hat h_l(k_1, k_2) = \mathbb E\Bigl[\exp\Bigl\{\mathrm i\tfrac{k_1}{2}|Z| + \mathrm i k_2 Z\Bigr\}\Bigr] \qquad (3.54)$$

and $Z$ is an $N(0, 2^{-l})$-distributed normal random variable. Once we have computed $\hat h_l$, we obtain the joint density of $(\tilde M, W_1)$ via inverse Fourier transformation as

$$f_{\tilde M, W_1}(x, y) = \frac{1}{(2\pi)^2}\iint \exp\{-\mathrm i k_1 x - \mathrm i k_2 y\}\,\hat f(k_1, k_2)\,\mathrm dk_1\,\mathrm dk_2. \qquad (3.55)$$

The density of $M$ equals the density of $\tilde M$ conditioned on $W_1 = 0$, from which

$$f_M(x) = \frac{f_{\tilde M, W_1}(x, 0)}{f_{W_1}(0)} = \frac{1}{(2\pi)^{3/2}}\iint \exp\{-\mathrm i k_1 x\}\,\hat f(k_1, k_2)\,\mathrm dk_1\,\mathrm dk_2. \qquad (3.56)$$

It remains to calculate $\hat h_l(k_1, k_2)$. If we let

$$\tilde h_l(k) = \mathbb E\bigl[\mathrm e^{\mathrm i k Z}\,\mathbf 1\{Z \ge 0\}\bigr], \qquad (3.57)$$

then $\hat h_l(k_1, k_2) = \tilde h_l(k_1/2 + k_2) + \tilde h_l(k_1/2 - k_2)$. The dependence of $\tilde h_l$ on $l$ can easily be eliminated by scaling:

$$\tilde h_l(k) = \tilde h\bigl(2^{-l/2}k\bigr), \qquad (3.58)$$

where $\tilde h$ is as in (3.57) with $Z$ being $N(0,1)$-distributed. It can be shown that

$$\tilde h(k) = \frac12\,\mathrm e^{-k^2/2} + \frac{\mathrm i}{\sqrt{2\pi}}\,\mathrm e^{-k^2/2}\int_0^{k}\mathrm e^{x^2/2}\,\mathrm dx, \qquad (3.59)$$

but the inverse Fourier transform in (3.56) seems intractable.

4. CONCLUSIONS

We have derived the distribution of $\mathrm{INL}_{\max}$ in terms of Brownian bridges. This distributional identity holds for all architectures, in particular also for the segmented one. In the thermometer case, we have identified the limiting distribution of $\mathrm{INL}_{\max}$ as the maximal absolute value of a Brownian bridge, which is well known in the literature. For the binary case, we have identified the limiting distribution of $\mathrm{INL}_{\max}$ in terms of a Brownian bridge. We have further provided a representation in terms of independent standard normal variables and have computed the mean and variance of the limit. Finally, we have given a procedure that approximates its density. We want to emphasize that $\mathrm{INL}_{\max}$ in the thermometer case and in the binary case behave differently. Although the densities look alike (e.g., the upper tails are quite close to each other), there are significant differences in the lower tail. The thermometer case has, compared to the binary case, a slightly larger mean but a slightly smaller variance. Even though the distributions in the two cases are close, they are not the same.
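This closing comparison can be made quantitative by evaluating the closed-form moments: $\mathbb E[X]$ and $\mathrm{Var}(X)$ from (2.3), and $\mathbb E[M]$ and $\mathrm{Var}(M)$ from (3.22)-(3.24). The truncation points (200 and 60 terms) are sketch choices; the series converge geometrically.

```python
import math

E_X = math.sqrt(math.pi / 2) * math.log(2)                 # Eq. (2.3)
Var_X = math.pi**2 / 12 - (math.pi / 2) * math.log(2) ** 2

def v(l):                   # v_l = 2^{-l} - 2^{-2l}
    return 2.0**-l - 4.0**-l

E_M = sum(math.sqrt(v(l)) for l in range(1, 200)) / math.sqrt(2 * math.pi)

cov = 0.0                   # double sum in Eq. (3.23)
for l in range(1, 60):
    for k in range(l + 1, 61):
        rho = -(2.0 ** -(l + k)) / math.sqrt(v(l) * v(k))
        cov += math.sqrt(v(l) * v(k)) * (
            math.sqrt(1 - rho * rho) + rho * math.asin(rho) - 1)
Var_M = cov / math.pi + 1.0 / 6 - 1.0 / (3 * math.pi)

print(E_X, Var_X)   # approximately 0.8687 and 0.0678
print(E_M, Var_M)   # approximately 0.8399 and 0.0800
```

The thermometer limit indeed has the larger mean and the smaller variance.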

We still miss the distribution function for the binary case. Random sums of the type in (3.13) have appeared in the literature. In particular, the quantity

$$S = \sum_{l=1}^{\infty} 2^{-l} V_l, \qquad (4.1)$$

where $\{V_l\}_{l\ge 1}$ are i.i.d. exponential random variables, arises in a variety of applications; see, for example, Ott, Kemperman, and Mathis [13], Guillemin, Robert, and Zwart [8], and Litvak and van Zwet [12]. The density of $S$ can be expressed in terms of an infinite sum; cf. [13, Sect. 5]. When the summands are independent uniform random variables [i.e., $\{V_l\}$ are i.i.d. uniform random variables on $(0,1)$], the density of $S$ can be computed explicitly [6].

Furthermore, it would be interesting to extend the results to the segmented case. In particular, it would be of interest to investigate which limiting $\mathrm{INL}_{\max}$ distribution has the smallest mean. This should correspond to the optimal DAC architecture. Practical implications of our results can be found in a companion article [15].

Acknowledgements

The work of MH and RvdH was supported by the Netherlands Organisation for Scientific Research (NWO), and the work of GR was supported by STW, project ECS. We thank Marko Boon for help with the simulations in Figure 4. We thank David Brydges for enlightening discussions on multivariate normal distributions and Olaf Wittich for pointing our attention to the disintegration approach in Section 3.6.

References

1. Bain, L.J. & Engelhardt, M. Introduction to probability and mathematical statistics, 2nd ed. Pacific Grove, CA: Duxbury.
2. Bastos, J. Characterization of MOS transistor mismatch for analog design. Ph.D. thesis, Katholieke Universiteit Leuven.
3. Billingsley, P. Convergence of probability measures. New York: Wiley.
4. Bosch, A. v.d., Steyaert, M. & Sansen, W. Static and dynamic performance limitations for high-speed D/A converters. Amsterdam: Kluwer Academic Publishers.
Conroy, C.S.G., Lane, W.A., Moran, M.A., Lakshmikumar, K.R., Copeland, M.A., & Hadaway, R.A. Comments, with reply, on "Characterization and modeling of mismatch in MOS transistors for precision analog design." IEEE Journal of Solid-State Circuits 31.
6. Fey-den Boer, A. 2006. Personal communication.
7. Grimmett, G.R. & Stirzaker, D.R. Probability and random processes, 3rd ed. New York: Oxford University Press.
8. Guillemin, F., Robert, P., & Zwart, B. AIMD algorithms and exponential functionals. The Annals of Applied Probability 14(1).
9. Jesper, P.G.A. Integrated converters. Oxford: Oxford University Press.
10. Kendall, M. & Stuart, A. The advanced theory of statistics, Vol. 1, 4th ed. London: Griffin.
11. Lakshmikumar, K., Hadaway, R., & Copeland, M. Characterization and modeling of mismatch in MOS transistors for precision analog design. IEEE Journal of Solid-State Circuits 16.
12. Litvak, N. & van Zwet, W.R. On the minimal travel time needed to collect n items on a circle. The Annals of Applied Probability 14.
13. Ott, T.J., Kemperman, J.H.B., & Mathis, M. The stationary behavior of ideal TCP congestion avoidance. Available at

14. Pelgrom, M.J.M., Duinmaijer, A.C.J., & Welbers, A.P.G. Matching properties of MOS transistors. IEEE Journal of Solid-State Circuits 45.
15. Radulov, G.I., Heydenreich, M., Hofstad, R.v.d., Hegt, J.A., & Roermund, A.H.M.v. Brownian bridge based statistical analysis of DAC INL caused by current mismatch. IEEE Transactions on Circuits and Systems II: Express Briefs 54.
16. Radulov, G.I., Quinn, P.J., van Beek, P.C.W., Hegt, J.A., & van Roermund, A.H.M. A binary-to-thermometer decoder with built-in redundancy for improved DAC yield. In ISCAS, Kos, Greece.
17. Razavi, B. Design of analog CMOS integrated circuits. New York: McGraw-Hill.
18. Van de Plassche, R.J. Integrated analog-to-digital and digital-to-analog converters. Amsterdam: Kluwer Academic Publishers.
19. Wikner, J.J. Studies on CMOS digital-to-analog converters. Ph.D. thesis, Department of Electrical Engineering, Linköping University.

APPENDIX A: Proof of Lemmas 2.3 and 3.3

PROOF OF LEMMA 2.3: We first prove (2.9). Let \{B_s\}_{s \in [0,1]} be a Brownian bridge. Then

    \Big| \max_{k=1,\dots,n} B_{k/n} - \max_{t \in [0,1]} B_t \Big| \le \max_{k=1,\dots,n} \max_{t \in [(k-1)/n, k/n]} |B_{k/n} - B_t|.    (A.1)

Using the representation B_t = W_t - t W_1 of the Brownian bridge, we can further bound (A.1) from above by

    \max_{k=1,\dots,n} \max_{t \in [(k-1)/n, k/n]} |W_{k/n} - W_t| + \frac{1}{n} |W_1|    (A.2)

for a Wiener process \{W_s\}_{s \in [0,1]}. Using the Markov property and Brownian scaling, we obtain that for k = 1, \dots, n,

    \max_{t \in [(k-1)/n, k/n]} |W_{k/n} - W_t| \overset{D}{=} \frac{1}{\sqrt{n}} \max_{t \in [0,1]} |W_t|,    (A.3)

where \overset{D}{=} stands for equality in distribution. Hence, for C > 0,

    P\Big( \Big| \max_{k=1,\dots,n} B_{k/n} - \max_{t \in [0,1]} B_t \Big| \ge C \sqrt{\frac{\log n}{n}} \Big)
        \le P\Big( \max_{k=1,\dots,n} \max_{t \in [(k-1)/n, k/n]} |W_{k/n} - W_t| \ge \frac{C}{2} \sqrt{\frac{\log n}{n}} \Big) + P\Big( \frac{1}{n} |W_1| \ge \frac{C}{2} \sqrt{\frac{\log n}{n}} \Big)
        \le n \, P\Big( \max_{t \in [0,1]} |W_t| \ge \frac{C}{2} \sqrt{\log n} \Big) + P\Big( |W_1| \ge \frac{C}{2} \sqrt{n \log n} \Big).    (A.4)

For the first term, we bound, for every b \ge 0,

    P\Big( \max_{t \in [0,1]} |W_t| \ge b \Big) \le 2 \, P\Big( \max_{t \in [0,1]} W_t \ge b \Big) = 4 \, P(W_1 \ge b) \le 4 e^{-b^2/2},    (A.5)

where we use the reflection principle [7, Thm. 6, p. 56] in the second step and a standard bound on the tail of standard normals in the third step. Substituting b = (C/2) \sqrt{\log n}, we obtain

    n \, P\Big( \max_{t \in [0,1]} |W_t| \ge \frac{C}{2} \sqrt{\log n} \Big) \le 4 n \, n^{-C^2/8} = 4 n^{1 - C^2/8}.    (A.6)

For the second term in (A.4), we obtain analogously

    P\Big( |W_1| \ge \frac{C}{2} \sqrt{n \log n} \Big) \le 2 n^{-C^2 n / 8}.    (A.7)

The bound (A.4) together with (A.6) and (A.7) proves (2.9).

For the convergence in L^1, we use that, with

    X_n = \max_{k=1,\dots,n} B_{k/n}, \qquad X = \max_{t \in [0,1]} B_t,    (A.8)

we have that

    E|X_n - X| = \int_0^\infty P(|X_n - X| \ge t) \, dt.    (A.9)

We split between t \le 4 \sqrt{\log n / n} and t > 4 \sqrt{\log n / n}. For t \le 4 \sqrt{\log n / n}, we bound P(|X_n - X| \ge t) \le 1, whereas for t > 4 \sqrt{\log n / n}, we use (A.6) and (A.7) to bound

    P(|X_n - X| \ge t) \le 6 n \, e^{-n t^2 / 8}.    (A.10)

Substitution of these bounds yields

    E|X_n - X| = \int_0^\infty P(|X_n - X| \ge t) \, dt \le 4 \sqrt{\frac{\log n}{n}} + 6 n \int_{4 \sqrt{\log n / n}}^\infty e^{-n t^2 / 8} \, dt = O\Big( \sqrt{\frac{\log n}{n}} \Big).    (A.11)

The convergence in L^2 follows similarly, now using

    E[(X_n - X)^2] = 2 \int_0^\infty t \, P(|X_n - X| \ge t) \, dt.    (A.12)

We leave the details to the reader.

PROOF OF LEMMA 3.3: We observe that, by (3.5), (3.12), and (3.13), and replacing m by N - m,

    |M_N - M| \le \sum_{m=1}^{N-1} \big| B_{2^{-m}} - B_{(2^{N-m}-1)/n} \big| + \sum_{m=N}^{\infty} \big| B_{2^{-m}} - B_{2^{-(m-1)}} \big|.    (A.13)

Therefore, for an arbitrarily chosen constant \varepsilon > 0, we have that

    P(|M_N - M| > \varepsilon) \le \underbrace{\sum_{m=1}^{N-1} P\Big( \big| B_{2^{-m}} - B_{(2^{N-m}-1)/n} \big| > \frac{\varepsilon}{2(N-1)} \Big)}_{(i)} + \underbrace{P\Big( \sum_{m=N}^{\infty} \big| B_{2^{-m}} - B_{2^{-(m-1)}} \big| > \frac{\varepsilon}{2} \Big)}_{(ii)}.    (A.14)

Using the representation B_t = W_t - t W_1, we see that for a Brownian bridge \{B_s\}_{s \in [0,1]} and any 0 \le s < t \le 1, the following holds for every constant C > 0:

    P(|B_t - B_s| > C) \le P\big( |W_t - W_s| + (t-s) |W_1| > C \big) \le P\big( |W_t - W_s| > C/2 \big) + P\big( (t-s) |W_1| > C/2 \big).    (A.15)

Using the Markov inequality and Gaussian scaling, we obtain

    P(|B_t - B_s| > C) \le \frac{2}{C} \sqrt{t-s} \, E|W_1| + \frac{2}{C} (t-s) \, E|W_1| \le \frac{4}{C} \sqrt{\frac{2(t-s)}{\pi}}.    (A.16)

We use (A.16) to bound (i) in (A.14) from above by

    \sum_{m=1}^{N-1} P\Big( \big| B_{2^{-m}} - B_{(2^{N-m}-1)/n} \big| > \frac{\varepsilon}{2(N-1)} \Big) \le \sum_{m=1}^{N-1} \frac{8(N-1)}{\varepsilon} \sqrt{\frac{2 \cdot 2^{-N}}{\pi}} \le \frac{8(N-1)^2}{\varepsilon} \, 2^{-N/2},    (A.17)

which converges to 0 as N \to \infty. The second term (ii) in (A.14) can be bounded using the Markov inequality by

    (ii) \le \frac{2}{\varepsilon} \sum_{m=N}^{\infty} E \big| B_{2^{-m}} - B_{2^{-(m-1)}} \big|.    (A.18)

Using (3.9), this expectation can be computed as

    E \big| B_{2^{-m}} - B_{2^{-(m-1)}} \big| = \sqrt{\frac{2}{\pi} \, 2^{-m} (1 - 2^{-m})},    (A.19)

and hence

    (ii) \le \frac{2}{\varepsilon} \sqrt{\frac{2}{\pi}} \sum_{m=N}^{\infty} 2^{-m/2} \le \frac{4}{\varepsilon \sqrt{\pi} (\sqrt{2} - 1)} \, 2^{-N/2},    (A.20)

which converges to 0 as N \to \infty. The combination of (A.14), (A.17), and (A.20) shows that, for C = 8 + 4 / (\sqrt{\pi} (\sqrt{2} - 1)),

    P(|M_N - M| > \varepsilon) \le \frac{C N^2}{\varepsilon} \, 2^{-N/2}    (A.21)

for every \varepsilon > 0; that is, M_N converges to M in probability as N \to \infty. The convergence in L^1 and in L^2 follows as in the proof of Lemma 2.3.
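The expectation computed in the proof of Lemma 3.3 rests on the fact that, for a Brownian bridge and 0 <= s < t <= 1, the increment B_t - B_s is normal with variance (t - s)(1 - (t - s)), so that E|B_t - B_s| = sqrt(2 (t - s)(1 - (t - s)) / pi). This can be checked by a small Monte Carlo simulation; the sketch below is illustrative only (function names, seed, and sample size are ours, not the article's).

```python
import math
import random

def mean_abs_bridge_increment(s, t, n_samples=200_000, seed=7):
    """Monte Carlo estimate of E|B_t - B_s| for a Brownian bridge
    B_u = W_u - u * W_1, built from independent Gaussian increments of W."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        ws = rng.gauss(0.0, math.sqrt(s))          # W_s
        wt = ws + rng.gauss(0.0, math.sqrt(t - s))  # W_t
        w1 = wt + rng.gauss(0.0, math.sqrt(1.0 - t))  # W_1
        total += abs((wt - t * w1) - (ws - s * w1))  # |B_t - B_s|
    return total / n_samples

def mean_abs_closed(s, t):
    """E|B_t - B_s| = sqrt(2 Var / pi) with Var = (t - s)(1 - (t - s))."""
    var = (t - s) * (1.0 - (t - s))
    return math.sqrt(2.0 * var / math.pi)
```

For instance, with s = 1/4 and t = 1/2 the closed form gives roughly 0.3455, and the seeded Monte Carlo estimate agrees to two to three decimal places.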


Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Ellida M. Khazen * 13395 Coppermine Rd. Apartment 410 Herndon VA 20171 USA Abstract

More information

IEOR 4701: Stochastic Models in Financial Engineering. Summer 2007, Professor Whitt. SOLUTIONS to Homework Assignment 9: Brownian motion

IEOR 4701: Stochastic Models in Financial Engineering. Summer 2007, Professor Whitt. SOLUTIONS to Homework Assignment 9: Brownian motion IEOR 471: Stochastic Models in Financial Engineering Summer 27, Professor Whitt SOLUTIONS to Homework Assignment 9: Brownian motion In Ross, read Sections 1.1-1.3 and 1.6. (The total required reading there

More information

Semester , Example Exam 1

Semester , Example Exam 1 Semester 1 2017, Example Exam 1 1 of 10 Instructions The exam consists of 4 questions, 1-4. Each question has four items, a-d. Within each question: Item (a) carries a weight of 8 marks. Item (b) carries

More information

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline.

Dependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline. Practitioner Course: Portfolio Optimization September 10, 2008 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y ) (x,

More information

Lecture 7: Chapter 7. Sums of Random Variables and Long-Term Averages

Lecture 7: Chapter 7. Sums of Random Variables and Long-Term Averages Lecture 7: Chapter 7. Sums of Random Variables and Long-Term Averages ELEC206 Probability and Random Processes, Fall 2014 Gil-Jin Jang gjang@knu.ac.kr School of EE, KNU page 1 / 15 Chapter 7. Sums of Random

More information

Introducing the Normal Distribution

Introducing the Normal Distribution Department of Mathematics Ma 3/13 KC Border Introduction to Probability and Statistics Winter 219 Lecture 1: Introducing the Normal Distribution Relevant textbook passages: Pitman [5]: Sections 1.2, 2.2,

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Properties of the Autocorrelation Function

Properties of the Autocorrelation Function Properties of the Autocorrelation Function I The autocorrelation function of a (real-valued) random process satisfies the following properties: 1. R X (t, t) 0 2. R X (t, u) =R X (u, t) (symmetry) 3. R

More information

EECS240 Spring Lecture 21: Matching. Elad Alon Dept. of EECS. V i+ V i-

EECS240 Spring Lecture 21: Matching. Elad Alon Dept. of EECS. V i+ V i- EECS40 Spring 010 Lecture 1: Matching Elad Alon Dept. of EECS Offset V i+ V i- To achieve zero offset, comparator devices must be perfectly matched to each other How well-matched can the devices be made?

More information

K-ANTITHETIC VARIATES IN MONTE CARLO SIMULATION ISSN k-antithetic Variates in Monte Carlo Simulation Abdelaziz Nasroallah, pp.

K-ANTITHETIC VARIATES IN MONTE CARLO SIMULATION ISSN k-antithetic Variates in Monte Carlo Simulation Abdelaziz Nasroallah, pp. K-ANTITHETIC VARIATES IN MONTE CARLO SIMULATION ABDELAZIZ NASROALLAH Abstract. Standard Monte Carlo simulation needs prohibitive time to achieve reasonable estimations. for untractable integrals (i.e.

More information

BALANCING GAUSSIAN VECTORS. 1. Introduction

BALANCING GAUSSIAN VECTORS. 1. Introduction BALANCING GAUSSIAN VECTORS KEVIN P. COSTELLO Abstract. Let x 1,... x n be independent normally distributed vectors on R d. We determine the distribution function of the minimum norm of the 2 n vectors

More information

On Expected Gaussian Random Determinants

On Expected Gaussian Random Determinants On Expected Gaussian Random Determinants Moo K. Chung 1 Department of Statistics University of Wisconsin-Madison 1210 West Dayton St. Madison, WI 53706 Abstract The expectation of random determinants whose

More information

D/A Converters and Iterated Function Systems

D/A Converters and Iterated Function Systems D/A Converters and Iterated Function ystems Toshimichi aito (tsaito@k.hosei.ac.jp), Junya himakawa and Hiroyuki Torikai (torikai@k.hosei.ac.jp) Department of Electronics, Electrical and Computer Engineering,

More information

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics Ulrich Meierfrankenfeld Department of Mathematics Michigan State University East Lansing MI 48824 meier@math.msu.edu

More information

UC Berkeley Department of Electrical Engineering and Computer Sciences. EECS 126: Probability and Random Processes

UC Berkeley Department of Electrical Engineering and Computer Sciences. EECS 126: Probability and Random Processes UC Berkeley Department of Electrical Engineering and Computer Sciences EECS 6: Probability and Random Processes Problem Set 3 Spring 9 Self-Graded Scores Due: February 8, 9 Submit your self-graded scores

More information

A Note on Auxiliary Particle Filters

A Note on Auxiliary Particle Filters A Note on Auxiliary Particle Filters Adam M. Johansen a,, Arnaud Doucet b a Department of Mathematics, University of Bristol, UK b Departments of Statistics & Computer Science, University of British Columbia,

More information

Estimation of information-theoretic quantities

Estimation of information-theoretic quantities Estimation of information-theoretic quantities Liam Paninski Gatsby Computational Neuroscience Unit University College London http://www.gatsby.ucl.ac.uk/ liam liam@gatsby.ucl.ac.uk November 16, 2004 Some

More information

14.1 Finding frequent elements in stream

14.1 Finding frequent elements in stream Chapter 14 Streaming Data Model 14.1 Finding frequent elements in stream A very useful statistics for many applications is to keep track of elements that occur more frequently. It can come in many flavours

More information