Compression in the Space of Permutations
Da Wang (EECS Dept., Massachusetts Institute of Technology, Cambridge, MA 02139, USA), Arya Mazumdar (ECE Dept., University of Minnesota, Minneapolis, MN 55455, USA), Gregory Wornell (EECS Dept., Massachusetts Institute of Technology, Cambridge, MA 02139, USA)

Abstract

We investigate the lossy compression of permutations by analyzing the trade-off between the size of a source code and the distortion with respect to the $\ell_1$ distance of inversion vectors, Kendall tau distance, Spearman's footrule and Chebyshev distance. We show that, given two permutations, the Kendall tau distance upper bounds the $\ell_1$ distance of their inversion vectors, and a scaled version of the Kendall tau distance lower bounds it with high probability, which indicates an equivalence of the source code designs under these two distortion measures. A similar equivalence is established for all of the distortion measures above, each of which has a different operational significance and different applications in ranking and sorting. These findings show that an optimal coding scheme for one distortion measure is effectively optimal for the other distortion measures above.

1 Introduction

In this paper we consider the lossy compression (source coding) of permutations, motivated by the storage of ranking data and by lower bounds on the complexity of approximate sorting. In a variety of applications such as admission and recommendation systems (e.g., Yelp.com and IMDB.com), ranking, i.e., the relative ordering of data, is the key object of interest. In many databases, the principal queries ask for the top-$k$ items, the median object, etc. Since a ranking of $n$ items can be represented as a permutation on the $n$ elements $1$ to $n$, storing a ranking is equivalent to storing a permutation. In general, to store a permutation of $n$ elements we need $\log_2(n!) = n \log_2 n - O(n)$ bits. However, if we can tolerate a certain amount of error (instead of the top restaurant, say, the query returns one of the top five), then how many bits are necessary for storage?
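The $\log_2(n!)$ bit count above is easy to check numerically. The following short Python sketch (an illustration added here, not part of the paper) computes the exact number of bits needed to index a permutation of $n$ elements:

```python
import math

def permutation_bits(n):
    """Bits needed to index any of the n! permutations of n elements."""
    return math.ceil(math.log2(math.factorial(n)))

# log2(n!) = n*log2(n) - O(n): the per-element cost approaches log2(n) bits.
print(permutation_bits(10))  # 22 bits suffice for the 10! = 3628800 permutations
```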
In addition to the compression application, source coding of the permutation space is also related to the analysis of comparison-based sorting algorithms. Given a group of elements with distinct values, comparison-based sorting can be viewed as the process of finding the true permutation by pairwise comparisons, and since each comparison in sorting provides at most 1 bit of information, the log-size of the permutation set $S_n$ provides a lower bound on the required number of comparisons, i.e., $\log_2 |S_n| = \log_2 n! = n \log_2 n + O(n)$. Similarly, the lossy source coding of permutations provides a lower bound for the problem of comparison-based approximate sorting, which can be seen as searching for the true permutation subject to a certain distortion. Again, the log-size of the code indicates the amount of information (in bits) needed to specify the
true permutation subject to a certain distortion, which in turn provides a lower bound on the number of pairwise comparisons needed. The problem of approximate sorting has been investigated in [1], where results for the moderate-distortion regime are derived with respect to the Spearman's footrule metric [2] (see below for the definition). On the other hand, every comparison-based sorting algorithm corresponds to a compression scheme for the permutation space, as we can treat the outcome of each comparison as 1 bit; this string of bits is a (lossy) representation of the permutation that is being (approximately) sorted. However, reconstructing the permutation from the compressed representation may not be straightforward.

In our earlier work [3], a rate-distortion theory for the permutation space is developed with the worst-case distortion as the parameter, and rate-distortion functions and source code designs are derived for two distortion measures, the Kendall tau distance and the $\ell_1$ distance of inversion vectors. In Section 3 of this paper we show that, under average-case distortion, the rate-distortion problems under the Kendall tau distance and the $\ell_1$ distance of inversion vectors are equivalent, and hence the code designs can be used interchangeably, leading to simpler coding schemes for the Kendall tau distance case (than those developed in [3]), as discussed in Section 4. Moreover, the rate-distortion problem under the Chebyshev distance is also considered and its equivalence to the cases above is established. The operational meaning and importance of all these distance measures are discussed in Section 2. While these distance measures usually have different intended applications, our findings show that an optimal coding scheme for one distortion measure is effectively optimal for the other distortion measures.

2 Problem formulation

In this section we discuss aspects of the problem formulation.
We provide a mathematical formulation of the rate-distortion problem on a permutation space in Section 2.2, introduce the distortion measures of interest in Section 2.3, and discuss their operational meaning in Section 2.4.

2.1 Notation

Let $S_n$ denote the symmetric group on $n$ elements. We write the elements of $S_n$ as arrays of natural numbers with values ranging over $1, \ldots, n$ and every value occurring exactly once in the array. For example, $[3, 4, 1, 2, 5] \in S_5$. This is also known as the vector notation for permutations. Given a metric $d : S_n \times S_n \to \mathbb{R}^+ \cup \{0\}$, we define a permutation space $\mathcal{X}(S_n, d)$. Throughout the paper, we denote the set $\{1, \ldots, n\}$ by $[n]$, and let $[a:b] \triangleq \{a, a+1, \ldots, b-1, b\}$ for any two integers $a < b$.

2.2 Rate-distortion problem

Given a permutation space, we define the following rate-distortion problem.

Definition 1 (Codebook for permutations under average-case distortion). An $(n, D_n)$ source code $C_n \subseteq S_n$ for $\mathcal{X}(S_n, d)$ is a set of $M_n = |C_n|$ permutations such that for
a $\sigma$ that is drawn uniformly from $S_n$, there exists $\pi \in C_n$ such that $\mathbb{E}[d(\pi, \sigma)] \le D_n$. Let $A(n, D_n)$ be the minimum size of an $(n, D_n)$ source code in $\mathcal{X}(S_n, d)$. We define the minimal rate for distortion $D_n$ as

$R(D_n) \triangleq \dfrac{\log A(n, D_n)}{\log n!}$.

As in the classical rate-distortion setup, we are interested in deriving the trade-off between the distortion level $D_n$ and the rate $R(D_n)$ as $n \to \infty$. In this work we show that for the distortions $d(\cdot, \cdot)$ of interest, $\lim_{n \to \infty} R(D_n)$ exists. Instead of requiring $\mathbb{E}[d(\pi, \sigma)] \le D_n$ in the above definition, we may require

$\mathbb{P}[d(\pi, \sigma) > D_n] \to 0$ (1)

as $n \to \infty$. As we will see, this stronger requirement does not change the asymptotic rate-distortion trade-off.

2.3 Distortion measures

Several distortion measures are possible on permutations. Some of the most natural measures include the Kendall tau distance and the $\ell_p$ distances, $p \in \{1, 2, \ldots, \infty\}$. In this section we introduce the distortion measures of interest: Spearman's footrule (the $\ell_1$ distance between two permutation vectors), the Chebyshev distance (the $\ell_\infty$ distance between two permutation vectors), the $\ell_1$ distance of inversion vectors, and the Kendall tau distance.

Definition 2 (Spearman's footrule). Given two permutations $\sigma_1$ and $\sigma_2$, the Spearman's footrule between $\sigma_1$ and $\sigma_2$ is

$d_{\ell_1}(\sigma_1, \sigma_2) \triangleq \|\sigma_1 - \sigma_2\|_1 = \sum_{i=1}^{n} |\sigma_1(i) - \sigma_2(i)|$.

Definition 3 (Chebyshev distance). Given two permutations $\sigma_1$ and $\sigma_2$, the Chebyshev distance between $\sigma_1$ and $\sigma_2$ is

$d_{\ell_\infty}(\sigma_1, \sigma_2) \triangleq \|\sigma_1 - \sigma_2\|_\infty = \max_{1 \le i \le n} |\sigma_1(i) - \sigma_2(i)|$.

The inversion vector is a representation of a permutation, built upon the notion of inversions.

Definition 4 (inversion, inversion vector). An inversion in a permutation $\sigma \in S_n$ is a pair $(\sigma(i), \sigma(j))$ such that $i < j$ and $\sigma(i) > \sigma(j)$. We use $I_n(\sigma)$ to denote the total number of inversions in $\sigma \in S_n$, and

$K_n(k) \triangleq |\{\sigma \in S_n : I_n(\sigma) = k\}|$ (2)

to denote the number of permutations with $k$ inversions.
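Definitions 2 and 3 translate directly into code. The following sketch (an added illustration on the paper's running example $[3, 4, 1, 2, 5]$) computes both distances:

```python
def spearman_footrule(s1, s2):
    # Definition 2: sum of per-position deviations (l1 distance).
    return sum(abs(a - b) for a, b in zip(s1, s2))

def chebyshev_distance(s1, s2):
    # Definition 3: worst-case per-position deviation (l_infinity distance).
    return max(abs(a - b) for a, b in zip(s1, s2))

s1, s2 = [3, 4, 1, 2, 5], [1, 2, 3, 4, 5]
print(spearman_footrule(s1, s2))   # 2+2+2+2+0 = 8
print(chebyshev_distance(s1, s2))  # 2
```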
A permutation $\sigma \in S_n$ is associated with an inversion vector $x_\sigma \in G_n \triangleq [0:1] \times [0:2] \times \cdots \times [0:n-1]$, where for $i = 1, \ldots, n-1$,

$x_\sigma(i) = |\{ j \in [n] : j < i+1,\ \sigma^{-1}(j) > \sigma^{-1}(i+1) \}|$.

In words, $x_\sigma(i)$ is the number of inversions in $\sigma$ in which $i+1$ is the first element.
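The inversion vector can be computed straight from the positions of the values. This sketch (added here for illustration) follows the definition verbatim, favoring clarity over efficiency:

```python
def inversion_vector(sigma):
    """x_sigma(i) = #{j : j < i+1 and j appears after i+1 in sigma},
    for i = 1, ..., n-1; coordinate i takes values in [0:i]."""
    n = len(sigma)
    pos = {v: idx for idx, v in enumerate(sigma)}  # 0-based position of each value
    return [sum(1 for j in range(1, i + 1) if pos[j] > pos[i + 1])
            for i in range(1, n)]

# For [3, 4, 1, 2, 5] the inversions are (3,1), (3,2), (4,1), (4,2):
# elements 3 and 4 each head two inversions, elements 2 and 5 head none.
print(inversion_vector([3, 4, 1, 2, 5]))  # [0, 2, 2, 0]
```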
It is well known that the mapping from $S_n$ to $G_n$ is one-to-one and straightforward to compute [4].

Definition 5 ($\ell_1$ distance of inversion vectors). Given two permutations $\sigma_1$ and $\sigma_2$, we define the $\ell_1$ distance of their inversion vectors as

$d_{x,\ell_1}(\sigma_1, \sigma_2) \triangleq \sum_{i=1}^{n-1} |x_{\sigma_1}(i) - x_{\sigma_2}(i)|$. (3)

Now we introduce another distortion measure of interest, the Kendall tau distance.

Definition 6 (Kendall tau distance). The Kendall tau distance $d_\tau(\sigma_1, \sigma_2)$ from one permutation $\sigma_1$ to another permutation $\sigma_2$ is defined as the minimum number of transpositions of pairwise adjacent elements required to change $\sigma_1$ into $\sigma_2$. The Kendall tau distance is upper bounded by $\binom{n}{2}$.

Remark 1 (Bubble sort). Let $e \triangleq [1, 2, \ldots, n]$ be the identity permutation; given an input sequence $\sigma$, $d_\tau(\sigma, e)$ is the number of swaps performed by the bubble-sort algorithm [4].

2.4 Operational meaning of the distortion measures

Spearman's footrule is a very well-known measure of disarray (see [2]) that sums the deviations at the different positions, while the Chebyshev distance takes the maximum of the deviations at the different positions. Therefore, Spearman's footrule ($\ell_1$ distance) measures the total deviation, while the Chebyshev distance ($\ell_\infty$ distance) measures the worst-case deviation. In approximate sorting, we may care about one measure more than the other, or about both.

In addition, the inversion vector provides a measure of disorder in a sequence. Given a sequence $v_1, v_2, \ldots, v_n$ and a permutation $\pi$ such that $v_{\pi(1)} < v_{\pi(2)} < \ldots < v_{\pi(n)}$, $\pi(n+1-k)$ is the index of the $k$-th largest element, and $x_\pi(n-k)$ indicates the number of elements that are smaller than the $k$-th largest element but have larger indices. In particular, the position of the largest element is $n - x_\pi(n-1)$.
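Definition 6 and Remark 1 suggest a direct way to compute the Kendall tau distance: relabel $\sigma_1$ in the order given by $\sigma_2$ and count inversions, which equals the bubble-sort swap count. A quadratic-time sketch (added for illustration):

```python
def kendall_tau(s1, s2):
    """Minimum number of adjacent transpositions turning s1 into s2,
    i.e., the number of pairs the two permutations order differently."""
    pos = {v: i for i, v in enumerate(s2)}
    r = [pos[v] for v in s1]  # s1 relabeled in s2's order
    n = len(r)
    return sum(1 for i in range(n) for j in range(i + 1, n) if r[i] > r[j])

e = [1, 2, 3, 4, 5]
print(kendall_tau([3, 4, 1, 2, 5], e))  # 4, the bubble-sort swap count
```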
Apart from its role in the natural sorting algorithms, the Kendall tau distance has recently attracted much interest in the area of error-correcting codes for flash memory [5, 6]. The Kendall tau distance is also a global measure of disarray that is very popular in statistics. It is closely related to the Spearman's footrule, as we will see next.

3 Relationships between distortion measures

In this section we show that all four distortion measures defined in Section 2.3 are closely related to each other.

3.1 Spearman's footrule and Kendall tau distance

Theorem 1 (Relationship between the Kendall tau distance and the $\ell_1$ distance of permutation vectors [2]). Let $\sigma_1$ and $\sigma_2$ be any permutations in $S_n$. Then

$d_{\ell_1}(\sigma_1, \sigma_2)/2 \le d_\tau(\sigma_1^{-1}, \sigma_2^{-1}) \le d_{\ell_1}(\sigma_1, \sigma_2)$, (4)

where $\sigma^{-1}$ is the inverse of the permutation $\sigma$.
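Theorem 1 can be checked exhaustively for small $n$. The sketch below (a numerical sanity check added here, not a proof) verifies (4) over all pairs in $S_4$:

```python
import itertools

def footrule(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def inverse(p):
    """Inverse permutation in 1-based vector notation."""
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v - 1] = i + 1
    return inv

def kendall(a, b):
    pos = {v: i for i, v in enumerate(b)}
    r = [pos[v] for v in a]
    return sum(1 for i in range(len(r)) for j in range(i + 1, len(r)) if r[i] > r[j])

# d_l1(s1, s2)/2 <= d_tau(s1^-1, s2^-1) <= d_l1(s1, s2) for every pair in S_4.
for s1 in itertools.permutations(range(1, 5)):
    for s2 in itertools.permutations(range(1, 5)):
        dl1, dt = footrule(s1, s2), kendall(inverse(s1), inverse(s2))
        assert dl1 / 2 <= dt <= dl1
print("Theorem 1 holds on all of S_4")
```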
3.2 $\ell_1$ distance of inversion vectors and Kendall tau distance

We show that the $\ell_1$ distance of inversion vectors and the Kendall tau distance are closely related in Theorems 2 and 3, which helps to establish the equivalence of the corresponding rate-distortion problems later. The Kendall tau distance between two permutations provides upper and lower bounds on the $\ell_1$ distance between their inversion vectors, as indicated by the following theorem.

Theorem 2. Let $\pi_1$ and $\pi_2$ be any permutations in $S_n$. Then

$\dfrac{1}{n-1}\, d_\tau(\pi_1, \pi_2) \le d_{x,\ell_1}(\pi_1, \pi_2) \le d_\tau(\pi_1, \pi_2)$. (5)

Proof. See Section 5.1.

Remark 2. The bounds in Theorem 2 are tight, as there exist $\pi_1$ and $\pi_2$ for which equality is satisfied. For example, when $n = 2m$, let

$\sigma_1 = [1, 3, 5, \ldots, 2m-3, 2m-1, 2m, 2m-2, \ldots, 6, 4, 2]$,
$\sigma_2 = [2, 4, 6, \ldots, 2m-2, 2m, 2m-1, 2m-3, \ldots, 5, 3, 1]$;

then $d_\tau(\sigma_1, \sigma_2) = n(n-1)/2$ and $d_{x,\ell_1}(\sigma_1, \sigma_2) = n/2$, achieving the lower bound. For another instance, let $\sigma_1 = [1, 2, \ldots, n-2, n-1, n]$ and $\sigma_2 = [2, 3, \ldots, n-1, n, 1]$; then $d_\tau(\sigma_1, \sigma_2) = n-1$ and $d_{x,\ell_1}(\sigma_1, \sigma_2) = n-1$, achieving the upper bound.

While Theorem 2 shows that, in general, knowing $d_\tau(\sigma_1, \sigma_2)$ may not lead to a tight lower bound on $d_{x,\ell_1}(\sigma_1, \sigma_2)$ due to the $1/(n-1)$ factor, Theorem 3 shows that it provides a tight lower bound with high probability.

Theorem 3. For any $\pi \in S_n$, let $\sigma$ be a permutation chosen uniformly from $S_n$. Then for any constant $c_1 < 1/2$,

$\mathbb{P}[c_1\, d_\tau(\pi, \sigma) \le d_{x,\ell_1}(\pi, \sigma)] = 1 - O(1/n)$. (6)

Proof. See Section 5.2.

3.3 Spearman's footrule and Chebyshev distance

Let $\sigma_1$ and $\sigma_2$ be any permutations in $S_n$. Then

$d_{\ell_1}(\sigma_1, \sigma_2) \le n\, d_{\ell_\infty}(\sigma_1, \sigma_2)$, (7)

and additionally, a scaled Chebyshev distance lower bounds the Spearman's footrule with high probability.

Theorem 4. For any $\pi \in S_n$, let $\sigma$ be a permutation chosen uniformly from $S_n$. Then for any constant $c_2 < 1/3$,

$\mathbb{P}[c_2\, n\, d_{\ell_\infty}(\pi, \sigma) \le d_{\ell_1}(\pi, \sigma)] = 1 - O(1/n)$. (8)

Proof. See Section 5.3.
4 Rate-distortion function

Theorem 5 (Equivalence of lossy source codes under different distortion measures). Each of the source codes on the left-hand side below implies a source code on the right-hand side:
1. an $(n, D_n)$ source code for $\mathcal{X}(S_n, d_\tau)$ is an $(n, D_n)$ source code for $\mathcal{X}(S_n, d_{x,\ell_1})$;
2. an $(n, D_n)$ source code for $\mathcal{X}(S_n, d_{x,\ell_1})$ is an $(n, D_n/c_1 + O(n))$ source code for $\mathcal{X}(S_n, d_\tau)$, for any $c_1 < 1/2$;
3. an $(n, D_n)$ source code for $\mathcal{X}(S_n, d_{\ell_1})$ is an $(n, D_n)$ source code for $\mathcal{X}(S_n, d_\tau)$;
4. an $(n, D_n)$ source code for $\mathcal{X}(S_n, d_\tau)$ is an $(n, 2D_n)$ source code for $\mathcal{X}(S_n, d_{\ell_1})$;
5. an $(n, D_n/n)$ source code for $\mathcal{X}(S_n, d_{\ell_\infty})$ is an $(n, D_n)$ source code for $\mathcal{X}(S_n, d_{\ell_1})$;
6. an $(n, D_n)$ source code for $\mathcal{X}(S_n, d_{\ell_1})$ is an $(n, (D_n/n)/c_2 + O(1))$ source code for $\mathcal{X}(S_n, d_{\ell_\infty})$, for any $c_2 < 1/3$.

Proof. Statement 1 follows directly from (5). For statement 2, let $C_n$ be the $(n, D_n)$ source code for $\mathcal{X}(S_n, d_{x,\ell_1})$, let $\sigma$ be a permutation chosen uniformly from $S_n$, let $\pi_\sigma$ be the codeword for $\sigma$ in $C_n$, and let $A_n \triangleq \{\sigma \in S_n : c_1\, d_\tau(\sigma, \pi_\sigma) \le d_{x,\ell_1}(\sigma, \pi_\sigma)\}$; then Theorem 3 indicates that $|A_n| = n!\,(1 - O(1/n))$. Then

$\mathbb{E}[d_\tau(\pi_\sigma, \sigma)] = \dfrac{1}{n!} \sum_{\sigma \in S_n} d_\tau(\sigma, \pi_\sigma) = \dfrac{1}{n!} \Big( \sum_{\sigma \in A_n} d_\tau(\sigma, \pi_\sigma) + \sum_{\sigma \in S_n \setminus A_n} d_\tau(\sigma, \pi_\sigma) \Big) \le \dfrac{1}{n!} \sum_{\sigma \in A_n} d_{x,\ell_1}(\sigma, \pi_\sigma)/c_1 + \dfrac{|S_n \setminus A_n|}{n!} \cdot \dfrac{n^2}{2} \le D_n/c_1 + O(1/n) \cdot O(n^2) = D_n/c_1 + O(n)$.

Statements 3 and 4 follow directly from Theorem 1. Statement 5 follows from (7). For statement 6, similarly to the proof of statement 2, define $B_n \triangleq \{\sigma \in S_n : c_2\, n\, d_{\ell_\infty}(\sigma, \pi_\sigma) \le d_{\ell_1}(\sigma, \pi_\sigma)\}$; then by Theorem 4,

$\mathbb{E}[d_{\ell_\infty}(\pi_\sigma, \sigma)] = \dfrac{1}{n!} \sum_{\sigma \in S_n} d_{\ell_\infty}(\sigma, \pi_\sigma) = \dfrac{1}{n!} \Big( \sum_{\sigma \in B_n} d_{\ell_\infty}(\sigma, \pi_\sigma) + \sum_{\sigma \in S_n \setminus B_n} d_{\ell_\infty}(\sigma, \pi_\sigma) \Big) \le \dfrac{1}{n!} \sum_{\sigma \in B_n} \dfrac{d_{\ell_1}(\sigma, \pi_\sigma)}{c_2\, n} + \dfrac{|S_n \setminus B_n|}{n!} \cdot n \le \dfrac{D_n/n}{c_2} + O(1/n) \cdot n = \dfrac{D_n/n}{c_2} + O(1)$.
We obtain Theorem 6 as a direct consequence of Theorem 5.

Theorem 6 (Rate-distortion function for all distortion measures). For the permutation spaces $\mathcal{X}(S_n, d_{x,\ell_1})$, $\mathcal{X}(S_n, d_\tau)$ and $\mathcal{X}(S_n, d_{\ell_1})$,

$R(D) = \begin{cases} 1 & \text{if } D = O(n) \\ 1 - \delta & \text{if } D = \Theta(n^{1+\delta}),\ 0 < \delta \le 1. \end{cases}$ (9)

For the permutation space $\mathcal{X}(S_n, d_{\ell_\infty})$,

$R(D) = \begin{cases} 1 & \text{if } D = O(1) \\ 1 - \delta & \text{if } D = \Theta(n^{\delta}),\ 0 < \delta \le 1. \end{cases}$ (10)

Proof. See [3] for the rate-distortion functions for $\mathcal{X}(S_n, d_{x,\ell_1})$ and $\mathcal{X}(S_n, d_\tau)$. The rest follows from Theorem 5.

4.1 Code design implications

Theorem 5 indicates that the lossy compression schemes for all the distortion measures in this paper can be used interchangeably, after scaling the distortion properly. In particular, under average-case distortion, one can use the code for the permutation space $\mathcal{X}(S_n, d_{x,\ell_1})$ for the permutation spaces $\mathcal{X}(S_n, d_\tau)$, $\mathcal{X}(S_n, d_{\ell_1})$ and $\mathcal{X}(S_n, d_{\ell_\infty})$. This is beneficial due to the simplicity of the compression algorithm for $\mathcal{X}(S_n, d_{x,\ell_1})$, which is simply component-wise scalar quantization, as shown in [3]. In this scheme, for the $(k-1)$-th component ($k = 2, \ldots, n$), we quantize the $k$ points in $[0:k-1]$ uniformly with $m_k = \lceil k/(2D_k + 1) \rceil$ points, leading to maximal per-component distortion $D_k = 2kD/(n+2)^2$, overall distortion

$D_n = \sum_{k=2}^{n} D_k = \dfrac{(n-1)(n+2)D}{(n+2)^2} \le D$,

and log codebook size

$\log M_n = \sum_{k=2}^{n} \log m_k \le n \log \dfrac{(n+2)^2}{2D} = (1 - \delta)\, n \log n + O(n)$.

5 Proofs

5.1 Proofs for Theorem 2

Given two permutations $\pi_1$ and $\pi_2$, we can find their corresponding inversion vectors $x_{\pi_1}$ and $x_{\pi_2}$. It is known that [7, Lemma 4] $d_{x,\ell_1}(\pi_1, \pi_2) \le d_\tau(\pi_1, \pi_2)$. Below we first show via Lemma 7 that in the worst case $d_{x,\ell_1}(\pi_1, \pi_2)$ cannot be tightly lower bounded by $d_\tau(\pi_1, \pi_2)$ (Theorem 2), but then that with high probability the two are of the same order (Theorem 3).
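Returning briefly to the component-wise quantizer of Section 4.1: the sketch below is an added illustration of how per-coordinate distortion budgets $D_k$ and level counts $m_k$ determine the rate. The constant allocation is consistent with the bounds above but is illustrative, not a verbatim transcription of the scheme in [3]:

```python
import math

def quantized_inversion_code_bits(n, D):
    """Total bits used by component-wise scalar quantization of the
    inversion vector: coordinate k-1 takes values in [0:k-1], gets a
    distortion budget D_k roughly proportional to k, and is quantized
    uniformly with m_k levels. Illustrative constants only."""
    total_bits = 0.0
    for k in range(2, n + 1):
        D_k = max(1, int(2 * k * D / (n + 2) ** 2))  # per-coordinate budget
        m_k = math.ceil(k / (2 * D_k + 1))           # levels needed for distortion D_k
        if m_k > 1:
            total_bits += math.log2(m_k)
    return total_bits

# The rate falls as the allowed distortion grows: fewer levels per coordinate.
print(quantized_inversion_code_bits(100, 100) > quantized_inversion_code_bits(100, 1000))
```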
Lemma 7. For any two permutations $\pi, \sigma \in S_n$ such that $d_{x,\ell_1}(\pi, \sigma) = 1$, we have $d_\tau(\pi, \sigma) \le n-1$.

Proof. Let $x_\pi = [a_2, a_3, \ldots, a_n]$ and $x_\sigma = [b_2, b_3, \ldots, b_n]$. Then, without loss of generality, for a certain $2 \le k \le n$,

$a_i = b_i$ for $i \ne k$, and $a_k = b_k + 1$.

Let $\pi'$ and $\sigma'$ be the permutations in $S_{n-1}$ obtained by removing the element $k$ from $\pi$ and $\sigma$ respectively; then $x_{\pi'} = x_{\sigma'}$ and hence $\pi' = \sigma'$. Therefore, the Kendall tau distance between $\sigma$ and $\pi$ is determined only by the locations of the element $k$ in $\sigma$ and $\pi$, and is at most $n-1$.

Proof of the lower bound in Theorem 2. The proof of [7, Lemma 4] indicates that for any two permutations $\pi_1$ and $\pi_2$ with $k = d_{x,\ell_1}(\pi_1, \pi_2)$, letting $\sigma_0 \triangleq \pi_1$ and $\sigma_k \triangleq \pi_2$, there exists a sequence of permutations $\sigma_1, \sigma_2, \ldots, \sigma_{k-1}$ such that $d_{x,\ell_1}(\sigma_i, \sigma_{i+1}) = 1$ for $0 \le i \le k-1$. Then

$d_\tau(\pi_1, \pi_2) \le \sum_{i=1}^{k} d_\tau(\sigma_{i-1}, \sigma_i) \overset{(a)}{\le} k(n-1) = (n-1)\, d_{x,\ell_1}(\pi_1, \pi_2)$,

where (a) is due to Lemma 7.

5.2 Proofs for Theorem 3

To prove Theorem 3, we analyze the mean and variance of the Kendall tau distance and of the $\ell_1$ distance of inversion vectors between a fixed permutation and a randomly selected permutation, in Lemma 9 and Lemma 10 respectively. We first state an obvious fact.

Lemma 8. Let $\sigma$ be a permutation chosen uniformly from $S_n$; then the coordinates $x_\sigma(i)$, $1 \le i \le n-1$, are independent, with $x_\sigma(i) \sim \mathrm{Unif}([0:i])$.

Proof. Obvious.

Lemma 9. For any $\pi \in S_n$, let $\sigma$ be a permutation chosen uniformly from $S_n$, and let $X_\tau \triangleq d_\tau(\pi, \sigma)$. Then

$\mathbb{E}[X_\tau] = \dfrac{n(n-1)}{4}, \qquad \mathrm{Var}[X_\tau] = \dfrac{n(2n+5)(n-1)}{72}$.

Proof. Let $\sigma'$ be another permutation chosen independently and uniformly from $S_n$; then both $\pi\sigma^{-1}$ and $\sigma'\sigma^{-1}$ are uniformly distributed over $S_n$. Since the Kendall tau distance is right-invariant [8], $d_\tau(\pi, \sigma) = d_\tau(\pi\sigma^{-1}, e)$ and $d_\tau(\sigma', \sigma) = d_\tau(\sigma'\sigma^{-1}, e)$ are identically distributed, and hence the result follows from [2, Table 1] and [4, Section 5.1.1].
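The moments in Lemma 9 can be confirmed exactly for small $n$ by enumerating $S_n$; this added check is illustrative only:

```python
import itertools
import math

def inversions(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

# Enumerate all of S_6 and compare the empirical moments of the inversion
# count with Lemma 9's closed forms.
n = 6
counts = [inversions(p) for p in itertools.permutations(range(n))]
mean = sum(counts) / math.factorial(n)
var = sum(c * c for c in counts) / math.factorial(n) - mean ** 2
print(mean, n * (n - 1) / 4)                # both equal 7.5
print(var, n * (2 * n + 5) * (n - 1) / 72)  # both equal 510/72 ~ 7.083
```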
Lemma 10. For any $\pi \in S_n$, let $\sigma$ be a permutation chosen uniformly from $S_n$, and let $X_{x,\ell_1} \triangleq d_{x,\ell_1}(\pi, \sigma)$. Then

$\mathbb{E}[X_{x,\ell_1}] > \dfrac{n(n-1)}{8}, \qquad \mathrm{Var}[X_{x,\ell_1}] < \dfrac{(n+1)(n+2)(2n+3)}{6}$.

Proof. By Lemma 8, we have $X_{x,\ell_1} = \sum_{i=1}^{n-1} |a_i - U_i|$, where $U_i \sim \mathrm{Unif}([0:i])$ and $a_i \triangleq x_\pi(i)$. Let $V_i \triangleq |a_i - U_i|$, $m_1 \triangleq \min\{i - a_i, a_i\}$ and $m_2 \triangleq \max\{i - a_i, a_i\}$; then

$\mathbb{P}[V_i = d] = \begin{cases} 1/(i+1) & d = 0 \\ 2/(i+1) & 1 \le d \le m_1 \\ 1/(i+1) & m_1 < d \le m_2 \\ 0 & \text{otherwise.} \end{cases}$

Hence

$\mathbb{E}[V_i] = \dfrac{1}{2(i+1)} \big[ 2(1+m_1)m_1 + (m_2+m_1+1)(m_2-m_1) \big] = \dfrac{m_1^2 + m_2^2 + i}{2(i+1)} \ge \dfrac{(m_1+m_2)^2/2 + i}{2(i+1)} = \dfrac{i(i+2)}{4(i+1)} > \dfrac{i}{4}$,

$\mathrm{Var}[V_i] \le \mathbb{E}[V_i^2] \le \dfrac{2}{i+1} \sum_{d=0}^{i} d^2 = \dfrac{i(2i+1)}{3}$.

Then

$\mathbb{E}[X_{x,\ell_1}] = \sum_{i=1}^{n-1} \mathbb{E}[V_i] > \sum_{i=1}^{n-1} \dfrac{i}{4} = \dfrac{n(n-1)}{8}$,

$\mathrm{Var}[X_{x,\ell_1}] = \sum_{i=1}^{n-1} \mathrm{Var}[V_i] \le \sum_{i=1}^{n-1} \dfrac{i(2i+1)}{3} < \dfrac{(n+1)(n+2)(2n+3)}{6}$.

With Lemma 9 and Lemma 10, we now show that the event that a scaled version of the Kendall tau distance exceeds the $\ell_1$ distance of inversion vectors is unlikely.

Proof of Theorem 3. Let $c_1 = 1/3$ and let $t \triangleq n^2/10$ (any threshold strictly between $\mathbb{E}[c_1 X_\tau] = n(n-1)/12$ and $n(n-1)/8 < \mathbb{E}[X_{x,\ell_1}]$ works). Then, noting that

$t = \mathbb{E}[c_1 X_\tau] + \Theta(\sqrt{n})\, \mathrm{Std}[X_\tau] = \mathbb{E}[X_{x,\ell_1}] - \Theta(\sqrt{n})\, \mathrm{Std}[X_{x,\ell_1}]$,

by Chebyshev's inequality,

$\mathbb{P}[c_1 X_\tau > X_{x,\ell_1}] \le \mathbb{P}[c_1 X_\tau > t] + \mathbb{P}[X_{x,\ell_1} < t] \le O(1/n) + O(1/n) = O(1/n)$.

The general case of $c_1 < 1/2$ can be proved similarly.
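Theorem 3's high-probability statement can also be probed empirically. The sketch below is an added Monte Carlo sanity check (with an arbitrary fixed seed), not part of the paper's argument:

```python
import random

def inversions(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def inv_vector(p):
    pos = {v: i for i, v in enumerate(p)}
    return [sum(1 for j in range(1, i + 1) if pos[j] > pos[i + 1])
            for i in range(1, len(p))]

def kendall(a, b):
    pos = {v: i for i, v in enumerate(b)}
    return inversions([pos[v] for v in a])

random.seed(0)
n, trials, c1 = 30, 200, 1 / 3
pi = list(range(1, n + 1))
random.shuffle(pi)  # a fixed reference permutation
good = 0
for _ in range(trials):
    sigma = list(range(1, n + 1))
    random.shuffle(sigma)  # sigma uniform on S_n
    dx = sum(abs(a - b) for a, b in zip(inv_vector(pi), inv_vector(sigma)))
    good += c1 * kendall(pi, sigma) <= dx
print(good / trials)  # close to 1, as Theorem 3 predicts
```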
5.3 Proof for Theorem 4

Lemma 11. For any $\pi \in S_n$, let $\sigma$ be a permutation chosen uniformly from $S_n$, and let $X_{\ell_1} \triangleq d_{\ell_1}(\pi, \sigma)$. Then

$\mathbb{E}[X_{\ell_1}] = \dfrac{n^2}{3} + O(n), \qquad \mathrm{Var}[X_{\ell_1}] = \dfrac{2n^3}{45} + O(n^2)$.

Proof. See [2, Table 1].

Proof of Theorem 4. For any $c > 0$, $c\, n\, d_{\ell_\infty}(\pi, \sigma) \le c\, n(n-1)$, and for any $c_2 < 1/3$, Lemma 11 and Chebyshev's inequality indicate that

$\mathbb{P}[d_{\ell_1}(\pi, \sigma) < c_2\, n(n-1)] = O(1/n)$.

Therefore,

$\mathbb{P}[d_{\ell_1}(\pi, \sigma) \ge c_2\, n\, d_{\ell_\infty}(\pi, \sigma)] \ge \mathbb{P}[d_{\ell_1}(\pi, \sigma) \ge c_2\, n(n-1)] = 1 - \mathbb{P}[d_{\ell_1}(\pi, \sigma) < c_2\, n(n-1)] = 1 - O(1/n)$.

References

[1] Joachim Giesen, Eva Schuberth, and Miloš Stojaković. Approximate sorting. In José R. Correa, Alejandro Hevia, and Marcos Kiwi, editors, LATIN 2006: Theoretical Informatics, volume 3887 of Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, 2006.
[2] Persi Diaconis and R. L. Graham. Spearman's footrule as a measure of disarray. Journal of the Royal Statistical Society, Series B (Methodological), 39(2):262-268, 1977.
[3] Da Wang, Arya Mazumdar, and Gregory W. Wornell. A rate-distortion theory for permutation spaces. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), 2013.
[4] Donald E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley Professional, 2nd edition, 1998.
[5] Anxiao Jiang, M. Schwartz, and J. Bruck. Error-correcting codes for rank modulation. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), 2008.
[6] A. Barg and A. Mazumdar. Codes in permutations and error correction for rank modulation. IEEE Trans. Inf. Theory, 56(7), 2010.
[7] A. Mazumdar, A. Barg, and G. Zémor. Constructions of rank modulation codes. IEEE Trans. Inf. Theory, 59(2), 2013.
[8] Michel Deza and Tayuan Huang. Metrics on permutations, a survey. Journal of Combinatorics, Information and System Sciences, 23, 1998.
A CENTRAL LIMIT THEOREM FOR A NEW STATISTIC ON PERMUTATIONS SOURAV CHATTERJEE AND PERSI DIACONIS Abstract. This paper does three things: It proves a central limit theorem for novel permutation statistics
More informationOn the Distribution of Multiplicative Translates of Sets of Residues (mod p)
On the Distribution of Multiplicative Translates of Sets of Residues (mod p) J. Ha stad Royal Institute of Technology Stockholm, Sweden J. C. Lagarias A. M. Odlyzko AT&T Bell Laboratories Murray Hill,
More informationFixed Length Distribution Matching
Fixed Length Distribution Matching Patrick Schulte Chair for Communications Engineering patrick.schulte@tum.de May 26, 2015 Doktorandenseminar 1 / 15 Outline Motivation: Fixed Length Distribution Matching
More informationOn completing partial Latin squares with two filled rows and at least two filled columns
AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 68(2) (2017), Pages 186 201 On completing partial Latin squares with two filled rows and at least two filled columns Jaromy Kuhl Donald McGinn Department of
More informationThe Moments of the Profile in Random Binary Digital Trees
Journal of mathematics and computer science 6(2013)176-190 The Moments of the Profile in Random Binary Digital Trees Ramin Kazemi and Saeid Delavar Department of Statistics, Imam Khomeini International
More informationExtending LP-Decoding for. Permutation Codes from Euclidean. to Kendall tau Metric
Extending LP-Decoding for Permutation Codes from Euclidean to Kendall tau Metric April 18, 2012 Justin Kong A thesis submitted to the Department of Mathematics of the University of Hawaii in partial fulfillment
More informationProgress on High-rate MSR Codes: Enabling Arbitrary Number of Helper Nodes
Progress on High-rate MSR Codes: Enabling Arbitrary Number of Helper Nodes Ankit Singh Rawat CS Department Carnegie Mellon University Pittsburgh, PA 523 Email: asrawat@andrewcmuedu O Ozan Koyluoglu Department
More informationarxiv: v2 [cs.it] 25 Jul 2010
On Optimal Anticodes over Permutations with the Infinity Norm Itzhak Tamo arxiv:1004.1938v [cs.it] 5 Jul 010 Abstract Department of Electrical and Computer Engineering, Ben-Gurion University, Israel Moshe
More informationMultidimensional Flash Codes
Multidimensional Flash Codes Eitan Yaakobi, Alexander Vardy, Paul H. Siegel, and Jack K. Wolf University of California, San Diego La Jolla, CA 9093 0401, USA Emails: eyaakobi@ucsd.edu, avardy@ucsd.edu,
More informationError-Correcting Schemes with Dynamic Thresholds in Nonvolatile Memories
2 IEEE International Symposium on Information Theory Proceedings Error-Correcting Schemes with Dynamic Thresholds in Nonvolatile Memories Hongchao Zhou Electrical Engineering Department California Institute
More information5 ProbabilisticAnalysisandRandomized Algorithms
5 ProbabilisticAnalysisandRandomized Algorithms This chapter introduces probabilistic analysis and randomized algorithms. If you are unfamiliar with the basics of probability theory, you should read Appendix
More informationClassification of Ordinal Data Using Neural Networks
Classification of Ordinal Data Using Neural Networks Joaquim Pinto da Costa and Jaime S. Cardoso 2 Faculdade Ciências Universidade Porto, Porto, Portugal jpcosta@fc.up.pt 2 Faculdade Engenharia Universidade
More informationA Combinatorial Bound on the List Size
1 A Combinatorial Bound on the List Size Yuval Cassuto and Jehoshua Bruck California Institute of Technology Electrical Engineering Department MC 136-93 Pasadena, CA 9115, U.S.A. E-mail: {ycassuto,bruck}@paradise.caltech.edu
More informationSector-Disk Codes and Partial MDS Codes with up to Three Global Parities
Sector-Disk Codes and Partial MDS Codes with up to Three Global Parities Junyu Chen Department of Information Engineering The Chinese University of Hong Kong Email: cj0@alumniiecuhkeduhk Kenneth W Shum
More informationLIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION. By Srikanth Jagabathula Devavrat Shah
00 AIM Workshop on Ranking LIMITATION OF LEARNING RANKINGS FROM PARTIAL INFORMATION By Srikanth Jagabathula Devavrat Shah Interest is in recovering distribution over the space of permutations over n elements
More informationdata structures and algorithms lecture 2
data structures and algorithms 2018 09 06 lecture 2 recall: insertion sort Algorithm insertionsort(a, n): for j := 2 to n do key := A[j] i := j 1 while i 1 and A[i] > key do A[i + 1] := A[i] i := i 1 A[i
More informationLecture 4: Two-point Sampling, Coupon Collector s problem
Randomized Algorithms Lecture 4: Two-point Sampling, Coupon Collector s problem Sotiris Nikoletseas Associate Professor CEID - ETY Course 2013-2014 Sotiris Nikoletseas, Associate Professor Randomized Algorithms
More informationDe Bruijn Sequences for the Binary Strings with Maximum Density
De Bruijn Sequences for the Binary Strings with Maximum Density Joe Sawada 1, Brett Stevens 2, and Aaron Williams 2 1 jsawada@uoguelph.ca School of Computer Science, University of Guelph, CANADA 2 brett@math.carleton.ca
More informationTopics in Algorithms. 1 Generation of Basic Combinatorial Objects. Exercises. 1.1 Generation of Subsets. 1. Consider the sum:
Topics in Algorithms Exercises 1 Generation of Basic Combinatorial Objects 1.1 Generation of Subsets 1. Consider the sum: (ε 1,...,ε n) {0,1} n f(ε 1,..., ε n ) (a) Show that it may be calculated in O(n)
More informationExplicit MBR All-Symbol Locality Codes
Explicit MBR All-Symbol Locality Codes Govinda M. Kamath, Natalia Silberstein, N. Prakash, Ankit S. Rawat, V. Lalitha, O. Ozan Koyluoglu, P. Vijay Kumar, and Sriram Vishwanath 1 Abstract arxiv:1302.0744v2
More informationChapter 5. Divide and Conquer. Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved.
Chapter 5 Divide and Conquer Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved. 1 Divide-and-Conquer Divide-and-conquer. Break up problem into several parts. Solve each
More informationAsymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½
University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 1998 Asymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½ Lawrence D. Brown University
More informationSome long-period random number generators using shifts and xors
ANZIAM J. 48 (CTAC2006) pp.c188 C202, 2007 C188 Some long-period random number generators using shifts and xors Richard P. Brent 1 (Received 6 July 2006; revised 2 July 2007) Abstract Marsaglia recently
More informationImage and Multidimensional Signal Processing
Image and Multidimensional Signal Processing Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ Image Compression 2 Image Compression Goal: Reduce amount
More informationLecture 3: Error Correcting Codes
CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error
More informationProblem Set 2 Solutions
Introduction to Algorithms October 1, 2004 Massachusetts Institute of Technology 6.046J/18.410J Professors Piotr Indyk and Charles E. Leiserson Handout 9 Problem Set 2 Solutions Reading: Chapters 5-9,
More informationlarge number of i.i.d. observations from P. For concreteness, suppose
1 Subsampling Suppose X i, i = 1,..., n is an i.i.d. sequence of random variables with distribution P. Let θ(p ) be some real-valued parameter of interest, and let ˆθ n = ˆθ n (X 1,..., X n ) be some estimate
More informationRANK modulation is a data-representation scheme which
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 61, NO. 8, AUGUST 2015 4209 Rank-Modulation Rewrite Coding for Flash Memories Eyal En Gad, Student Member, IEEE, Eitan Yaakobi, Member, IEEE, Anxiao (Andrew)
More informationAchieving the Full MIMO Diversity-Multiplexing Frontier with Rotation-Based Space-Time Codes
Achieving the Full MIMO Diversity-Multiplexing Frontier with Rotation-Based Space-Time Codes Huan Yao Lincoln Laboratory Massachusetts Institute of Technology Lexington, MA 02420 yaohuan@ll.mit.edu Gregory
More informationAN EXPLICIT UNIVERSAL CYCLE FOR THE (n 1)-PERMUTATIONS OF AN n-set
AN EXPLICIT UNIVERSAL CYCLE FOR THE (n 1)-PERMUTATIONS OF AN n-set FRANK RUSKEY AND AARON WILLIAMS Abstract. We show how to construct an explicit Hamilton cycle in the directed Cayley graph Cay({σ n, σ
More informationQuantization Index Modulation using the E 8 lattice
1 Quantization Index Modulation using the E 8 lattice Qian Zhang and Nigel Boston Dept of Electrical and Computer Engineering University of Wisconsin Madison 1415 Engineering Drive, Madison, WI 53706 Email:
More informationTowards control over fading channels
Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti
More information4. Quantization and Data Compression. ECE 302 Spring 2012 Purdue University, School of ECE Prof. Ilya Pollak
4. Quantization and Data Compression ECE 32 Spring 22 Purdue University, School of ECE Prof. What is data compression? Reducing the file size without compromising the quality of the data stored in the
More informationModule 1: Analyzing the Efficiency of Algorithms
Module 1: Analyzing the Efficiency of Algorithms Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu What is an Algorithm?
More informationat Some sort of quantization is necessary to represent continuous signals in digital form
Quantization at Some sort of quantization is necessary to represent continuous signals in digital form x(n 1,n ) x(t 1,tt ) D Sampler Quantizer x q (n 1,nn ) Digitizer (A/D) Quantization is also used for
More informationLecture 4 Noisy Channel Coding
Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem
More informationPartial Quicksort. Conrado Martínez
Partial Quicksort Conrado Martínez Abstract This short note considers the following common problem: rearrange a given array with n elements, so that the first m places contain the m smallest elements in
More informationGeneralized Gray Codes for Local Rank Modulation
Generalized Gray Codes for Local Rank Modulation Eyal En Gad Elec Eng Caltech Pasadena, CA 91125, USA eengad@caltechedu Michael Langberg Comp Sci Division Open University of Israel Raanana 43107, Israel
More informationA Novel Asynchronous Communication Paradigm: Detection, Isolation, and Coding
A Novel Asynchronous Communication Paradigm: Detection, Isolation, and Coding The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation
More informationRecurrence Relations
Recurrence Relations Winter 2017 Recurrence Relations Recurrence Relations A recurrence relation for the sequence {a n } is an equation that expresses a n in terms of one or more of the previous terms
More informationHAMMING DISTANCE FROM IRREDUCIBLE POLYNOMIALS OVER F Introduction and Motivation
HAMMING DISTANCE FROM IRREDUCIBLE POLYNOMIALS OVER F 2 GILBERT LEE, FRANK RUSKEY, AND AARON WILLIAMS Abstract. We study the Hamming distance from polynomials to classes of polynomials that share certain
More informationExponential stability of families of linear delay systems
Exponential stability of families of linear delay systems F. Wirth Zentrum für Technomathematik Universität Bremen 28334 Bremen, Germany fabian@math.uni-bremen.de Keywords: Abstract Stability, delay systems,
More informationLecture 8: Information Theory and Statistics
Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang
More informationLecture 10. Sublinear Time Algorithms (contd) CSC2420 Allan Borodin & Nisarg Shah 1
Lecture 10 Sublinear Time Algorithms (contd) CSC2420 Allan Borodin & Nisarg Shah 1 Recap Sublinear time algorithms Deterministic + exact: binary search Deterministic + inexact: estimating diameter in a
More informationComputation at a Distance
Computation at a Distance Samuel A. Kutin David Petrie Moulton Lawren M. Smithline May 4, 2007 Abstract We consider a model of computation motivated by possible limitations on quantum computers. We have
More informationIN this paper, we consider the capacity of sticky channels, a
72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion
More informationLARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011
LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS S. G. Bobkov and F. L. Nazarov September 25, 20 Abstract We study large deviations of linear functionals on an isotropic
More informationAsymptotic redundancy and prolixity
Asymptotic redundancy and prolixity Yuval Dagan, Yuval Filmus, and Shay Moran April 6, 2017 Abstract Gallager (1978) considered the worst-case redundancy of Huffman codes as the maximum probability tends
More informationLower and Upper Bound Theory
Lower and Upper Bound Theory Adnan YAZICI Dept. of Computer Engineering Middle East Technical Univ. Ankara - TURKEY 1 Lower and Upper Bound Theory How fast can we sort? Most of the sorting algorithms are
More informationChapter 5 Divide and Conquer
CMPT 705: Design and Analysis of Algorithms Spring 008 Chapter 5 Divide and Conquer Lecturer: Binay Bhattacharya Scribe: Chris Nell 5.1 Introduction Given a problem P with input size n, P (n), we define
More informationSTA 732: Inference. Notes 10. Parameter Estimation from a Decision Theoretic Angle. Other resources
STA 732: Inference Notes 10. Parameter Estimation from a Decision Theoretic Angle Other resources 1 Statistical rules, loss and risk We saw that a major focus of classical statistics is comparing various
More information